If you are wondering where the data on this site comes from, please visit https://api.github.com/users/hpedrorodrigues/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Pedro Rodrigues (hpedrorodrigues) · PID 1 · https://hpedrorodrigues.com · Software Engineer

hpedrorodrigues/dx 10

:whale: A simple command-line tool to help you manage local Docker resources faster

hpedrorodrigues/sphynx 8

:hammer_and_wrench: A monorepo including CLI, dotfiles, workspace setup scripts among other things

hpedrorodrigues/ApiRedirect 3

An API redirect for all web projects :)

hpedrorodrigues/ImageSearch 3

A small image search app

hpedrorodrigues/dag-modules 2

An Android Gradle plugin to show an adjacency list of internal modules in a multi-module project

hpedrorodrigues/TimeRunner 2

A simple game made in Lua with the Corona SDK

hpedrorodrigues/ADBe 1

A simple CLI to extend some ADB commands on multiple devices

hpedrorodrigues/dlq-x9 1

DLQ-X9 sends a message in a Slack channel every time it detects a new message in an SQS DLQ.

hpedrorodrigues/kx 1

A tiny project to help me learn more about kubectl internals

hpedrorodrigues/PingLang 1

A small language project to help me learn more about compilers (Formal language theory)

started prompt-toolkit/python-prompt-toolkit

started time in 13 minutes

started imbushuo/mac-precision-touchpad

started time in 15 minutes

started postlight/mercury-parser

started time in 37 minutes

started honeycombio/refinery

started time in 2 hours

issue comment kubernetes/kubectl

Support selectors (label query) in kubectl rollout restart

@aramperes: This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

aramperes

comment created time in 3 hours

issue opened kubernetes/kubectl

Support selectors (label query) in kubectl rollout restart

What would you like to be added:

Adding the -l / --selector flag to the kubectl rollout restart command (and other applicable rollout subcommands).

Why is this needed:

At the moment, kubectl rollout restart can either

  • restart all resources of a given type (e.g. kubectl rollout restart deployment)
  • restart one resource (e.g. kubectl rollout restart deployment/something)

I think it makes sense that it should be able to restart multiple resources matching a label expression, within a resource type. For example:

  • kubectl rollout restart deployment -l=tier=frontend

This would restart all tier=frontend deployments.
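
To make the request concrete, here is a minimal client-go sketch (in Go) of what a selector-aware restart could do under the hood: list the Deployments matching a label selector, then patch each pod template's kubectl.kubernetes.io/restartedAt annotation, which is the annotation kubectl rollout restart already writes to trigger a new rollout. The kubeconfig path, the hard-coded "default" namespace, and the error handling are illustrative assumptions, not the actual kubectl implementation.

  package main

  import (
      "context"
      "fmt"
      "time"

      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/apimachinery/pkg/types"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      // Load the local kubeconfig (illustrative; kubectl resolves configuration differently).
      config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
      if err != nil {
          panic(err)
      }
      clientset, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }
      ctx := context.Background()

      // The label selector the issue proposes exposing via -l / --selector.
      deployments, err := clientset.AppsV1().Deployments("default").List(ctx,
          metav1.ListOptions{LabelSelector: "tier=frontend"})
      if err != nil {
          panic(err)
      }

      for _, d := range deployments.Items {
          // Updating this pod-template annotation is what makes the Deployment
          // controller roll out new pods.
          patch := fmt.Sprintf(
              `{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":%q}}}}}`,
              time.Now().Format(time.RFC3339))
          if _, err := clientset.AppsV1().Deployments(d.Namespace).Patch(ctx, d.Name,
              types.StrategicMergePatchType, []byte(patch), metav1.PatchOptions{}); err != nil {
              panic(err)
          }
          fmt.Printf("deployment.apps/%s restarted\n", d.Name)
      }
  }

Until such a flag exists, a similar effect can be achieved by feeding the names printed by kubectl get deployment -l tier=frontend -o name into kubectl rollout restart.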

created time in 3 hours

created repository alexellis/mqtt-s3-example

mqtt-s3-example

created time in 4 hours

started jorgebucaran/hydro

started time in 5 hours

started udem-dlteam/mimosa

started time in 8 hours

issue comment kubernetes/kubectl

Support for setting resourceVersion in kubectl get

Is there any update?

lbernail

comment created time in 10 hours

issue comment kubernetes/kubectl

"get ingress" still shows port 80 even though it is HTTPS only.

		{Name: "Ports", Type: "string", Description: "Ports of TLS configurations that open"},

The Ports column here only reflects whether there are TLS configurations:

  • if the Ingress has TLS configured, it returns 80, 443
  • if it doesn't have TLS configured, it returns 80

Most likely, port 80 is for HTTP and port 443 is for TLS.

https://github.com/kubernetes/kubernetes/blob/release-1.20/pkg/printers/internalversion/printers.go#L1193-L1210
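
For reference, a rough sketch (in Go) of the behavior described above, not the actual printers.go code: the Ports column always reports 80 and adds 443 only when the Ingress spec contains at least one TLS entry. The helper name formatIngressPorts is hypothetical.

  package main

  import (
      "fmt"

      networkingv1 "k8s.io/api/networking/v1"
  )

  // formatIngressPorts mirrors the described logic: "80" without TLS,
  // "80, 443" when at least one TLS block is configured.
  func formatIngressPorts(ing *networkingv1.Ingress) string {
      if len(ing.Spec.TLS) > 0 {
          return "80, 443"
      }
      return "80"
  }

  func main() {
      plain := &networkingv1.Ingress{}
      withTLS := &networkingv1.Ingress{
          Spec: networkingv1.IngressSpec{
              TLS: []networkingv1.IngressTLS{{SecretName: "tls-secret"}},
          },
      }
      fmt.Println(formatIngressPorts(plain))   // 80
      fmt.Println(formatIngressPorts(withTLS)) // 80, 443
  }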

ahmetb

comment created time in 11 hours

started mdlayher/consrv

started time in 13 hours

issue comment kubernetes/kubectl

"get ingress" still shows port 80 even though it is HTTPS only.

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

ahmetb

comment created time in 14 hours

fork smarterclayton/ci-search-functions

Google Cloud Functions for indexing CI results

fork in 19 hours

push event kubernetes/kubectl

Benjamin Elder

commit sha 13ecb713fceae5f69152f7643dafc87c91ee3865

hack/update-bazel.sh

Kubernetes-commit: 56e092e382038b01c61fff96efb5982a2cb137cb

pacoxu

commit sha aedad91395143ca0e0cc4abe08b8e2f342543054

fix: will logs the default container only even --all-containers is specified

Signed-off-by: pacoxu <paco.xu@daocloud.io>
Kubernetes-commit: 71db08d15a580cee1592eb699a3ff448f5fe4fe1

Kubernetes Publisher

commit sha fdeacde94ca9a93cab75fd3d99cf07d29f62d2ec

Merge pull request #99561 from BenTheElder/remove-bazel

Remove Bazel

Kubernetes-commit: 5498ee641b3459a0da1d4b2d42d502a318194189

Kubernetes Publisher

commit sha 6bd2b56331dd685ec433b23aec38b5261c1b114b

Merge pull request #99569 from pacoxu/default-container/kep-1

kubectl logs: don't check default container annotation if --all-containers is specified

Kubernetes-commit: b032ebac8e0402f900d87d48a533b192e16c2f72

push time in a day

started curv3d/curv

started time in a day

started marccampbell/graviton-scheduler-extender

started time in a day

started stellar/stellar-etl

started time in a day

started dmitryikh/leaves

started time in a day

started reviewdog/action-misspell

started time in a day

fork dims/imgcrypt

OCI Image Encryption Package

fork in a day

fork dims/zfs

ZFS snapshotter plugin for containerd

fork in a day

issue comment kubernetes/kubectl

`kubectl get ... --namespace non-existent` should not exit with 0

/remove-lifecycle stale

wknapik

comment created time in a day

issue comment kubernetes/kubectl

custom-columns stops abruptly when hitting a non-existent array index

I tried to fix it in https://github.com/kubernetes/kubernetes/pull/99579

fwolter

comment created time in a day

issue comment kubernetes/kubectl

Improve documentation of kubectl top

Some links that may help:

  • https://github.com/kubernetes-sigs/metrics-server/issues/193#issuecomment-451309811
  • https://stackoverflow.com/questions/45043489/kubernetes-understanding-memory-usage-for-kubectl-top-node

serathius

comment created time in 2 days

push event kubernetes/kubectl

Jordan Liggitt

commit sha 31a13e236e92e8221dc416f441064dadeb088374

Fix PipeWriter#CloseWithError race on go1.13

Kubernetes-commit: 399ab7aadca3879e040a5f2c4fde5451f838eb66

Kubernetes Publisher

commit sha edc40ebb29a5e1bc5df29d9032447110bd374290

Merge pull request #99421 from liggitt/kubectl-logs-race

Fix PipeWriter#CloseWithError race on go1.13

Kubernetes-commit: 0d1ea31251c1338265f62f51d6f0300510d507dc

push time in 2 days

fork dims/zeitgeist

Zeitgeist: the language-agnostic dependency checker

https://godoc.org/sigs.k8s.io/zeitgeist

fork in 2 days

issue comment kubernetes/kubectl

kubectl edit or apply can not update .status when status sub resource is enabled

Created https://github.com/kubernetes/kubernetes/pull/99556 as a POC to get initial feedback

nightfury1204

comment created time in 2 days

issue comment kubernetes/kubectl

git-like blame for kubectl

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

soltysh

comment created time in 2 days

fork thiagokokada/nixos-hardware

A collection of NixOS modules covering hardware quirks.

fork in 2 days