If you are wondering where the data on this site comes from, please visit https://api.github.com/users/ichekrygin/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Illya Chekrygin ichekrygin Seattle, WA illya-chekrygin.com Engineer | Kubernetes | Certified Kubernetes Administrator (CKA) | Certified Kubernetes Application Developer (CKAD)

ichekrygin/kubebuilder-demo 2

Demo writing an operator using kubebuilder

ichekrygin/afero 0

A FileSystem Abstraction System for Go

ichekrygin/aws-cli 0

AWS CLI in Docker

ichekrygin/cert-manager 0

Automatically provision and manage TLS certificates in Kubernetes

ichekrygin/cloud-run-hello 0

Sample Cloud Run application

ichekrygin/cobra 0

A Commander for modern Go CLI interactions

ichekrygin/controller-runtime 0

Repo for the controller-runtime subproject of kubebuilder (sig-apimachinery)

ichekrygin/controller-tools 0

Tools to use with the controller-runtime libraries

issue comment groundnuty/k8s-wait-for

Fails to run on GKE because of permission issue

For anyone coming here: you can create a ServiceAccount with limited permissions. Here's an example of one that only has access to the specific pods pod1 and pod2.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  labels:
    {{- include "my-service-account.labels" . | nindent 4 }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-service-account
  labels:
    {{- include "my-service-account.labels" . | nindent 4 }}
rules:
  - apiGroups: [""]
    resources: ["pods"]
    resourceNames: ["pod1", "pod2"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-service-account
  labels:
    {{- include "my-service-account.labels" . | nindent 4 }}
subjects:
  - kind: ServiceAccount
    name: my-service-account
    namespace: {{ .Release.Namespace }}
roleRef:
  kind: Role
  name: my-service-account
  apiGroup: rbac.authorization.k8s.io

Then, simply use the service account in your deployment template spec.
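For completeness, here is a minimal sketch of what that looks like in a Deployment's pod template, using k8s-wait-for in an initContainer. The image tag, app name, and args are illustrative placeholders, not taken from the thread; adjust them to your chart.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: my-service-account   # the account defined above
      initContainers:
        - name: wait-for-pod
          image: groundnuty/k8s-wait-for:latest  # tag is illustrative
          args: ["pod", "pod1"]                  # must match a resourceName the Role allows
      containers:
        - name: my-app
          image: my-app:latest                   # placeholder image
```

Because the Role above is restricted by resourceNames, the initContainer can only get/watch/list the pods it is explicitly granted.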

avnersorek

comment created time in a day

pull request comment groundnuty/k8s-wait-for

Does not accept failed jobs if less than 10 pods failed (#21)

🙏 🙏 It would be great if this PR could be merged. It's a critical bug fix.

robinvandenbogaard

comment created time in 8 days

fork justinsb/kubetest2

Kubetest2 is the framework for launching and running end-to-end tests on Kubernetes.

fork in 14 days

created repository dlsniper/whatsnewingoland

What's New in GoLand

created time in 22 days

fork justinsb/angular

One framework. Mobile & desktop.

https://angular.io

fork in a month

fork justinsb/gvisor

Application Kernel for Containers

https://gvisor.dev

fork in a month

created repository justinsb/prototest

created time in 2 months

pull request comment groundnuty/k8s-wait-for

Does not accept failed jobs if less than 10 pods failed (#21)

For anyone else (like me) waiting for this to get merged and released: I have built the patched version myself and published it here: https://hub.docker.com/r/mfinelli/k8s-wait-for. It's not versioned (latest only) because I don't plan on maintaining it after this PR gets merged and released here.

I can also confirm that it does fix the problem.

robinvandenbogaard

comment created time in 2 months

started ChristopheCVB/TouchPortalPluginSDK

started time in 2 months

issue closed groundnuty/k8s-wait-for

support one container waiting for multiple pods or jobs

Currently, one container can only wait for a single pod or job. It would be useful to support waiting for multiple, for example: wait_for.sh pod -lapp=app -lapp=app2
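Until such support lands, one possible workaround is to invoke the wait once per selector and fail if any of them fails. This is a hypothetical sketch, not part of k8s-wait-for; `wait_for_one` below is a stub standing in for a real `wait_for.sh pod "$sel"` invocation.

```shell
#!/bin/sh
# Workaround sketch: wait for several selectors sequentially.
wait_all() {
  for sel in "$@"; do
    # in a real initContainer this line would be: wait_for.sh pod "$sel"
    wait_for_one "$sel" || return 1
  done
}

# Stub standing in for a single wait_for.sh call, for illustration only:
# pretend pods matching app labels are ready and anything else is not.
wait_for_one() {
  case "$1" in
    -lapp=*) return 0 ;;
    *) return 1 ;;
  esac
}

wait_all -lapp=app -lapp=app2 && echo "all selectors ready"
```

In a real pod spec this would simply be chained in the initContainer's command, e.g. `wait_for.sh pod -lapp=app && wait_for.sh pod -lapp=app2`.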

closed time in 2 months

lyzhang1999

issue opened groundnuty/k8s-wait-for

support one container waiting for multiple pods or jobs

Currently, one container can only wait for a single pod or job. It would be useful to support waiting for multiple, for example: wait_for.sh pod -lapp=app -lapp=app2

created time in 2 months

PR opened groundnuty/k8s-wait-for

Fix: not-found jobs are treated as ready

Problem

I am using a job selector with labels. If the selector matches nothing, wait-for treats the job as ready (but it doesn't exist!). I found the check responsible and fixed this case, I believe.

Additional info

Debug logs (screenshot omitted): "No resources found" is the get_job_state_output output, but the script tries to extract a job status from it.

Selector:

args: [job, "-l app.kubernetes.io/name in (test-name), app.kubernetes.io/version in (0.0.1)"]
env:
  - name: DEBUG
    value: "3"
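The idea behind the fix can be sketched as follows. This is an illustrative reconstruction, not the actual wait_for.sh code; `get_job_state_output` and `is_job_ready` are hypothetical names, and the kubectl call is mocked so the logic is self-contained.

```shell
#!/bin/sh
# Sketch: if the selector matches no jobs, report "not ready" instead of
# parsing an empty/"No resources found" result as success.

get_job_state_output() {
  # placeholder for: kubectl get job -l "$1" ...
  echo "$MOCK_KUBECTL_OUTPUT"
}

is_job_ready() {
  out="$(get_job_state_output "$1")"
  if [ -z "$out" ] || [ "$out" = "No resources found" ]; then
    return 1   # selector matched nothing: the job is NOT ready
  fi
  # otherwise parse the job status from the output
  echo "$out" | grep -q "1/1"
}

MOCK_KUBECTL_OUTPUT="No resources found"
if is_job_ready "app.kubernetes.io/name in (test-name)"; then
  echo "ready"
else
  echo "not ready"
fi
```

The actual one-line change in the PR presumably adds the equivalent of the empty-result check before the status is extracted.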
+1 -1

0 comment

1 changed file

pr created time in 2 months

started kahole/edamagit

started time in 2 months

started fluxcd/webui

started time in 3 months

pull request comment groundnuty/k8s-wait-for

Does not accept failed jobs if less than 10 pods failed (#21)

yep, I will get to it this month - just waiting for some free time

On Wed, 2 Dec 2020 at 16:43, Valentin Rodygin notifications@github.com wrote:

@groundnuty https://github.com/groundnuty any chance to get this merged?


robinvandenbogaard

comment created time in 3 months

pull request comment groundnuty/k8s-wait-for

Does not accept failed jobs if less than 10 pods failed (#21)

@groundnuty any chance to get this merged?

robinvandenbogaard

comment created time in 3 months