
a-hat/charts

Curated applications for Kubernetes

a-hat/helm-diff

A helm plugin that shows a diff explaining what a helm upgrade would change

a-hat/helm-x-kustomize-jsonpatch

Test Case for helm x

a-hat/helmfile

Deploy Kubernetes Helm Charts

a-hat/helmsman

A Helm Charts as Code tool.

a-hat/terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.

pull request comment argoproj/argo-cd

feat: Introduce sync-option SkipDryRunOnMissingResource=true (#2873)

Hey @alexmt! What do you think about this change? This is a feature on our customers' wish list. Thanks a lot for your time!

a-hat

comment created time in 2 days

create branch syncier/argo-cd

branch : v1.4.2-patch

created branch time in 12 days

push event syncier/argo-cd

Andreas Kappler

commit sha 2a7dc38b9dac37c887e618c71b83533e638809f5

formatting

view details

push time in 14 days


PR closed argoproj/argo-cd

feat: Introduce sync-option SkipDryRunOnMissingResource=true (#2873)
  • Annotation argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true can be set on any resource (see the sketch below)
  • When set, the dry run for this resource will be skipped when syncing, but only if the Group and Kind are currently not present in the cluster.
  • This is analogous to the current handling of missing CRDs, but covers the case where the CRD is part of the same sync.
  • Supports the use case where a CRD is created by a Kubernetes controller inside the cluster in response to another CRD in the same sync.
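
For illustration, a minimal sketch of how the annotation from this PR would be placed on a resource manifest; only the annotation key and value come from the PR, while the resource kind and name are hypothetical placeholders:

apiVersion: example.com/v1alpha1
kind: ExampleResource
metadata:
  name: example
  annotations:
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec: {}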

Checklist:

  • [x] Either (a) I've created an enhancement proposal and discussed it with the community, (b) this is a bug fix, or (c) this does not need to be in the release notes.
  • [ ] The title of the PR states what changed and the related issues number (used for the release note).
  • [ ] I've updated both the CLI and UI to expose my feature, or I plan to submit a second PR with them.
  • [ ] Optional. My organization is added to USERS.md.
  • [ ] I've signed the CLA and my build is green (troubleshooting builds).
+118 -2

2 comments

3 changed files

a-hat

pr closed time in 14 days

pull request comment argoproj/argo-cd

feat: Introduce sync-option SkipDryRunOnMissingResource=true (#2873)

Sorry, I used the wrong e-mail address

a-hat

comment created time in 14 days

PR opened argoproj/argo-cd

feat: Introduce sync-option SkipDryRunOnMissingResource=true (#2873)
  • Annotation argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true can be set on any resource
  • When set, the dry run for this resource will be skipped when syncing, but only if the Group and Kind are currently not present in the cluster.
  • This is analogous to the current handling of missing CRDs, but covers the case where the CRD is part of the same sync.
  • Supports the use case where a CRD is created by a Kubernetes controller inside the cluster in response to another CRD in the same sync.

Checklist:

  • [ ] Either (a) I've created an enhancement proposal and discussed it with the community, (b) this is a bug fix, or (c) this does not need to be in the release notes.
  • [ ] The title of the PR states what changed and the related issues number (used for the release note).
  • [ ] I've updated both the CLI and UI to expose my feature, or I plan to submit a second PR with them.
  • [ ] Optional. My organization is added to USERS.md.
  • [ ] I've signed the CLA and my build is green (troubleshooting builds).
+118 -2

0 comments

3 changed files

pr created time in 14 days

create branch syncier/argo-cd

branch : feature-skip-dry-run-on-missing-resource

created branch time in 14 days

push event syncier/argo-cd

khhirani

commit sha a8b6282b156b76a68c7f0c670912b6212d73456a

improvement: Surface failure reasons for Rollouts/AnalysisRuns (#3219) * Modify AnalysisRun error messages. Return hard-coded value if AnalysisRun status doesn't contain message * Create tests

view details

bergur88

commit sha e13bb795782d4448408d870816c1696441b6b58b

add docs on mapping different scopes for microsoft (#3224)

view details

Alexander Matyushentsev

commit sha d5d01eca3ed8f797b6df6e221c947e7b25c33ba3

fix: upgrade argoproj/pkg version to fix leaked sensitive information in logs (#3230)

view details

Alexander Matyushentsev

commit sha ebb06b8c897941a105c3d6412e0a4c0c0334ec0c

fix: app reconciliation fails with panic: index out of (#3233)

view details

Alexander Matyushentsev

commit sha 5cd12a3943329a5a771a80453b16b0922ce94c68

fix: 'requires pruning' is not rendered on app details page for resources that should be pruned (#3234)

view details

Alexander Matyushentsev

commit sha e2358cabc964653fa9764ba731e1eab8d4bc7b0b

refactor: use http forwarders from argoproj/pkg repository (#3235)

view details

Alexander Matyushentsev

commit sha 9d1a378ce8660e6d41861ae972295c7765320621

fix: fix broken URL regex expression (#3236)

view details

jannfis

commit sha bbb925cb63953f3797da7112165167cbaaf43bf4

Update testify to v1.5.1 (#3209)

view details

jannfis

commit sha 0378819c54ba2e6d1ec5ed573e9c146751a1bfc4

Test for nil to prevent nil pointer dereference (#3237)

view details

Conlan Cesar

commit sha 487d6647d5a7d2b85139579d93692d92e6014de3

Add missing parentheses to Webhook docs (#3239)

view details

Jesse Suen

commit sha 476b09cbbf2730927236de39b16067dd2cff8099

feat: improve api-server and controller performance (#3222) * group read comparison settings during app reconciliation * Reduce lock contention in clusterInfo::ensureSynced(). Add getRepoObj stats * Remove additional source of lock contention * Exclude the coordination.k8s.io/Lease resource Co-authored-by: Alexander Matyushentsev <amatyushentsev@gmail.com>

view details

Alexander Matyushentsev

commit sha b3f8e7a02cb9e25fffe47b1210e3ad75b2ad125e

docs: add v1.5 change log (#3244)

view details

push time in 14 days

issue comment argoproj/argo-cd

support CRDs created by controllers

To support CRDs created by a k8s controller in one commit, we would like to introduce the following enhancement to ArgoCD:

  • Add a new option to the annotation argocd.argoproj.io/sync-options
  • The option will be named SkipDryRunOnMissingResource, and can be used on a resource in the following way: argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
  • When SkipDryRunOnMissingResource is set, the dry run for this resource will be skipped when syncing, but only if the Group and Kind are currently not present in the cluster.
  • This is analogous to the current handling of missing CRDs.

Any feedback is appreciated.

torstenwalter

comment created time in 15 days

issue comment elastic/cloud-on-k8s

Create an Helm chart for this

A helm chart would really simplify the integration of the ECK operator into existing infrastructures which are already utilising helm. In our setup, the ECK operator is the only component we need to install via a shell script.

Additionally, it would provide a standardized way of customizing the installation; there is already a requirement for this in https://github.com/elastic/cloud-on-k8s/issues/2406.

@LaurentGoderre I had a look at your PR. It seems to me that this helm chart packages an Elasticsearch instance, not the operator itself?

staticdev

comment created time in 2 months

started cnabio/cnab-spec

started time in 2 months

started philc/vimium

started time in 2 months

started avivace/dotfiles

started time in 2 months

started mkdocs/mkdocs

started time in 3 months

started grafana/tanka

started time in 3 months

started heckelson/i3-and-kde-plasma

started time in 3 months

PR opened roboll/helmfile

do not pass --api-versions to "helm diff"

https://github.com/roboll/helmfile/pull/1046 introduced the apiVersions configuration option in helmfile.yaml. The API versions were passed via --api-versions to both helm template and helm diff.

However, helm diff does not implement the --api-versions parameter and does not plan to do so (see https://github.com/databus23/helm-diff/pull/175). Instead, with Helm 3.1.0, helm diff will render templates against the actual cluster, removing the need to specify API versions manually.

This PR removes passing API versions to helm diff (which caused helm diff to fail if apiVersions was specified in the helmfile). The API versions are still used when running helmfile template, which may make sense for some use cases.
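
For reference, a minimal sketch of what the apiVersions option introduced in https://github.com/roboll/helmfile/pull/1046 looks like in helmfile.yaml; the release and API group listed here are illustrative, and the exact value shape is an assumption based on the linked PR:

apiVersions:
- monitoring.coreos.com/v1

releases:
- name: example
  namespace: monitoring
  chart: stable/prometheus-cloudwatch-exporter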

+2 -23

0 comments

3 changed files

pr created time in 3 months

create branch a-hat/helmfile

branch : api-versions

created branch time in 3 months

delete branch a-hat/helmfile

delete branch : api-versions

delete time in 3 months

pull request comment databus23/helm-diff

add option to pass apiVersions to helm

@a-hat does helmfile execute diff behind the scenes during template or otherwise?

@travisghansen helmfile apply first executes helm diff to determine if changes need to be applied. helmfile template executes helm template.

a-hat

comment created time in 3 months

PR closed databus23/helm-diff

add option to pass apiVersions to helm

This PR adds a new flag --api-versions whose values are passed to helm template.

The flag can be used to explicitly specify the API Versions which are expected to be supported by the cluster, in order to avoid getting unexpected results with helm charts checking for Capabilities.APIVersions.

This feature in helm-diff will enable the same option flag in helmfile (see https://github.com/roboll/helmfile/issues/1014 and https://github.com/roboll/helmfile/pull/1046)
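
A sketch of how the proposed flag would have been invoked; this is hypothetical, since the PR was closed without being merged, and the release and chart names are only examples:

helm diff upgrade example-release stable/prometheus-cloudwatch-exporter --api-versions monitoring.coreos.com/v1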

+6 -2

7 comments

3 changed files

a-hat

pr closed time in 3 months

pull request comment databus23/helm-diff

add option to pass apiVersions to helm

@databus23 Thanks for your feedback. So if I understand correctly, with helm 3.1.0, helm diff will work correctly with Capabilities.APIVersions. This is fine for me; I will close this PR.

a-hat

comment created time in 3 months

issue comment roboll/helmfile

.Capabilities.APIVersions returns wrong results

Hey @mumoshu!

Just in case you missed it, the apiVersions configuration option in helmfile is unfortunately broken; see my comments here: https://github.com/roboll/helmfile/pull/1046.

I am waiting for https://github.com/databus23/helm-diff/pull/175 to be merged, which will fix it.

a-hat

comment created time in 3 months

pull request comment databus23/helm-diff

add option to pass apiVersions to helm

Hey @databus23! Would you mind having a look at this PR? The corresponding feature in https://github.com/roboll/helmfile depends on this; it would be highly appreciated if this could be merged. Thanks!

a-hat

comment created time in 3 months

issue comment starship/starship

Error building with AUR

Hey @chipbuster I am using AWS on this computer, but AWS_REGION is not set in my shell. I also tried to build v0.32.2, but get the same error message :(

a-hat

comment created time in 3 months

pull request comment roboll/helmfile

feat: Option to pass apiVersions to `helm diff` and `helm template`

Here is the PR in helm-diff

https://github.com/databus23/helm-diff/pull/175

a-hat

comment created time in 3 months

PR opened databus23/helm-diff

add option to pass apiVersions to helm

This PR adds a new flag --api-versions whose values are passed to helm template.

The flag can be used to explicitly specify the API Versions which are expected to be supported by the cluster, in order to avoid getting unexpected results with helm charts checking for Capabilities.APIVersions.

This feature in helm-diff will enable the same option flag in helmfile (see https://github.com/roboll/helmfile/issues/1014 and https://github.com/roboll/helmfile/pull/1046)

+6 -2

0 comments

3 changed files

pr created time in 3 months

create branch a-hat/helm-diff

branch : api-versions

created branch time in 3 months

fork a-hat/helm-diff

A helm plugin that shows a diff explaining what a helm upgrade would change

fork in 3 months

pull request comment roboll/helmfile

feat: Option to pass apiVersions to `helm diff` and `helm template`

Hi @mumoshu

Apparently helm-diff does not support the --api-versions flag. I only tested with helm template, sorry for not catching that.

I will try to submit a PR to helm-diff for supporting the flag.

Currently helmfile apply or helmfile diff fail when apiVersions is set in helmfile. Should we revert this code until helm-diff supports the flag?

a-hat

comment created time in 3 months

issue comment starship/starship

Error building with AUR

Hey @chipbuster

Just tried again with 0.32.1-1, same issue

a-hat

comment created time in 3 months

pull request comment roboll/helmfile

add option to pass apiVersions to helm diff and helm template

Hi @mumoshu

Thanks for your feedback. Yes, this makes sense. I moved the apiVersions to the root of helmfile.

a-hat

comment created time in 3 months

push event a-hat/helmfile

a-hat

commit sha e1f87cae506ec1cf7a61b794dbdcb9636e5c51a6

move apiVersions yaml element to root level

view details

push time in 3 months

PR opened roboll/helmfile

add option to pass apiVersions to helm diff and helm template

resolves #1014

This makes it possible to pass the API Capabilities to helmfile when executing a task that does not render against an actual cluster (diff, template, apply).

The list of API versions is currently passed via helmDefaults. Maybe it would also make sense to allow overriding it in specific environments, but this is not implemented.

This is my first contribution to a Go project; any feedback will be appreciated :)

+98 -0

0 comments

3 changed files

pr created time in 3 months

push event a-hat/helmfile

a-hat

commit sha 3dd5e60415a4cf8649a7404b3f8841463197f794

add option to pass apiVersions to helm diff and helm template resolves #1014

view details

push time in 3 months

create branch a-hat/helmfile

branch : api-versions

created branch time in 3 months

push event a-hat/helmfile

a-hat

commit sha 6dfc9be0d1c5559d23cde458ed53f8f14a2a4e9f

doc: Update documentation about layering release values (#837) Closes #836

view details

KUOKA Yusuke

commit sha 4e4f1bee59873a94ce92f61ba8214c1f69b6c7a0

feat: Experimental Helm v3 mode (#841) Set `HELMFILE_HELM3=1` and run `helmfile` like `HELMFILE_HELM3=1 helmfile ...`. When `HELMFILE_HELM3` is set, `test`, `template`, `delete`, `destroy` behave differently so that it works with Helm 3. Note that `helmfile diff` doesn't work as `helm-diff` called under the hood doesn't support Helm v3 yet. Ref #668

view details

KUOKA Yusuke

commit sha 4bbd09ccb25757f87d3780e93121df448f60fc2c

fix(cmd/deps): make `helmfile deps` to work w/ helm 3.0.0-beta.3 (#842) Ref https://github.com/roboll/helmfile/issues/668#issuecomment-529054165

view details

Shane Starcher

commit sha 5488198d6d2b27c0f0ee995b788e9e6e571be2fa

fix: allow empty pattern matching and move on (#827) Ref #778

view details

matthias-kloeckner

commit sha 267e0fa1fe29350f8cbc93c5ff3625ab7518fe91

adding info about kloeckner-i to users list (#843)

view details

刘相轩

commit sha cbf5b8b1e7b4c6de1eed70af5a12955c91638e73

Fix helm2 lock file does not get updated (#847) Ref: https://github.com/helm/helm/issues/2731

view details

KUOKA Yusuke

commit sha fb2041555e98a77aca45793a1113373864e14b58

feat(diff,apply): --no-color for removing color from output (#848) Resolves #788

view details

KUOKA Yusuke

commit sha 94a6fcfb9f048ac76b3b3791dc7247dbcb6bd324

feat(diff,apply): --context=N for limiting diff context (#849) Resolves #787

view details

KUOKA Yusuke

commit sha f79db2ec8dc598b453288999c5d0fbc1f1848e8f

feat(diff,apply,lint,sync,template): `--set k=v` for setting adhoc chart values (#850) Resolves #840

view details

KUOKA Yusuke

commit sha 9d851cda3be9ee077afa6859f58410c761babf2c

feat: `--skip-repos` for `helmfile deps` (#851) Resolves #661

view details

eddycharly

commit sha fd0133e10a44ae05de25001b371fb7bc557e1299

Update documentation and tests for .Values (#839) Resolves #816

view details

art kon

commit sha 06b0c99a0b2eb3b9aa90f6dd46668a5bcf8ddf10

Fix recursion for helmfiles pulled from git (#854)

view details

KUOKA Yusuke

commit sha ef63a055139b9d07c57983247502fbb965d82fd3

fix(helm3): delete/destroy/apply/sync unable to detect releases to be deleted (#857) Fixes #853

view details

Theo Meneau

commit sha 216c228c0beb138909719faebc1e1287df0c5f42

feat: `helm repo add --ca-file` via repositories definition (#856) Resolves #855

view details

Mike Splain

commit sha b762ab0b780e830f5739a21823c02e274d48f54f

Fix delete/destroy (#859)

view details

KUOKA Yusuke

commit sha 2e98e907b0af2c61cd03946547d2e954e3e53df1

fix: invalid duration passed to helm 3 upgrade (#864) Fixes #863

view details

art kon

commit sha ba2e52261741008d6da3ed3e6b572efb07b9f2e4

doc: Added some detail on how to use override values in helmfiles section (#861) * Added some detail on how to use override values in helmfiles section Co-Authored-By: KUOKA Yusuke <ykuoka@gmail.com>

view details

Rajat Goyal

commit sha 10a9a16f3d81ce2359a8fb3866d2797d68dff1a6

Fix: Change use of `tmpl` to `gotmpl` in README (#870) This adds clarity in docs by: - Changing references to the supported file extension - Previously, using `values.tmpl` in helmfile.yaml would throw errors. `values.gotmpl` gives expected output

view details

Aaron Batilo

commit sha 921f69bae7b441d698d5aaf3af4086fb5d8eca1f

Warn users when no repositories are defined (#879) At the moment, if you have a helmfile.yaml like so: ``` releases: - name: metrics-server namespace: kube-system chart: stable/metrics-server ``` If you try to run `helmfile deps`, you will get a 0 exit code and no log output at whatsoever, signaling that there weren't any problems, but no lock file will get created. For example: ``` root@316073d4a104:/# helmfile deps root@316073d4a104:/# ``` This behavior doesn't appear to be documented and is unintuitive to the user. This change adds a warning output for this same use case: ``` root@316073d4a104:/# helmfile deps There are no repositories defined in your helmfile.yaml. This means helmfile cannot update your dependencies or create a lock file. See https://github.com/roboll/helmfile/issues/878 for more information. root@316073d4a104:/# ``` Fixes #878

view details

bitsofinfo

commit sha cf9bbc7603c27dde59e6839b28204dc84fdf528f

upgrade sprig 2.22.0 #883 (#884)

view details

push time in 3 months

issue opened grafana/loki

Missing permissions for DynamoDB

https://github.com/grafana/loki/blob/master/docs/operations/storage/README.md lists the IAM permissions required for the DynamoDB backend.

I added all those permissions, but Loki fails to operate unless I grant these additional permissions:

dynamodb:CreateTable on arn:aws:dynamodb:<aws_region>:<aws_account_id>:table/<prefix>*
dynamodb:ListTables on arn:aws:dynamodb:<aws_region>:<aws_account_id>:table/*

CreateTable makes sense to me, and should probably be added to the documentation.

I am not sure why Loki needs ListTables without the table prefix. In the documentation this permission is listed as a requirement for auto-scaling; however, I am using the Loki helm chart, and auto-scaling seems to be disabled by default.
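
For illustration, a sketch of the two additional IAM statements described above as they would appear in a policy document; the region, account ID and table prefix are placeholders, as in the text:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dynamodb:CreateTable",
      "Resource": "arn:aws:dynamodb:<aws_region>:<aws_account_id>:table/<prefix>*"
    },
    {
      "Effect": "Allow",
      "Action": "dynamodb:ListTables",
      "Resource": "arn:aws:dynamodb:<aws_region>:<aws_account_id>:table/*"
    }
  ]
}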

created time in 3 months

started grafana/loki

started time in 3 months

issue opened starship/starship

Error building with AUR

Bug Report

Current Behavior

I tried to install starship via AUR, but it does not build.

The following error messages are shown:

---- aws::no_region_set stdout ----
thread 'aws::no_region_set' panicked at 'assertion failed: `(left == right)`
  left: `""`,
 right: `"on \u{1b}[1;33m☁\u{fe0f}  eu-central-1\u{1b}[0m "`', tests/testsuite/aws.rs:16:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.

---- aws::region_not_set_with_display_region stdout ----
thread 'aws::region_not_set_with_display_region' panicked at 'assertion failed: `(left == right)`
  left: `""`,
 right: `"on \u{1b}[1;33m☁\u{fe0f}  eu-central-1\u{1b}[0m "`', tests/testsuite/aws.rs:232:5


failures:
    aws::no_region_set
    aws::region_not_set_with_display_region

test result: FAILED. 87 passed; 2 failed; 85 ignored; 0 measured; 0 filtered out

Environment

  • Starship version: v.0.30.1

created time in 3 months

started mumoshu/terraform-provider-helmfile

started time in 4 months

issue comment roboll/helmfile

.Capabilities.APIVersions returns wrong results

Hi @mumoshu,

So the flag api-versions already exists, thanks for the pointer.

I agree that manually reading the Capabilities from the API server would lead to confusing behaviour in your use case. Then the solution you proposed seems to be the best option :+1:

a-hat

comment created time in 4 months

issue comment roboll/helmfile

.Capabilities.APIVersions returns wrong results

@mumoshu There is an issue in the helm repo about adding an option to provide the Capabilities to helm template; however, it seems to be stale.

If it is possible to implement this feature in helmfile / helm-diff, this would be a good solution. Maybe it would be possible to add an option to retrieve the actual Capabilities and pass the result to helm template; this would be even better because it avoids having to maintain a list of Capabilities in helmfile.

a-hat

comment created time in 4 months

issue comment roboll/helmfile

.Capabilities.APIVersions returns wrong results

I just stumbled upon this again. My helm release is currently installed correctly. But when I run helmfile apply, helmfile says it will apply some changes, based on the incorrect output of helm diff. Those changes are actually not applied, because helm upgrade executes the Capabilities check correctly. This behaviour is really confusing.

a-hat

comment created time in 4 months

issue comment roboll/helmfile

.Capabilities.APIVersions returns wrong results

Upon further investigation, helm template seems to be the culprit.

The documentation of this command says:

Any values that would normally be looked up or retrieved in-cluster will be
faked locally. Additionally, none of the server-side testing of chart validity
(e.g. whether an API is supported) is done.

In my case, I was running helmfile apply, which (if I understand correctly) first executes helm diff to detect if there are any changes, which probably is based on helm template. In the chart I was installing, there is a check on the API Versions, and this check did not return the expected results, so no changes were detected by helmfile. If I run helmfile sync, the chart gets installed with the correctly rendered templates.

Any ideas how the user experience could be improved for this use case?

a-hat

comment created time in 4 months

pull request comment helm/charts

[stable/prometheus-cloudwatch-exporter] remove api caps check for ServiceMonitor creation

Thanks for your feedback @asherf

I am using helm 3.0.0 and prometheus-operator 8.2.2 (appVersion 0.34.0)

My further investigation revealed that Capabilities.APIVersions works correctly with helm in my cluster. I am using helmfile for release management, which seems to be the cause of the problem. This does not seem to be an issue with this chart, so I created https://github.com/roboll/helmfile/issues/1014.

For your consideration: Other charts with a ServiceMonitor do not check for the Capabilities, e.g. stable/traefik or stable/postgresql.

Would it make sense to remove the check, so the decision to install a ServiceMonitor is made by the user when setting serviceMonitor.enabled?

a-hat

comment created time in 4 months

issue opened roboll/helmfile

.Capabilities.APIVersions returns wrong results

The helm built-in object Capabilities.APIVersions returns incorrect results when running with helmfile in my cluster.

For demonstration purposes, I have the following test.yaml file, which prints the results from APIVersions using incubator/raw:

templates:
- |
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: test
    labels:
      test: |
        {{ .Capabilities.APIVersions }}

Here is a helmfile.yaml using this template:

repositories:
- name: incubator
  url: https://kubernetes-charts-incubator.storage.googleapis.com
  
releases:
- name: test
  namespace: monitoring
  chart: incubator/raw
  version: 0.2.3
  values:
  - test.yaml

When I run helm install test incubator/raw -f test.yaml --dry-run I get the correct list of API Versions available in my cluster. helmfile template outputs a different set of Versions with many entries missing.

helmfile version v0.94.1
# kubectl version (AWS EKS)
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.8-eks-b8860f", GitCommit:"b8860f6c40640897e52c143f1b9f011a503d6e46", GitTreeState:"clean", BuildDate:"2019-11-25T00:55:38Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
# helm version
version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"}

created time in 4 months

PR opened helm/charts

[stable/prometheus-cloudwatch-exporter] remove api caps check for ServiceMonitor creation

Is this a new chart

No

What this PR does / why we need it:

The ServiceMonitor does not get created in my cluster, even though the API version monitoring.coreos.com/v1 exists. I still have to investigate the reason why the check with .Capabilities.APIVersions.Has does not work.

However, I think this check is redundant; there is already a Helm value serviceMonitor.enabled which must be enabled by the user explicitly. In case the API version is not supported in the cluster, I would rather see an error message when installing the chart than have the ServiceMonitor silently skipped without any warning.
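
For context, the guard being removed is the usual Helm capabilities check around the ServiceMonitor template; a simplified sketch, not copied from the chart:

{{- if .Values.serviceMonitor.enabled }}
{{- if .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" }}
# rendered only when the cluster reports the monitoring.coreos.com/v1 API
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-servicemonitor   # placeholder name
{{- end }}
{{- end }}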

Special notes for your reviewer:

Checklist

[Place an '[x]' (no spaces) in all applicable fields. Please remove unrelated fields.]

  • [x] DCO signed
  • [x] Chart Version bumped
  • [x] Variables are documented in the README.md
  • [x] Title of the PR starts with chart name (e.g. [stable/mychartname])
+2 -2

0 comments

2 changed files

pr created time in 4 months

create branch a-hat/charts

branch : cloudwatch

created branch time in 4 months

push event a-hat/charts

a-hat

commit sha c32a5482e4ca4b25b47ba2ae52dfedea87026ded

[stable/prometheus-cloudwatch-exporter] remove api caps check for ServiceMonitor creation Signed-off-by: a-hat <github@andreaskappler.de>

view details

push time in 4 months

push event a-hat/charts

a-hat

commit sha 9dd5d1085fa973a5e18792327a912635b2b77b53

[stable/prometheus-cloudwatch-exporter] remove api caps check for ServiceMonitor creation Signed-off-by: a-hat <github@andreaskappler.de>

view details

push time in 4 months

push event a-hat/charts

David Dieulivol

commit sha 40db32aec36ea7e43db47c26955867c3dac8cb3d

[stable/prometheus-operator] Fix fullnameOverride variable in README (#19077) Signed-off-by: David Dieulivol <david.dieulivol@welldev.io>

view details

Ilya Dmitrichenko

commit sha dbab31584038a984cb6d17596f1825744f595094

[stable/weave-cloud] enable error reporting (#18987) Signed-off-by: Ilya Dmitrichenko <errordeveloper@gmail.com>

view details

Simon Rüegg

commit sha 3c9e29d6139bcb00fd2caed1330583fa032a034c

[sealed-secrets ] Upgrade to v0.9.5 (#18980) * [stable/sealed-secrets] Add srueg as chart reviewer Signed-off-by: Simon Rüegg <simon.ruegg@vshn.ch> * [stable/sealed-secrets] Bump sealed-secrets to v0.9.5 Signed-off-by: Simon Rüegg <simon.ruegg@vshn.ch>

view details

Bitnami Bot

commit sha 859be777301c77a15d4fc3b5e64e45f474c2461d

[stable/phabricator] Release 9.0.2 updating components versions (#19083) Signed-off-by: Bitnami Containers <containers@bitnami.com>

view details

Carlos Rodríguez Hernández

commit sha d9aada4ff3dea858425ac28a565dea0b5bc12128

[stable/mongodb] Change pullPolicy because immutable tag is used in exporter (#19082) Signed-off-by: Carlos Rodriguez Hernandez <crhernandez@bitnami.com>

view details

Carlos Rodríguez Hernández

commit sha ea9cfbd08e942913dbca4ea106919e24c49242fd

[stable/redis] Unify values and values-production (#19086) * Add changes implemented in #18179 also in production Signed-off-by: Carlos Rodriguez Hernandez <crhernandez@bitnami.com> * Unify values and values-production Signed-off-by: Carlos Rodriguez Hernandez <crhernandez@bitnami.com> * Bump chart version Signed-off-by: Carlos Rodriguez Hernandez <crhernandez@bitnami.com>

view details

ATolkachev

commit sha d064b1c7cb9c15810fd5368a0b93d6cb414a5216

Add nodeSelector parameter to documentation (#19088) Signed-off-by: Alexander Tolkachev <alexander.e.tolkachev@gmail.com>

view details

Andrey Voronkov

commit sha 71d3a6b700e5be2d76808efeab01c9436d539ae5

[stable/sentry] Fix `.Values.web.podLabels` usage. (#19072) Signed-off-by: Andrey Voronkov <avoronkov@enapter.com>

view details

branttaylor

commit sha 7b0750d0a87e0878c1de13c7a2f0e251a5d48b4e

[stable/velero] add priority class helper (#18684) * add priority class helper Signed-off-by: Brant Taylor <brant.taylor@lifeway.com> * add priority class helper Signed-off-by: Brant Taylor <brant.taylor@lifeway.com> * bump chart Signed-off-by: Brant Taylor <brant.taylor@lifeway.com> * bump chart version Signed-off-by: Brant Taylor <brant.taylor@lifeway.com>

view details

Dave Slinn

commit sha baedcbd106a10ed9ffee1be4697bafe861f20bbb

[stable/prometheus-operator] Rename relabeling target_label to targetLabel (#19089) * Rename relabeling target_label to targetLabel To match changes to coreos/kube-operator as noted here: https://github.com/coreos/prometheus-operator/issues/2503#issuecomment-478520117 Signed-off-by: Dave Slinn <dslinn@gms.ca> * bump version number Signed-off-by: Dave Slinn <dslinn@gms.ca>

view details

Zsolt Szeberenyi

commit sha afe9e3c99ad8782e930b77cf7d684d05872fc482

Add the acme http challenge entrypoint to the values (#17149) Signed-off-by: Zsolt Szeberenyi <zsolt@szeberenyi.com>

view details

Bitnami Bot

commit sha e91450f60bb5f437e5bb6b924d54a00a2c820fe6

[stable/phpmyadmin] Release 4.2.4 updating components versions (#19087) Signed-off-by: Bitnami Containers <containers@bitnami.com>

view details

Naoki Oketani

commit sha d874c5022006529c76fb0bc9bfe0d8f94ed91401

[stable/falco] migrate API versions from deprecated, removed versions (#17339) Signed-off-by: Naoki Oketani <okepy.naoki@gmail.com>

view details

Arief Rahmansyah

commit sha b5c738a13dd98b15ac05f1ad28e612fb0a4a1a05

[stable/pgadmin] Fix default value for pgAdmin username (#18096) * Fix default value for pgAdmin username Fix default value for pgAdmin username Signed-off-by: Arief Rahmansyah <ariefrahmansyah@hotmail.com> * Bump chart version Bump chart version Signed-off-by: Arief Rahmansyah <ariefrahmansyah@hotmail.com> * Bump version Signed-off-by: Arief Rahmansyah <ariefrahmansyah@hotmail.com>

view details

Andrey Izotikov

commit sha cf188f69a24702d038779f48e2b9ea5a63f725a7

[stable/redis-ha] Fix double scraping redis pods by serviceMonitor (#18957) Add third label to select only one redis service (exclude redis-announce-* targets from scraping) Signed-off-by: Andrey Izotikov <andrsp@gmail.com>

view details

Daniel Nordstrøm Carlsen

commit sha e5dee616c93db78555627badf0aef415c9848e9c

Fixed master using worker resources definition (#19076) Signed-off-by: Daniel Nordstrøm Carlsen <daniel.n.carlsen@gmail.com>

view details

Victor-Emmanuel Fourneau

commit sha b30b2fc25f6d73b7009e817eeb118f68d60b2d61

Adding path option to vault ingress (#19085) Signed-off-by: Victor-Emmanuel Fourneau <ve.fourneau@linkbynet.com>

view details

lebenitza

commit sha d2066ea0fe456e23451a78e49dbbd19d9f6c8791

Make helm hook job object have the same command as the CronJob (#19090) Signed-off-by: Mihai Anei <mihai.anei@gmail.com>

view details

Juan Ariza Toledano

commit sha 8617f4fb1298c9179af53c70014f29758b0f4537

[stable/rabbitmq] Use k8s_domain on NOTES.txt (#19104) Signed-off-by: juan131 <juan@bitnami.com>

view details

shangwang

commit sha 3907cebc7042f452506a7471f912d6d0c8380e51

Add IPC_LOCK to system-probe container capabilities (#18599) Signed-off-by: Shang Wang <shang.wang@datadoghq.com>

view details

push time in 4 months

pull request comment helm/charts

[stable/traefik] config option for endpoint of prometheus ServiceMonitor

/unassign bismarck /unassign vsliouniaev

a-hat

comment created time in 4 months

pull request comment helm/charts

[stable/prometheus-operator] clarify usage of additionalScrapeConfigsExternal

/unassign bismarck /unassign vsliouniaev

natebrennand

comment created time in 4 months

pull request comment helm/charts

[stable/traefik] config option for endpoint of prometheus ServiceMonitor

/assign bismarck /assign vsliouniaev

a-hat

comment created time in 4 months

pull request comment helm/charts

[stable/traefik] config option for endpoint of prometheus ServiceMonitor

/assign @ldez

a-hat

comment created time in 4 months

push event a-hat/charts

a-hat

commit sha 667797694f39d7a5dba75e059293747df6a1a80c

[stable/traefik] config option for endpoint of prometheus ServiceMonitor Signed-off-by: a-hat <github@andreaskappler.de>

view details

push time in 4 months

push event a-hat/charts

a-hat

commit sha e4e5b88dcd3374667a32ec603c3be8fb521e9d45

[stable/traefik] config option for endpoint of prometheus ServiceMonitor Signed-off-by: a-hat <github@andreaskappler.de>

view details

push time in 4 months

push event a-hat/charts

a-hat

commit sha 9857dd43fa76289d58e4d53c183da36b91e66764

[stable/traefik] config option for endpoint of prometheus ServiceMonitor Signed-off-by: a-hat <github@andreaskappler.de>

view details

push time in 4 months

push event a-hat/charts

a-hat

commit sha 3a39b693257597afd9761136f03d51aded661397

[stable/traefik] config option for endpoint of prometheus ServiceMonitor Signed-off-by: a-hat <github@andreaskappler.de>

view details

push time in 4 months

PR opened helm/charts

[stable/traefik] config option for endpoint of prometheus ServiceMonitor

What this PR does / why we need it:

Enable additional configuration of the prometheus ServiceMonitor via helm values.

Example use case: re-labeling metrics generated by the Traefik Prometheus exporter by specifying a metricRelabelings configuration, as described in the ServiceMonitor CRD.
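
A rough sketch of what such a values override could look like; the key path under the chart's values is an assumption, while the metricRelabelings fields follow the ServiceMonitor endpoint spec:

metrics:
  serviceMonitor:
    metricRelabelings:
    - sourceLabels: [__name__]
      regex: traefik_backend_requests_total
      action: drop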

Checklist

  • [x] DCO signed
  • [ ] Chart Version bumped
  • [ ] Variables are documented in the README.md
  • [x] Title of the PR starts with chart name (e.g. [stable/mychartname])
+5 -0

0 comments

2 changed files

pr created time in 4 months

push event a-hat/charts

a-hat

commit sha ba78047c6d646f8fc6826d1ab169b0513cf2a180

[stable/traefik] config option for endpoint of prometheus ServiceMonitor Signed-off-by: Andreas Kappler <github@andreaskappler.de>

view details

push time in 4 months

push event a-hat/charts

a-hat

commit sha 1ab0a616741aa427fedbd80e004d63c3d76c372d

[stable/traefik] config option for endpoint of prometheus ServiceMonitor Signed-off-by: Andreas Kappler <github@andreaskappler.de>

view details

push time in 4 months

push event a-hat/charts

a-hat

commit sha 24ada001b44b74c72fb3aaf0d4e3009997aee448

[stable/traefik] config option for endpoint of prometheus ServiceMonitor Signed-off-by: Andreas Kappler <github@andreaskappler.de>

view details

push time in 4 months

push event a-hat/charts

a-hat

commit sha 76c11f5dd32afedb2a34ea28cb8f2866565816b3

[stable/traefik] config option for endpoint of prometheus ServiceMonitor Signed-off-by: Andreas Kappler <andreas.kappler@proxora.com>

view details

push time in 4 months

push event a-hat/charts

Andreas Kappler

commit sha 1d970cd7e6341f88fe696c4130d5ff0da50ff256

[stable/traefik] config option for endpoint of prometheus ServiceMonitor Signed-off-by: Andreas Kappler <andreas.kappler@proxora.com>

view details

push time in 4 months

push event a-hat/charts

andik

commit sha 326813e61f8c3a688a786a0eaa8c07d1b1f40096

[stable/traefik] config option for endpoint of prometheus ServiceMonitor

view details

push time in 4 months

fork a-hat/charts

Curated applications for Kubernetes

fork in 4 months

started drone/drone

started time in 4 months

started roboll/helmfile

started time in 4 months

issue closed elastic/cloud-on-k8s

Elasticsearch stuck in ApplyingChanges

Bug Report

What did you do? Added a secureSettings section to an existing Elasticsearch resource, as described in the docs.
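
For reference, a rough sketch of what such a secureSettings section can look like in the Elasticsearch spec; the secret name is a placeholder and the exact field layout may differ between ECK versions:

spec:
  secureSettings:
    secretName: my-secure-settings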

What did you expect to see? The Elasticsearch pods should be configured and restarted, and the keystore should contain the new settings.

What did you see instead? Under which circumstances?

Elasticsearch resource is stuck in phase ApplyingChanges.

Environment

  • ECK version: 1.0.0-beta1

  • Kubernetes information: EKS 1.14

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T17:01:15Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.6-eks-5047ed", GitCommit:"5047edce664593832e9b889e447ac75ab104f527", GitTreeState:"clean", BuildDate:"2019-08-21T22:32:40Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
  • Logs: I found no relevant entries in the operator logs.

closed time in 5 months

a-hat

issue comment elastic/cloud-on-k8s

Elasticsearch stuck in ApplyingChanges

Yes, makes perfect sense.

Does the operator output a message in the log when it cannot apply the changes to the cluster? If it does, then I missed it. This information would have been very useful.

a-hat

comment created time in 5 months

issue comment elastic/cloud-on-k8s

Elasticsearch stuck in ApplyingChanges

Thanks for your feedback @sebgl

The cluster was in yellow health state, because number_of_replicas was set to 1, and it only had a single node. Could this be the reason why the upgrade did not take place?

Unfortunately I do not have access to the yaml and the logs right now.

a-hat

comment created time in 5 months

issue comment elastic/cloud-on-k8s

Elasticsearch stuck in ApplyingChanges

Apparently the cause of this problem is that the ES cluster had only one node. I suppose the operator always wants to keep one node running to guarantee the availability of the cluster, which is fine. But with version 0.9 of the operator, the behaviour was different if I remember correctly. The operator spawned another node with the new settings, and finally terminated the old node.

I suppose this is due to the implementation with StatefulSets. Will this behaviour change, or is it intended?

a-hat

comment created time in 5 months

issue opened elastic/cloud-on-k8s

Elasticsearch stuck in ApplyingChanges

Bug Report

What did you do? Added a secureSettings section to an existing Elasticsearch resource, as described in the docs.

What did you expect to see? The Elasticsearch pods should be configured and restarted, and the keystore should contain the new settings.

What did you see instead? Under which circumstances?

Elasticsearch resource is stuck in phase ApplyingChanges.

Environment

  • ECK version: 1.0.0-beta1

  • Kubernetes information: EKS 1.14

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T17:01:15Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.6-eks-5047ed", GitCommit:"5047edce664593832e9b889e447ac75ab104f527", GitTreeState:"clean", BuildDate:"2019-08-21T22:32:40Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
  • Logs: I found no relevant entries in the operator logs.

created time in 5 months

issue closed roboll/helmfile

Accessing nested maps in .Values from helmfile.yaml

Rendering the following template fails when trying to resolve .Values.versions.traefik with the error message executing "stringTemplate" at <.Values.versions.traefik>: map has no entry for key "versions".

Why is it not possible to access the traefik key in the .Values.versions map? Without the nested map it works fine (e.g. .Values.traefikVersion)

releases:
- name: traefik
  namespace: kube-system
  chart: stable/traefik
  version: {{ .Values.versions.traefik }}

values:
  - versions:
      traefik: 1.7

closed time in 5 months

a-hat

issue comment roboll/helmfile

Accessing nested maps in .Values from helmfile.yaml

@mumoshu I see, thank you for the explanation and your fast response.

a-hat

comment created time in 5 months

issue opened roboll/helmfile

Accessing nested maps in .Values from helmfile.yaml

Rendering the following template fails when trying to resolve .Values.versions.traefik with the error message executing "stringTemplate" at <.Values.versions.traefik>: map has no entry for key "versions".

Why is it not possible to access the traefik key in the .Values.versions map? Without the nested map it works fine (e.g. .Values.traefikVersion)

releases:
- name: traefik
  namespace: kube-system
  chart: stable/traefik
  version: {{ .Values.versions.traefik }}

values:
  - versions:
      traefik: 1.7

created time in 5 months

started operator-framework/operator-sdk

started time in 6 months

issue comment coreos/prometheus-operator

Outdated version on operatorhub.io

Hey @LiliC, thanks so much for your answer! :) Good to hear that there is a PR for the new version.

But even if the PR gets merged, the outdated version has been on the operatorhub for over 6 months. So I was hoping for a statement on how often the prometheus-operator team plans to update operatorhub. We would like to use the OLM to stay on top of the operator updates, and this relies on frequent updates from the operator maintainers.

Sorry if this is not the right place to ask!

a-hat

comment created time in 6 months

issue opened coreos/prometheus-operator

Outdated version on operatorhub.io

The current version of the prometheus-operator on operatorhub.io is from January (v0.27); see the operatorhub GitHub repo.

Is the Operator Lifecycle Manager something you plan to support and update with new versions in the future, or should I rather install the prometheus-operator manually?

created time in 6 months

started fluent/fluentd

started time in 6 months

issue comment elastic/cloud-on-k8s

Nodes cannot join (ES 6.7.1)

Thanks. Didn't see this mentioned in the documentation.

a-hat

comment created time in 7 months

issue closed elastic/cloud-on-k8s

Nodes cannot join (ES 6.7.1)

Bug Report

I applied this manifest:

apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
kind: Elasticsearch
metadata:
  name: test
spec:
  version: 6.7.1
  nodes:
  - nodeCount: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: -Xms1024m -Xmx1024m
          resources:
            requests:
              memory: 2048Mi
            limits:
              memory: 2048Mi
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 40Gi
        storageClassName: gp2-encrypted

Elasticsearch comes up and is operational with one node. Then I edited the Elasticsearch resource and changed nodeCount from 1 to 2. The new node is not able to join the cluster; the new node's pod does not change into the Ready state.

I also tried version 6.8.3 of ES, joining new nodes works fine there.

Master Log:

[2019-09-13T12:19:01,936][INFO ][o.e.c.s.MasterService    ] [test-es-l6grv2mzvn] zen-disco-node-failed({test-es-7lzxs4hwps}{InnHL6TuqePyUuWzcgdw}{ezPXeYGsSUK7nldew5l_Ig}{10.0.0.121}{10.0.0.121:9300}{ml.machine_memory=2147483648, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}), reason(failed to ping, tried [3] times, each with maximum [30s] timeout)[{test-es-7lzxs4hwps}{InnHL6TuqePyUuWzcgdw}{ezPXeYGsSUK7nldew5l_Ig}{10.0.0.121}{10.0.0.121:9300}{ml.machine_memory=2147483648, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true} failed to ping, tried [3] times, each with maximum [30s] timeout], reason: removed {{test-es-7lzxs4hwps}{Inn__HL6TuqePyUuWzcgdw}{ezPXeYGsSUK7nldew5l_Ig}{10.0.0.121}{10.0.0.121:9300}{ml.machine_memory=2147483648, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true},}

New Node Log:

2019-09-13T12:22:42,570][WARN ][o.e.t.OutboundHandler    ] [test-es-7lzxs4hwps] send message failed [channel: Netty4TcpChannel{localAddress=/10.0.0.121:9300, remoteAddress=/10.0.0.243:33796}]
javax.net.ssl.SSLException: SSLEngine closed already
	at io.netty.handler.ssl.SslHandler.wrap(...)(Unknown Source) ~[?:?]

Environment

  • ECK version:

0.9.0

  • Kubernetes information:

AWS EKS

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.6-eks-5047ed", GitCommit:"5047edce664593832e9b889e447ac75ab104f527", GitTreeState:"clean", BuildDate:"2019-08-21T22:32:40Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

closed time in 7 months

a-hat

issue opened elastic/cloud-on-k8s

Nodes cannot join (ES 6.7.1)

Bug Report

I applied this manifest:

apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
kind: Elasticsearch
metadata:
  name: test
spec:
  version: 6.7.1
  nodes:
  - nodeCount: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: -Xms1024m -Xmx1024m
          resources:
            requests:
              memory: 2048Mi
            limits:
              memory: 2048Mi
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 40Gi
        storageClassName: gp2-encrypted

Elasticsearch comes up and is operational with one node. Then I edited the Elasticsearch resource and changed nodeCount from 1 to 2. The new node is not able to join the cluster; the new node's pod does not change into the Ready state.

I also tried version 6.8.3 of ES, joining new nodes works fine there.

Master Log:

[2019-09-13T12:19:01,936][INFO ][o.e.c.s.MasterService    ] [test-es-l6grv2mzvn] zen-disco-node-failed({test-es-7lzxs4hwps}{InnHL6TuqePyUuWzcgdw}{ezPXeYGsSUK7nldew5l_Ig}{10.0.0.121}{10.0.0.121:9300}{ml.machine_memory=2147483648, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}), reason(failed to ping, tried [3] times, each with maximum [30s] timeout)[{test-es-7lzxs4hwps}{InnHL6TuqePyUuWzcgdw}{ezPXeYGsSUK7nldew5l_Ig}{10.0.0.121}{10.0.0.121:9300}{ml.machine_memory=2147483648, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true} failed to ping, tried [3] times, each with maximum [30s] timeout], reason: removed {{test-es-7lzxs4hwps}{Inn__HL6TuqePyUuWzcgdw}{ezPXeYGsSUK7nldew5l_Ig}{10.0.0.121}{10.0.0.121:9300}{ml.machine_memory=2147483648, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true},}

New Node Log:

2019-09-13T12:22:42,570][WARN ][o.e.t.OutboundHandler    ] [test-es-7lzxs4hwps] send message failed [channel: Netty4TcpChannel{localAddress=/10.0.0.121:9300, remoteAddress=/10.0.0.243:33796}]
javax.net.ssl.SSLException: SSLEngine closed already
	at io.netty.handler.ssl.SslHandler.wrap(...)(Unknown Source) ~[?:?]

Environment

  • ECK version:

0.9.0

  • Kubernetes information:

AWS EKS

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.6-eks-5047ed", GitCommit:"5047edce664593832e9b889e447ac75ab104f527", GitTreeState:"clean", BuildDate:"2019-08-21T22:32:40Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

created time in 7 months

started cruise-automation/isopod

started time in 7 months

issue closed evanlucas/fish-kubectl-completions

Speed of completions

Hi @evanlucas! First, thanks for your work on this great project!

I am experiencing some issues with the speed of the completions. It takes about 2-3 times longer than bash or zsh on the same Kubernetes cluster. This makes it about 7-12 seconds per completion, which unfortunately makes it quite unusable.

I am using the latest master of this repository.

Do you have any ideas what could be the issue, or how I could debug this?

closed time in 7 months

a-hat

issue comment evanlucas/fish-kubectl-completions

Speed of completions

Hey @evanlucas, thanks for your prompt answer!

Setting FISH_KUBECTL_COMPLETION_COMPLETE_CRDS does help; now the completion speed is comparable to zsh. Thank you!

As a side note, I had already tried to set this variable before, but I was not aware that it apparently does not affect an already running shell. :)
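
For anyone else hitting this, a rough sketch of how the variable can be set in fish before the completions are loaded; the value shown is an assumption, so check the project's README for the exact semantics:

set -gx FISH_KUBECTL_COMPLETION_COMPLETE_CRDS 0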

a-hat

comment created time in 7 months

started vapor-ware/ksync

started time in 7 months

issue opened evanlucas/fish-kubectl-completions

Speed of completions

Hi @evanlucas! First, thanks for your work on this great project!

I am experiencing some issues with the speed of the completions. It takes about 2-3 times longer than bash or zsh on the same Kubernetes cluster. This makes it about 7-12 seconds per completion, which unfortunately makes it quite unusable.

I am using the latest master of this repository.

Do you have any ideas what could be the issue, or how I could debug this?

created time in 7 months
