
amnk/dd2tf 74

Export DataDog configuration to Terraform

amnk/fluent-gcp 1

A Helm chart for custom Fluent installation on GCP

amnk/proxysql-watcher 1

Watcher for ProxySQL

amnk/puppet-riak 1

Puppet module to install and configure Riak

amnk/camlistore 0

Camlistore is your personal storage system for life: a way of storing, syncing, sharing, modelling and backing up content.

amnk/cassandra-testing 0

Scripts to test Cassandra

issue comment roboll/helmfile

Helmfile builds unrelated dependencies in 0.125.0+

@mumoshu with 0.125.0+ I always see three or more unrelated Building dependency ... messages when doing one single release. If my selector includes several releases, I see the same dependencies.

amnk

comment created time in a month

issue opened roboll/helmfile

Helmfile builds unrelated dependencies in 0.125.0+

First of all - @mumoshu, thanks for the awesome project.

I've been using helmfile for a while now with different versions, but starting with 0.125.0 I see some interesting behavior. Some background: my entire installation uses charts stored locally, so I don't have any repositories defined at all. Now, with versions earlier than 0.125.0, each sync builds only the relevant charts.

But with 0.125.0+, if I do a sync on the same helmfile.yaml, I get several Building dependency release=xxx messages. Moreover, with each run the charts that are built as dependencies are different (and their number differs as well).

I saw helmfile/pull/1412 and related issues, but I'm either missing some context (e.g. this is desired behavior) or some open bug (e.g. someone already reported it).

Does the above make sense, or do I need to provide more details?
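For reference, a minimal helmfile.yaml of the shape described above (release names, chart paths, and labels here are hypothetical placeholders, not taken from my actual setup):

```yaml
# Sketch: local charts only, no `repositories:` section at all.
# Expected (pre-0.125.0): syncing one selected release builds only
# that release's dependencies; observed (0.125.0+): unrelated
# `Building dependency release=...` lines appear on every run.
releases:
  - name: api
    namespace: default
    chart: ./charts/api        # local chart path, no repository
    labels:
      app: api
  - name: worker
    namespace: default
    chart: ./charts/worker     # unrelated release, yet its deps get built too
    labels:
      app: worker
```

With this layout, `helmfile -l app=api sync` is the kind of selector-scoped invocation the report describes.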

created time in a month

push event amnk/dotfiles

amnk

commit sha 1b6cbf3e2de56038294f6df8a6fdcc676f46d6d0

Update to yadm 2, and add ZSH things

view details

push time in a month

pull request comment amnk/dd2tf

Issue #23

Merged. I must admit I took a pause from this project for a while, but it looks like people are using it, so I will review all outstanding PRs, merge them, and check CI.

Technobeats

comment created time in a month

push event amnk/dd2tf

Jelle Leempoels

commit sha 293c4887a5a9a46e40128da857b86321ce4c72c1

updated templates to add dd_ to the id; terraform v12 does not support names starting with a digit

view details

Jelle Leempoels

commit sha a15ce7bc7b2412d68859637a3e5f08b10a23a757

Merge pull request #1 from Technobeats/update_templates updated templates to add dd_ to the id

view details

Jelle Leempoels

commit sha 56fbc1dd0a8c3ad8e57dcc9bc3d4d5bd474422f2

Templates TFv12 compatible: a {} is now a = {}

view details

Jelle Leempoels

commit sha 1daae3cf948a5fa25ba7b87ef4946be970383e27

update readme

view details

Jelle Leempoels

commit sha 35df6d4b6ab9bcb12bf485854b1ba123e1cf0f1b

fix templates

view details

amnk

commit sha 77dfec9130698a95a607829425178b34b6493dcd

Merge pull request #24 from Technobeats/master Issue #23

view details

push time in a month

PR merged amnk/dd2tf

Issue #23

Regarding issue #23 https://github.com/amnk/dd2tf/issues/23

+41 -31

1 comment

5 changed files

Technobeats

pr closed time in a month

issue comment GoogleCloudPlatform/k8s-config-connector

Private SQLInstance and private network cannot be created in the same set of configs.

Yep, I have the same: ready status in Kubernetes, but failed in gcloud. Recreating the same instance with a different name several minutes later works fine.

AlexBulankou

comment created time in 2 months

issue comment rancher/terraform-provider-rancher2

Failed to install provider in 0.13

We got around this by running 0.13upgrade, but it still fails with


 Error: [ERROR] No api_url provided
 
  on <empty> line 0:

   (source code not available)
pierknueppel

comment created time in 2 months

issue comment kvaps/kubectl-node-shell

Unable to spin shell on a node with taints

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.4", GitCommit:"c96aede7b5205121079932896c4ad89bb93260af", GitTreeState:"clean", BuildDate:"2020-06-18T02:59:13Z", GoVersion:"go1.14.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:51:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

and

apiVersion: v1
kind: Node
metadata:
  annotations:
    alpha.kubernetes.io/provided-node-ip: 10.136.206.175
    csi.volume.kubernetes.io/nodeid: '{"dobs.csi.digitalocean.com":"201815603"}'
    io.cilium.network.ipv4-cilium-host: 10.244.4.161
    io.cilium.network.ipv4-health-ip: 10.244.4.194
    io.cilium.network.ipv4-pod-cidr: 10.244.4.128/25
    node.alpha.kubernetes.io/ttl: "0"
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: "2020-07-28T20:59:46Z"
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/instance-type: s-4vcpu-8gb
    beta.kubernetes.io/os: linux
    cluster: python-apps
    doks.digitalocean.com/node-id: a8fe8fea-e6c6-4cd8-a35c-15f836448016
    doks.digitalocean.com/node-pool: python-apps
    doks.digitalocean.com/node-pool-id: 698d5eca-8ecd-4538-8167-bfc193711765
    doks.digitalocean.com/version: 1.18.6-do.0
    failure-domain.beta.kubernetes.io/region: nyc1
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: python-apps-3re6j
    kubernetes.io/os: linux
    node.kubernetes.io/instance-type: s-4vcpu-8gb
    region: nyc1
    topology.kubernetes.io/region: nyc1
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          f:beta.kubernetes.io/instance-type: {}
          f:failure-domain.beta.kubernetes.io/region: {}
          f:node.kubernetes.io/instance-type: {}
          f:topology.kubernetes.io/region: {}
      f:status:
        f:addresses:
          k:{"type":"ExternalIP"}:
            .: {}
            f:address: {}
            f:type: {}
    manager: digitalocean-cloud-controller-manager
    operation: Update
    time: "2020-07-28T20:59:48Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:node.alpha.kubernetes.io/ttl: {}
      f:spec:
        f:podCIDR: {}
        f:podCIDRs:
          .: {}
          v:"10.244.4.128/25": {}
    manager: kube-controller-manager
    operation: Update
    time: "2020-07-28T20:59:56Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:io.cilium.network.ipv4-cilium-host: {}
          f:io.cilium.network.ipv4-health-ip: {}
          f:io.cilium.network.ipv4-pod-cidr: {}
      f:status:
        f:conditions:
          k:{"type":"NetworkUnavailable"}:
            .: {}
            f:lastHeartbeatTime: {}
            f:lastTransitionTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
    manager: cilium-agent
    operation: Update
    time: "2020-07-28T21:00:04Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          f:cluster: {}
      f:spec:
        f:taints: {}
    manager: kubectl
    operation: Update
    time: "2020-07-28T21:06:05Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:alpha.kubernetes.io/provided-node-ip: {}
          f:csi.volume.kubernetes.io/nodeid: {}
          f:volumes.kubernetes.io/controller-managed-attach-detach: {}
        f:labels:
          .: {}
          f:beta.kubernetes.io/arch: {}
          f:beta.kubernetes.io/os: {}
          f:doks.digitalocean.com/node-id: {}
          f:doks.digitalocean.com/node-pool: {}
          f:doks.digitalocean.com/node-pool-id: {}
          f:doks.digitalocean.com/version: {}
          f:kubernetes.io/arch: {}
          f:kubernetes.io/hostname: {}
          f:kubernetes.io/os: {}
          f:region: {}
      f:spec:
        f:providerID: {}
      f:status:
        f:addresses:
          .: {}
          k:{"type":"Hostname"}:
            .: {}
            f:address: {}
            f:type: {}
          k:{"type":"InternalIP"}:
            .: {}
            f:address: {}
            f:type: {}
        f:allocatable:
          .: {}
          f:cpu: {}
          f:ephemeral-storage: {}
          f:hugepages-2Mi: {}
          f:memory: {}
          f:pods: {}
        f:capacity:
          .: {}
          f:cpu: {}
          f:ephemeral-storage: {}
          f:hugepages-2Mi: {}
          f:memory: {}
          f:pods: {}
        f:conditions:
          .: {}
          k:{"type":"DiskPressure"}:
            .: {}
            f:lastHeartbeatTime: {}
            f:lastTransitionTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"MemoryPressure"}:
            .: {}
            f:lastHeartbeatTime: {}
            f:lastTransitionTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"PIDPressure"}:
            .: {}
            f:lastHeartbeatTime: {}
            f:lastTransitionTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Ready"}:
            .: {}
            f:lastHeartbeatTime: {}
            f:lastTransitionTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:daemonEndpoints:
          f:kubeletEndpoint:
            f:Port: {}
        f:images: {}
        f:nodeInfo:
          f:architecture: {}
          f:bootID: {}
          f:containerRuntimeVersion: {}
          f:kernelVersion: {}
          f:kubeProxyVersion: {}
          f:kubeletVersion: {}
          f:machineID: {}
          f:operatingSystem: {}
          f:osImage: {}
          f:systemUUID: {}
    manager: kubelet
    operation: Update
    time: "2020-07-29T19:24:06Z"
  name: python-apps-3re6j
  resourceVersion: "2462715"
  selfLink: /api/v1/nodes/python-apps-3re6j
  uid: 6a72b4b6-38d7-4a27-bc32-220044c8004e
spec:
  podCIDR: 10.244.4.128/25
  podCIDRs:
  - 10.244.4.128/25
  providerID: digitalocean://201815603
  taints:
  - effect: NoSchedule
    key: node-type
    value: python-apps
  - effect: NoExecute
    key: node-type
    value: python-apps
status:
  addresses:
  - address: python-apps-3re6j
    type: Hostname
  - address: 10.136.206.175
    type: InternalIP
  - address: 104.248.120.49
    type: ExternalIP
  allocatable:
    cpu: "4"
    ephemeral-storage: "152161143761"
    hugepages-2Mi: "0"
    memory: 6694Mi
    pods: "110"
  capacity:
    cpu: "4"
    ephemeral-storage: 165105408Ki
    hugepages-2Mi: "0"
    memory: 8170048Ki
    pods: "110"
  conditions:
  - lastHeartbeatTime: "2020-07-28T21:00:04Z"
    lastTransitionTime: "2020-07-28T21:00:04Z"
    message: Cilium is running on this node
    reason: CiliumIsUp
    status: "False"
    type: NetworkUnavailable
  - lastHeartbeatTime: "2020-07-29T19:24:06Z"
    lastTransitionTime: "2020-07-28T20:59:46Z"
    message: kubelet has sufficient memory available
    reason: KubeletHasSufficientMemory
    status: "False"
    type: MemoryPressure
  - lastHeartbeatTime: "2020-07-29T19:24:06Z"
    lastTransitionTime: "2020-07-28T20:59:46Z"
    message: kubelet has no disk pressure
    reason: KubeletHasNoDiskPressure
    status: "False"
    type: DiskPressure
  - lastHeartbeatTime: "2020-07-29T19:24:06Z"
    lastTransitionTime: "2020-07-28T20:59:46Z"
    message: kubelet has sufficient PID available
    reason: KubeletHasSufficientPID
    status: "False"
    type: PIDPressure
  - lastHeartbeatTime: "2020-07-29T19:24:06Z"
    lastTransitionTime: "2020-07-28T20:59:56Z"
    message: kubelet is posting ready status
    reason: KubeletReady
    status: "True"
    type: Ready
  daemonEndpoints:
    kubeletEndpoint:
      Port: 10250
  nodeInfo:
    architecture: amd64
    bootID: 0df05998-4b67-490a-936e-961ec8156310
    containerRuntimeVersion: docker://18.9.2
    kernelVersion: 4.19.0-0.bpo.6-amd64
    kubeProxyVersion: v1.18.6
    kubeletVersion: v1.18.6
    machineID: fa1b1d9578c74b8583d7a73feb857047
    operatingSystem: linux
    osImage: Debian GNU/Linux 10 (buster)
    systemUUID: fa1b1d95-78c7-4b85-83d7-a73feb857047

The node is a droplet on DO. I removed the .images[] block.

amnk

comment created time in 3 months

issue opened roboll/helmfile

[Question] `needs` ignores namespaced releases

I recently found that helmfile supports DAGs, and now I'm trying to make use of it. But the behavior I see seems odd, i.e. I don't understand it.

I have a base template, which defines all my services, like this:

templates:
  base: &base
    namespace: {{ $env }}
    chart: .ci/charts/helm-microservice
    needs:
      - {{ $env }}/configconnector

  service: &service
    <<: *base

And I use it then like this:

releases:
  - name: api
    labels:
      app: api
      mask: all
    <<: *service-secret

  - name: configconnector
    labels:
      app: configconnector
      mask: all
    namespace: {{ $env }}
    chart: .ci/charts/config-connector-helper
    installed: true

And my issue is that configconnector never gets created - it seems that needs is ignored. If I change needs to plain configconnector (without the {{ $env }}), helmfile complains that it can't find it.
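For context, my understanding of the needs syntax (an assumption based on the helmfile docs, and assuming {{ $env }} renders to the same string in both places, e.g. "staging") is that entries are matched as [namespace/]name against other releases in the same state file. A rendered sketch of what I expected to work:

```yaml
# Sketch with {{ $env }} already rendered to "staging" (my assumption);
# the `needs` entry should match the namespace/name of the other release.
releases:
  - name: api
    namespace: staging
    chart: .ci/charts/helm-microservice
    needs:
      - staging/configconnector          # namespace/name of the dependency

  - name: configconnector
    namespace: staging
    chart: .ci/charts/config-connector-helper
    installed: true
```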

created time in 3 months

issue comment kvaps/kubectl-node-shell

Unable to spin shell on a node with taints

@kvaps I have two node pools at the moment. The tool works perfectly fine on a node pool without taints, and fails on a pool with taints (at least, that is the only difference I can spot). It fails like this:

❯ k node-shell xxxxx
spawning "nsenter-p4kc5e" on "xxxx"
No resources found
Error from server (NotFound): pods "nsenter-p4kc5e" not found

If I can provide any other info, let me know.
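For what it's worth, the node YAML above carries node-type=python-apps taints with both NoSchedule and NoExecute effects, so any pod spawned onto it would need matching tolerations in its spec. A sketch of what that fragment would look like (the taint key and value come from the node dump above; everything else is illustrative):

```yaml
# Pod-spec fragment: tolerations matching the taints on the node above.
# Without these the scheduler rejects the pod (NoSchedule) or the
# taint controller evicts it (NoExecute), which would match the
# "pods ... not found" failure seen here.
tolerations:
  - key: node-type
    operator: Equal
    value: python-apps
    effect: NoSchedule
  - key: node-type
    operator: Equal
    value: python-apps
    effect: NoExecute
# Or, more bluntly, tolerate every taint:
# tolerations:
#   - operator: Exists
```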

amnk

comment created time in 3 months

issue opened kvaps/kubectl-node-shell

Unable to spin shell on a node with taints

First of all - thank you very much for the project. It is elegant and useful!

It saved my day today, but only partially, because it does not support nodes with taints. It would be great to see it able to spin up shells on any type of node.

created time in 3 months
