Michael Gebetsroither gebi @mgit-at Graz

gebi/jungerl 101

The Jungle of Erlang code

gebi/check-receiver 8

Daemon to receive nagios/icinga/check-mk results pushed through https/http

gebi/debianpaste-clients 6

Client for paste.debian.net Service

gebi/fs-test 6

Filesystem semantic checker

gebi/checkmk-agent-hp 5

Check_MK agent plugins for HP Servers - DORMANT

gebi/checkmk 3

Check_MK plugin repository.

gebi/eunit 3

Git tracking branch of EUnit, a unit testing framework for Erlang

gebi/co2exporter 2

Prometheus exporter for co2 sensors

gebi/container-zoo 2

Various Dockerfiles I use mainly on my laptop

started codenotary/immudb

started time in 17 days

issue comment GoogleContainerTools/kaniko

Failed to push image to Docker Hub

@aroq JFYI we updated kaniko to version v0.16.0 in mgit/base:kaniko-executor-debug-stable; our integration tests run through without issues, and both problems #1209 and #656 are "still" fixed (we have dedicated tests for both problems).

(Sorry, we are using a mono repo on our side, so we can mostly skip releases on the base images; there are just -latest and -stable. I'd really rather fix this here in upstream than paper over bugs with additional layers.)

qianzhangxa

comment created time in a month

issue comment GoogleContainerTools/kaniko

Pushing images to dockerhub stopped working

I can verify that for us too the latest working kaniko version is v0.16.0.

v0.20.0 does not build, with the following job output:

$ mkdir -p /kaniko/.docker
$ echo "{\"auths\":{\"index.docker.io\":{\"auth\":\"${DOCKERHUB_AUTH}\"}}}" > /kaniko/.docker/config.json
$ mkdir /docker-tmp
$ echo 'FROM debian:stable' >> /docker-tmp/dockerfile
$ echo 'ENTRYPOINT ["/bin/bash", "-c", "echo hello"]' >> /docker-tmp/dockerfile
$ /kaniko/executor --context /docker-tmp --dockerfile /docker-tmp/dockerfile --destination foo/bar:hello-world-latest
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "foo/bar:hello-world-latest": POST https://index.docker.io/v2/foo/bar/blobs/uploads/: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:mgit/base Type:repository] map[Action:push Class: Name:mgit/base Type:repository]]
Running after_script
Uploading artifacts for failed job
ERROR: Job failed: exit code 1
FATAL: exit code 1
gebi

comment created time in a month

issue comment GoogleContainerTools/kaniko

Pushing images to dockerhub stopped working

It seems #1005 describes the same problem with kaniko.

gebi

comment created time in a month

issue comment GoogleContainerTools/kaniko

Pushing images to dockerhub stopped working

I don't think so; in 245 they mention multistage builds and long build times. We have just a short build time and no multistage builds, and it worked perfectly with the kaniko version from a few months ago.

gebi

comment created time in a month

issue comment fluent/fluent-bit

Add S3 bucket Output plugin

It would be awesome to have compression support for uploads to S3 too.
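Such compression could look roughly like this (a minimal sketch, independent of fluent-bit's actual plugin API; `compress_chunk` is a hypothetical helper showing only the gzip step that would run before the S3 upload):

```python
import gzip
import io


def compress_chunk(records):
    """Gzip-compress buffered log lines into one object body.

    The actual S3 upload (e.g. a PutObject call with a .gz key suffix)
    is omitted; this shows only the compression step.
    """
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
        for rec in records:
            gz.write(rec.encode("utf-8") + b"\n")
    return buf.getvalue()
```

Compressing before upload trades a little CPU in the shipper for much smaller objects and cheaper storage/transfer, which is usually a clear win for text logs.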

amit-uc

comment created time in a month

issue comment GoogleContainerTools/kaniko

Failed to push image to Docker Hub

Our kaniko setup just stopped working with the same "error checking push permissions" problem as reported in #1209.

Our fix was to just use mgit/base:kaniko-executor-debug-stable as the image, which also fixes #656.

Is there anything we can help with regarding stabilizing kaniko?

qianzhangxa

comment created time in a month

issue comment GoogleContainerTools/kaniko

Pushing images to dockerhub stopped working

Ah, side note: I confirmed that the credentials still work for pushing images to Docker Hub, both manually and, as mentioned, with the old kaniko version.

For now we have pinned the kaniko version to mgit/base:kaniko-executor-debug-stable (which also fixes #656, kaniko being unable to build images on bigger filesystems with 64-bit inodes because the included busybox is not compiled with large file support).

gebi

comment created time in a month

issue opened GoogleContainerTools/kaniko

Pushing images to dockerhub stopped working

Actual behavior

Kaniko exits with exit code 1 with the following message and does not build the image:

error checking push permissions -- make sure you entered the correct tag name,
and that you are authenticated correctly, and try again: checking push permission for
"foo/bar:bionic-99": UNAUTHORIZED: authentication required; [map[Action:pull Class:
Name:mgit/clamav Type:repository] map[Action:push Class: Name:foo/bar Type:repository]]

This worked with the same build pipeline and no changes 3 months ago with the following image:

Using Docker executor with image gcr.io/kaniko-project/executor:debug ...
Pulling docker image gcr.io/kaniko-project/executor:debug ...
Using docker image sha256:2aa254b4837c242c7de87956438eaba70f97a2768ab0870819fd20e09df15cf6 for gcr.io/kaniko-project/executor:debug ...

Expected behavior

Kaniko should upload the image to Docker Hub like the version from 3 months ago was able to. There were no changes, and it works if I go back to an older kaniko version.

To Reproduce

Steps to reproduce the behavior:

  1. ... with the following pseudo .gitlab-ci.yml
image:
  name: gcr.io/kaniko-project/executor:debug
  entrypoint: [""]

stages:
  - foo

build-foo:
  stage: foo
  script:
    - echo "{\"auths\":{\"index.docker.io\":{\"auth\":\"${CI_DOCKERHUB_AUTH}\"}}}" > /kaniko/.docker/config.json
    - >
      /kaniko/executor --context "${CI_PROJECT_DIR}/foo" --dockerfile "${CI_PROJECT_DIR}/foo/Dockerfile"
      --destination "foo/bar:blub-${CI_PIPELINE_IID}"
      --destination "foo/bar:blub"
  2. ... build it

Additional Information

  • Dockerfile Please provide either the Dockerfile you're trying to build or one that can reproduce this error.
  • Build Context Please provide or clearly describe any files needed to build the Dockerfile (ADD/COPY commands)
  • Kaniko Image (fully qualified with digest)
Using Docker executor with image gcr.io/kaniko-project/executor:debug ...
Pulling docker image gcr.io/kaniko-project/executor:debug ...
Using docker image sha256:2ec307dcf7f52dcf700ea0fbc65d448f46365cfac69567e8177bf12b80942f54 for gcr.io/kaniko-project/executor:debug ...

Triage Notes for the Maintainers

Description Yes/No
Please check if this is a new feature you are proposing - [ ]
Please check if the build works in docker but not in kaniko - [x]
Please check if this error is seen when you use --cache flag - [ ]
Please check if your dockerfile is a multistage dockerfile - [ ]

created time in a month

issue comment markuslindenberg/co2monitor_exporter

does not work with AIRCO2NTROL COACH

NP, I was equally surprised. I don't have the smaller version, just the Coach, and it required a bit of tinkering to get it to work (I liked the Coach because it has a nice UI even without something "smart" reading and displaying the values).

I even have a test Coach HW here connected to my laptop; if you have a version ready, I can test it without much hassle.

As I don't have the smaller version, it would be nice if you could test whether my exporter works on your HW (the smaller one) :), thx!

gebi

comment created time in a month

issue opened markuslindenberg/co2monitor_exporter

does not work with AIRCO2NTROL COACH

Hi,

Awesome that someone made a Golang implementation; I was just too far into the Python implementation for a rewrite ( https://github.com/gebi/co2exporter ).

However, your exporter does not work with the "TFA-Dostmann AIRCO2NTROL Coach CO2 Monitor".

It just outputs...

# ./co2monitor_exporter --device=/dev/hidraw6
INFO[0000] Starting co2monitor_exporter (version=, branch=, revision=)  source="co2monitor_exporter.go:32"
INFO[0000] Build context (go=go1.14.1, user=, date=)     source="co2monitor_exporter.go:33"
INFO[0000] Listening on :9673                            source="co2monitor_exporter.go:70"
FATA[0000] checksum error: 73 f2 1a 7c 43 b1 e8 32       source="co2monitor_exporter.go:57"

The same device works with my exporter:

# ./co2exporter.py /dev/hidraw6
Listening on :9672, appending labels: {}
*42: 127D  4733    T: 22.66
*41: 0B8A  2954,  42: 127D  4733    T: 22.66 RH: 29.54
 41: 0B8A  2954,  42: 127D  4733, *53: 0000     0    T: 22.66 RH: 29.54
 41: 0B8A  2954,  42: 127D  4733, *50: 0307   775,  53: 0000     0    CO2:  775 T: 22.66 RH: 29.54
 41: 0B8A  2954, *42: 127D  4733,  50: 0307   775,  53: 0000     0    CO2:  775 T: 22.66 RH: 29.54
*41: 0B8A  2954,  42: 127D  4733,  50: 0307   775,  53: 0000     0    CO2:  775 T: 22.66 RH: 29.54
 41: 0B8A  2954,  42: 127D  4733,  50: 0307   775, *53: 0000     0    CO2:  775 T: 22.66 RH: 29.54

Hint: the Coach variant of the CO2 monitor does not "encrypt" its data, so the strategy I'm using is to check the checksum on the unencrypted data first and use it if it matches; otherwise decrypt and verify the checksum again, e.g. https://github.com/gebi/co2exporter/blob/master/co2exporter.py#L96
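The fallback strategy described above can be sketched like this (a minimal sketch with hypothetical helper names; `decrypt` is a placeholder for the device's descrambling step, and the frame layout assumes the commonly documented zyTemp protocol: 8-byte reports where byte 3 is the low byte of the sum of bytes 0-2 and byte 4 is 0x0d):

```python
def checksum_ok(frame):
    """Return True if an 8-byte report has a valid checksum and end byte."""
    return (len(frame) == 8
            and frame[4] == 0x0D
            and frame[3] == ((frame[0] + frame[1] + frame[2]) & 0xFF))


def read_frame(raw, decrypt):
    """Accept a plaintext frame (Coach) or fall back to decrypting (older HW).

    `decrypt` is a placeholder for the device-specific descrambling routine.
    """
    if checksum_ok(raw):
        return raw          # Coach sends plaintext: checksum already matches
    decrypted = decrypt(raw)
    if checksum_ok(decrypted):
        return decrypted    # older devices: valid only after decryption
    raise ValueError("checksum error: " + bytes(raw).hex(" "))
```

This way one code path handles both hardware variants without needing a device flag; the checksum itself decides whether decryption is required.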

created time in 2 months

push event gebi/nebula

Michael Gebetsroither

commit sha a82382a02f9764e9389c30e31885030fc0ad44ee

also stop the running container after copying build files


push time in 2 months

started slackhq/nebula

started time in 2 months

issue comment slackhq/nebula

Question: NAT Setup

Awesome, I'll also test as soon as we are allowed to go out again.

Btw... as it now seems viable to use nebula, I've polished up my Debian package building and sent a pull request :) #211

jatsrt

comment created time in 2 months

PR opened slackhq/nebula

Build debian packages for nebula

Hi,

As it now seems viable to use nebula with the latest hole punching improvements, I've polished our Debian package building code :)

This can be used to build various distribution package formats (it uses fpm at its core), so building rpm and packages for other distributions is easy.

This should not be thought of as throwing something over the fence; I would also be willing to help you maintain that part in the future. (I've had no time to play with GitHub Actions so far, but maybe that can be included directly.)

+122 -0

0 comment

5 changed files

pr created time in 2 months

push event gebi/nebula

Michael Gebetsroither

commit sha 6e8caecf54e33c6ed0805a7b42a8d51ee077b426

add readme and overview of moving parts of deb package building


push time in 2 months

create branch gebi/nebula

branch : deb

created branch time in 2 months

fork gebi/nebula

A scalable overlay networking tool with a focus on performance, simplicity and security

fork in 2 months

issue comment slackhq/nebula

Question: NAT Setup

IMHO the best current example of NAT traversal is Tailscale; they use a combination of STUN and ICE together with their encrypted relay (DERP).

Awesome... I will redo the nebula setup, get everything up and running again, and help you debug if you want :). Even though we currently have a quarantine, I'm sure I can get it to not work between two nodes again.

Btw... one additional nice feature of a relay would be possible support for an HTTP proxy (as many corporations still use a proxy for internet access).

PS: should I create an issue for the IP collision problem I found, where the docker network is present on multiple nebula nodes and nebula listens on the "same" address on both nodes? I've partly "fixed" it through firewall rules and different nebula ports on each node, which might be an uncommon config for newcomers.

jatsrt

comment created time in 2 months

issue comment slackhq/nebula

Question: NAT Setup

Thx for the feedback! (I've put the whining at the end, sorry.)

Yes, ultimately relays are necessary, e.g. as Tailscale puts it:

https://github.com/tailscale/tailscale/blob/master/derp/derp.go#L9

// DERP is used by Tailscale nodes to proxy encrypted WireGuard
// packets through the Tailscale cloud servers when a direct path
// cannot be found or opened. DERP is a last resort. Both sides
// between very aggressive NATs, firewalls, no IPv6, etc? Well, DERP.

But relays should not be used unnecessarily; they are just a last resort.

STUN and ICE do a whole lot to get through NATs, but an additional idea would be to use UPnP or NAT-PMP when configured.
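For illustration, a NAT-PMP mapping request (RFC 6886) is simple enough to build by hand; this is a hedged sketch (`natpmp_map_request` is a made-up helper name) that only constructs the 12-byte request a node would send to its gateway on UDP port 5351, with sending and response parsing omitted:

```python
import struct


def natpmp_map_request(internal_port, external_port, lifetime=7200, udp=True):
    """Build an RFC 6886 NAT-PMP port-mapping request.

    Layout: version (0), opcode (1 = map UDP, 2 = map TCP), 16 reserved
    bits, internal port, suggested external port, requested lifetime in
    seconds. The result would be sent to the default gateway on UDP 5351.
    """
    opcode = 1 if udp else 2
    return struct.pack("!BBHHHI", 0, opcode, 0,
                       internal_port, external_port, lifetime)
```

The gateway replies with the actual external port and lifetime it granted, so a client must be prepared for both to differ from what it asked for.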

<== snip

I really appreciate your honest answer, though I'm inclined to say that "There are some NATs we just don't handle well yet" might not quite cut it; in my experience it's "not at all". Our home servers were behind some consumer gear, but on every other network I tested as well, corporate / hackerspaces / ..., nothing worked except the connection to the lighthouse (thus the connection should have been working in principle).

jatsrt

comment created time in 2 months

issue comment slackhq/nebula

Question: NAT Setup

We had similar problems getting nebula to work. It seems nebula just can't work with "normal" consumer setups (both sides behind NAT).

It's not only me; 3 colleagues have also tried it without success. The common error pattern was that all boxes can reach the lighthouse via nebula, but unless they are on the same network, NO nebula node was able to reach any other nebula node (except the lighthouse). I've tested it for over 2 weeks from various different networks with my laptop and could not get a connection to any nebula node other than the lighthouse working a single time.

Maybe it would be a good idea to adapt the readme to say that nebula is more for a server use case, because for consumers it seems not to work for the main use case.

Btw... I had the interesting problem that most of the machines nebula runs on have the same network (e.g. a docker or k8s network), which also shows up in the lighthouse tables; as nebula runs on the host, there is also a nebula listening there, just the wrong one (it's talking to itself). With the config problems mentioned in this thread, which I also debugged through, I just can't say whether this was related to the initial connection problems.

jatsrt

comment created time in 2 months

started pcm-dpc/COVID-19

started time in 3 months

started sighupio/permission-manager

started time in 3 months
