Ethan J. Brown (Iristyle), Puppet, Inc., Portland, OR

Iristyle/Burden 41

A simple / persistent .NET job queue system based on Rx

Iristyle/BookKeeper 4

Track immutable objects through a history of changesets. Provides a serialization format that is distributed data friendly

Iristyle/Authentic 2

A pluggable .NET based authentication library supporting basic and digest auth

Iristyle/bower-angularAuth 2

A repository for hosting the angular-auth module capable of handling 401 redirects

groundwater/node-foreman 1

A Node.js Version of Foreman

apenney/puppet 0

Server automation framework and application

corkupine/DevTools 0

Developer tool packages for project on-boarding, including Vagrant VMs

Elindalyne/NLog.AirBrake 0

A custom NLog target that will push errors to AirBrake or Errbit

Iristyle/AnalysisRules 0

Static analysis rule files and other tooling

Iristyle/angular-strap 0

Bootstrap directives for Angular

issue opened derailed/popeye

0.8.9 changes exit status behavior in a breaking way


Is your feature request related to a problem? Please describe.

In 0.8.9, https://github.com/derailed/popeye/commit/94da2df5e9ecdabd7565222720cea43a285bcc01 changed the behavior to produce a non-zero exit status in the presence of warnings or errors.

This unfortunately broke our existing CI workflow that did the following:

	docker run --rm -i -v ${HOME}/${CONFIG_PATH}:/root/.kube ${DOCKER_RUN_SWITCHES} quay.io/derailed/popeye:latest --namespace ${KOTS_NAMESPACE} --lint info
	docker run --rm -i -v ${HOME}/${CONFIG_PATH}:/root/.kube ${DOCKER_RUN_SWITCHES} quay.io/derailed/popeye:latest --namespace ${KOTS_NAMESPACE} --lint info --out json | jq '[.popeye.sanitizers[].tally.error] | any (. >= 1) | not' --exit-status

We ran popeye once for CI / informational purposes... and then again to figure out the error count by parsing JSON. We were ignoring warnings.

Now that warnings produce a non-zero exit status, only the first line runs and CI dies, where it previously continued.

Describe the solution you'd like

There are a couple of things I'd like to see if possible:

  • A CLI switch to opt into treating warnings as errors (IMHO, warnings shouldn't generate non-zero exit codes)
  • Related, it would be great to be able to produce multiple outputs in a single run (i.e. the report for CI plus the JSON file on disk)

Describe alternatives you've considered

I should be able to work around this new issue with || true on the first line, but a CLI switch would be more self-documenting.
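For completeness, the interim workaround would look something like this, keeping the same variables as the CI step above (only the trailing || true is new):

    # Ignore popeye's exit status on the informational run so warnings
    # don't fail the CI job (workaround only; a CLI switch would be clearer).
    docker run --rm -i -v ${HOME}/${CONFIG_PATH}:/root/.kube ${DOCKER_RUN_SWITCHES} \
      quay.io/derailed/popeye:latest --namespace ${KOTS_NAMESPACE} --lint info || true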

Thanks!

created time in a month

issue opened stefanprodan/kube-tools

Add krew?

https://github.com/kubernetes-sigs/krew

created time in a month

push event puppetlabs/tlser

Michael Smith

commit sha 136112a2053ae4d187c63413a496d87d6efe2f41

Add support for labels on the managed secret

Adds the `-label` argument for declaring labels that should be set on the secret. `-label` can be used repeatedly to add multiple labels. Resolves #6.

view details

Ethan J. Brown

commit sha 6835ed224f1ffbd2835ebd48565779cc4367c6c4

Merge pull request #7 from puppetlabs/add-labels

Add support for labels on the managed secret

view details

push time in a month

delete branch puppetlabs/tlser

delete branch: add-labels

delete time in a month

PR merged puppetlabs/tlser

Add support for labels on the managed secret

Adds the -label argument for declaring labels that should be set on the secret. -label can be used repeatedly to add multiple labels.

Resolves #6.
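A hypothetical invocation sketch for illustration only; the repeated -label flag comes from this PR, but the key=value form and the label values shown are assumptions, not taken from the tlser docs:

    # Sketch: -label is repeatable; the key=value syntax is an assumption.
    tlser \
      -label app.kubernetes.io/part-of=cd4pe \
      -label app.kubernetes.io/managed-by=tlser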

+179 -42

1 comment

8 changed files

MikaelSmith

pr closed time in a month

issue closed puppetlabs/tlser

Allow for adding labels

To be able to properly flush the secret resource using a label-based filter, there needs to be a way to specify that metadata.

In particular, I'm interested in adding app.kubernetes.io/part-of: cd4pe

closed time in a month

Iristyle

pull request comment puppetlabs/tlser

Add support for labels on the managed secret

I don't really know Go... but... let's :shipit:

MikaelSmith

comment created time in a month

issue opened puppetlabs/tlser

Allow for adding labels

To be able to properly flush the secret resource using a label-based filter, there needs to be a way to specify that metadata.

created time in a month

pull request comment puppetlabs/puppetdb

Honor env var PUPPETDB_JAVA_ARGS

This is looking very familiar @underscorgan. Ah, because we did a similar thing in pe-puppetdb with copying in /etc/puppetlabs - https://github.com/puppetlabs/pe-puppetdb-extensions/pull/458/commits/6199e62aab2183e0613a3e95c3785f8f51002931

Did we really miss doing the same thing in this container?

rstruber

comment created time in a month

Pull request review comment puppetlabs/puppetdb

Honor env var PUPPETDB_JAVA_ARGS

 ENV PUPPERWARE_ANALYTICS_ENABLED=false \
# note: LOGDIR cannot be defined in the same ENV block it's used in
# this value may be set by users, keeping in mind that some of these values are mandatory
# -Djavax.net.debug=ssl may be particularly useful to set for debugging SSL
ENV PUPPETDB_JAVA_ARGS="-Djava.net.preferIPv4Stack=true -Xms256m -Xmx256m -XX:+UseParallelGC -Xlog:gc*:file=$LOGDIR/puppetdb_gc.log::filecount=16,filesize=65536 -Djdk.tls.ephemeralDHKeySize=2048"

Hmmm - we're successfully using that option in the PE version of PuppetDB. If it's not working here because this is a different Java runtime, we should try to find the equivalent option for this version of Java.

This is the code from pe-puppetdb-extensions

# NOTE: Docker ENV values cannot consume other ENV values, so full path to $LOGDIR is specified for -Xloggc
# this value may be set by users, keeping in mind that some of these values are mandatory
# -Djavax.net.debug=ssl may be particularly useful to set for debugging SSL
    PUPPETDB_JAVA_ARGS="-Djava.net.preferIPv4Stack=true -Xms256m -Xmx256m -XX:+UseParallelGC -Xlog:gc*:file=/opt/puppetlabs/server/data/puppetdb/logs/puppetdb_gc.log::filecount=16,filesize=65536 -Djdk.tls.ephemeralDHKeySize=2048" \

That first note is actually wrong about ENV vars not being able to use other ENV vars... they just have to be defined earlier like this:

ENV foo=something
ENV bar=$foo
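A quick way to check that claim, as a sketch only (the base image, tag, and variable values here are arbitrary, not from the puppetdb Dockerfile):

    # Hypothetical check: an ENV defined in an earlier instruction is visible
    # when a later ENV value is evaluated.
    printf '%s\n' \
      'FROM alpine:3.12' \
      'ENV LOGDIR=/opt/puppetlabs/server/data/puppetdb/logs' \
      'ENV PUPPETDB_JAVA_ARGS="-Xlog:gc*:file=$LOGDIR/puppetdb_gc.log"' \
      > Dockerfile.envtest
    docker build -f Dockerfile.envtest -t envtest .
    docker run --rm envtest sh -c 'echo "$PUPPETDB_JAVA_ARGS"'
    # expected: -Xlog:gc*:file=/opt/puppetlabs/server/data/puppetdb/logs/puppetdb_gc.log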

/cc @puppetlabs/puppetdb

rstruber

comment created time in a month

issue closed zegl/kube-score

deployment-has-host-podantiaffinity / statefulset-has-host-podantiaffinity generates false positives

Which version of kube-score are you using?

kube-score version: 1.8.1, commit: cdab99b6ee4d135bb716a92cbb91828ea28ff492, built: 2020-08-11T08:12:42Z

What did you do?

    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app.kubernetes.io/name
                operator: In
                values:
                - issuer
            topologyKey: kubernetes.io/hostname

What did you expect to see?

Given a podAntiAffinity like the above (similar to the example at https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#always-co-located-in-the-same-node), I would expect it to meet the guidelines for deployment-has-host-podantiaffinity / statefulset-has-host-podantiaffinity.

What did you see instead?

[WARNING] StatefulSet has host PodAntiAffinity
        · StatefulSet does not have a host podAntiAffinity set
            It's recommended to set a podAntiAffinity that stops multiple pods
            from a statefulset from being scheduled on the same node. This
            increases availability in case the node becomes unavailable.

If I instead switch to what is in the tests, the check passes:

    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: issuer
              topologyKey: kubernetes.io/hostname

closed time in a month

Iristyle

issue comment zegl/kube-score

deployment-has-host-podantiaffinity / statefulset-has-host-podantiaffinity generates false positives

You know what -- I think the docs may be outdated. Just trying to deploy, and I'm getting validation errors. This might be my mistake!

Iristyle

comment created time in a month

issue closed zegl/kube-score

Probe checks don't consider timeout values

Which version of kube-score are you using?

kube-score version: 1.8.1, commit: cdab99b6ee4d135bb716a92cbb91828ea28ff492, built: 2020-08-11T08:12:42Z

What did you do?

When defining a livenessProbe that uses the same command as a readinessProbe but with longer timeouts, kube-score generates a CRITICAL

For instance:

livenessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 30
readinessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 5

What did you expect to see?

Based on the recommendations at https://srcco.de/posts/kubernetes-liveness-probes-are-dangerous.html, which say it's OK to use the same command but with longer timeout values, I'd expect this not to be a critical error.

What did you see instead?

    [CRITICAL] Pod Probes
        · Container has the same readiness and liveness probe
            Using the same probe for liveness and readiness is very likely
            dangerous. Generally it's better to avoid the livenessProbe than
            re-using the readinessProbe.
            More information: https://github.com/zegl/kube-score/blob/master/README_PROBES.md

closed time in a month

Iristyle

issue comment zegl/kube-score

Probe checks don't consider timeout values

Thank you for the detailed response. This definitely seems like a topic of debate.

You make a lot of great points, and I totally understand your perspective with how the tooling should behave.

I'm going to close this ticket, but FYI I did notice one quirk with kube-score.com/ignore in a deployment. The ignore applies to initContainers and containers, which is probably undesirable. Not sure there's a good way to fix that though!

Thanks again.

Iristyle

comment created time in a month

issue opened zegl/kube-score

deployment-has-host-podantiaffinity / statefulset-has-host-podantiaffinity generates false positives

Which version of kube-score are you using?

kube-score version: 1.8.1, commit: cdab99b6ee4d135bb716a92cbb91828ea28ff492, built: 2020-08-11T08:12:42Z

What did you do?

    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app.kubernetes.io/name
                operator: In
                values:
                - issuer
            topologyKey: kubernetes.io/hostname

What did you expect to see?

Given a podAntiAffinity like the above (similar to the example at https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#always-co-located-in-the-same-node), I would expect it to meet the guidelines for deployment-has-host-podantiaffinity / statefulset-has-host-podantiaffinity.

What did you see instead?

[WARNING] StatefulSet has host PodAntiAffinity
        · StatefulSet does not have a host podAntiAffinity set
            It's recommended to set a podAntiAffinity that stops multiple pods
            from a statefulset from being scheduled on the same node. This
            increases availability in case the node becomes unavailable.

If I instead switch to what is in the tests, the check passes:

    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: issuer
              topologyKey: kubernetes.io/hostname

created time in a month

issue opened zegl/kube-score

Probe checks don't consider timeout values

Which version of kube-score are you using?

kube-score version: 1.8.1, commit: cdab99b6ee4d135bb716a92cbb91828ea28ff492, built: 2020-08-11T08:12:42Z

What did you do?

When defining a livenessProbe that uses the same command as a readinessProbe but with longer timeouts, kube-score generates a CRITICAL

For instance:

livenessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 30
readinessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 5

What did you expect to see?

Based on the recommendations at https://srcco.de/posts/kubernetes-liveness-probes-are-dangerous.html, which say it's OK to use the same command but with longer timeout values, I'd expect this not to be a critical error.

What did you see instead?

    [CRITICAL] Pod Probes
        · Container has the same readiness and liveness probe
            Using the same probe for liveness and readiness is very likely
            dangerous. Generally it's better to avoid the livenessProbe than
            re-using the readinessProbe.
            More information: https://github.com/zegl/kube-score/blob/master/README_PROBES.md

created time in a month

issue comment zegl/kube-score

NetworkPolicy rule false positive when podSelector is empty

Wow, thanks for the super fast turnaround @zegl !

Iristyle

comment created time in a month

issue opened zegl/kube-score

NetworkPolicy rule false positive when podSelector is empty

Which version of kube-score are you using?

kube-score version: 1.8.0, commit: 5c3ed1b02ff59a510776a84b7ecadfb21e151e11, built: 2020-08-10T19:29:19Z

What did you do?

I defined a NetworkPolicy applicable to all pods in a namespace.

Per the NetworkPolicy documentation, "an empty podSelector selects all pods in the namespace" - see under https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource. There's even an example in the docs at https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-allow-all-ingress-traffic

The NetworkPolicy in question is:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: default
  name: allow-all-ingress
spec:
  podSelector: {}
  ingress:
  - {}
  policyTypes:
  - Ingress

What did you expect to see?

I didn't expect any failures.

What did you see instead?

The NetworkPolicy check generates false positives when podSelector: {} is used, like:

[CRITICAL] Pod NetworkPolicy
        · The pod does not have a matching network policy
            Create a NetworkPolicy that targets this pod

I can work around the problem by explicitly defining a match based on a label set on all the pods like this:

  podSelector:
    matchLabels:
      app.kubernetes.io/part-of: myapp

But I think the check should be fixed.

created time in a month

PR closed puppetlabs/puppetserver

(maint) Enable Docker builds with buildkit DO NOT MERGE

This is an experiment to see if we can improve Docker build times by using buildkit. Buildkit will enable a few things:

  • Using multiple targets inside a single Dockerfile and smart evaluation (i.e. no need to have multiple Dockerfiles like base, release, etc); a rough sketch follows this list
  • Concurrent build processes where possible, rather than strictly sequential
  • Better caching (not applicable to Travis, but very applicable to Windows / LCOW)
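As a rough illustration of the first point (the target names and tags here are hypothetical, not the actual puppetserver Dockerfile stages):

    # With buildkit, one Dockerfile with multiple stages can replace separate
    # base/release Dockerfiles; only the stages needed for the requested
    # target are built.
    DOCKER_BUILDKIT=1 docker build --target base -t puppetserver:base .
    DOCKER_BUILDKIT=1 docker build --target release -t puppetserver:release .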

It looks like this is generally a no-go at this point in time due to the LCOW error, which could be a permissions issue (though I tested setting Everyone: (F) for C:\ProgramData\docker\tmp and C:\ProgramData\docker\windowsfilter and didn't see anything stand out in procmon):

failed to solve with frontend dockerfile.v0: failed to read dockerfile: failed to mount C:\ProgramData\docker\tmp\buildkit-mount984162069: [{Type:bind Source:C:\ProgramData\docker\windowsfilter\8es1az6gkgzbc4u4w031q6maz Options:[rbind ro]}]: invalid windows mount type: 'bind'

This is something we probably want to evaluate again in the future (or help fix under LCOW) - buildkit currently says Windows is unsupported - https://github.com/moby/buildkit/issues/616

Update: It looks like it may be possible to use buildkit more directly rather than going through docker build. buildctl has a Windows binary and it's possible to run the buildkit daemon in a container.

That's a reasonably different workflow from what we're doing now, but seems like it may plausibly work without much modification. The convention for passing command line args would vary a bit. For instance, --build-arg vcs_ref=$(vcs_ref) would become --opt build-arg:vcs_ref=$(vcs_ref). See https://github.com/moby/buildkit#exploring-dockerfiles for more details.
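For reference, a rough sketch of what that buildctl-based flow might look like, per the moby/buildkit README; the daemon container name, output image name, and build-arg value below are placeholders:

    # Run the buildkit daemon in a container, then point buildctl at it.
    docker run -d --name buildkitd --privileged moby/buildkit:latest
    buildctl --addr docker-container://buildkitd build \
      --frontend dockerfile.v0 \
      --local context=. \
      --local dockerfile=. \
      --opt build-arg:vcs_ref=$(git rev-parse HEAD) \
      --output type=image,name=puppetserver:buildkit-test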

+56 -22

2 comments

17 changed files

Iristyle

pr closed time in a month

pull request comment puppetlabs/puppetserver

(maint) Enable Docker builds with buildkit

Buildkit support was merged in https://github.com/puppetlabs/puppetserver/pull/2328

This seems to have been left open by accident

Iristyle

comment created time in a month

pull request comment puppetlabs/puppetserver

Rewrite Docker health check to avoid making assumptions

Sorry this ended up getting lost in the shuffle.

it makes the assumptions that hostprivkey, hostcert and localcacert are all stored under SSLDIR, even though the documentation explicitly states that these are configurable values.

Not sure I agree. Where is this stated?

I'm also reluctant to add a custom caching solution to an entrypoint. I'll try and put up a PR in the next week to tackle the underlying problem from a different angle. I'll also get the containers updated so that SSLDIR and LOGDIR are not viewed as configurable env vars, given they are not.

runejuhl

comment created time in a month

pull request comment puppetlabs/puppetserver

Allow executables in Docker entrypoint; avoid chmod; fix run ordering

I think we're actually better off just requiring that a user has put a proper +x on files rather than making assumptions.

I think the Postgres container does this right -- https://github.com/docker-library/postgres/blob/master/docker-entrypoint.sh#L146-L172

  • if +x, the file is executed
  • otherwise, the script is sourced (a minimal sketch follows below)
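A minimal sketch of that pattern, assuming a hypothetical directory of user-supplied scripts (not the actual puppetserver entrypoint path):

    # Execute scripts that carry +x; source everything else, Postgres-style.
    for f in /docker-custom-entrypoint.d/*.sh; do
      if [ -x "$f" ]; then
        echo "running $f"
        "$f"
      else
        echo "sourcing $f"
        . "$f"
      fi
    done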
runejuhl

comment created time in a month

pull request comment puppetlabs/puppetserver

[Docker container] Add 2 missing directories to fix permissions for

@jay7x we can merge this for now if you rebase / update your PR. We still want to do the path changes I mentioned, but that won't be happening for a bit since we don't have the bandwidth, and this is still a useful change I think?

jay7x

comment created time in a month

PR closed puppetlabs/puppetserver

(maint) Configure more like pe puppetserver DO NOT MERGE

Store things in puppetserver at the same place as pe-puppetserver so upgrades are easy peasy

There's a ton more work to do here

+27 -14

2 comments

3 changed files

Iristyle

pr closed time in a month

pull request comment puppetlabs/puppetserver

(maint) Configure more like pe puppetserver

We need to eventually come back to this... but it's not happening in the short-term.

Iristyle

comment created time in a month

issue comment haugene/docker-transmission-openvpn

Transmission caught in crash loop with alpine-latest container

Just to follow up, removing TRANSMISSION_UMASK=002 has allowed the container to start again.
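For anyone else hitting this, a quick way to check whether the generated settings.json is the problem; the host path is taken from the compose volume mapping in the original issue below, and jq being available on the host is an assumption:

    # Validate the transmission settings file that the startup scripts generate.
    jq . /volume1/docker/transmissiondata/config/settings.json > /dev/null \
      && echo 'settings.json parses' \
      || echo 'settings.json is malformed'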

Iristyle

comment created time in a month

issue comment haugene/docker-transmission-openvpn

Transmission caught in crash loop with alpine-latest container

Thanks - I'll try making these changes shortly, but I don't think either of those suggestions would explain why the container just went into a crash loop between the two latest versions.

Iristyle

comment created time in 2 months

issue opened haugene/docker-transmission-openvpn

Transmission caught in crash loop with alpine-latest container

Describe the problem

Transmission is stuck in a crash loop and no longer starts properly. I have Ouroboros run every night so that I'm always on latest of all the containers I run. A couple of hours ago it upgraded transmission because it saw a new latest container (and this is what my notification says):

transmission updated from 013d59dc41 to 7b1dc57540

Oddly, the SHA I see in Docker Hub for alpine-latest is not what's being reported on my system:

haugene/transmission-openvpn:latest-alpine@sha256:7b1dc57540291444782d70f3e0ac740808935b46a848bea94cb6c53bca4701fa

Add your docker run command

I don't think the compose file is relevant here as this has worked without changes for months, but I'll include the transmission part anyhow for posterity:

 transmission:
    container_name: transmission
    image: haugene/transmission-openvpn:latest-alpine
    restart: unless-stopped
    environment:
      - PUID=1031 # Transmission user
      - PGID=65536 # Download group
      # Necessary for AirVPN which has user specific configs / passwords
      - OPENVPN_PROVIDER=CUSTOM
      # Doesn't need to be set at all
      - OPENVPN_USERNAME=dummy
      - OPENVPN_PASSWORD=dummy
      - OPENVPN_OPTS=--inactive 3600 --ping 10 --ping-exit 60
      # Allows access to Transmission web page from local network
      - LOCAL_NETWORK=192.168.0.0/24
      - CREATE_TUN_DEVICE=true
      - TRANSMISSION_DOWNLOAD_DIR=/volume1/Media/Downloads/Transmission/completed
      - TRANSMISSION_INCOMPLETE_DIR=/volume1/Media/Downloads/Transmission/incomplete
      - TRANSMISSION_UMASK=002
      - TRANSMISSION_PEER_LIMIT_GLOBAL=400
      - TRANSMISSION_PEER_LIMIT_PER_TORRENT=40
      - TRANSMISSION_PEER_PORT=36226
      - TRANSMISSION_PORT_FORWARDING_ENABLED=true
      - TRANSMISSION_DOWNLOAD_QUEUE_SIZE=100
      - TRANSMISSION_SEED_QUEUE_ENABLED=true
      - TRANSMISSION_SEED_QUEUE_SIZE=5
      # When to stop seeding
      - TRANSMISSION_RATIO_LIMIT=1.1
      - TRANSMISSION_RATIO_LIMIT_ENABLED=true
      - TRANSMISSION_IDLE_SEEDING_LIMIT_ENABLED=true
      # For tools like Sonarr / Radarr to authenticate against
      - TRANSMISSION_RPC_AUTHENTICATION_REQUIRED=true
      - TRANSMISSION_RPC_PASSWORD=REDACTED
      - TRANSMISSION_RPC_PORT=9091
      - TRANSMISSION_RPC_USERNAME=admin
      - TRANSMISSION_WATCH_DIR=/volume1/Media/Downloads/Transmission/watch
      - TRANSMISSION_WATCH_DIR_ENABLED=true
      # Where data like logs / settings are written to
      - TRANSMISSION_HOME=/config
    ports:
      - 8888:8888
      - 9091:9091
    volumes:
      - /volume1/docker/transmissiondata/config:/config
      - /volume1/docker/transmissiondata/resolv.conf:/etc/resolv.conf
      - /volume1/docker/transmissiondata/AirVPN_America_UDP-443.ovpn:/etc/openvpn/custom/default.ovpn
      - /volume1/Media/Downloads/Transmission:/volume1/Media/Downloads/Transmission
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun
    # Synology needs this to function properly!
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0

Logs

It appears that /config/settings.json might be getting created incorrectly:

Up script executed with tun0 1500 1553 10.XXX.XXX.XXX 255.255.255.0 init,
Updating TRANSMISSION_BIND_ADDRESS_IPV4 to the ip of tun0 : 10.XXX.XXX.XXX,
Generating transmission settings.json from env variables,
sed'ing True to true,
Enforcing ownership on transmission config directories,
Applying permissions to transmission config directories,
Setting owner for transmission paths to 1031:65536,
Setting permission for files (644) and directories (755),
Setting permission for watch directory (775) and its files (664),
,
-------------------------------------,
Transmission will run as,
-------------------------------------,
User name:   abc,
User uid:    1031,
User gid:    65536,
-------------------------------------,
,
STARTING TRANSMISSION,
NO PORT UPDATER FOR THIS PROVIDER,
Transmission startup script complete.,
[2020-08-07 06:45:17.393] JSON parse failed in /config/settings.json at pos 2281: INVALID_NUMBER -- remaining text "02,,
    "upload-",
Fri Aug  7 06:45:22 2020 /sbin/ip route add 184.75.XXX.XXX/32 via 172.19.0.1,
Fri Aug  7 06:45:22 2020 /sbin/ip route add 0.0.0.0/1 via 10.XXX.XXX.1,
Fri Aug  7 06:45:22 2020 /sbin/ip route add 128.0.0.0/1 via 10.XXX.XXX.1,
Fri Aug  7 06:45:22 2020 Initialization Sequence Completed,
Fri Aug  7 06:50:03 2020 event_wait : Interrupted system call (code=4),
Fri Aug  7 06:50:03 2020 SIGTERM received, sending exit notification to peer,
Fri Aug  7 06:50:08 2020 /sbin/ip route del 184.75.XXX.XXX/32,
Fri Aug  7 06:50:08 2020 /sbin/ip route del 0.0.0.0/1,
Fri Aug  7 06:50:08 2020 /sbin/ip route del 128.0.0.0/1,
Fri Aug  7 06:50:08 2020 Closing TUN/TAP interface,
Fri Aug  7 06:50:08 2020 /sbin/ip addr del dev tun0 10.XX.XX.XX/24,
Fri Aug  7 06:50:08 2020 /etc/openvpn/tunnelDown.sh tun0 1500 1553 10.XX.XX.XX 255.255.255.0 init,
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec],
Fri Aug  7 06:50:08 2020 SIGTERM[soft,exit-with-notification] received, process exiting,
mknod: /dev/net/tun: File exists,
Using OpenVPN provider: CUSTOM,
No VPN configuration provided. Using default.,
Setting OPENVPN credentials...,
adding route to local network 192.168.0.0/24 via 172.19.0.1 dev eth0,
Fri Aug  7 06:50:12 2020 OpenVPN 2.4.9 x86_64-alpine-linux-musl [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [MH/PKTINFO] [AEAD] built on Apr 20 2020,
Fri Aug  7 06:50:12 2020 library versions: OpenSSL 1.1.1g  21 Apr 2020, LZO 2.10,
Fri Aug  7 06:50:12 2020 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts,
Fri Aug  7 06:50:12 2020 Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication,
Fri Aug  7 06:50:12 2020 Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication,
Fri Aug  7 06:50:12 2020 TCP/UDP: Preserving recently used remote address: [AF_INET]XXX.XXX.XXX.XXX:443,
Fri Aug  7 06:50:12 2020 Socket Buffers: R=[212992->212992] S=[212992->212992],
Fri Aug  7 06:50:12 2020 UDP link local: (not bound),
Fri Aug  7 06:50:12 2020 UDP link remote: [AF_INET]XXX.XXX.XXX.XXX:443,
Fri Aug  7 06:50:12 2020 TLS: Initial packet from [AF_INET]XXX.XXX.XXX.XXX:443, sid=b6ca9b37 29c2a62d,
Fri Aug  7 06:50:13 2020 VERIFY OK: depth=1, C=IT, ST=IT, L=Perugia, O=airvpn.org, CN=airvpn.org CA, emailAddress=info@airvpn.org,
Fri Aug  7 06:50:13 2020 VERIFY KU OK,
Fri Aug  7 06:50:13 2020 Validating certificate extended key usage,
Fri Aug  7 06:50:13 2020 ++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server Authentication,
Fri Aug  7 06:50:13 2020 VERIFY EKU OK,
Fri Aug  7 06:50:13 2020 VERIFY OK: depth=0, C=IT, ST=IT, L=Perugia, O=airvpn.org, CN=Sharatan, emailAddress=info@airvpn.org,
Fri Aug  7 06:50:13 2020 Control Channel: TLSv1.2, cipher TLSv1.2 DHE-RSA-AES256-GCM-SHA384, 4096 bit RSA,
Fri Aug  7 06:50:13 2020 [Sharatan] Peer Connection Initiated with [AF_INET]XXX.XXX.XXX.XXX:443,
Fri Aug  7 06:50:14 2020 SENT CONTROL [Sharatan]: 'PUSH_REQUEST' (status=1),
Fri Aug  7 06:50:14 2020 PUSH: Received control message: 'PUSH_REPLY,comp-lzo no,redirect-gateway  def1 bypass-dhcp,dhcp-option DNS 10.XXX.XX.XXX,route-gateway 10.XXX.XXX.XXX,topology subnet,ping 10,ping-restart 60,ifconfig 10.XXX.XX.XXX 255.255.255.0,peer-id 2,cipher AES-256-GCM',
Fri Aug  7 06:50:14 2020 OPTIONS IMPORT: timers and/or timeouts modified,
Fri Aug  7 06:50:14 2020 OPTIONS IMPORT: compression parms modified,
Fri Aug  7 06:50:14 2020 OPTIONS IMPORT: --ifconfig/up options modified,
Fri Aug  7 06:50:14 2020 OPTIONS IMPORT: route options modified,
Fri Aug  7 06:50:14 2020 OPTIONS IMPORT: route-related options modified,
Fri Aug  7 06:50:14 2020 OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified,
Fri Aug  7 06:50:14 2020 OPTIONS IMPORT: peer-id set,
Fri Aug  7 06:50:14 2020 OPTIONS IMPORT: adjusting link_mtu to 1625,
Fri Aug  7 06:50:14 2020 OPTIONS IMPORT: data channel crypto options modified,
Fri Aug  7 06:50:14 2020 Data Channel: using negotiated cipher 'AES-256-GCM',
Fri Aug  7 06:50:14 2020 Outgoing Data Channel: Cipher 'AES-256-GCM' initialized with 256 bit key,
Fri Aug  7 06:50:14 2020 Incoming Data Channel: Cipher 'AES-256-GCM' initialized with 256 bit key,
Fri Aug  7 06:50:14 2020 ROUTE_GATEWAY 172.19.0.1/255.255.0.0 IFACE=eth0 HWADDR=02:42:ac:13:00:09,
Fri Aug  7 06:50:14 2020 TUN/TAP device tun0 opened,
Fri Aug  7 06:50:14 2020 TUN/TAP TX queue length set to 100,
Fri Aug  7 06:50:14 2020 /sbin/ip link set dev tun0 up mtu 1500,
Fri Aug  7 06:50:14 2020 /sbin/ip addr add dev tun0 10.XXX.XXX.XXX/24 broadcast 10.XXX.XXX.XXX,
Fri Aug  7 06:50:14 2020 /etc/openvpn/tunnelUp.sh tun0 1500 1553 10.XXX.XXX.XXX 255.255.255.0 init,

Host system:

Synology

created time in 2 months

issue comment replicatedhq/kots

Support the use of `--local-path` with `kots install`

We're still having some issues around the current workflow, which we've put in a Makefile like:

kubectl kots install cd4pe/unstable --namespace $(NAMESPACE) --shared-password $(PASSWORD) --port-forward=false --license-file license.yaml --config-values dev/cd4pe-min-config.yaml
kubectl kots download cd4pe --namespace $(NAMESPACE) --dest output
cp cd4pe/* output/cd4pe/upstream/
# use the downloaded license in case it was updated from upstream
POD_NAMESPACE=$(NAMESPACE) kubectl kots pull cd4pe --namespace $(NAMESPACE) --shared-password $(PASSWORD) --local-path output/cd4pe/upstream --rootdir output --exclude-admin-console --license-file output/cd4pe/upstream/userdata/license.yaml --config-values output/cd4pe/upstream/userdata/config.yaml
kubectl kots upload --namespace $(NAMESPACE) --slug cd4pe output/cd4pe

This is a bit clunky... and problematically we have to launch the admin console and then click the install button to actually install our app. We can't seem to automate starting the app with desired config values. FWIW, we don't have a problem when the admin console isn't involved. In those cases, we can automate everything the way we want.

Iristyle

comment created time in 2 months

push event Iristyle/ChocolateyPackages

jtcmedia

commit sha 18207b6e5fe6c8fde2104dee6ee5d6f4b454ceb1

removed Tunnelier pkg

view details

Ethan J. Brown

commit sha 8c9833710577de6db6e8b1db5d9196e19e19d117

Merge pull request #60 from jtcmedia/master

removed Tunnelier pkg

view details

push time in 3 months

PR merged Iristyle/ChocolateyPackages

removed Tunnelier pkg

I have removed the Tunnelier pkg as I have taken over package maintenance. Package now located at https://github.com/jtcmedia/chocolatey-packages

+0 -69

0 comments

2 changed files

jtcmedia

pr closed time in 3 months

pull request comment puppetlabs/puppetserver

Rewrite Docker health check to avoid making assumptions

SSLDIR is actually not intended to be (and shouldn't be) user modified. We should change that to avoid confusion, since the wrong message has clearly been sent here. All of the documented configuration values are at https://github.com/puppetlabs/puppetserver/tree/master/docker/puppetserver#configuration

The intent of the container is to be prescriptive about many things to reduce the possible configuration surface area and complexity. I think that philosophically we want to only surface the bare minimum number of possible options to end users -- at times that means hardcoding values where it makes sense (such as SSLDIR and LOGDIR) b/c of how userdata should be mapped to volumes and how upgrades from open source to enterprise containers should happen.

We've actually talked about generating an error on startup if there are extraneous certs in SSLDIR, as (at least in development) it can be caused by accidentally keeping volumes around while changing the server's certname (which can be problematic). So I'd still like to understand how you ended up with all of those certs in there to see if your workflow is one we should consider.
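A rough sketch of what such a startup guard could look like; the ssldir path, layout, and single-cert expectation are assumptions for illustration, not an actual implementation:

    # Hypothetical guard: fail fast if the ssldir volume contains certs for
    # more than one certname (standard Puppet ssldir layout assumed).
    ssldir=/etc/puppetlabs/puppet/ssl
    cert_count=$(find "$ssldir/certs" -name '*.pem' ! -name 'ca.pem' | wc -l)
    if [ "$cert_count" -gt 1 ]; then
      echo "Found $cert_count certs in $ssldir/certs; was a volume reused after a certname change?" >&2
      exit 1
    fi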

runejuhl

comment created time in 3 months

pull request comment puppetlabs/puppetserver

Rewrite Docker health check to avoid making assumptions

We very intentionally moved away from using puppet config print for performance reasons -- see https://github.com/puppetlabs/puppetserver/commit/72eea601719c18c8e5556349f8875ec6b2293300

Why do you have so many certs there? That doesn't seem right...

runejuhl

comment created time in 3 months

push event puppetlabs/puppet

Josh Cooper

commit sha 35b920d1dccf028fe271ca41766dc39464229224

(maint) Update console color setting for Windows

Commit f40cc71e8 updated the description for the color setting, but that part was lost along the way, likely when the file was reformatted in bf9ac399bd539 and conflicts resolved.

view details

Ethan J. Brown

commit sha b6a667a5d20c3da10c7deee3027e7a6272be718f

Merge pull request #8207 from joshcooper/windows_color_maint

(maint) Update console color setting for Windows

view details

push time in 3 months

PR merged puppetlabs/puppet

(maint) Update console color setting for Windows

Commit f40cc71e8 updated the description for the color setting, but that part was lost along the way, likely when the file was reformatted in bf9ac399bd539 and conflicts resolved.

+1 -2

2 comments

1 changed file

joshcooper

pr closed time in 3 months

Pull request review comment puppetlabs/puppetlabs-cd4pe

(maint) Run configTestVm.sh from a container

+FROM ruby:2.4.1
+RUN apt-get update && \
+    apt-get install -y jq && \
+    apt-get clean && rm -rf /var/lib/apt/lists/*
+
+ENV SSH_KEY id_rsa
+RUN { \
+    echo "#!/bin/bash"; \
+    echo "eval \$(ssh-agent) >/dev/null"; \
+    echo "ssh-add /root/.ssh/\$SSH_KEY"; \
+    echo "exec \"\$@\""; \
+    } > /entrypoint.sh && chmod 755 /entrypoint.sh
+ENTRYPOINT ["/entrypoint.sh"]
+
+RUN mkdir -p /root/.puppetlabs/bolt && \
+    echo "disabled: true" > /root/.puppetlabs/bolt/analytics.yaml
+
+COPY .cdpe-workflow-tests-config.json /root/

Ah, never mind! You had told me to grab the config values I needed from artifactory.delivery.puppetlabs.net/cd4pe-maven-test-runner:latest -- and the details I'm describing are for the file /root/.cdpe-test-credentials.json... not the file from this repo / container.

The file in this container is what I needed to run workflow tests.

Sorry for the confusion!

nwolfe

comment created time in 3 months

Pull request review comment puppetlabs/puppetlabs-cd4pe

(maint) Run configTestVm.sh from a container

+FROM ruby:2.4.1
+RUN apt-get update && \
+    apt-get install -y jq && \
+    apt-get clean && rm -rf /var/lib/apt/lists/*
+
+ENV SSH_KEY id_rsa
+RUN { \
+    echo "#!/bin/bash"; \
+    echo "eval \$(ssh-agent) >/dev/null"; \
+    echo "ssh-add /root/.ssh/\$SSH_KEY"; \
+    echo "exec \"\$@\""; \
+    } > /entrypoint.sh && chmod 755 /entrypoint.sh
+ENTRYPOINT ["/entrypoint.sh"]
+
+RUN mkdir -p /root/.puppetlabs/bolt && \
+    echo "disabled: true" > /root/.puppetlabs/bolt/analytics.yaml
+
+COPY .cdpe-workflow-tests-config.json /root/

Just FYI... this file does not seem to be the same one that the workflow tests are depending on.

I copied the file out of this directory to use with the workflow tests, and it's not laid out the same way as the tests expect. For instance, there should be a storage key per:

https://github.com/puppetlabs/PipelinesInfra/blob/master/workflow-tests/tests/install/setupStorageS3.test.js#L12

However, there isn't one.

> JSON.parse(fs.readFileSync('cdpe-workflow-tests-config.json')).storage
undefined

Instead, the settings live under:

> JSON.parse(fs.readFileSync('cdpe-workflow-tests-config.json')).STORAGE_SETTINGS
{
  S3: {
    ...
  },
  ARTIFACTORY: {
    ...
  }
}
nwolfe

comment created time in 3 months

issue comment facebook/jest

FetchError: request to URL/token failed, reason: self signed certificate in certificate chain

Looks like the same regression as https://github.com/facebook/jest/issues/8449

styrus

comment created time in 3 months
