
groodt/99bottles-jmeter 7

A simple example showing how to use scripting inside JMeter for more complex HTTP request generation.

groodt/dotfiles 3

My collection of dotfiles

groodt/4clojure 1

An interactive problem website for Clojure beginners. All new development should be done on the "develop" branch.

groodt/bootstrap 1

HTML, CSS, and JS toolkit from Twitter

groodt/Databinder-Dispatch 1

Scala library for accessing HTTP services

groodt/europython12 1

Lightning talk review of EuroPython 2012

groodt/amazon-vpc-cni-k8s 0

Networking plugin repository for pod networking in Kubernetes using Elastic Network Interfaces on AWS

issue opened fluxcd/flux

1.20.0 release?

Describe the bug

It has been a while since there was a release; the last one was on 3 April.

There have been a number of bug fixes since then and the 1.20.0 milestone appears to be complete. Is it worth cutting a 1.20.0 release, or even a 1.19.1 release?

Is there a reason there hasn't been a new release?

created time in 7 days

pull request comment NixOS/nixpkgs

qbec: add ldflags

Ping @kalbasit

groodt

comment created time in 9 days

pull request comment NixOS/nixpkgs

qbec: add ldflags

Ping @kalbasit

groodt

comment created time in 11 days

issue comment bazelbuild/rules_python

Cannot import a module scattered in multiple directories

@plule-ansys

Yes, it is a legacy foot-cannon in Python. If you're able, you probably should set the following in your .bazelrc, which is what we do at Canva.

# Prevent creation of empty __init__.py
# See: https://github.com/bazelbuild/bazel/issues/10076, https://github.com/bazelbuild/bazel/issues/7386
build --incompatible_default_to_explicit_init_py
test --incompatible_default_to_explicit_init_py
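
For context, here's a minimal sketch (not from the thread; the file layout, package and module names are hypothetical) of the foot-cannon that flag avoids: an implicit empty __init__.py breaking a package split across directories.

import pathlib
import sys
import tempfile

# Build a package "mypkg" split across two roots, with no __init__.py anywhere,
# so Python treats it as a PEP 420 namespace package.
root = pathlib.Path(tempfile.mkdtemp())
for part, mod in [("rootA", "mod_a"), ("rootB", "mod_b")]:
    pkg = root / part / "mypkg"
    pkg.mkdir(parents=True)
    (pkg / (mod + ".py")).write_text("name = " + repr(mod) + "\n")

sys.path[:0] = [str(root / "rootA"), str(root / "rootB")]

from mypkg import mod_a, mod_b  # both halves merge into one namespace package
print(mod_a.name, mod_b.name)   # -> mod_a mod_b

# If a build tool drops an empty rootA/mypkg/__init__.py (Bazel's legacy default),
# "mypkg" becomes a regular package rooted only at rootA, and importing mod_b
# fails with ImportError in a fresh interpreter.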
plule-ansys

comment created time in 14 days

push event groodt/nixpkgs

Greg Roodt

commit sha 7304a4ae4ba63220d2974530e37e339c0f00bdbe

qbec: add ldflags

view details

push time in 18 days

push event groodt/nixpkgs

R. RyanTM

commit sha 47fb499bc1747e921c128c98bc72ec74109b02c3

libdap: 3.20.5 -> 3.20.6

view details

Daniel Schaefer

commit sha f56b70378ebcbf551e52598e7c77ad8b8e744161

rpm: 4.14.2.1 -> 4.15.1

It's only compatible with Python3 now.

view details

Stig Palmquist

commit sha 4e9fb76420d764ffc661afa142bc291e14128789

perlPackages.NetIPLite: init at 0.03

view details

misuzu

commit sha 83e6b6d906e85d61d3f6f24c431830d29efb062b

python3Packages.python-engineio: build on macOS

view details

misuzu

commit sha c3b0a4c8bc7e2b044b21d5e65d352b476a515f3f

python3Packages.pygdbmi: disable tests on macOS

view details

R. RyanTM

commit sha 1356bc3d365bf6d3f64d9c16a0d2e5850670077c

blockbook: 0.3.3 -> 0.3.4

view details

R. RyanTM

commit sha 720128ce7e8e39597895109b7687b511510048ca

consul: 1.7.3 -> 1.7.4

view details

R. RyanTM

commit sha 89561e0be585172038501d076ace0b592d6feff3

delve: 1.2.0 -> 1.4.1

view details

Florian Klink

commit sha 89c3e73dad0970b26183e415555fb0379ba33e7a

hardware/u2f: remove module

udev gained native support to handle FIDO security tokens, so we don't need a module which only added the now obsolete udev rules.

Fixes: https://github.com/NixOS/nixpkgs/issues/76482

view details

R. RyanTM

commit sha d14559707ae96cf891f584c289ec26346459cb6e

dnscontrol: 3.0.0 -> 3.2.0

view details

R. RyanTM

commit sha 814893b27410a9a1a78698b417dc6ea801933747

ergo: 3.2.5 -> 3.2.6

view details

R. RyanTM

commit sha 3b6d8c2bb59c376b35f4ced1b0eea6cce36f9eaf

gauge: 1.0.4 -> 1.1.1

view details

R. RyanTM

commit sha e08c199856cbc93eb3a03af63dda8ddf9d50d133

librealsense: 2.34.0 -> 2.35.2

view details

Keshav Kini

commit sha c2f945b43a18252d0d107a150d190161afe51063

amarok: 2.9.0-20190824 -> amarok-unstable 2020-06-12

This commit bumps amarok to the most recent commit in master and adds liblastfm as a dependency to make use of the last.fm integration that has been re-enabled in upstream master (it was disabled for a while due to breakage on Qt5). I also updated the package name and version to match [the stipulations in the Nixpkgs manual](https://nixos.org/nixpkgs/manual/#sec-package-naming).

view details

R. RyanTM

commit sha 33aa224625a9d970d4cf4194cfa46709c0908620

minishift: 1.34.0 -> 1.34.2

view details

R. RyanTM

commit sha 4af06a5177f1a7ee9fbb72268de036a38f73022f

nomad: 0.11.1 -> 0.11.3

view details

R. RyanTM

commit sha 57a49ce3a464c43df97bbf826559efc073aab24c

picard-tools: 2.22.9 -> 2.23.0

view details

Keshav Kini

commit sha 557f56d46568d309f31a735fa16075f3de457f2b

liblastfm: 1.1.0 -> liblastfm-unstable 2019-08-23

view details

R. RyanTM

commit sha b663749a25f2a645877d8ab59166eea6f8c6f3c4

qbec: 0.11.2 -> 0.12.0

view details

R. RyanTM

commit sha ded0d3bd0e3920916e46fcddb017f570d053f0f8

rmlint: 2.10.0 -> 2.10.1

view details

push time in 18 days

create branch groodt/nixpkgs

branch: groodt-qbec-ldflags

created branch time in 18 days

PR opened NixOS/nixpkgs

qbec: add ldflags


Motivation for this change
Things done


  • [ ] Tested using sandboxing (nix.useSandbox on NixOS, or option sandbox in nix.conf on non-NixOS linux)
  • Built on platform(s)
    • [ ] NixOS
    • [ ] macOS
    • [ ] other Linux distributions
  • [ ] Tested via one or more NixOS test(s) if existing and applicable for the change (look inside nixos/tests)
  • [ ] Tested compilation of all pkgs that depend on this change using nix-shell -p nixpkgs-review --run "nixpkgs-review wip"
  • [ ] Tested execution of all binary files (usually in ./result/bin/)
  • [ ] Determined the impact on package closure size (by running nix path-info -S before and after)
  • [ ] Ensured that relevant documentation is up to date
  • [ ] Fits CONTRIBUTING.md.
+10 -2

0 comments

1 changed file

pr created time in 18 days

pull request comment NixOS/nixpkgs

buildkite-cli: init at 1.1.0

Hi @kalbasit

Is there anything waiting on me to have this merged?

groodt

comment created time in 19 days

create branch groodt/commuter

branch: groodt-nextjs-api

created branch time in 19 days

create branch groodt/nixpkgs

branch: groodt-argo-2.8.1

created branch time in 22 days

PR opened NixOS/nixpkgs

argo: 2.6.1 -> 2.8.1


Motivation for this change
Things done


  • [ ] Tested using sandboxing (nix.useSandbox on NixOS, or option sandbox in nix.conf on non-NixOS linux)
  • Built on platform(s)
    • [ ] NixOS
    • [ ] macOS
    • [ ] other Linux distributions
  • [ ] Tested via one or more NixOS test(s) if existing and applicable for the change (look inside nixos/tests)
  • [ ] Tested compilation of all pkgs that depend on this change using nix-shell -p nixpkgs-review --run "nixpkgs-review wip"
  • [ ] Tested execution of all binary files (usually in ./result/bin/)
  • [ ] Determined the impact on package closure size (by running nix path-info -S before and after)
  • [ ] Ensured that relevant documentation is up to date
  • [ ] Fits CONTRIBUTING.md.
+13 -4

0 comments

1 changed file

pr created time in 22 days

Pull request review comment nteract/commuter

Refactor folder structure

 const next = require("next");
 const dev = process.env.NODE_ENV !== "production" && !process.env.NOW;

 function createNextApp() {
-  const app = next({ dev, dir: __dirname });
+  const app = next({ dev, dir: `${__dirname}/../` });

Done.

groodt

comment created time in 24 days

push event groodt/commuter

Greg Roodt

commit sha 38c98274d6c04b5632a365996a9a5f1cc7ff8136

path.join

view details

push time in 24 days

Pull request review comment nteract/commuter

Refactor folder structure

 const next = require("next");
 const dev = process.env.NODE_ENV !== "production" && !process.env.NOW;

 function createNextApp() {
-  const app = next({ dev, dir: __dirname });
+  const app = next({ dev, dir: `${__dirname}/../` });

It's a good point. I'm not sure. I'll look into changing it to path.join.

groodt

comment created time in 24 days

push event groodt/jwtauthenticator_v2

Greg Roodt

commit sha 6e1488cf7f543a01d9b3ecd20f6204af362c7af9

Update jwtauthenticator.py

view details

push time in 25 days

push event groodt/jwtauthenticator_v2

Greg Roodt

commit sha 2a8bfe3d75a12b49f029242e3a4069de6e62674f

Update jwtauthenticator.py

view details

push time in 25 days

fork groodt/jwtauthenticator_v2

A JWT Token Authenticator for JupyterHub

fork in 25 days

pull request comment nteract/commuter

Refactor folder structure

Ok, thanks @captainsafia. Plan SGTM!

As far as item 1 ("Finish up and merge refactor of folder structure", i.e. this PR) goes, it is done from my perspective. I've made the changes that I'd like to make (at least initially) and the functionality is all still working in my development process. I'd appreciate it if somebody else gave things a spin as well. Maybe your process is different to mine.

Are there any other refactors people would like to see in this PR in terms of structure changes?

Otherwise, I believe this can be merged and migration to Next.js API Routes can be looked at.

groodt

comment created time in 25 days

pull request comment PythonCharmers/python-future

Publish sdist and bdist wheel

Not based on micro, no, but minor version certainly.

Ok, Wheels already support this.

It's possible future changes to Python's APIs at the micro-release level may require us to do so, however, and moving off of our current package format removes that capability.

Micro version changes in Python are for bug fixes only https://devguide.python.org/devcycle/#maintenance-branches

We do some runtime checks at the micro level, but I worry about compile-time optimizing these version checks, as the file would be unusable on versions other than the install version. Do wheels provide any sort of safeguard around that?

Wheels do not contain any .pyc files. The .py files are compiled to .pyc when the Wheel is installed, so there would be no issue.

A Wheel is installed into a particular versioned Python environment (or virtual env). Any hypothetical micro version runtime checks in future can still happen at import time and the result cached, which should address any optimization concerns.
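
As a minimal sketch of that idea (illustrative only; not python-future's actual code, and the version cutoff is made up):

import sys

# Decide once, at import time, based on the running interpreter. The .pyc
# compiled at install time stays valid because nothing version-specific is
# baked in at build time; callers only pay a cheap boolean test afterwards.
_OLD_MICRO = sys.version_info[:3] < (3, 7, 4)  # hypothetical cutoff

def do_work():
    if _OLD_MICRO:
        return "workaround path"
    return "normal path"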

groodt

comment created time in a month

pull request comment PythonCharmers/python-future

Publish sdist and bdist wheel

I believe the compatibility for Python built distributions is specified in PEP 425

It supports specifying compatibility down to the minor version e.g. py33-none-any
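
For illustration, a short sketch using the packaging library (an assumption; it isn't part of this PR) shows that such a tag pins only the minor version:

from packaging import tags

# PEP 425 tags are (interpreter, abi, platform) triples.
wheel_tag = tags.Tag("py3", "none", "any")
print(wheel_tag)  # py3-none-any

# sys_tags() yields every tag the running interpreter accepts; a pure-Python
# py3-none-any wheel is installable on any CPython 3.x, regardless of the
# micro version.
print(wheel_tag in set(tags.sys_tags()))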

Does future truly install differently based on the .micro version?

groodt

comment created time in a month

pull request comment NixOS/nixpkgs

buildkite-cli: init at 1.1.0

Ping @kalbasit

groodt

comment created time in a month

issue comment PythonCharmers/python-future

Can a wheel be built and uploaded to pypi along with a tarball

@johnthagen Yes. There is also an open PR with a simple change that would allow wheels to be published. https://github.com/PythonCharmers/python-future/pull/536

The project is obviously not very active anymore, but there were commits in Feb 2020.

@jmadler Is there any interest in publishing a Wheel?

sriram-mv

comment created time in a month

pull request comment nteract/commuter

Refactor folder structure

I wasn't aware of that PR. It would be great if the project could get a fresh sync from Netflix and migrate to TypeScript. I agree, it makes far more sense to use that as a base before considering future work.

What is required to move that branch forward? Let me know if there is anywhere that I can assist @MSeal @rgbkrk @captainsafia

groodt

comment created time in a month

PR opened nteract/commuter

Refactor folder structure

One of the roadmap items that I'm interested in is a configurable basePath.

There is (experimental, but moving towards supported) support for this in next.js. So I would propose to migrate from expressjs (backend) + nextjs (frontend) to a full nextjs application (frontend + backend).

In preparation for this, I've restructured the folders into something that I find more pleasing to work with in JS projects, and I think it will allow for easier refactoring from express to nextjs.

I'd love your thoughts @rgbkrk @captainsafia

+24 -24

0 comments

46 changed files

pr created time in a month

create branch groodt/commuter

branch: groodt-nextjs-folderstructure

created branch time in a month

issue comment fluxcd/flux

Garbage collection of CronJob resources

@mhenniges The fix was merged here: https://github.com/fluxcd/flux/pull/3008

It looks like a 1.20.0 release will be happening at some stage: https://github.com/fluxcd/flux/milestone/27

groodt

comment created time in a month

create branch groodt/nixpkgs

branch: groodt-init-buildkite-cli

created branch time in a month

PR opened NixOS/nixpkgs

buildkite-cli: init at 1.1.0


Motivation for this change
Things done


  • [ ] Tested using sandboxing (nix.useSandbox on NixOS, or option sandbox in nix.conf on non-NixOS linux)
  • Built on platform(s)
    • [ ] NixOS
    • [ ] macOS
    • [ ] other Linux distributions
  • [ ] Tested via one or more NixOS test(s) if existing and applicable for the change (look inside nixos/tests)
  • [ ] Tested compilation of all pkgs that depend on this change using nix-shell -p nixpkgs-review --run "nixpkgs-review wip"
  • [ ] Tested execution of all binary files (usually in ./result/bin/)
  • [ ] Determined the impact on package closure size (by running nix path-info -S before and after)
  • [ ] Ensured that relevant documentation is up to date
  • [ ] Fits CONTRIBUTING.md.
+28 -0

0 comments

2 changed files

pr created time in a month

issue comment snowflakedb/snowflake-connector-python

SNOW-157438: Feature request: print `sso_url` when unable to open browser when using federated auth

@ValentinMoullet For my scenario, opening the printed sso_url takes me to Okta, which then redirects back to a port on localhost via SAML RelayState, which obviously fails for us. We then have to manually change the host from localhost and it works.
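
For reference, the flow under discussion is triggered by the external browser authenticator; a sketch (account and user values are placeholders):

import snowflake.connector

# Opens a browser for the IdP (Okta) login. The feature request is to print
# sso_url when no browser can be opened, e.g. on a remote machine.
conn = snowflake.connector.connect(
    account="my_account",     # placeholder
    user="user@example.com",  # placeholder
    authenticator="externalbrowser",
)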

Are you seeing similar? Or is this a different, but related issue?

ValentinMoullet

comment created time in a month

issue comment nteract/commuter

Allow deploying commuter on a basepath

Thanks @rgbkrk

I'll have to look into it a bit more. Migrating commuter from express to next might take a bit of work.

rgbkrk

comment created time in a month

pull request comment nteract/commuter

Add Dockerfile

@captainsafia I've added some notes to the README.md and Dockerfile for development purposes (if that's useful).

Do I need to do anything to merge this, or is this not going to be merged?

groodt

comment created time in a month

push event groodt/commuter

Greg Roodt

commit sha f0a01c2fe39c7d1931d88b163f4d4185f4824fe9

Add README

view details

push time in a month

push event groodt/commuter

Greg Roodt

commit sha ba5acbb949ac82bcb0d2b395657259334146fe64

Add README

view details

push time in a month

pull request comment fluent/fluent-bit-kubernetes-logging

Add example configuration for Loggly

Ping @james-callahan

groodt

comment created time in a month

pull request comment bitnami-labs/kube-libsonnet

Use rbac.authorization.k8s.io/v1

Ready for review @jjo

I started my PR off an old master, so I had to merge master back into it. The PR can be squashed upon merge into master.

groodt

comment created time in a month

push event groodt/kube-libsonnet

Greg Roodt

commit sha 14447a53d62cc21b8d75a8627910106750d9507d

Fix SealedSecret (#36)

view details

JuanJo Ciarlante

commit sha 53b0670a0d7efa2f81e061e318d2d730b6d67e00

[jjo] merge kube-libsonnet bits from bitnami/kube-prod-runtime, add unittesting (#37)

view details

JuanJo Ciarlante

commit sha cda37fc66a62cd1e7bb59e2fc78f962e6e2fc209

[jjo] improve assertion msgs, add testing for FAIL (#38)

* kube.libsonnet assertions:
  - SealedSecret: add msg (else it's hard to peek on what failed)
  - PodSpec: also need to assert for `containers_` (map) field as only asserting for `containers` (array) field is too late for that local (that happens before the array manifestation)
  - PodDisruptionBudget: fix _xor_ check for minAvailable/maxUnavailable
* Makefile:
  - added (expected to) "fail" targets, reworked it a bit for clarity

view details

ademariag

commit sha 713c661f8e8ff9b31032f2909d50be524c811ce7

feat: Add gke.ManagedCertificates and gke.BackendConfig objects (#39)

Adds missing objects `ManagedCertificate` and `BackendConfig`, scoped to `gke::` namespace

* `ManagedCertificate` usage:

```
local kube = import "kube-platforms.libsonnet";

my_cert: kube.gke.ManagedCertificate("my-cert") {
  spec+: {
    domains: ["foo.example.com"],
  },
}
```

### BackendConfig

```
local kube = import "kube-platforms.libsonnet";

my_cert: kube.gke.BackendConfig("config") {
  spec+: {
    cdn: { enabled: true },
  },
}
```

view details

JuanJo Ciarlante

commit sha 7d44593f7e62627dd3afd0d05c94b4dfa322ffa0

[jjo] add v1.18, fix integration tests jsonnet pattern (#40)

view details

JuanJo Ciarlante

commit sha 11a1cea035012c3fed3b176c6cfc09f3c77d0d52

[jjo] fix: let env_ `null` values stay as so (vs string-ifying it) (#41)

view details

JuanJo Ciarlante

commit sha e8d81a6b051cf2c453412c0568db9c1810d11764

[jjo] fix: remote example/*/lib symlinks, bothers some CI/CDs (#43)

view details

push time in a month

push event groodt/kube-libsonnet

Greg Roodt

commit sha 14447a53d62cc21b8d75a8627910106750d9507d

Fix SealedSecret (#36)

view details

JuanJo Ciarlante

commit sha 53b0670a0d7efa2f81e061e318d2d730b6d67e00

[jjo] merge kube-libsonnet bits from bitnami/kube-prod-runtime, add unittesting (#37)

view details

JuanJo Ciarlante

commit sha cda37fc66a62cd1e7bb59e2fc78f962e6e2fc209

[jjo] improve assertion msgs, add testing for FAIL (#38)

* kube.libsonnet assertions:
  - SealedSecret: add msg (else it's hard to peek on what failed)
  - PodSpec: also need to assert for `containers_` (map) field as only asserting for `containers` (array) field is too late for that local (that happens before the array manifestation)
  - PodDisruptionBudget: fix _xor_ check for minAvailable/maxUnavailable
* Makefile:
  - added (expected to) "fail" targets, reworked it a bit for clarity

view details

ademariag

commit sha 713c661f8e8ff9b31032f2909d50be524c811ce7

feat: Add gke.ManagedCertificates and gke.BackendConfig objects (#39)

Adds missing objects `ManagedCertificate` and `BackendConfig`, scoped to `gke::` namespace

* `ManagedCertificate` usage:

```
local kube = import "kube-platforms.libsonnet";

my_cert: kube.gke.ManagedCertificate("my-cert") {
  spec+: {
    domains: ["foo.example.com"],
  },
}
```

### BackendConfig

```
local kube = import "kube-platforms.libsonnet";

my_cert: kube.gke.BackendConfig("config") {
  spec+: {
    cdn: { enabled: true },
  },
}
```

view details

JuanJo Ciarlante

commit sha 7d44593f7e62627dd3afd0d05c94b4dfa322ffa0

[jjo] add v1.18, fix integration tests jsonnet pattern (#40)

view details

JuanJo Ciarlante

commit sha 11a1cea035012c3fed3b176c6cfc09f3c77d0d52

[jjo] fix: let env_ `null` values stay as so (vs string-ifying it) (#41)

view details

JuanJo Ciarlante

commit sha e8d81a6b051cf2c453412c0568db9c1810d11764

[jjo] fix: remote example/*/lib symlinks, bothers some CI/CDs (#43)

view details

Greg Roodt

commit sha a3e369d6713cec76054aca34efb7600db8bcbb14

Merge remote-tracking branch 'upstream/master' into groodt-authz-api-v1

view details

push time in a month

push event groodt/kube-libsonnet

Greg Roodt

commit sha 38ff0ccef6528c168eb39e2f948f3455644fc51a

Fix SealedSecret (#36)

view details

JuanJo Ciarlante

commit sha 8525d1435b9cf7801f0e25adb0f0733cc31e09a5

[jjo] merge kube-libsonnet bits from bitnami/kube-prod-runtime, add unittesting (#37)

view details

JuanJo Ciarlante

commit sha 4188ec7d88e8fff05c1321b58484968087a52d76

[jjo] improve assertion msgs, add testing for FAIL (#38)

* kube.libsonnet assertions:
  - SealedSecret: add msg (else it's hard to peek on what failed)
  - PodSpec: also need to assert for `containers_` (map) field as only asserting for `containers` (array) field is too late for that local (that happens before the array manifestation)
  - PodDisruptionBudget: fix _xor_ check for minAvailable/maxUnavailable
* Makefile:
  - added (expected to) "fail" targets, reworked it a bit for clarity

view details

ademariag

commit sha d5e5e8a64019ee5f961893fcb0c0de3d87406e47

feat: Add gke.ManagedCertificates and gke.BackendConfig objects (#39)

Adds missing objects `ManagedCertificate` and `BackendConfig`, scoped to `gke::` namespace

* `ManagedCertificate` usage:

```
local kube = import "kube-platforms.libsonnet";

my_cert: kube.gke.ManagedCertificate("my-cert") {
  spec+: {
    domains: ["foo.example.com"],
  },
}
```

### BackendConfig

```
local kube = import "kube-platforms.libsonnet";

my_cert: kube.gke.BackendConfig("config") {
  spec+: {
    cdn: { enabled: true },
  },
}
```

view details

JuanJo Ciarlante

commit sha 025ad990258b8f5179589585c029f7c079e10bc6

[jjo] add v1.18, fix integration tests jsonnet pattern (#40)

view details

JuanJo Ciarlante

commit sha 18be246f8100b34472fb61640ab5a0e4813e3583

[jjo] fix: let env_ `null` values stay as so (vs string-ifying it) (#41)

view details

JuanJo Ciarlante

commit sha 5970e6282221f3e120a38ec1e88c716c663cd684

[jjo] fix: remote example/*/lib symlinks, bothers some CI/CDs (#43)

view details

Greg Roodt

commit sha 10622b77924ce2209ce253a3a8caf0b40c1800fe

Use rbac.authorization.k8s.io/v1

view details

Greg Roodt

commit sha feda09a1bbb29f2b8f908b7eec143452c859ac25

Regenerate golden tests

view details

Greg Roodt

commit sha b2a4be36864e4c5224daee04cdc7414c4afdacab

Regenerate golden tests

view details

Greg Roodt

commit sha e39c46da3123e6e550e3752137fcb3060400ff81

Regenerate golden tests

view details

Greg Roodt

commit sha f978107ddd508160190a9efbc551140651764afd

Regenerate golden tests

view details

push time in a month

push event groodt/kube-libsonnet

Greg Roodt

commit sha b4b37dbb6b42e60be73ec88d079d1f39c60ff4fb

Regenerate golden tests

view details

push time in a month

push event groodt/kube-libsonnet

Greg Roodt

commit sha 13779a075f9b7396071c2a1738af764c1d5146d6

Regenerate golden tests

view details

push time in a month

push event groodt/kube-libsonnet

Greg Roodt

commit sha 6f8de31940a282d47dbf0e339583bb5c2401fe5f

Regenerate golden tests

view details

push time in a month

push event groodt/kube-libsonnet

Greg Roodt

commit sha 14447a53d62cc21b8d75a8627910106750d9507d

Fix SealedSecret (#36)

view details

JuanJo Ciarlante

commit sha 53b0670a0d7efa2f81e061e318d2d730b6d67e00

[jjo] merge kube-libsonnet bits from bitnami/kube-prod-runtime, add unittesting (#37)

view details

JuanJo Ciarlante

commit sha cda37fc66a62cd1e7bb59e2fc78f962e6e2fc209

[jjo] improve assertion msgs, add testing for FAIL (#38)

* kube.libsonnet assertions:
  - SealedSecret: add msg (else it's hard to peek on what failed)
  - PodSpec: also need to assert for `containers_` (map) field as only asserting for `containers` (array) field is too late for that local (that happens before the array manifestation)
  - PodDisruptionBudget: fix _xor_ check for minAvailable/maxUnavailable
* Makefile:
  - added (expected to) "fail" targets, reworked it a bit for clarity

view details

ademariag

commit sha 713c661f8e8ff9b31032f2909d50be524c811ce7

feat: Add gke.ManagedCertificates and gke.BackendConfig objects (#39)

Adds missing objects `ManagedCertificate` and `BackendConfig`, scoped to `gke::` namespace

* `ManagedCertificate` usage:

```
local kube = import "kube-platforms.libsonnet";

my_cert: kube.gke.ManagedCertificate("my-cert") {
  spec+: {
    domains: ["foo.example.com"],
  },
}
```

### BackendConfig

```
local kube = import "kube-platforms.libsonnet";

my_cert: kube.gke.BackendConfig("config") {
  spec+: {
    cdn: { enabled: true },
  },
}
```

view details

JuanJo Ciarlante

commit sha 7d44593f7e62627dd3afd0d05c94b4dfa322ffa0

[jjo] add v1.18, fix integration tests jsonnet pattern (#40)

view details

JuanJo Ciarlante

commit sha 11a1cea035012c3fed3b176c6cfc09f3c77d0d52

[jjo] fix: let env_ `null` values stay as so (vs string-ifying it) (#41)

view details

JuanJo Ciarlante

commit sha e8d81a6b051cf2c453412c0568db9c1810d11764

[jjo] fix: remote example/*/lib symlinks, bothers some CI/CDs (#43)

view details

Greg Roodt

commit sha 5c7eddc40d93da134c33388abedade03955cc3c5

Merge remote-tracking branch 'upstream/master' into groodt-authz-api-v1

view details

push time in a month

push event groodt/kube-libsonnet

Greg Roodt

commit sha 105a745b3937077cfcd25d17528aae6ba7f86e4b

Regenerate golden tests

view details

push time in a month

pull request comment nteract/commuter

Add Dockerfile

Is there some reason for installing tini explicitly? Docker has had it for a few years now (PR 1, PR 2), so it's only a matter of --init.

For running in Kubernetes mainly.

groodt

comment created time in a month

create branch groodt/kube-libsonnet

branch: groodt-authz-api-v1

created branch time in a month

PR opened nteract/commuter

Remove unnecessary PORT

Fixes https://github.com/nteract/commuter/issues/294

+1 -1

0 comments

1 changed file

pr created time in a month

create branch groodt/commuter

branch: groodt-use-commuter_port

created branch time in a month

pull request comment nteract/commuter

Add Dockerfile

@mrtns This PR now works when only setting COMMUTER_PORT. PORT should remain unset.

docker pull groodt/commuter:latest
docker run \
  --publish 4200:4200 \
  --mount type=bind,source=/home/username/work/commuter/examples,target=/examples \
  --env COMMUTER_LOCAL_STORAGE_BASEDIRECTORY=/examples \
  --env COMMUTER_PORT=4200 \
  groodt/commuter:latest

Or build the image yourself:

docker build --tag commuter:latest .

docker run \
  --publish 4200:4200 \
  --mount type=bind,source=/home/username/work/commuter/examples,target=/examples \
  --env COMMUTER_LOCAL_STORAGE_BASEDIRECTORY=/examples \
  --env COMMUTER_PORT=4200 \
  commuter:latest
groodt

comment created time in a month

push event groodt/commuter

Greg Roodt

commit sha a74cc8cd0620a3a522d1249445d844b4b3b9c398

Remove unnecessary ENV

view details

push time in a month

issue opened nteract/commuter

Use COMMUTER_PORT variable everywhere

At the moment, the frontend and the server use inconsistent variables to determine the server PORT.

See: https://github.com/nteract/commuter/blob/a69c08bc4c155ab9e6c519dac9de933878badc72/pages/view.js#L45

vs

https://github.com/nteract/commuter/blob/146a5eb72eec1dc8e815a034f5604374cbd3e53c/backend/config.js#L139

I think we should only use COMMUTER_PORT or default to 4000 everywhere.

I'll raise a PR to fix this shortly.

created time in a month

pull request comment nteract/commuter

Add Dockerfile

@mrtns My pleasure. I'm actually running this in production at the moment and it seems to work really well!

I've found the issue that requires 2 ports to be set:

This line: https://github.com/nteract/commuter/blob/146a5eb72eec1dc8e815a034f5604374cbd3e53c/backend/config.js#L139

combined with this line: https://github.com/nteract/commuter/pull/291/files#diff-3254677a7917c6c01f55212f86c57fbfR77

will cause express to be running on port 4000.

I'll update this PR to not set the PORT.

I'll also raise a PR to simplify the PORT variable logic to a single variable.

groodt

comment created time in a month

Pull request review comment nteract/commuter

Add Dockerfile

+##################################
+# Build
+##################################
+FROM node:14 as build
+
+RUN mkdir -p /opt/build;
+
+WORKDIR /opt/build
+
+# Would be a bit simpler if the code was inside a top-level src folder

Done: https://github.com/nteract/commuter/issues/293

groodt

comment created time in a month

issue opened nteract/commuter

Consider moving all src into a src folder

The top-level of the repo has many different src files. This makes packaging up the project, particularly for Docker, slightly more work.

This isn't an urgent issue, but it might be worth considering.

See here for more information: https://github.com/nteract/commuter/pull/291

created time in a month

Pull request review comment nteract/commuter

Add Dockerfile

+##################################
+# Build
+##################################
+FROM node:14 as build
+
+RUN mkdir -p /opt/build;
+
+WORKDIR /opt/build
+
+# Would be a bit simpler if the code was inside a top-level src folder

Sure thing. Will do.

groodt

comment created time in a month

Pull request review comment nteract/commuter

Add Dockerfile

+##################################
+# Build
+##################################
+FROM node:14 as build

It is for multi-stage builds. See: https://docs.docker.com/develop/develop-images/multistage-build/

These are useful for controlling the number of layers in the final image and also for controlling the contents of the image itself. Most of the time, the tools necessary at build time are not necessary at runtime. Using a multi-stage build allows us to leave the compiler toolchains behind and not include them in the final image.

groodt

comment created time in a month

push event groodt/commuter

Greg Roodt

commit sha 4d8e1b3c4fb48f4acef35f0e2cbf67a4d17fe5bf

Add Dockerfile

view details

push time in 2 months

PR opened nteract/commuter

Add Dockerfile

Adds a Dockerfile for running commuter in a production environment

There is an old PR still open for adding simple Docker support, but that appears more suitable for local development: https://github.com/nteract/commuter/issues/188

This PR is intended to build a Docker container suitable for publishing to Dockerhub (or other registry) and be run in a production environment such as Kubernetes.

+81 -0

0 comments

2 changed files

pr created time in 2 months

create branch groodt/commuter

branch: groodt-production-docker

created branch time in 2 months

issue comment nteract/commuter

Allow deploying commuter on a basepath

@rgbkrk I'm just looking at this now. I'm wondering how you were imagining it being implemented?

It appears there is an Express backend app, hosting a Next frontend app.

I've tried the obvious:

    const baseURI = process.env.COMMUTER_SOMEVAR || '/';
    app.use(baseURI, router);

and it isn't working. That's typically all that would be required in Express.

Where are the complications in implementing this and where should I start looking?

There is experimental support now in Next for a configurable basePath, which allows the frontend to be dynamically aware of its mount point.

rgbkrk

comment created time in 2 months

PR opened nteract/commuter

next: 9.3.2 -> 9.4.1
+3070 -3825

0 comments

11 changed files

pr created time in 2 months

create branch groodt/commuter

branch: groodt-next-9.4.1

created branch time in 2 months

fork groodt/commuter

🚎 Notebook sharing hub

fork in 2 months

issue opened mogthesprog/jwtauthenticator

This doesn't work in JupyterHub

If I do the following:

Dockerfile:

FROM jupyterhub/k8s-hub:0.9.0

# The published version on pypi is ancient
# https://github.com/mogthesprog/jwtauthenticator/issues/27
RUN export VERSION=bc08e8c389c9ce41a920376d8c2b15af66d2be15 && \
  # --location follows GitHub's redirect to the archive download
  curl --location https://github.com/mogthesprog/jwtauthenticator/archive/$VERSION.tar.gz --output archive.tar.gz && \
  tar -xzvf archive.tar.gz && \
  cd jwtauthenticator-$VERSION && \
  pip install -e .

jupyterhub_config.py

c.JupyterHub.authenticator_class = 'jwtauthenticator.jwtauthenticator.JSONWebTokenAuthenticator'

I receive this error:

The 'authenticator_class' trait of <jupyterhub.app.JupyterHub object at 0x7f6760385748> instance must be a type, but 'jwtauthenticator.jwtauthenticator.JSONWebTokenAuthenticator' could not be imported

However, if I use this package: https://pypi.org/project/jupyterhub-jwtauthenticator-v2/ it does work in JupyterHub. This package is a forked and modified version of your package; I believe the fixes are related to the imports in __init__.py
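
For what it's worth, a guess at what the fork's fix might look like (a sketch only, assuming it simply re-exports the class in jwtauthenticator/__init__.py so the dotted path resolves):

# jwtauthenticator/__init__.py (hypothetical)
from .jwtauthenticator import JSONWebTokenAuthenticator

__all__ = ["JSONWebTokenAuthenticator"]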

created time in 2 months

issue comment mogthesprog/jwtauthenticator

Please build a new PyPi release

Yes, I'd also appreciate a new version. I've just realised the version on pypi is ancient and doesn't support configuring header_is_authorization.

athornton

comment created time in 2 months

pull request comment tensorflow/ecosystem

Support scala 2.12.10 and spark 2.4.4

maven central gets updated whenever a new TensorFlow version is released

Hmmm... is there a technical reason for this? It doesn't seem to be the case though, since there have been numerous TensorFlow releases since Aug 2018, but Maven central is showing that the last time this connector was released was in Aug 2018.

https://mvnrepository.com/artifact/org.tensorflow/spark-connector_2.11/1.10.0

vikatskhay

comment created time in 2 months

issue comment terraform-providers/terraform-provider-cloudflare

Failure creating cloudflare_access_policy

@wolfmd any chance you know what the syntax would be for multiple groups?

  include {
    okta {
      name = "A"
      identity_provider_id = "idp"
    }
  }
  include {
    okta {
      name = "B"
      identity_provider_id = "idp"
    }
  }
  include {
    okta {
      name = "C"
      identity_provider_id = "idp"
    }
  }
  include {
    okta {
      name = "D"
      identity_provider_id = "idp"
    }
  }

This seems to only add D in Cloudflare.

groodt

comment created time in 2 months

issue comment terraform-providers/terraform-provider-cloudflare

Failure creating cloudflare_access_policy

Thanks, I've tried this.

Almost works. I get the same error as you are getting in #682 when attempting to log in.

groodt

comment created time in 2 months

issue comment bazelbuild/rules_python

Support for creating pex binaries

FYI, subpar isn't being actively maintained except to the extent it needs updates to keep this repository working.

Yes, I think the native support in Bazel for Python zips is probably a sufficient replacement for subpar now in most cases.

sitaktif

comment created time in 2 months

issue comment terraform-providers/terraform-provider-cloudflare

Failure creating cloudflare_access_policy

Is the "Group" string you're using here the name of the group or the id for the group? For instance, when you hit the api endpoint https://api.cloudflare.com/client/v4/accounts/<account>/access/groups is this the `id` or `uid` field or the `name` field of the group object?

I've been using the literal String name of an Okta Group here, similar to how the examples here use email: https://www.terraform.io/docs/providers/cloudflare/r/access_policy.html

Do I first need to create a cloudflare_access_group resource (that includes my Okta Group) and then reference this cloudflare_access_group instead?

groodt

comment created time in 2 months

issue comment nteract/nteract

New pypi release of nteract_on_jupyter

You'll also be able to provide a Jupyter endpoint and token and have things work. The heavy lifting in both scenarios is done by the rx-jupyter nteract package, which supports connecting to local and remote Jupyter servers.

Ok, that's a relief. If I'm able to connect via localhost to an existing jupyter environment, or build a custom JupyterLab image that bundles nteract_web or runs it as a side-car, it solves my use-case.

groodt

comment created time in 2 months

issue comment nteract/nteract

New versions of nteract_on_jupyter remove bookstore and papermill functionality

Thanks! Yes, I'd say it's a fairly big UX bug to address. A toggle that silently operates without visually representing a state change is a terrible UX.

groodt

comment created time in 2 months

issue comment nteract/bookstore

How to use bookstore?

Thanks for the information.

I do think it could be made a bit clearer that bookstore requires some UI extensions to provide a UI to users.

It's a shame the currently published version of nteract_on_jupyter doesn't work with bookstore.

Hopefully it will be possible to create a custom build that works with bookstore after this PR is merged: https://github.com/nteract/nteract/pull/5098

I've looked around for a JupyterLab extension for bookstore, but couldn't find anything obvious. I'll look into what it would take to create something.

groodt

comment created time in 2 months

pull request comment terraform-providers/terraform-provider-cloudflare

Add support for Access Groups in policies

@jacobbednarz Done: https://github.com/terraform-providers/terraform-provider-cloudflare/issues/683

Thanks!

filipowm

comment created time in 2 months

issue opened terraform-providers/terraform-provider-cloudflare

Failure creating cloudflare_access_policy

Terraform Version

Terraform v0.12.9

  • provider.cloudflare v2.6.0

Affected Resource(s)

cloudflare_access_policy

Terraform Configuration Files

resource "cloudflare_access_policy" "datahub" {
  application_id = "appid"
  zone_id        = "zoneid"
  name           = "Name"
  precedence     = "1"
  decision       = "allow"

  include {
    group = [
      "Group",
    ]
  }
}

Expected Behavior

The resource should be successfully created.

Actual Behavior

An error occurs

Error: error creating Access Policy for ID "": error from makeRequest: HTTP status 500: content "{\n  \"result\": null,\n  \"success\": false,\n  \"errors\": [\n    {\n      \"code\": 10001,\n      \"message\": \"access.api.error.internal_server_error\"\n    }\n  ],\n  \"messages\": []\n}\n"

created time in 2 months

pull request comment terraform-providers/terraform-provider-cloudflare

Add support for Access Groups in policies

Is anyone successfully running this via Terraform? I get the following error:

Error: error creating Access Policy for ID "": error from makeRequest: HTTP status 500: content "{\n  \"result\": null,\n  \"success\": false,\n  \"errors\": [\n    {\n      \"code\": 10001,\n      \"message\": \"access.api.error.internal_server_error\"\n    }\n  ],\n  \"messages\": []\n}\n"
filipowm

comment created time in 2 months

issue comment nteract/nteract

New pypi release of nteract_on_jupyter

For now, I'd recommend getting setup with nteract_on_jupyter using the steps you used in your other issue.

Ok, will do! I might submit a documentation PR to clarify this for users.

Once the beta of nteract web is out (follow the repo or @nteractio on Twitter), you'll be able to use the publish feature there.

I'm looking forward to nteract_web. I'm just a little worried that it only supports Binderhub. I like the functionality of running a Jupyter with a managed container environment. I'm not really looking for a Binder experience that dynamically builds containers. Binder and Jupyter serve slightly different use-cases.

Nevertheless, thanks for your responses!

groodt

comment created time in 2 months

issue comment nteract/nteract

New versions of nteract_on_jupyter remove bookstore and papermill functionality

The fact that the "Publish" option doesn't appear in the menu is caused by a bug in the way the extension reads the config. I've submitted a fix for this issue in #5090.

Awesome! Thanks for fixing!

With regard to the lack of papermill features in nteract_on_jupyter, there isn't anything interesting that the nteract Jupyter extension or desktop app do besides providing the "Toggle Parameter Cell" option. We anticipate that most people are going to be using papermill via the CLI.

This may be true, but if the "Toggle parameter cell" functionality is there, it should surely be reflected visually in the UI. It is important with papermill to set the tags on a cell that mark it as "parameters". If the nteract UI doesn't display this visually, then it's difficult to know whether a cell with "parameters" has been set up. It's also confusing if the toggle exists but doesn't toggle anything. If nteract isn't going to support papermill (I would be disappointed), then it would make more sense to remove the toggle entirely and make it clear that users need to set papermill tags via the classic Jupyter UI.
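
To make the papermill side concrete, here's a sketch of the equivalent API call (paths and parameter values are hypothetical):

import papermill as pm

# papermill injects these values in a new cell placed directly after the cell
# tagged "parameters", so the notebook's defaults are overridden. Without the
# tag, papermill warns and injects at the top, where default-assignment cells
# later in the notebook can silently clobber the injected values.
pm.execute_notebook(
    "input.ipynb",   # hypothetical paths
    "output.ipynb",
    parameters={"alpha": 0.1, "run_date": "2020-05-09"},
)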

groodt

comment created time in 2 months

pull request comment tensorflow/ecosystem

Support scala 2.12.10 and spark 2.4.4

@jhseu @skavulya Does there need to be a new release to maven central now that this has been merged?

vikatskhay

comment created time in 2 months

issue comment snowflakedb/snowflake-connector-python

SNOW-157438: Feature request: print `sso_url` when unable to open browser when using federated auth

I would very much like to see a feature like this as well.

There are other similar issues with MFA and using the snowflake-connector-python on remote machines without browsers.

ValentinMoullet

comment created time in 2 months

issue opened nteract/nteract

New versions of nteract_on_jupyter remove bookstore and papermill functionality

Application or Package Used

nteract_on_jupyter

Describe the bug

Attempting to integrate the nteract_on_jupyter, bookstore and papermill projects appears to be broken beyond version 2.1.3.

To Reproduce

I install nteract_on_jupyter:2.1.3 as follows:

python3 -m pip install nteract_on_jupyter papermill bookstore
jupyter serverextension enable bookstore
jupyter serverextension enable nteract_on_jupyter
jupyter nteract --ip=0.0.0.0

This results in a functional nteract_on_jupyter environment, apart from a bookstore publishing bug: https://github.com/nteract/nteract/pull/4738

It results in a UI with the bookstore and papermill "parameters" features. (See screenshots.)

However, to get around the bug, installing version 2.9.1 of nteract_on_jupyter (which appears to be the latest release on GitHub) results in an environment with the bookstore and papermill features removed/disabled even though they are installed.

NOJ_VERSION=2.9.1
wget https://github.com/nteract/nteract/archive/nteract-on-jupyter@$NOJ_VERSION.tar.gz
tar -xzvf nteract-on-jupyter@$NOJ_VERSION.tar.gz
cd nteract-nteract-on-jupyter-$NOJ_VERSION
yarn install

cd applications/jupyter-extension/nteract_on_jupyter
NODE_OPTIONS="--max-old-space-size=4096" NODE_ENV=production yarn exec webpack

cd ..
python3 -m pip install -e .

python3 -m pip install papermill bookstore
jupyter serverextension enable bookstore
jupyter serverextension enable nteract_on_jupyter
jupyter nteract --ip=0.0.0.0

Expected behavior

Version 2.9.1 should have bookstore publishing and papermill "parameters" working.

Screenshots

2.1.3 has the bookstore publishing and papermill "parameters" features. (Screenshot: Screen Shot 2020-05-09 at 8 37 38 pm)

2.9.1 has no bookstore publishing or papermill "parameters" feature.

(Screenshot: Screen Shot 2020-05-09 at 9 01 52 pm)

created time in 2 months

issue closed NixOS/nixpkgs

bazel: 0.22.0 doesn't build in Darwin sandbox

Issue description

bazel: 0.22.0 doesn't build in Darwin sandbox

Steps to reproduce

Take a look and build the Bazel derivation in this PR: https://github.com/NixOS/nixpkgs/pull/58557

nix-build -A bazel

It will fail on Darwin with sandbox = true and succeed with sandbox = false


closed time in 2 months

groodt

issue closed NixOS/nixpkgs

kubeval no longer builds

Describe the bug

The kubeval derivation is no longer building. There have been no changes. I am not sure if the problem is due to the large size or something else.


closed time in 2 months

groodt

issue comment fluxcd/flux

Garbage collection of CronJob resources

Anything more to be done to get the fix merged to master, @hiddeco?

groodt

comment created time in 2 months

issue opened nteract/bookstore

Project still active?

Looking at the commit activity, it seems this project may no longer be active.

Can somebody comment on whether this project is still actively maintained?

created time in 2 months

issue comment nteract/bookstore

How to use bookstore?

Ok, I've since realised publish is only intended to work with nteract_on_jupyter.

Unfortunately, this doesn't work due to: https://github.com/nteract/nteract/issues/5088

groodt

comment created time in 2 months

issue opened nteract/nteract

New pypi release of nteract_on_jupyter

Application or Package Used

nteract_on_jupyter

Describe the bug

The published version of nteract_on_jupyter on PyPI is old and does not work with bookstore due to https://github.com/nteract/nteract/pull/4738

There should be a new release of nteract_on_jupyter that is compatible with newer versions of bookstore.

To Reproduce

  1. pip install nteract_on_jupyter bookstore
  2. jupyter serverextension enable nteract_on_jupyter
  3. jupyter nteract

Try publish a notebook, there will be a 404.

Expected behavior

Bookstore publish should work and there should be no 404.

created time in 2 months

issue comment nteract/bookstore

How to use bookstore?

For the life of me, I can't work out how to initiate the publish operation. I can see the serverextension enabled, but there is nowhere in the UI to initiate it. I'm guessing I'm not supposed to manually craft an HTTP PUT to the API.

groodt

comment created time in 2 months

issue comment nteract/bookstore

How to use bookstore?

I've realised I can do this so far by reading this page: https://bookstore.readthedocs.io/en/latest/openapi.html

Manually create a browser url to do this (after setting up fs_cloning_basedir): http://127.0.0.1:8888/bookstore/fs-clone?relpath=Notebook.ipynb

Manually create a browser url to do this: http://127.0.0.1:8888/bookstore/clone?s3_bucket=somebucket&s3_key=/Notebook.ipynb
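
The same calls can also be scripted rather than pasted into the browser; a quick sketch using requests (assumes a local server with no auth token):

import requests

base = "http://127.0.0.1:8888"

# fs-clone: copy a notebook from the configured fs_cloning_basedir
resp = requests.get(base + "/bookstore/fs-clone", params={"relpath": "Notebook.ipynb"})
print(resp.status_code)

# clone: copy a notebook out of S3
resp = requests.get(
    base + "/bookstore/clone",
    params={"s3_bucket": "somebucket", "s3_key": "/Notebook.ipynb"},
)
print(resp.status_code)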

Is this the intended workflow or am I missing something more obvious?

I guess it's a way for me to share a notebook with a colleague by giving them a url so they can clone it into their own workspace?

groodt

comment created time in 2 months

issue opened nteract/bookstore

How to use bookstore?

This is more of a question really.

How is bookstore actually used?

I've got bookstore installed into plain jupyter (not nteract). It's saving my notebooks to S3.

How does the clone and publish functionality work? I would love some screenshots from somebody about what it all should look like.

:)

created time in 2 months

issue comment kubernetes/autoscaler

Cluster-autoscaler and WaitForFirstConsumer binding mode

That's correct

That's really interesting. TIL. Thank you! I'll give it a try.

I imagine for some workloads EFS won't be suitable, and hopefully it eventually does become possible to use EBS, but for now EFS may be a good substitute.

dvianello

comment created time in 2 months

issue comment kubernetes/autoscaler

Cluster-autoscaler and WaitForFirstConsumer binding mode

EFS also supports reclaimPolicy: "Delete", so in that sense they are the same.

So are you saying if I have a single EFS filesystem that is referenced by a single StorageClass (reclaimPolicy: "Delete"), I would be able to create numerous different PVC (different names) that reference this StorageClass and when the various PVCs are deleted, their data will be cleaned from the single EFS filesystem?

dvianello

comment created time in 2 months

issue comment kubernetes/autoscaler

Cluster-autoscaler and WaitForFirstConsumer binding mode

I believe this is also the case for dynamically provisioned EBS volumes?

Not with reclaimPolicy: "Delete", which is what we use at the moment. It works really well, but not for the scale-to-zero case that this original GH issue pertains to.

there is a third option: instances with local SSDs, i.e. m5ad, c5d, r5dn etc

Yes, using this with emptyDir would work too, but I'm talking about even larger data than this in some cases.

it is quite tricky for CA to handle them in a predictable way.

Yes, absolutely. I think this is what @MaciekPytel is working on. I don't think it will come soon, but I do hope it comes some day! :)

dvianello

comment created time in 2 months

issue comment kubernetes/autoscaler

Cluster-autoscaler and WaitForFirstConsumer binding mode

@drewhemm I'm glad it is working for your use-cases, but I think there is still a caveat with the EFS approach that means it won't be ideal for many ephemeral use-cases.

Please correct me if I'm wrong. :)

The nodes might scale up and down to 0, but the storage does not. What I mean by this is that an EFS file-system needs to be created ahead of time. Creating the filesystem itself ahead of time isn't so much a problem, but for ephemeral use-cases, the storage used by individual PVCs on this file-system will not be released when the pods terminate?

An example of a use-case where ephemeral disk is useful is for things like Machine Learning workflows or Spark jobs, where processing of large data happens on pods, but when done, the final output is stored somewhere permanent such as S3 or a database etc. These workflows often need large amounts of storage, but only temporarily.

Another example would be elastic Jupyter Notebook services, where some exploratory analysis can be performed with ephemeral storage that is released once the user is done. The Notebook example could probably get away with host storage, but they do often work with pretty large data as well and having a one-size-fits-all EBS volume attached to the nodes themselves isn't always suitable.

The "dream" for me (maybe others too) would be if it was possible to treat storage in Kubernetes entirely elastically, in the same way as compute. In such a way, a PVC of any size could be requested and mounted onto a Cluster Autoscaled compute node.

dvianello

comment created time in 2 months

issue comment kubernetes/autoscaler

Cluster-autoscaler and WaitForFirstConsumer binding mode

I don't think there is a good workaround. I'm actively looking into how this can be fixed, but it will require at least some changes in volume scheduling in Kubernetes and a huge change in autoscaler (comparable to scheduler framework migration which consisted of >100 commits and took us months to complete). At this point I can't give any guarantees regarding timeline. It certainly won't be ready in time for 1.19.

Thanks @MaciekPytel I can appreciate it's a very complex problem at the moment. I was mostly trying to understand if I was missing some obvious workaround and to get clarity around whether I can ever expect a solution. Sounds like a "possible solution" is in the "thinking about it" phase, but won't be ready (if ever) for a long time, possibly 1.20+. Thanks for the update!

Correct me if I am wrong, but using EBS with CA seems rather counterproductive. I use CA with EFS because the latter is multi-AZ, meaning workloads can come and go in any AZ and resume with their persistent storage. There are some workloads that have issues with EFS (and NFS in general), but I think it is better to move them onto alternative multi-AZ storage system such as GlusterFS rather than tie workloads to a specific AZ, which is a requirement when using EBS.

You're not wrong, @drewhemm. It's just that for some workloads, particularly ephemeral workloads, it is desirable to have cheap, dynamic storage that is released when the pod terminates. EFS is 3x more expensive than EBS and doesn't provide block-storage. Admittedly, for the use-cases I have in mind, I don't think block-storage is strictly necessary. What has your experience been with EFS on k8s? Are you using ClusterAutoscaler in the scale to/from 0 case successfully? Are you using dynamic provisioning with EFS per pod or mounting it to the node?

dvianello

comment created time in 2 months

issue comment kubernetes/autoscaler

Cluster-autoscaler and WaitForFirstConsumer binding mode

Is there a known workaround or solution to this? It seems like it isn't possible to scale to/from 0 using CA and AWS EBS Volumes.

dvianello

comment created time in 2 months

pull request comment tensorflow/ecosystem

Support scala 2.12.10 and spark 2.4.4

I would really like a Scala 2.12 version to be published as well.

vikatskhay

comment created time in 2 months

pull request comment fluent/fluent-bit-kubernetes-logging

Add example configuration for Loggly

Ping @solsson

groodt

comment created time in 2 months

issue comment fluxcd/flux

Garbage collection of CronJob resources

@hiddeco Yes, it appears to work now! Thank you!

ts=2020-04-15T23:09:12.204301076Z caller=sync.go:159 info="cluster resource not in resources to be synced; deleting" dry-run=false resource=default:cronjob/delete-me
ts=2020-04-15T23:09:12.204612196Z caller=sync.go:159 info="cluster resource not in resources to be synced; deleting" dry-run=false resource=default:configmap/delete-me
ts=2020-04-15T23:09:12.204687659Z caller=sync.go:540 method=Sync cmd=delete args= count=2
ts=2020-04-15T23:09:12.4239456Z caller=sync.go:606 method=Sync cmd="kubectl delete -f -" took=219.231801ms err=null output="cronjob.batch \"delete-me\" deleted\nconfigmap \"delete-me\" deleted"

Not sure if there are some tests we can add to prevent regressions in future.

groodt

comment created time in 3 months
