Marco Voelz (voelzmo), SAP SE, Heidelberg, Germany

cppforlife/turbulence-release 49

Turbulence release is used for injecting failure scenarios into any BOSH deployment.

cloudfoundry/homebrew-tap 47

Cloud Foundry Homebrew packages

cloudfoundry-incubator/bosh-openstack-environment-templates 12

This repository is UNMAINTAINED, please consider https://github.com/cloudfoundry/bosh-bootloader to deploy BOSH on OpenStack

voelzmo/bosh-provisioner 1

Stand-alone BOSH provisioner

beyhan/docs-deploying-cf 0

The docs repo for material on deploying Cloud Foundry

bosh-dep-forks/fog-openstack 0

Fog for OpenStack Platform

fork voelzmo/docs-cf-admin

A place to put documentation about how to operate your Cloud Foundry deployment using the command line tools bosh and cf

fork in 11 days

started starkandwayne/gluon

started time in 16 days

issue comment cloudfoundry/bosh

Bump postgres 10 because of known vulnerabilities

Awesome, thanks for the quick change. Can we get a new release with this soon?

voelzmo

comment created time in 22 days

started corona-warn-app/cwa-documentation

started time in 23 days

issue comment cloudfoundry/bosh

stemcell - Ubuntu 18.04.1 LTS (Bionic Beaver)

Just leaving this here for documentation: the current state of the BOSH PMC is that it has adopted the distributed committer model. No work is currently being done towards creating Bionic stemcells. More context can be found in the announcement on the BOSH mailing list: https://lists.cloudfoundry.org/g/cf-bosh/message/2707

If you're willing to help out creating these stemcells, please reach out to me or Rupa as stated in the announcement.

kmacpher67

comment created time in 24 days

issue opened cloudfoundry/bosh

Bump postgres 10 because of known vulnerabilities

Version 10.9 contains 4 CVEs which have been fixed in the most recent 10.12: https://www.postgresql.org/support/security/10/

Can you please bump the postgres version so that we get rid of these? Thanks!
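
For what it's worth, a quick way to confirm the running version, as a sketch assuming a director VM reachable via bosh ssh and the usual /var/vcap package layout (the deployment, instance, and package names are placeholders, not from this issue):

# Print the postgres version bundled with the director VM
bosh -d bosh ssh director/0 \
  -c '/var/vcap/packages/postgres-10/bin/postgres --version'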

created time in a month

issue closed cloudfoundry/garden-runc-release

Update busybox because of CVEs

Hi,

According to https://www.cvedetails.com/vulnerability-list/vendor_id-4282/product_id-7452/version_id-226433/Busybox-Busybox-1.27.2.html and our internal scans, the currently used version of busybox contains 4 CVEs that could be fixed by updating to the most recent version. The currently used version of busybox is 3 years old, according to commit 762dc2d0e947b9259878263c8ba4266f887b493d.

Can we either update this to the most recent busybox release (currently 1.31.1) or the most recent Ubuntu patch version (1.22.0-15ubuntu1.4 for Xenial) to have these fixed?
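
For reference, a sketch of how to confirm the bundled version, assuming busybox ships as a release blob (the blob name is a guess):

# Inside a garden-runc-release checkout, list blobs and look for busybox
bosh blobs | grep -i busybox
# At runtime, the busybox banner also prints its version
busybox | head -1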

closed time in a month

voelzmo

issue comment cloudfoundry/garden-runc-release

Update busybox because of CVEs

Closing this per the Slack conversation in https://cloudfoundry.slack.com/archives/C033RE5D6/p1588853694012100

TL;DR: busybox is used in tests only and will not be updated.

voelzmo

comment created time in a month

issue comment cloudfoundry/garden-runc-release

Update busybox because of CVEs

After searching the backlog, I found this: https://www.pivotaltracker.com/story/show/164946632

Just for my understanding: does this mean busybox is only used in integration testing and you therefore won't upgrade the version?

voelzmo

comment created time in a month

issue opened cloudfoundry/garden-runc-release

Update busybox because of CVEs

Hi,

According to https://www.cvedetails.com/vulnerability-list/vendor_id-4282/product_id-7452/version_id-226433/Busybox-Busybox-1.27.2.html and our internal scans, the currently used version of busybox contains 4 CVEs that could be fixed by updating to the most recent version. The currently used version of busybox is 3 years old, according to commit 762dc2d0e947b9259878263c8ba4266f887b493d.

Can we either update this to the most recent busybox release (currently 1.31.1) or the most recent Ubuntu patch version (1.22.0-15ubuntu1.4 for Xenial) to have these fixed?

created time in a month

issue comment cloudfoundry/bosh

`bosh stop` tries to resolve links

So sorry, I somehow completely missed this.

What do you expect to happen if drain needs the link?

That's a nice catch, I didn't think about this too much! My assumption was that links are resolved while rendering the scripts, not at execution time. Are drain scripts re-rendered on bosh stop? Did you encounter any dynamic usage of links like that? Any kind of dynamic addressing should ideally be covered by DNS, I guess.
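
To illustrate the render-time point: a drain script in a BOSH job is itself an ERB template, so link data is interpolated when the director renders the job, not when the script executes. A minimal sketch of such a template, reusing the common_link name from the logs below (the deregistration endpoint, port, and path are made up):

#!/bin/bash
# templates/drain.sh.erb (hypothetical): link values are baked in at render time
<% link('common_link').instances.each do |instance| %>
curl -sf "http://<%= instance.address %>:8500/deregister" || true
<% end %>
# BOSH drain contract: print the seconds to wait before the job is stopped
echo 0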

voelzmo

comment created time in a month

issue comment cloudfoundry/cloud_controller_ng

Creating a /v3/deployment causes /v2/spaces/:guid/summary to show the wrong app guid

Thanks @cwlbraa for re-opening this and sharing your assessment!

forbushbl

comment created time in a month

issue comment cloudfoundry/bosh-openstack-cpi-release

OpenStack CPI should error when multiple security groups match and hint to using UUIDs

Changed the title to match criteria for a possible solution

benschweizer

comment created time in a month

issue opened cloudfoundry/bosh-openstack-cpi-release

document possible usage of UUID instead of name for security group

This was implemented in https://github.com/cloudfoundry/bosh-openstack-cpi-release/pull/117 but is not mentioned in the job spec.

See #231 – it seems like this could help people achieve their goals.

created time in a month

issue comment cloudfoundry/bosh-openstack-cpi-release

bosh create-env fails when multiple security groups match

That's a good point, @benschweizer, thanks!

For the time being, you can use the UUID instead of the name for a security group reference, in case that unblocks you in your current environment. This was implemented as a follow-up to https://github.com/cloudfoundry/bosh-openstack-cpi-release/pull/117 – but apparently not documented.
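
A minimal sketch of the lookup, assuming the openstack CLI is configured (the group name is a placeholder):

# Find the UUID of the ambiguous group by name
openstack security group list -f value -c ID -c Name | grep my-sg
# Then reference that UUID wherever cloud_properties lists security_groups,
# instead of the name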

benschweizer

comment created time in a month

started fabianschwarzfritz/Arduino-Tram-Notifier

started time in a month

issue comment cloudfoundry/cloud_controller_ng

Creating a /v3/deployment causes /v2/spaces/:guid/summary to show the wrong app guid

Some additional context, so I don't have to dig this up again for future discussions:

The issue comes up once you have re-deployed an app with v3, for the reasons pointed out by @cwlbraa above (see the sketch after the list):

  • /v2/spaces/<GUID>/summary returns v3/process GUIDs as app.guid
  • /v2/spaces/<GUID>/apps returns v3/app GUIDs as app.guid
  • /v2/spaces/<GUID>/apps returns url references with v3/process GUIDs in their URL, e.g. "routes_url": "/v2/apps/c21f34fd-b9e3-462a-9658-f121005c57bb/routes",
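
A quick way to see the mismatch side by side, sketched with cf curl and jq (the space name and jq filters are illustrative, not from the report):

SPACE_GUID="$(cf space my-space --guid)"
# The summary endpoint reports process GUIDs as app.guid...
cf curl "/v2/spaces/${SPACE_GUID}/summary" | jq -r '.apps[].guid'
# ...while the apps listing reports app GUIDs
cf curl "/v2/spaces/${SPACE_GUID}/apps" | jq -r '.resources[].metadata.guid'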

This seems inconsistent at best, even if, as stated above, you'd technically not label it a bug. I guess the v2 API should either

  • point to v3/process GUIDs or
  • point to v3/app GUIDs, but most likely not mix the two – especially not in a single call like /v2/spaces/<GUID>/apps

I understand the current focus on finishing v3; at the same time, I'd like to understand more about the expectations we can have regarding compatibility between v2 and v3.

/cc @ssisil @emalm @Gerg

forbushbl

comment created time in a month

issue comment cloudfoundry/cloud_controller_ng

Creating a /v3/deployment causes /v2/spaces/:guid/summary to show the wrong app guid

Thanks @cwlbraa for the explanation!

I'd like to take up the conversation regarding one point specifically:

We recommend using exclusively v3 endpoints to avoid this sort of weirdness, as v2 will be going away. If you're using the cli, the cf7 beta also avoids these sorts of issues.

How would you imagine this working in a world where a platform provider has a UI (e.g. one currently working with the v2 API, especially due to some performance shortcomings in v3) and consumers of the platform who can independently decide whether they're using cf or cf7?

forbushbl

comment created time in a month

push event voelzmo/metrics-discovery-release

Marco Voelz

commit sha 364bee8c31ac385900264f28e31d84bf143498aa

Update link to Reference Architectures section

push time in a month

fork voelzmo/metrics-discovery-release

BOSH Release to discover Prometheus-formatted metric endpoints in Cloud Foundry

fork in a month

started EngineerBetter/control-tower

started time in 2 months

started cloudfoundry/metrics-discovery-release

started time in 2 months

started phil9909/ytt-lint

started time in 2 months

pull request comment cloudfoundry/bosh

Add default value to ScheduledTasksCleanup initialize argument

Looking at https://github.com/cloudfoundry/bosh/blob/master/src/bosh-director/lib/bosh/director/jobs/scheduled_events_cleanup.rb, I'm wondering whether ScheduledTasksCleanup#initialize should look the same, i.e. take the number of tasks to clean up as a parameter?

beyhan

comment created time in 2 months

started cloudfoundry/smb-volume-k8s-release

started time in 2 months

started robscott/kube-capacity

started time in 2 months

issue comment cloudfoundry/bosh

Resurrection of VMs with multiple disks is not working

Assuming this is the Slack thread, just for reference and cross-linking: https://cloudfoundry.slack.com/archives/C02HPPYQ2/p1578339832051000

rm75

comment created time in 2 months

issue opened cloudfoundry/bosh

`bosh stop` tries to resolve links

Describe the bug
When trying to bosh stop a deployment whose instances consume a link from another deployment, BOSH tries to resolve these links and fails if it cannot do so. This is true even if --no-converge is used.

To Reproduce
Steps to reproduce the behavior (example; a CLI sketch follows the list):

  1. Deploy a bosh director
  2. Upload a stemcell and cloud-config
  3. Deploy A providing a link
  4. Deploy B consuming the link
  5. Delete deployment A
  6. Run bosh stop -d B [--no-converge] and watch it fail
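
A minimal CLI sketch of these steps (deployment and manifest names are placeholders):

# A provides a link, B consumes it cross-deployment
bosh -d a deploy manifests/a.yml
bosh -d b deploy manifests/b.yml
bosh delete-deployment -d a
# Fails while resolving the now-missing link, with or without --no-converge
bosh stop -d b --no-converge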

Expected behavior
bosh stop doesn't care about links being provided, especially when told not to converge.

Logs

✗ bosh stop -d ping-app ping-app-back/06cb25cc-477e-4927-be6a-da67386b341e --no-converge
Using environment '192.168.50.6' as client 'admin'

Using deployment 'ping-app'

Continue? [yN]: y

Task 354

Task 354 | 12:57:27 | Error: Link 'common_link' in job 'consul_agent' from instance group 'ping-app-front' consumes from deployment 'consul', but the deployment does not exist.

Task 354 Started  Tue Mar 17 12:57:27 UTC 2020
Task 354 Finished Tue Mar 17 12:57:27 UTC 2020
Task 354 Duration 00:00:00
Task 354 error

Non-converging action failed:
  Expected task '354' to succeed but state is 'error'

Exit code 1

Debug task log

E, [2020-03-17T12:57:27.676617 #23194] [task:354] ERROR -- DirectorJobRunner: Link 'common_link' in job 'consul_agent' from instance group 'ping-app-front' consumes from deployment 'consul', but the deployment does not exist.
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/links/link_consumers_parser.rb:42:in `parse_consumers_from_job'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/links/links_parser.rb:25:in `parse_consumers_from_job'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/addon/addon.rb:95:in `block (2 levels) in add_addon_jobs_to_instance_groups'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/addon/addon.rb:80:in `each'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/addon/addon.rb:80:in `block in add_addon_jobs_to_instance_groups'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/addon/addon.rb:73:in `each'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/addon/addon.rb:73:in `add_addon_jobs_to_instance_groups'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/addon/addon.rb:49:in `add_to_deployment'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/deployment_plan/planner_factory.rb:108:in `block in parse_from_manifest'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/deployment_plan/planner_factory.rb:107:in `each'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/deployment_plan/planner_factory.rb:107:in `parse_from_manifest'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/deployment_plan/planner_factory.rb:46:in `create_from_manifest'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/deployment_plan/planner_factory.rb:40:in `create_from_model'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/jobs/update_instance.rb:27:in `block in perform'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/lock_helper.rb:7:in `block in with_deployment_lock'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/lock.rb:79:in `lock'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/lock_helper.rb:7:in `with_deployment_lock'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/jobs/update_instance.rb:21:in `perform'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/job_runner.rb:99:in `perform_job'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/job_runner.rb:34:in `block in run'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh_common-0.0.0/lib/common/thread_formatter.rb:50:in `with_thread_name'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/job_runner.rb:34:in `run'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/jobs/base_job.rb:9:in `perform'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/jobs/db_job.rb:32:in `block in perform'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/bosh-director-0.0.0/lib/bosh/director/jobs/db_job.rb:98:in `block (3 levels) in run'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/eventmachine-1.2.7/lib/eventmachine.rb:1077:in `block in spawn_threadpool'
/var/vcap/data/packages/director/a2026fc29c49cefc8f1ace58f61c3d46684adc6b/gem_home/ruby/2.6.0/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context'

Versions (please complete the following information):

  • Infrastructure: bosh-lite
  • BOSH version 270.12.0
  • BOSH CLI version 6.2.1-a28042ac-2020-02-10T18:41:00Z
  • Stemcell version ubuntu-xenial/621.59
  • ... other versions of releases being used: bosh-deployment 670b442

Additional context

  • https://github.com/cloudfoundry/bosh/issues/1723
  • https://github.com/cloudfoundry/bosh-cli/issues/456

created time in 3 months

issue comment cloudfoundry/bosh-cli

Add --skip-drain flag to delete-deployment

Just to document this further: when your deployment consumes a link from an already deleted deployment, the bosh stop workaround doesn't seem to work:

✗ bosh stop -d ping-app --hard --skip-drain
Using environment '192.168.50.6' as client 'admin'

Using deployment 'ping-app'

Continue? [yN]: y

Task 345

Task 345 | 12:49:55 | Error: Link 'common_link' in job 'consul_agent' from instance group 'ping-app-front' consumes from deployment 'consul', but the deployment does not exist.

Task 345 Started  Tue Mar 17 12:49:55 UTC 2020
Task 345 Finished Tue Mar 17 12:49:55 UTC 2020
Task 345 Duration 00:00:00
Task 345 error

Changing state:
  Expected task '345' to succeed but state is 'error'

Exit code 1

jasonkeene

comment created time in 3 months

Pull request review comment cloudfoundry/bosh-deployment

Automate bosh-lite ssh

+#!/bin/bash
+
+set -eu -o pipefail
+
+if [ ! -f "${PWD}/creds.yml" ]; then
+  echo "Cound't find 'creds.yml'."
+  echo "You are not running this within the bosh-lite diployment folder or you didn't deployed bosh-lite yet."

Suggested change:

  echo "You are not running this within the bosh-lite deployment folder or you didn't deploy bosh-lite yet."
beyhan

comment created time in 3 months

issue comment cloudfoundry/bosh-aws-cpi-release

m5n and m5dn instance types are not supported

@voelzmo do you then expect all instance types to be in the list?

I'm not sure what the test process looks like for these instances. My understanding was that the CPI maintains a list of nitro-based VM types in AWS and the link above points to that list.
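
A sketch for double-checking a single type, assuming a configured AWS CLI with describe-instance-types support (the instance type is just an example):

# Nitro-based types report "nitro" as their hypervisor
aws ec2 describe-instance-types --instance-types m5dn.large \
  --query 'InstanceTypes[].Hypervisor' --output text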

In this specific issue, we were mostly concerned with the m5dn instances, but I think your change makes sense. If other issues exist which make these instance types unusable, they're at least visible now and could be fixed individually.

Thanks, @h4xnoodle!

h4xnoodle

comment created time in 3 months

issue comment cloudfoundry/bosh-aws-cpi-release

m5n and m5dn instance types are not supported

More specifically: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances

h4xnoodle

comment created time in 3 months
