Jim Myers (jfmyers9) · @brexhq · New York

cloudfoundry-attic/receptor 8

restful api facade for diego

cppforlife/zookeeper-release 6

Zookeeper release for BOSH

cloudfoundry-attic/runtime-metrics-server 1

(DEPRECATED) Provides varz/healthz metrics for the state of the Runtime (currently Diego) system.

craigfurman/bbs 0

Internal API to access the database for Diego.

craigfurman/cloud_controller_ng 0

Cloud Foundry Cloud Controller

craigfurman/rep 0

Representative bids on tasks and schedules them on an associated Executor

PR opened DataDog/integrations-core

add labels from `self.SAMPLE_LABELS` to container status metrics

What does this PR do?

This change respects the pod_labels from label_joins when emitting the container status metrics.

This is similar to how SAMPLE_LABELS are processed for other resources in the check.

Motivation

Users rely on label_join to associate labels from the kube_pod_labels with other kube_pod_* metrics. For example, we rely on various labels to help us route alerts to the correct team for metrics within monitors.
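For context, a label_joins stanza in the kubernetes_state conf.yaml looks roughly like the following (the URL and label names are illustrative, not values from this PR):

```yaml
instances:
  - kube_state_url: http://kube-state-metrics:8080/metrics
    label_joins:
      kube_pod_labels:
        labels_to_match:
          - pod
          - namespace
        labels_to_get:
          - label_team   # e.g. used to route alerts to the owning team
```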

Additional Notes

Nope.

Review checklist (to be filled by reviewers)

  • [ ] Feature or bugfix MUST have appropriate tests (unit, integration, e2e)
  • [ ] PR title must be written as a CHANGELOG entry (see why)
  • [ ] File changes must correspond to the primary purpose of the PR as described in the title (small unrelated changes should have their own PR)
  • [ ] PR must have changelog/ and integration/ labels attached
+13 -11

0 comments

1 changed file

pr created time in 2 days

push event jfmyers9/integrations-core

Jim Myers

commit sha d4acf970fc544bcec85830fe0bc18cbda1dcf6d8

also update conf.yaml.example

view details

push time in 22 days

issue opened DataDog/integrations-core

[kubernetes_state] Distinguish between CronJobs and Jobs in metrics

At the moment, we detect the difference between CronJobs and Jobs by the presence of a timestamp at the end of the Job name. We keep track of these failures separately, and then we increment the kubernetes_state.job.failed and kubernetes_state.job.succeeded counters after processing the kube-state-metrics payload.

Since we are already distinguishing between CronJobs and Jobs in code, would it be possible to add a way to distinguish between these two types of Jobs in the metric that we emit? Maybe a tag?

Thoughts?
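The trailing-timestamp heuristic the issue describes can be sketched in Python (the regex below is illustrative, not the check's actual pattern):

```python
import re

# CronJob-created Jobs carry a numeric suffix, e.g. "my-cronjob-1562382360";
# standalone Jobs do not. Split the two cases by name alone.
TRAILING_TIMESTAMP = re.compile(r"^(.+)-(\d{4,10})$")

def job_base_name(name):
    """Return (base_name, looks_like_cronjob_child)."""
    match = TRAILING_TIMESTAMP.match(name)
    if match:
        return match.group(1), True
    return name, False
```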

created time in 22 days

push event jfmyers9/integrations-core

Jim Myers

commit sha c6f8530c63a4aa7024135e62bb7e62dea8ac1c39

Update kubernetes_state/datadog_checks/kubernetes_state/data/auto_conf.yaml Co-authored-by: Cedric Lamoriniere <cedric.lamoriniere@datadoghq.com>

view details

push time in 22 days

PR opened DataDog/integrations-core

update kubernetes_state auto_conf.yaml with valid example configuration

What does this PR do?

Tiny change to the example configuration in the auto_conf.yaml for kubernetes_state.

Motivation

I tried copying the example configuration and it didn't work; I had to read the code to figure out the correct value.

Additional Notes

No.

Review checklist (to be filled by reviewers)

  • [ ] Feature or bugfix MUST have appropriate tests (unit, integration, e2e)
  • [ ] PR title must be written as a CHANGELOG entry (see why)
  • [ ] File changes must correspond to the primary purpose of the PR as described in the title (small unrelated changes should have their own PR)
  • [ ] PR must have changelog/ and integration/ labels attached
+2 -1

0 comments

1 changed file

pr created time in a month

create branch jfmyers9/integrations-core

branch : jmyers/update-auto-conf-example

created branch time in a month

fork jfmyers9/integrations-core

Core integrations of the Datadog Agent

fork in a month

issue closed cloudfoundry/bosh-cli

Feature Request: `bosh upload-blobs` accepts a filename parameter for private.yml

Original Issue: https://github.com/cloudfoundry/bosh/issues/1244

I would like to use a password management tool such as LastPass to track my credentials for my blobstore (e.g. I have a note with the contents of my private.yml). Right now, I need to copy/paste that into a file, upload blobs, then remove the file. This leaves traces of my credentials that I'd rather not leave.

I would like to be able to do something like bosh upload-blobs --private-file <(fetch-my-creds) so that the credentials are only in memory for a short period of time.

Thanks, Dan

closed time in a month

jfmyers9

issue closed cloudfoundry/bbl-state-resource

The bbl-state resource should write the contents of `bbl print-env` to a file.

We want to be able to target, in one of our builds, a BOSH director that has been deployed by the bbl-state-resource. Right now, we would need to run eval "$(bbl --state-dir bbl-state print-env)", which requires us to have the bbl binary on our path.

It would be cool if the bbl-state-resource wrote a file that we could source when fetching the resource as an input. Something like:

source bbl-state/environment.sh

closed time in a month

jfmyers9

issue closed bosh-packages/ruby-release

Replace `overwrite_shebang.rb` with `--env-shebang` when installing rubygems.

We added a fix which overwrites the shebang of the default gems with #!/usr/bin/env ruby.

As of https://github.com/rubygems/rubygems/pull/2271, the --env-shebang flag should be available on the setup command. We should use that and remove the overwrite_shebang.rb scripts once it is released in a future version.
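What the overwrite_shebang fix does can be sketched as follows (shown in Python for illustration; the actual script is Ruby, and the example path is an assumption):

```python
# Rewrite an absolute interpreter shebang baked in at install time
# (e.g. #!/var/vcap/packages/ruby/bin/ruby) to the portable env form,
# which is what installing rubygems with --env-shebang would produce directly.
def rewrite_shebang(script_text):
    lines = script_text.splitlines(keepends=True)
    if lines and lines[0].startswith("#!") and "ruby" in lines[0]:
        lines[0] = "#!/usr/bin/env ruby\n"
    return "".join(lines)
```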

closed time in a month

jfmyers9

issue comment getsentry/sentry-elixir

Allow configuration to set maximum number of breadcrumbs.

If you could backport it, that would be very much appreciated. Thanks for the help.

jfmyers9

comment created time in a month

issue comment getsentry/sentry-elixir

Allow configuration to set maximum number of breadcrumbs.

@mitchellhenke Is this something that you think could be released on the 7.x.x line? Or will this only be released when 8.x.x comes out?

jfmyers9

comment created time in a month

started DiederikvandenB/apollo-link-sentry

started time in 2 months

create branch jfmyers9/apollo-error-demo

branch : promise-rejection

created branch time in 2 months

delete branch jfmyers9/sentry-elixir

delete branch : allow-max-breadcrumbs-configuration

delete time in 2 months

issue closed DataDog/dd-trace-py

[feature request] grpc client traces respect service name

Hi folks,

When patching GRPC, it seems that the service name is hardcoded to <service>-grpc-client. Is it possible to change this behavior so that it is just <service>? At the moment this is creating multiple APM services in DataDog for a single service.

Thoughts?

closed time in 2 months

jfmyers9

issue comment DataDog/dd-trace-py

[feature request] grpc client traces respect service name

I think this has been fixed as of v0.40.0. Thanks!

jfmyers9

comment created time in 2 months

PR opened getsentry/sentry-elixir

Allow users to configure maximum number of breadcrumbs

This configuration option allows a user to specify the maximum number of breadcrumbs when posting events to Sentry. The default value is 100.

This is helpful in situations in which long running processes can add a large number of breadcrumbs resulting in failures to create Sentry events when the message becomes too large.

Fixes #407
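Assuming standard Mix config conventions, usage would look something like this (the option name comes from the PR title; the exact config shape is a sketch):

```elixir
# config/config.exs
config :sentry,
  dsn: "https://public_key@app.getsentry.com/1",
  max_breadcrumbs: 100   # new option; defaults to 100
```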

+19 -1

0 comments

4 changed files

pr created time in 2 months

create branch jfmyers9/sentry-elixir

branch : allow-max-breadcrumbs-configuration

created branch time in 2 months

fork jfmyers9/sentry-elixir

The official Elixir SDK for Sentry (sentry.io)

https://sentry.io

fork in 2 months

issue closed jfmyers9/blogs

Multi-line Env Vars

closed time in 2 months

jfmyers9

issue closed jfmyers9/blogs

Crucible

closed time in 2 months

jfmyers9

issue opened getsentry/sentry-elixir

Allow configuration to set maximum number of breadcrumbs.

Environment

  • Sentry version (mix deps): 7.2.0
  • Operating system: macOS, Linux

Description

Sometimes, for long-lived processes, too many breadcrumbs are added to a Sentry event. When posting the event, the client fails if the body is too large. Similar to this issue: https://forum.sentry.io/t/http-error-413-request-entity-too-large/1942.

Would it be possible to add a configuration option to limit the number of breadcrumbs that Sentry will attempt to send? Something akin to the max_breadcrumbs option present in the Python Sentry SDK?
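The semantics being asked for (mirroring max_breadcrumbs in the Python SDK) amount to a bounded ring buffer; a minimal Python sketch:

```python
from collections import deque

MAX_BREADCRUMBS = 100  # the requested cap; 100 matches the Python SDK default

# Oldest breadcrumbs are dropped first, so a long-lived process can keep
# adding breadcrumbs without the event body growing unbounded.
breadcrumbs = deque(maxlen=MAX_BREADCRUMBS)
for i in range(250):
    breadcrumbs.append({"message": "step %d" % i})

# len(breadcrumbs) == 100; the oldest retained breadcrumb is "step 150"
```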

created time in 2 months

issue comment DataDog/dd-trace-py

[feature-request] allow configuration of endpoints to skip tracing for Flask

Thanks for the pointer. I will look into whether that is an option.

jfmyers9

comment created time in 3 months

issue comment DataDog/dd-trace-py

[feature-request] allow configuration of endpoints to skip tracing for Flask

I think these would be the relevant versions of pip packages involved in the above traces:

ddtrace==0.38.1
Flask==1.0
gunicorn==19.9.0
psycopg2==2.8.4

jfmyers9

comment created time in 3 months


issue comment DataDog/dd-trace-py

[feature-request] allow configuration of endpoints to skip tracing for Flask

Hi @Kyle-Verhoog,

I tried using filters, and that did work somewhat for our use case, but there was some behavior that I found surprising.

To validate this in development, I'm using the LogWriter from ddtrace.internal.writer to see the traces that get written to stdout. When I configure the filter to block a specific endpoint, I see that traces still originate from this endpoint.

For example, without the filter, I see a trace output loosely in the following structure:

flask.request: trace-id
  postgres.query
  postgres.connection.rollback
  ...

where the flask request is an entire trace with many spans underneath it.

When I add the filter I see the following for the same endpoint:

postgres.query: trace-id1
postgres.connection.rollback: trace-id2
...

The overarching Flask trace is no longer present, but the spans underneath it are broken into individual traces.

The behavior that I was expecting was to see no traces for this endpoint. Is my understanding of how filters work incorrect?
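For reference, dd-trace-py's filter contract (as I understand it from the docs of that era) is that a filter exposes process_trace(trace), which returns the trace to keep it or None to drop it. A minimal sketch, using plain dicts in place of the library's Span objects:

```python
class DropResourceFilter(object):
    """Drops any trace containing a span whose resource matches.

    Sketch only: real dd-trace-py filters receive ddtrace Span objects,
    not dicts, and are registered via
    tracer.configure(settings={'FILTERS': [...]}).
    """

    def __init__(self, resource):
        self.resource = resource

    def process_trace(self, trace):
        for span in trace:
            if span.get("resource") == self.resource:
                return None  # drop the whole trace
        return trace
```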

jfmyers9

comment created time in 3 months
