If you are wondering where the data of this site comes from, please visit https://api.github.com/users/marcotc/events. GitMemory does not store any data, but only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

DataDog/dd-trace-rb 141

Datadog Tracing Ruby Client

marcotc/docker-ruby-jemalloc 4

Docker Unofficial Image packaging for Ruby with jemalloc

DataDog/codecov-ruby 0

Ruby uploader for Codecov

DataDog/simplecov 0

Code coverage for Ruby with a powerful configuration library and automatic merging of coverage across test suites

marcotc/appraisal 0

A Ruby library for testing your library against different versions of dependencies.

marcotc/aws-sdk-ruby 0

The official AWS SDK for Ruby.

marcotc/chkcrontab 0

A tool for checking system crontab files (/etc/crontab and /etc/cron.d normally) for errors and common mistakes.

marcotc/circleci-cli 0

:cyclone: CLI client / command line tool for CircleCI

marcotc/commons 0

Twitter common libraries for python and the JVM

create branch DataDog/dd-trace-rb

branch : uds-auto-detect

created branch time in 12 hours

pull request comment DataDog/dd-trace-rb

Correlate Active Job logs to the active DataDog trace

Thank you, @agrobbin!

agrobbin

comment created time in 14 hours

push event DataDog/dd-trace-rb

Alex Robbin

commit sha 1610e1b0c1da3cfefc396a6a474f0c0056bfad42

correlate Active Job logs to the active DataDog trace

Similar to how dd-trace hooks into the Rails web request logs, it can do the same for Active Job executions.

view details

Marco Costa

commit sha b026d27597147d4b205c6679975d9612ca5eb860

Merge pull request #1694 from agrobbin/active-job-log-injection Correlate Active Job logs to the active DataDog trace

view details

push time in 14 hours

PR merged DataDog/dd-trace-rb

Correlate Active Job logs to the active DataDog trace
Labels: integrations, community, feature

Similar to how dd-trace hooks into the Rails web request logs, it can do the same for Active Job executions.

This is not quite a full solution to #1068, but it does get dd-trace most of the way there for folks who are using Sidekiq via Active Job. Since Active Job handles most of the work in those cases, the only logs that are not correlated to the active DataDog trace are those coming directly from Sidekiq itself (i.e. start and done).

This follows up on #1639.

+50 -3

2 comments

6 changed files

agrobbin

pr closed time in 14 hours

PullRequestReviewEvent

Pull request review comment DataDog/dd-trace-rb

Correlate Active Job logs to the active DataDog trace

 class Settings < Contrib::Configuration::Settings
   option :service_name, default: Ext::SERVICE_NAME
   option :error_handler, default: Datadog::Tracer::DEFAULT_ON_ERROR
+  option :log_injection, default: false

Could you document this new option here: https://github.com/DataDog/dd-trace-rb/blob/master/docs/GettingStarted.md#active-job (No need to worry about the missing error_handler)
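For reference, a minimal sketch of enabling the new option, assuming the standard per-integration option pattern (the exact setting name used in the published docs may differ):

Datadog.configure do |c|
  # Correlate Active Job log lines with the active trace
  # (the diff above defaults log_injection to false)
  c.use :active_job, log_injection: true
end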

agrobbin

comment created time in 2 days

push event DataDog/dd-trace-rb

Ivo Anjo

commit sha 0b612f6ef050de16523366777105dab301c98906

Allow latest racecar gem on Ruby 2.4

The racecar 2.3 series was incompatible with Ruby 2.4, but upstream has now released racecar 2.4 which restores support for Ruby 2.4.

Issue zendesk/racecar#252

view details

Marco Costa

commit sha c127937a05865eba9cb64c2427e2e06cab0b656c

Merge pull request #1696 from DataDog/ivoanjo/allow-latest-racecar-on-2_4

view details

push time in 2 days

delete branch DataDog/dd-trace-rb

delete branch : ivoanjo/allow-latest-racecar-on-2_4

delete time in 2 days

PR merged DataDog/dd-trace-rb

Allow latest racecar gem on Ruby 2.4

The racecar 2.3 series was incompatible with Ruby 2.4, but upstream has now released racecar 2.4 which restores support for Ruby 2.4.

Issue zendesk/racecar#252

+76 -72

1 comment

15 changed files

ivoanjo

pr closed time in 2 days

PullRequestReviewEvent

Pull request review comment DataDog/dd-trace-rb

Use most recent event for trace resource name

   def message
     "Mapping for sample value type '#{type}' to index is unknown."
   end
 end
+
+protected
+
+def new_group(event, values)
+  EventGroup.new(event, values)
+end
+
+def update_group(event_group, event, values)
+  # Update values for group
+  event_group.values.each_with_index do |group_value, i|
+    event_group.values[i] = group_value + values[i]

I think there's a measurable performance difference from invoking event_group.values on every loop iteration, due to the values array being created every time. I believe the previous implementation, which stored the result of event_group.values, would perform better here:

          group_values = event_group.values
          group_values.each_with_index do |group_value, i|
            group_values[i] = group_value + values[i]
delner

comment created time in 4 days

PullRequestReviewEvent

issue comment DataDog/dd-trace-rb

Distributed tracing support through Kafka integration

👋 @tak1n, we have a large effort at Datadog around improving tracing for distributed payloads, Kafka being the most popular system carrying such payloads today.

This effort is being championed by the Java team and will cascade down to other languages after the groundwork in both the Java tracer and the backend/UI has been fleshed out.

ddtrace will follow suit when the internal specs are more fleshed out, and Kafka will definitely be one of the supported use cases.

At this moment, I would not recommend tackling any work on adding such support to ddtrace, as our data modelling and presentation for traces today is insufficient to capture the complexity of all possible Kafka scenarios.

tak1n

comment created time in 7 days

issue opened DataDog/dogstatsd-ruby

Documentation links under #metrics are broken

Most of the links under the Metrics section point to pages that don't exist anymore: https://github.com/DataDog/dogstatsd-ruby#metrics

I wasn't able to find the new locations, so I'm opening an issue to capture this information.

created time in 7 days

issue comment DataDog/dd-trace-rb

Number of opened connections over the WebSocket protocol

Hey @alexfedoseev, you can do this using our metrics gem https://github.com/DataDog/dogstatsd-ruby. Specifically, a gauge metric: https://docs.datadoghq.com/metrics/dogstatsd_metrics_submission/#code-examples-1

In a background thread, say every 10 seconds, you'd run:

require 'datadog/statsd'

statsd = Datadog::Statsd.new('localhost', 8125)

Thread.new do
  loop do
    connection_count = ActionCable.server.connections.length
    statsd.gauge('action_cable.connection_count', connection_count, tags: ['env:prod', 'service:myservice'])
    sleep 10
  end
end

# Close the client only when the application shuts down, otherwise the
# background thread above loses its socket.
at_exit { statsd.close }

This falls a bit outside the scope of ddtrace, which deals with generating traces.

alexfedoseev

comment created time in 7 days

issue comment DataDog/dd-trace-rb

Rack integration does not tag content-length headers from HTTP requests

Thank you for this issue report, @mkigikm.

Since they are not prefixed with 'HTTP_', the parsing code misses them.

This sounds spot on. I'll schedule for us to tackle this in the coming sprints.
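For context, a small illustration of the Rack/CGI convention at play here (hypothetical env values, not taken from the integration's code):

# Most request headers land in the Rack env with an HTTP_ prefix,
# but Content-Length and Content-Type do not, so a scan that only
# looks at HTTP_* keys skips them.
env = {
  'HTTP_USER_AGENT' => 'curl/7.79.1',
  'CONTENT_LENGTH'  => '42',
  'CONTENT_TYPE'    => 'application/json'
}
env.keys.select { |k| k.start_with?('HTTP_') }
# => ["HTTP_USER_AGENT"]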

mkigikm

comment created time in 7 days

issue comment DataDog/dd-trace-rb

Add `Datadog::Span#[]=` to match OpenTelemetry.

This seems like a reasonable request, @ioquatix, thank you for bringing it up.

It is a small task, but we are in the middle of a Datadog::Span refactoring (https://github.com/DataDog/dd-trace-rb/pull/1675), so I'll schedule for us to tackle this after we finish that work.

ioquatix

comment created time in 7 days

pull request comment DataDog/dd-trace-rb

Trace Sidekiq server internal events (heartbeat, job fetch, and scheduled push)

Everything looks great, @agrobbin!

One last thing: did you get a chance to try this in your development/staging environment, to see if the new traces look good to you as a Sidekiq user?

agrobbin

comment created time in 7 days

Pull request review comment DataDog/dd-trace-rb

Trace Sidekiq server internal events (heartbeat, job fetch, and scheduled push)

 if ruby_version?('2.1')
   gem 'semantic_logger', '~> 4.0'
   gem 'sequel', '~> 4.0', '< 4.37'
   gem 'shoryuken'
-  gem 'sidekiq', '~> 3.5.4'
+  gem 'sidekiq', '~> 4.0.0'

We always try not to break existing behaviour: if it was working in version X, it should work in version X+1.

If we want to improve things, but these improvements only work with newer versions of the integrated library (Sidekiq in our case), it's reasonable to have separate code paths: one for the old behaviour, another for the new and improved behaviour. We normally only do this for brand new features, not for two versions of an existing feature (unless strictly technically required).

So, for example, adding a few additional spans or additional tags only for more recent versions of Sidekiq is ok. But having, for example, the span.resource for the main Sidekiq span change depending on the Sidekiq version is not great, as people will upgrade Sidekiq and all of a sudden their instrumentation has changed as well, breaking dashboards and monitors.
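A minimal sketch of the separate-code-paths idea, assuming a hypothetical patch_server_middleware helper next to the patch_server_internals method from this PR (the 5.2.4 threshold is purely illustrative):

def patch
  # Existing behaviour, unchanged for every supported Sidekiq version
  patch_server_middleware

  # New, purely additive spans only on Sidekiq versions that support them
  patch_server_internals if Gem::Version.new(::Sidekiq::VERSION) >= Gem::Version.new('5.2.4')
end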

agrobbin

comment created time in 7 days

PullRequestReviewEvent

Pull request review comment DataDog/dd-trace-rb

Trace Sidekiq server internal events (heartbeat, job fetch, and scheduled push)

 def patch
   config.server_middleware do |chain|
     chain.add(Sidekiq::ServerTracer)
   end
+
+  patch_server_internals
 end
end
+
+def patch_server_internals
+  # Sidekiq < 5.2.4 (before mperham/sidekiq@ddb0c8b3) doesn't require this until
+  # too late in the process (after `CLI#run` has been called)
+  require 'sidekiq/launcher'

only those who call use :sidekiq in their configuration

Web applications, for example, will have use :sidekiq because they want to capture MyJob.perform_async calls, but these applications will normally never require sidekiq/launcher. Does that make sense?
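To make the scenario concrete, a sketch of such a web application's setup (MyJob is hypothetical; c.use :sidekiq is the standard ddtrace 0.x integration activation):

# config/initializers/datadog.rb of a Rails app that only enqueues jobs
require 'ddtrace'

Datadog.configure do |c|
  c.use :sidekiq # traces MyJob.perform_async on the client side
end

# The web process never boots a Sidekiq server, so 'sidekiq/launcher'
# is never required here; only the Sidekiq worker process loads it.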

agrobbin

comment created time in 7 days

PullRequestReviewEvent

Pull request review comment DataDog/dd-trace-rb

[PROF-3944] Change profiler to gather "local root span id" from active traces instead of "trace id"

 def new_unfinished_span(name = nil)
   end
 end

+  describe '#current_span_and_root_span' do
+    subject(:current_span_and_root_span) { context.current_span_and_root_span }
+
+    let(:span) { Datadog::Span.new(tracer, 'span', context: context) }
+    let(:root_span) { Datadog::Span.new(tracer, 'root span', context: context) }
+
+    it 'returns the current span as well as the current root span' do
+      context.add_span(root_span)
+      context.add_span(span)
+
+      current_span, current_root_span = current_span_and_root_span
+
+      expect(current_span).to be span
+      expect(current_span).to be context.current_span
+      expect(current_root_span).to be root_span
+      expect(current_root_span).to be context.current_root_span
+    end
+
+    # NOTE: The rest of the behavior tests for setting the root span is in #current_root_span

I think you can remove it. If it's not being tested here, I'd assume it's covered elsewhere.

The comment lets us know that we didn't forget to test, but I'm not sure if it's strictly needed here.

ivoanjo

comment created time in 7 days

PullRequestReviewEvent

Pull request review comment DataDog/dd-trace-rb

Tracer#trace produces SpanOperation instead of Span

 class Span
   attr_accessor :name, :service, :span_type,
                 :span_id, :trace_id, :parent_id,
-                :status, :sampled,
-                :tracer, :context
+                :status, :sampled

   attr_reader :parent, :start_time, :end_time, :resource_container

   attr_writer :duration

-  # Create a new span linked to the given tracer. Call the \Tracer method <tt>start_span()</tt>
-  # and then <tt>finish()</tt> once the tracer operation is over.
+  # Create a new span manually. Call the <tt>start()</tt> method to start the time
+  # measurement and then <tt>stop()</tt> once the timing operation is over.
   #
   # * +service+: the service name for this span
   # * +resource+: the resource this span refers, or +name+ if it's missing.
   #     +nil+ can be used as a placeholder, when the resource value is not yet known at +#initialize+ time.
   # * +span_type+: the type of the span (such as +http+, +db+ and so on)
   # * +parent_id+: the identifier of the parent span
   # * +trace_id+: the identifier of the root span for this trace
-  # * +context+: the context of the span
-  def initialize(tracer, name, options = {})
-    @tracer = tracer
-
+  def initialize(name, options = {})

It only affects arguments where you have to tell the difference between the caller providing nil and completely omitting the argument. For example: for def initialize(opt1: "my-default"), if you need to tell the difference between Span.new(name, opt1: nil) and Span.new(name), you'll need the options = {} (or **options) generic keyword argument capture, instead of a simple keyword argument with the default value opt1: "my-default".

This doesn't seem to be the case with any of the arguments for Span, so it looks safe to refactor.
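To spell out the distinction with a small, hypothetical example (opt1 is not a real Span argument):

def initialize(name, options = {})
  # options.key?(:opt1) is true for Span.new(name, opt1: nil) and false for
  # Span.new(name), so an explicitly passed nil and an omitted argument can be
  # handled differently; a plain keyword default cannot record that distinction.
  @opt1_explicitly_set = options.key?(:opt1)
  @opt1 = @opt1_explicitly_set ? options[:opt1] : 'my-default'
end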

delner

comment created time in 7 days

PullRequestReviewEvent

issue comment DataDog/dd-trace-rb

Missing traces when upgrading from 0.34.2

👋 @rahul342, thank you for this report.

Would you be able to take a look at your Datadog agent logs and see if there are any messages related to "decoding-error" or "payload-too-large"?

We introduced a few new default tags since 0.34.2, and there's a chance some of them pushed your trace payload just over the limit.

Another thing you can do is to enable "partial_flush" in ddtrace: https://docs.datadoghq.com/tracing/setup_overview/setup/ruby/#payload-too-large

I know this doesn't address your root cause, but it normally works great for very large traces.

"partial_flush" will break down very large traces into smaller chunks, reducing your application's accumulated trace data at any point in time, at the expense of more frequent network flushes to the agent. For something with 10k+ spans, I'd definitely recommend "partial_flush".

rahul342

comment created time in 7 days

issue comment DataDog/dd-trace-rb

Potential bug in DefaultContextProvider

:wave: @apneadiving, thank you for raising this issue. I think what's confusing here is the naming of the argument key.

I've created a PR to rename this argument to thread, as this is the expected argument type for #context: https://github.com/DataDog/dd-trace-rb/pull/1692

apneadiving

comment created time in 7 days