If you are wondering where the data on this site comes from, please visit https://api.github.com/users/ferhatelmas/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
ferhat elmas (ferhatelmas) · @GetStream · Amsterdam · https://ferhatelmas.com · Passionate developer

ferhatelmas/algo 32

:books: My solutions to algorithm problems on various websites

ferhatelmas/adb 1

:saxophone: Advanced databases course projects

ferhatelmas/7l7w 0

Seven languages in seven weeks

ferhatelmas/academic-kickstart 0

Easily create a beautiful website using Academic and Hugo

ferhatelmas/admin-on-rest 0

A frontend framework for building admin SPAs on top of REST services, using React and Material Design

ferhatelmas/advanced-algo 0

Our advanced algorithm class stuff

ferhatelmas/aepp-sdk-js 0

Javascript SDK for the æternity blockchain

ferhatelmas/airr-react 0

Reusable React components for creating Single Page Apps

ferhatelmas/algoliasearch-client-go 0

Algolia Search API Client for Go

ferhatelmas/alp 0

A toy application protocol

Pull request review comment open-telemetry/opentelemetry-collector-contrib

example: add kubernetes logs example

+receivers:
+  filelog:
+    include: [ /var/log/pods/*/*/*.log ]
+    start_at: beginning
+    include_file_path: true
+    include_file_name: false
+    operators:
+      # Find out which format is used by kubernetes
+      - type: router
+        id: get-format
+        routes:
+          - output: parser-docker
+            expr: '$$record matches "^\\{"'
+          - output: parser-crio
+            expr: '$$record matches "^[^ Z]+ "'
+          - output: parser-containerd
+            expr: '$$record matches "^[^ Z]+Z"'
+      # Parse CRI-O format
+      - type: regex_parser
+        id: parser-crio
+        regex: '^(?P<time>[^ Z]+) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) (?P<log>.*)$'
+        output: extract_metadata_from_filepath
+        timestamp:
+          parse_from: time
+          layout_type: gotime
+          layout: '2006-01-02T15:04:05.000000000-07:00'
+      # Parse CRI-Containerd format
+      - type: regex_parser
+        id: parser-containerd
+        regex: '^(?P<time>[^ ^Z]+Z) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) (?P<log>.*)$'
+        output: extract_metadata_from_filepath
+        timestamp:
+          parse_from: time
+          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
+      # Parse Docker format
+      - type: json_parser
+        id: parser-docker
+        output: extract_metadata_from_filepath
+        timestamp:
+          parse_from: time
+          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
+      # Extract metadata from file path
+      - type: regex_parser
+        id: extract_metadata_from_filepath
+        regex: '^.*\/(?P<namespace>[^_]+)_(?P<pod_name>[^_]+)_(?P<uid>[a-f0-9\-]{36})\/(?P<container_name>[^\._]+)\/(?P<run_id>\d+)\.log$'
+        parse_from: $$attributes.file_path
+      # Move out attributes to Attributes
+      - type: metadata
+        attributes:
+          stream: 'EXPR($.stream)'
+          k8s.container.name: 'EXPR($.container_name)'
+          k8s.namespace.name: 'EXPR($.namespace)'
+          k8s.pod.name: 'EXPR($.pod_name)'
+          run_id: 'EXPR($.run_id)'
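The router operator dispatches each log line to a parser by inspecting how the line begins. A minimal Go sketch of the same three route expressions, in the same order; the sample log lines are made up for illustration:

```go
package main

import (
	"fmt"
	"regexp"
)

// The three route expressions from the router operator, in the same order.
var (
	dockerRe     = regexp.MustCompile(`^\{`)      // Docker json-file lines are JSON objects
	crioRe       = regexp.MustCompile(`^[^ Z]+ `) // CRI-O timestamps carry a numeric offset, no trailing Z
	containerdRe = regexp.MustCompile(`^[^ Z]+Z`) // containerd timestamps are UTC and end in Z
)

func detectFormat(line string) string {
	switch {
	case dockerRe.MatchString(line):
		return "parser-docker"
	case crioRe.MatchString(line):
		return "parser-crio"
	case containerdRe.MatchString(line):
		return "parser-containerd"
	}
	return "unknown"
}

func main() {
	// Sample lines are made up for illustration.
	fmt.Println(detectFormat(`{"log":"hello\n","stream":"stdout"}`))                // parser-docker
	fmt.Println(detectFormat("2021-03-05T14:00:00.000000000-07:00 stdout F hello")) // parser-crio
	fmt.Println(detectFormat("2021-03-05T14:00:00.000000000Z stdout F hello"))      // parser-containerd
}
```

Note that the CRI-O pattern cannot match a containerd line: `[^ Z]+` must be followed immediately by a space, but a containerd timestamp always places a `Z` before the first space.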

I'm not sure if it's really needed. It defines which run of a pod the logs are coming from. In case the pod is restarted, the run_id is incremented. It could be something like `k8s.container.run_number` or `k8s.container.run_id`
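The run_id discussed here comes from the `extract_metadata_from_filepath` regex. A Go sketch applying that regex to a pod log path; the namespace, pod name, and UID below are made up:

```go
package main

import (
	"fmt"
	"regexp"
)

// podPathRegex mirrors the extract_metadata_from_filepath regex from the config.
var podPathRegex = regexp.MustCompile(
	`^.*/(?P<namespace>[^_]+)_(?P<pod_name>[^_]+)_(?P<uid>[a-f0-9\-]{36})/(?P<container_name>[^._]+)/(?P<run_id>\d+)\.log$`)

// extractMetadata returns the named captures for a pod log path, or nil if the
// path does not look like a kubernetes pod log file.
func extractMetadata(path string) map[string]string {
	m := podPathRegex.FindStringSubmatch(path)
	if m == nil {
		return nil
	}
	out := map[string]string{}
	for i, name := range podPathRegex.SubexpNames() {
		if name != "" {
			out[name] = m[i]
		}
	}
	return out
}

func main() {
	// The namespace, pod name, and UID below are made up.
	meta := extractMetadata("/var/log/pods/default_my-app-5d9f_0a1b2c3d-0000-4000-8000-0123456789ab/my-app/2.log")
	fmt.Println(meta["namespace"], meta["pod_name"], meta["run_id"]) // default my-app-5d9f 2
}
```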

sumo-drosiek

comment created time in 2 minutes

started timberio/vector

started time in 8 minutes

Pull request review comment open-telemetry/opentelemetry-collector-contrib

example: add kubernetes logs example

+receivers:
+  filelog:
+    include: [ /var/log/pods/*/*/*.log ]
+    start_at: beginning
+    include_file_path: true
+    include_file_name: false
+    operators:
+      # Find out which format is used by kubernetes
+      - type: router
+        id: get-format
+        routes:
+          - output: parser-docker
+            expr: '$$record matches "^\\{"'
+          - output: parser-crio
+            expr: '$$record matches "^[^ Z]+ "'
+          - output: parser-containerd
+            expr: '$$record matches "^[^ Z]+Z"'
+      # Parse CRI-O format
+      - type: regex_parser
+        id: parser-crio
+        regex: '^(?P<time>[^ Z]+) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) (?P<log>.*)$'
+        output: extract_metadata_from_filepath
+        timestamp:
+          parse_from: time
+          layout_type: gotime
+          layout: '2006-01-02T15:04:05.000000000-07:00'
+      # Parse CRI-Containerd format
+      - type: regex_parser
+        id: parser-containerd
+        regex: '^(?P<time>[^ ^Z]+Z) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) (?P<log>.*)$'
+        output: extract_metadata_from_filepath
+        timestamp:
+          parse_from: time
+          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
+      # Parse Docker format
+      - type: json_parser
+        id: parser-docker
+        output: extract_metadata_from_filepath
+        timestamp:
+          parse_from: time
+          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
+      # Extract metadata from file path
+      - type: regex_parser
+        id: extract_metadata_from_filepath
+        regex: '^.*\/(?P<namespace>[^_]+)_(?P<pod_name>[^_]+)_(?P<uid>[a-f0-9\-]{36})\/(?P<container_name>[^\._]+)\/(?P<run_id>\d+)\.log$'

Unfortunately, we cannot:

Error: cannot setup pipelines: cannot build receivers: cannot create receiver filelog: compiling regex: error parsing regexp: invalid named capture: `(?P<k8s.namespace.name>

Also, I'm moving them later to be attributes of the log, not part of it.

sumo-drosiek

comment created time in 25 minutes

Pull request review comment open-telemetry/opentelemetry-collector-contrib

example: add kubernetes logs example

+receivers:
+  filelog:
+    include: [ /var/log/pods/*/*/*.log ]

To exclude Otelcol logs: `exclude: [ /var/log/pods/<namespace>_<pod_name>_*/<container_name>/*.log ]`, where namespace, pod_name, and container_name have to be glob-compliant.
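A quick way to sanity-check the shape of such an exclude glob is Go's `filepath.Match`. The namespace, pod, and container names below are hypothetical, and the filelog receiver may use a different glob library, so treat this only as an illustration of the pattern shape:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// excludesOwnLogs checks a candidate path against a hypothetical exclude glob
// for a collector container "otelcol" in namespace "monitoring"; all names
// here are made up.
func excludesOwnLogs(path string) bool {
	ok, _ := filepath.Match("/var/log/pods/monitoring_otelcol-*_*/otelcol/*.log", path)
	return ok
}

func main() {
	fmt.Println(excludesOwnLogs("/var/log/pods/monitoring_otelcol-x7k2p_0b1c2d3e-0000-4000-8000-000000000000/otelcol/0.log")) // true
	fmt.Println(excludesOwnLogs("/var/log/pods/default_my-app_0b1c2d3e-0000-4000-8000-000000000000/my-app/0.log"))            // false
}
```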

sumo-drosiek

comment created time in 35 minutes

Pull request review comment open-telemetry/opentelemetry-collector-contrib

example: add kubernetes logs example

+# OpenTelemetry Collector Demo
+
+This demo is a sample app to build the collector and exercise its kubernetes logs scraping functionality.
+
+## Build and Run
+
+Two steps are required to build and run the demo:
+
+1. Build latest docker image in main repository directory `make docker-otelcontribcol`
+1. Switch to this directory and run `docker-compose up`

What do you mean by k8s config example? An example daemonset with a configmap? The Otelcol configuration is going to be the same as in this example.

sumo-drosiek

comment created time in 42 minutes

issue opened open-telemetry/opentelemetry-specification

Collector resource detection shouldn't override library resource attributes

The spec specifies a merge behavior where the updating resource overrides the existing resource's attributes.

The resulting resource MUST have all attributes that are on any of the two input resources. If a key exists on both the old and updating resource, the value of the updating resource MUST be picked (even if the updated value is empty).

This behavior is unfortunately not great if the collector detects and overrides resources via the resourcedetectionprocessor (https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourcedetectionprocessor). For example, in cases where libraries are sending telemetry data to a collector running on a different host, this model will lead to the loss of host-specific attributes.

We should recommend that the collector not override the resource attributes coming via OTLP.
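The two merge behaviors can be contrasted in a few lines of Go. The attribute values are hypothetical, and `mergeKeepExisting` is only a sketch of the proposed recommendation, not an existing API:

```go
package main

import "fmt"

// mergeSpec implements the current spec wording: on a key conflict the
// updating (detected) resource wins, even if its value is empty.
func mergeSpec(old, updating map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range old {
		out[k] = v
	}
	for k, v := range updating {
		out[k] = v
	}
	return out
}

// mergeKeepExisting sketches the proposed recommendation: attributes already
// present on the incoming OTLP resource are kept.
func mergeKeepExisting(old, detected map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range detected {
		out[k] = v
	}
	for k, v := range old {
		out[k] = v
	}
	return out
}

func main() {
	fromSDK := map[string]string{"host.name": "app-vm"}        // set by the library
	detected := map[string]string{"host.name": "collector-vm"} // set by resourcedetectionprocessor
	fmt.Println(mergeSpec(fromSDK, detected)["host.name"])         // collector-vm: host info is lost
	fmt.Println(mergeKeepExisting(fromSDK, detected)["host.name"]) // app-vm: host info survives
}
```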

created time in an hour

Pull request review comment open-telemetry/opentelemetry-collector-contrib

Fix concurrency in emf exporter

 import (
 )
 
 const (
-	//http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch_limits_cwl.html
-	//In truncation logic, it assuming this constant value is larger than PerEventHeaderBytes + len(TruncatedSuffix)
-	MaxEventPayloadBytes = 1024 * 256 //256KB
+	// http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch_limits_cwl.html
+	// In truncation logic, it assuming this constant value is larger than PerEventHeaderBytes + len(TruncatedSuffix)
+	DefaultMaxEventPayloadBytes = 1024 * 256 //256KB
 	// http://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html
 	MaxRequestEventCount   = 10000
 	PerEventHeaderBytes    = 26
 	MaxRequestPayloadBytes = 1024 * 1024 * 1
 
-	logEventChanBufferSize    = 10000 // 1 request can handle max 10000 log entries
-	minPusherIntervalInMillis = 200   // 5 TPS
+	minPusherIntervalInMillis = 200 // 5 TPS
 
-	logEventBatchPushChanBufferSize = 2 // processing part does not need to be blocked by the current put log event request
-	TruncatedSuffix                 = "[Truncated...]"
+	TruncatedSuffix = "[Truncated...]"
 
 	LogEventTimestampLimitInPast   = 14 * 24 * time.Hour //None of the log events in the batch can be older than 14 days
 	LogEventTimestampLimitInFuture = -2 * time.Hour      //None of the log events in the batch can be more than 2 hours in the future.
 )
 
-//Struct to present a log event.
+var (
+	maxEventPayloadBytes = DefaultMaxEventPayloadBytes
+)
+
+// Struct to present a log event.
 type LogEvent struct {
 	InputLogEvent *cloudwatchlogs.InputLogEvent
-	//The time which log generated.
+	// The time which log generated.
 	LogGeneratedTime time.Time
 }
 
-//Calculate the log event payload bytes.
-func (logEvent *LogEvent) eventPayloadBytes() int {
-	return len(*logEvent.InputLogEvent.Message) + PerEventHeaderBytes
+// Create a new log event
+// logType will be propagated to logEventBatch and used by pusher to determine which client to call PutLogEvent
+func NewLogEvent(timestampInMillis int64, message string) *LogEvent {

Keep methods private unless they must be public (generally only the ones that implement an interface)

bjrara

comment created time in an hour

Pull request review comment open-telemetry/opentelemetry-collector-contrib

Fix concurrency in emf exporter

 import (
 )
 
 const (
-	//http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch_limits_cwl.html
-	//In truncation logic, it assuming this constant value is larger than PerEventHeaderBytes + len(TruncatedSuffix)
-	MaxEventPayloadBytes = 1024 * 256 //256KB
+	// http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch_limits_cwl.html
+	// In truncation logic, it assuming this constant value is larger than PerEventHeaderBytes + len(TruncatedSuffix)
+	DefaultMaxEventPayloadBytes = 1024 * 256 //256KB
 	// http://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html
 	MaxRequestEventCount   = 10000
 	PerEventHeaderBytes    = 26
 	MaxRequestPayloadBytes = 1024 * 1024 * 1
 
-	logEventChanBufferSize    = 10000 // 1 request can handle max 10000 log entries
-	minPusherIntervalInMillis = 200   // 5 TPS
+	minPusherIntervalInMillis = 200 // 5 TPS
 
-	logEventBatchPushChanBufferSize = 2 // processing part does not need to be blocked by the current put log event request
-	TruncatedSuffix                 = "[Truncated...]"
+	TruncatedSuffix = "[Truncated...]"
 
 	LogEventTimestampLimitInPast   = 14 * 24 * time.Hour //None of the log events in the batch can be older than 14 days
 	LogEventTimestampLimitInFuture = -2 * time.Hour      //None of the log events in the batch can be more than 2 hours in the future.
 )
 
-//Struct to present a log event.
+var (
+	maxEventPayloadBytes = DefaultMaxEventPayloadBytes
+)
+
+// Struct to present a log event.
 type LogEvent struct {
 	InputLogEvent *cloudwatchlogs.InputLogEvent
-	//The time which log generated.
+	// The time which log generated.
 	LogGeneratedTime time.Time
 }
 
-//Calculate the log event payload bytes.
-func (logEvent *LogEvent) eventPayloadBytes() int {
-	return len(*logEvent.InputLogEvent.Message) + PerEventHeaderBytes
+// Create a new log event
+// logType will be propagated to logEventBatch and used by pusher to determine which client to call PutLogEvent
+func NewLogEvent(timestampInMillis int64, message string) *LogEvent {

We generally use the timestampMS convention.

bjrara

comment created time in an hour

Pull request review comment open-telemetry/opentelemetry-specification

Suggest the correct markdownlint CLI

 the box settings for this repository will be consistent. To check for style violations, use
 
 ```bash
-# Ruby and gem are required for mdl
-gem install mdl
-mdl -c .mdlrc .
+# npm is required to install markdownlint
+npm install -g markdownlint-cli

I don't think it has to block this change, but adding a Docker image for markdownlint to the build-tools would be nice.

rakyll

comment created time in an hour

Pull request review comment open-telemetry/opentelemetry-specification

Suggest the correct markdownlint CLI

 the box settings for this repository will be consistent. To check for style violations, use
 
 ```bash
-# Ruby and gem are required for mdl
-gem install mdl
-mdl -c .mdlrc .
+# npm is required to install markdownlint
+npm install -g markdownlint-cli

Developers are advanced enough to fall back to that option if they don't want to install globally. This enables `make markdownlint` on a development machine, which was the actual purpose of the change.

rakyll

comment created time in an hour

PR closed open-telemetry/opentelemetry-collector-contrib

Introducing `groupbyauth` processor Stale

Description: The processor is based on the groupbytrace processor. It groups resources associated with the same authentication token. It is part of multi-tenant support: https://github.com/open-telemetry/opentelemetry-collector/pull/2495

Testing: Unit tests

Documentation: In-code comments

+3443 -0

7 comments

18 changed files

pmatyjasek-sumo

pr closed time in 2 hours

pull request comment open-telemetry/opentelemetry-collector-contrib

Introducing `groupbyauth` processor

Closed as inactive. Feel free to reopen if this PR is still being worked on.

pmatyjasek-sumo

comment created time in 2 hours

PR closed open-telemetry/opentelemetry-collector-contrib

[WIP] Multi-architecture container images Stale

Description: Update CI/CD to build multi-architecture container images. Currently multiple binaries are created for OS/architectures; however, only a single container image is created (linux/amd64). This PR will build and push both linux/amd64 and linux/arm64 container images and set up the foundation to make it trivial to build additional architecture images in the future.

Link to tracking Issue: #2379

Testing: TODO

Documentation: TODO

+46 -4

2 comments

3 changed files

gramidt

pr closed time in 2 hours

pull request comment open-telemetry/opentelemetry-collector-contrib

[WIP] Multi-architecture container images

Closed as inactive. Feel free to reopen if this PR is still being worked on.

gramidt

comment created time in 2 hours

Pull request review comment open-telemetry/opentelemetry-go

Avoid overriding configuration of tracer provider

 This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.htm
    | Windows | 1.14       | amd64        |
    | Windows | 1.15       | 386          |
    | Windows | 1.14       | 386          |
+- Added `WithDefaultSampler` and `WithSpanLimits` to tracer provider. (#1633)

That is fine. This should be handled by an automatic tool, which lowers everyone's mental burden. ;)

ijsong

comment created time in 2 hours

issue opened open-telemetry/opentelemetry-collector-contrib

[exporter/newrelic] instrumentation.provider attribute is missing

Describe the bug: According to the OpenTelemetry spec for New Relic, there should be an instrumentation.provider attribute: https://github.com/newrelic/newrelic-exporter-specs/blob/master/opentelemetry/OpenTelemetry-Spans.md

This appears to be missing in the exporter and spans sent to new relic are missing this attribute.

Steps to reproduce

  1. Instrument an app (I used the Go SDK)
  2. Point the app to use an opentelemetry collector with the new relic exporter.
  3. Filter entities in the explorer with instrumentation.provider = opentelemetry and see that the entities do not show up.

What did you expect to see? Spans sent to new relic should have the instrumentation.provider attribute set.

What did you see instead? The instrumentation.provider attribute was not set.

What version did you use? v0.21.0

What config did you use?

receivers:
  otlp:
    protocols:
      grpc:
processors:
  batch:
exporters:
  newrelic:
    apikey: NEW_RELIC_KEY_GOES_HERE
    timeout: 30s
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [newrelic]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [newrelic]

Environment OS: Minikube in docker mode

created time in 2 hours

started esimov/pigo

started time in 2 hours

Pull request review comment open-telemetry/opentelemetry-specification

Suggest the correct markdownlint CLI

 the box settings for this repository will be consistent. To check for style violations, use
 
 ```bash
-# Ruby and gem are required for mdl
-gem install mdl
-mdl -c .mdlrc .
+# npm is required to install markdownlint
+npm install -g markdownlint-cli

I would suggest creating a package.json and installing locally instead of relying on a global install.

rakyll

comment created time in 3 hours

PR opened open-telemetry/opentelemetry-specification

Suggest the correct markdownlint CLI

So developers can run `make markdownlint` to lint the files.

+3 -3

0 comments

1 changed file

pr created time in 3 hours

issue opened open-telemetry/opentelemetry-specification

Consistently format semantic convention enums

Semantic convention enums currently have inconsistent formatting. For example, os.type values are all uppercase (https://github.com/open-telemetry/opentelemetry-specification/blob/main/semantic_conventions/resource/os.yaml#L10) whereas cloud.infrastructure_service values are lowercase with underscores (https://github.com/open-telemetry/opentelemetry-specification/blob/main/semantic_conventions/resource/cloud.yaml#L45).

With the new requirement of autogenerating the semantic conventions, formatting in the YAML files is important for the code generator developers. Bless a format and consistently use it everywhere.

created time in 3 hours

delete branch GetStream/rails-chat-example

delete branch : dependabot/npm_and_yarn/elliptic-6.5.3

delete time in 3 hours

PR closed GetStream/rails-chat-example

Bump elliptic from 6.4.1 to 6.5.3 dependencies javascript

Bumps elliptic from 6.4.1 to 6.5.3.

Commits:

  • 8647803 6.5.3
  • 856fe4d signature: prevent malleability and overflows
  • 6048941 6.5.2
  • 9984964 package: bump dependencies
  • ec735ed utils: leak less information in getNAF()
  • 71e4e8e 6.5.1
  • 7ec66ff short: add infinity check before multiplying
  • ee7970b travis: really move on
  • 637d021 travis: move on
  • 5ed0bab package: update deps
  • Additional commits viewable in the compare view: https://github.com/indutny/elliptic/compare/v6.4.1...v6.5.3

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
  • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
  • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
  • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the Security Alerts page.


+15 -10

1 comment

1 changed file

dependabot[bot]

pr closed time in 3 hours

pull request comment GetStream/rails-chat-example

Bump elliptic from 6.4.1 to 6.5.3

Superseded by #22.

dependabot[bot]

comment created time in 3 hours

PR opened GetStream/rails-chat-example

Bump elliptic from 6.4.1 to 6.5.4

Bumps elliptic from 6.4.1 to 6.5.4.

Commits:

  • 43ac7f2 6.5.4
  • f4bc72b package: bump deps
  • 441b742 ec: validate that a point before deriving keys
  • e71b2d9 lib: relint using eslint
  • 8421a01 build(deps): bump elliptic from 6.4.1 to 6.5.3 (#231)
  • 8647803 6.5.3
  • 856fe4d signature: prevent malleability and overflows
  • 6048941 6.5.2
  • 9984964 package: bump dependencies
  • ec735ed utils: leak less information in getNAF()
  • Additional commits viewable in the compare view: https://github.com/indutny/elliptic/compare/v6.4.1...v6.5.4

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
  • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
  • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
  • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the Security Alerts page.


+25 -20

0 comments

1 changed file

pr created time in 3 hours

pull request comment open-telemetry/opentelemetry-specification

[Proposal] A pre-release review process for new major releases of any client library

This PR was marked stale due to lack of activity. It will be closed in 7 days.

bogdandrutu

comment created time in 4 hours

PR closed open-telemetry/opentelemetry-specification

Add additional env variables for Jaeger exporter Stale

Changes

This change adds two additional env variables for the Jaeger exporter: OTEL_EXPORTER_JAEGER_CERTIFICATE for gRPC, and OTEL_EXPORTER_JAEGER_PROTOCOL for specifying the protocol to use.
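A sketch of how such a variable might be consumed. The "grpc" fallback, the map-based lookup, and the override value are assumptions for illustration, not text from the proposal:

```go
package main

import "fmt"

// jaegerProtocol sketches how the proposed OTEL_EXPORTER_JAEGER_PROTOCOL
// variable might be read; the "grpc" default is an assumption, not spec text.
func jaegerProtocol(env map[string]string) string {
	if p := env["OTEL_EXPORTER_JAEGER_PROTOCOL"]; p != "" {
		return p
	}
	return "grpc"
}

func main() {
	// Unset: fall back to the assumed default.
	fmt.Println(jaegerProtocol(map[string]string{}))
	// Set: the configured value wins ("thrift-http" is a made-up example value).
	fmt.Println(jaegerProtocol(map[string]string{"OTEL_EXPORTER_JAEGER_PROTOCOL": "thrift-http"}))
}
```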

+15 -0

2 comments

1 changed file

lonewolf3739

pr closed time in 4 hours

pull request comment open-telemetry/opentelemetry-specification

Add additional env variables for Jaeger exporter

Closed as inactive. Feel free to reopen if this PR is still being worked on.

lonewolf3739

comment created time in 4 hours

pull request comment open-telemetry/opentelemetry-specification

Add SetAllAttributes to allow setting multiple attributes at once to …

This PR was marked stale due to lack of activity. It will be closed in 7 days.

anuraaga

comment created time in 4 hours

issue opened open-telemetry/opentelemetry-specification

Guidance on naming tracers

What are you trying to achieve?

Make it easier to write instrumentation by providing more concrete guidelines on defining a tracer name.

Additional context.

This came up here

I think we don't have much guidance on how to define a tracer name.

There's an example with a reverse domain name here

https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/api.md#get-a-tracer

This says look at the overview for naming guidelines

https://github.com/open-telemetry/opentelemetry-specification/blob/11cc73939a32e3a2e6f11bdeab843c61cf8594e9/specification/glossary.md#instrumentation-library

The only naming guideline I can find is how to name the published artifact

https://github.com/open-telemetry/opentelemetry-specification/blob/11cc73939a32e3a2e6f11bdeab843c61cf8594e9/specification/overview.md#instrumentation-libraries

The name of the published artifact is a clear naming convention, and we could clarify that that is what should go in the tracer name. But it has a downside: names aren't consistent across languages, and it's not always clear which library a name actually refers to.

  • io.couchbase.clients:java-client - Java
  • couchbase - Python
  • couchbase - NodeJS

Python and NodeJS actually have the same name, ouch!

A possible recommendation that ignores the artifact name is "some sort of reverse domain name, plus the type of library (such as client or server), plus the language". For couchbase, it'd probably be io.couchbase.client.java, io.couchbase.client.python, io.couchbase.client.js, and in the future hopefully io.couchbase.server.

created time in 4 hours