Callum Styan (cstyan) · @grafana · Vancouver, Canada · https://twitter.com/CallumStyan · cloud/distributed system things

cstyan/adbDocumentation 185

Better documentation of the ADB protocol, specifically for use over USB.

cstyan/backdoorThing 1

It's a thing that makes a backdoor. Very basic PoC, don't expect this to get around any fancy security.

cstyan/dotfiles 1

dotfiles, config files, etc.

brijshah/Covert-Backdoor 0

Covert Communication Application written in Python

brijshah/DNS-Spoofer 0

DNS Spoofer with ARP Poisoning in Python

brijshah/Python-Media 0

Image/Audio/Video Manipulation in Python

cstyan/1password-teams-open-source 0

Get a free 1Password Teams membership for your open source project

cstyan/8005-a2 0

Scalable server research. Comparing the performance of various server techniques.

cstyan/alertmanager 0

Prometheus Alertmanager

pull request comment grafana/loki

Rewrite lambda-promtail to use subscription filters.

@julienduchesne thanks for the review!

cstyan

comment created time in 2 days

push event cstyan/loki

Callum Styan

commit sha cfc53fe740324c2cb254cec566e711dedfd288a6

Fix small terraform comments. Signed-off-by: Callum Styan <callumstyan@gmail.com>

view details

push time in 2 days

PullRequestReviewEvent

Pull request review comment grafana/loki

Rewrite lambda-promtail to use subscription filters.

provider "aws" {
  region = "us-east-2"
}

data "aws_region" "current" {}

resource "aws_iam_role" "iam_for_lambda" {
  name = "iam_for_lambda"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_iam_role_policy" "logs" {
  name   = "lambda-logs"
  role   = aws_iam_role.iam_for_lambda.name
  policy = jsonencode({
    "Statement": [
      {
        "Action": [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents",
        ],
        "Effect": "Allow",
        "Resource": "arn:aws:logs:*:*:*",
      }
    ]
  })
}

resource "aws_lambda_function" "lambda_promtail" {
  image_uri = "your-image-ecr:latest"
  function_name = "lambda_promtail"
  role          = aws_iam_role.iam_for_lambda.arn

  timeout=60
  memory_size=128
  package_type="Image"

  # vpc_config {
  #   # Every subnet should be able to reach an EFS mount target in the same Availability Zone. Cross-AZ mounts are not permitted.
  #   subnet_ids         = [aws_subnet.subnet_for_lambda.id]
  #   security_group_ids = [aws_security_group.sg_for_lambda.id]
  # }

  environment {
    variables = {
      WRITE_ADDRESS = "http://localhost:8080/loki/api/v1/push"
    }
  }
}

resource "aws_lambda_function_event_invoke_config" "lambda_promtail_invoke_config" {
  function_name                = aws_lambda_function.lambda_promtail.function_name
  maximum_retry_attempts       = 2
}

resource "aws_lambda_permission" "lambda_promtail_allow_cloudwatch" {
  statement_id  = "lambda-promtail-allow-cloudwatch"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.lambda_promtail.function_name}"

:+1: Turns out `terraform fmt` fixes this for us!

cstyan

comment created time in 2 days

push event cstyan/loki

Callum Styan

commit sha 42314cf10e3b7bb03ecd57693eabcb875594948f

Clean up tests. Signed-off-by: Callum Styan <callumstyan@gmail.com>

view details

push time in 2 days

Pull request review comment grafana/loki

WIP: Make log query results cacheable

func Test_splitByInterval_Do(t *testing.T) {
	}
}

func Test_splitByInterval_Do_aligned(t *testing.T) {
	ctx := user.InjectOrgID(context.Background(), "1")
	data := []logproto.Entry{
		{Timestamp: time.Unix(0, 0), Line: fmt.Sprintf("%d", 0)},
		{Timestamp: time.Unix(0, time.Hour.Nanoseconds()-1), Line: fmt.Sprintf("%d", time.Hour.Nanoseconds()-1)},
		{Timestamp: time.Unix(0, time.Hour.Nanoseconds()), Line: fmt.Sprintf("%d", time.Hour.Nanoseconds())},
		{Timestamp: time.Unix(0, 2*time.Hour.Nanoseconds()-1), Line: fmt.Sprintf("%d", 2*time.Hour.Nanoseconds()-1)},
		{Timestamp: time.Unix(0, 2*time.Hour.Nanoseconds()), Line: fmt.Sprintf("%d", 2*time.Hour.Nanoseconds())},
		{Timestamp: time.Unix(0, 3*time.Hour.Nanoseconds()-1), Line: fmt.Sprintf("%d", 3*time.Hour.Nanoseconds()-1)},
		{Timestamp: time.Unix(0, 3*time.Hour.Nanoseconds()), Line: fmt.Sprintf("%d", 3*time.Hour.Nanoseconds())},
		{Timestamp: time.Unix(0, 4*time.Hour.Nanoseconds()-1), Line: fmt.Sprintf("%d", 4*time.Hour.Nanoseconds()-1)},
	}

	next := queryrange.HandlerFunc(func(_ context.Context, r queryrange.Request) (queryrange.Response, error) {
		var entries []logproto.Entry
		for _, e := range data {
			if e.Timestamp.UnixNano()/int64(time.Millisecond) >= r.GetStart() && e.Timestamp.UnixNano()/int64(time.Millisecond) < r.GetEnd() {
				entries = append(entries, e)
			}
		}
		return &LokiResponse{
			Status:    loghttp.QueryStatusSuccess,
			Direction: r.(*LokiRequest).Direction,
			Limit:     r.(*LokiRequest).Limit,
			Version:   uint32(loghttp.VersionV1),
			Data: LokiData{
				ResultType: loghttp.ResultTypeStream,
				Result: []logproto.Stream{
					{
						Labels:  `{foo="bar", level="debug"}`,
						Entries: entries,
					},
				},
			},
		}, nil
	})

	l := WithDefaultLimits(fakeLimits{}, queryrange.Config{SplitQueriesByInterval: time.Hour})
	split := SplitByIntervalMiddleware(
		l,
		LokiCodec,
		splitByTime,
		nilMetrics,
	).Wrap(next)

	tests := []struct {
		name string
		req  *LokiRequest
		want *LokiResponse
	}{
		{
			"backward",
			&LokiRequest{
				StartTs:   time.Unix(0, (5 * time.Minute).Nanoseconds()),
				EndTs:     time.Unix(0, (4 * time.Hour).Nanoseconds()),
				Query:     "",
				Limit:     1000,
				Step:      1,
				Direction: logproto.BACKWARD,
				Path:      "/api/prom/query_range",
			},
			&LokiResponse{
				Status:    loghttp.QueryStatusSuccess,
				Direction: logproto.BACKWARD,
				Limit:     1000,
				Version:   1,
				Data: LokiData{
					ResultType: loghttp.ResultTypeStream,
					Result: []logproto.Stream{
						{
							Labels: `{foo="bar", level="debug"}`,
							Entries: []logproto.Entry{ // Apparently the function we're testing doesn't sort the entries
								{Timestamp: time.Unix(0, 3*time.Hour.Nanoseconds()), Line: fmt.Sprintf("%d", 3*time.Hour.Nanoseconds())},
								{Timestamp: time.Unix(0, 4*time.Hour.Nanoseconds()-1), Line: fmt.Sprintf("%d", 4*time.Hour.Nanoseconds()-1)},
								{Timestamp: time.Unix(0, 2*time.Hour.Nanoseconds()), Line: fmt.Sprintf("%d", 2*time.Hour.Nanoseconds())},
								{Timestamp: time.Unix(0, 3*time.Hour.Nanoseconds()-1), Line: fmt.Sprintf("%d", 3*time.Hour.Nanoseconds()-1)},
								{Timestamp: time.Unix(0, time.Hour.Nanoseconds()), Line: fmt.Sprintf("%d", time.Hour.Nanoseconds())},
								{Timestamp: time.Unix(0, 2*time.Hour.Nanoseconds()-1), Line: fmt.Sprintf("%d", 2*time.Hour.Nanoseconds()-1)},
								{Timestamp: time.Unix(0, time.Hour.Nanoseconds()-1), Line: fmt.Sprintf("%d", time.Hour.Nanoseconds()-1)},
							},
						},
					},
				},
			},
		},
		{
			"forward",
			&LokiRequest{
				StartTs:   time.Unix(0, (5 * time.Minute).Nanoseconds()),
				EndTs:     time.Unix(0, (4 * time.Hour).Nanoseconds()),
				Query:     "",
				Limit:     1000,
				Step:      1,
				Direction: logproto.FORWARD,
				Path:      "/api/prom/query_range",
			},
			&LokiResponse{
				Status:    loghttp.QueryStatusSuccess,
				Direction: logproto.FORWARD,
				Limit:     1000,
				Version:   1,
				Data: LokiData{
					ResultType: loghttp.ResultTypeStream,
					Result: []logproto.Stream{
						{
							Labels: `{foo="bar", level="debug"}`,
							Entries: []logproto.Entry{ // Apparently the function we're testing doesn't sort the entries
								{Timestamp: time.Unix(0, time.Hour.Nanoseconds()-1), Line: fmt.Sprintf("%d", time.Hour.Nanoseconds()-1)},
								{Timestamp: time.Unix(0, time.Hour.Nanoseconds()), Line: fmt.Sprintf("%d", time.Hour.Nanoseconds())},
								{Timestamp: time.Unix(0, 2*time.Hour.Nanoseconds()-1), Line: fmt.Sprintf("%d", 2*time.Hour.Nanoseconds()-1)},
								{Timestamp: time.Unix(0, 2*time.Hour.Nanoseconds()), Line: fmt.Sprintf("%d", 2*time.Hour.Nanoseconds())},
								{Timestamp: time.Unix(0, 3*time.Hour.Nanoseconds()-1), Line: fmt.Sprintf("%d", 3*time.Hour.Nanoseconds()-1)},
								{Timestamp: time.Unix(0, 3*time.Hour.Nanoseconds()), Line: fmt.Sprintf("%d", 3*time.Hour.Nanoseconds())},
								{Timestamp: time.Unix(0, 4*time.Hour.Nanoseconds()-1), Line: fmt.Sprintf("%d", 4*time.Hour.Nanoseconds()-1)},
							},
						},
					},
				},
			},
		},

		{
			"forward limited",
			&LokiRequest{
				StartTs:   time.Unix(0, (5 * time.Minute).Nanoseconds()),
				EndTs:     time.Unix(0, (4 * time.Hour).Nanoseconds()),
				Query:     "",
				Limit:     2,
				Step:      1,
				Direction: logproto.FORWARD,
				Path:      "/api/prom/query_range",
			},
			&LokiResponse{
				Status:    loghttp.QueryStatusSuccess,
				Direction: logproto.FORWARD,
				Limit:     2,
				Version:   1,
				Data: LokiData{
					ResultType: loghttp.ResultTypeStream,
					Result: []logproto.Stream{
						{
							Labels: `{foo="bar", level="debug"}`,
							Entries: []logproto.Entry{
								{Timestamp: time.Unix(0, time.Hour.Nanoseconds()-1), Line: fmt.Sprintf("%d", time.Hour.Nanoseconds()-1)},
								{Timestamp: time.Unix(0, time.Hour.Nanoseconds()), Line: fmt.Sprintf("%d", time.Hour.Nanoseconds())},
							},
						},
					},
				},
			},
		},
		{
			"backward limited",
			&LokiRequest{
				StartTs:   time.Unix(0, (5 * time.Minute).Nanoseconds()),
				EndTs:     time.Unix(0, (4 * time.Hour).Nanoseconds()),
				Query:     "",
				Limit:     2,
				Step:      1,
				Direction: logproto.BACKWARD,
				Path:      "/api/prom/query_range",
			},
			&LokiResponse{
				Status:    loghttp.QueryStatusSuccess,
				Direction: logproto.BACKWARD,
				Limit:     2,
				Version:   1,
				Data: LokiData{
					ResultType: loghttp.ResultTypeStream,
					Result: []logproto.Stream{
						{
							Labels: `{foo="bar", level="debug"}`,
							Entries: []logproto.Entry{
								{Timestamp: time.Unix(0, 3*time.Hour.Nanoseconds()), Line: fmt.Sprintf("%d", 3*time.Hour.Nanoseconds())},
								{Timestamp: time.Unix(0, 4*time.Hour.Nanoseconds()-1), Line: fmt.Sprintf("%d", 4*time.Hour.Nanoseconds()-1)},
							},
						},
					},
				},
			},
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			res, err := split.Do(ctx, tt.req)
			require.NoError(t, err)
			require.Equal(t, tt.want, res)
		})
	}
}

func Test_alignedIntervals(t *testing.T) {
	l := WithDefaultLimits(fakeLimits{}, queryrange.Config{SplitQueriesByInterval: time.Hour})
	// next := queryrange.HandlerFunc(func(_ context.Context, r queryrange.Request) (queryrange.Response, error) {
	// 	return &LokiResponse{}, nil
	// })

	// split := &splitByInterval{
	// 	next:     next,
	// 	limits:   l,
	// 	merger:   LokiCodec,
	// 	metrics:  nilMetrics,
	// 	splitter: splitByTime,
	// }

	tests := []struct {

I thought there would be some value in having both, but I can see how the other test would be confusing.

cstyan

comment created time in 2 days

PullRequestReviewEvent

started slok/sloth

started time in 2 days

pull request comment grafana/loki

Rewrite lambda-promtail to use subscription filters.

@julienduchesne let me know if you have any comments on the lambda/terraform here :)

cstyan

comment created time in 3 days

push event cstyan/loki

Callum Styan

commit sha d3a0f674bdb345f9d9973effe0b823c9b69e4332

Compare error contents instead of doing an error equality comparison directly, since the rate limit error is an RPC error. Signed-off-by: Callum Styan <callumstyan@gmail.com>

view details

push time in 3 days

PullRequestReviewEvent
PullRequestReviewEvent

issue comment grafana/loki

replace out_of_order with entry_too_far_behind

Do we want to rename this even though ingestion of out-of-order data is opt-in via the ingester config?

owen-d

comment created time in 3 days

PullRequestReviewEvent

Pull request review comment prometheus/prometheus

Add initial support for exemplar to the remote write receiver endpoint

// Copyright 2021 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package storage

import (
	"fmt"

	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
	"github.com/pkg/errors"
	"github.com/prometheus/client_golang/prometheus"

We usually separate third-party library imports and Prometheus library imports with a newline.

secat

comment created time in 4 days

Pull request review comment prometheus/prometheus

Add initial support for exemplar to the remote write receiver endpoint

 import (
 	config_util "github.com/prometheus/common/config"
 	"github.com/prometheus/common/model"
 	"github.com/prometheus/common/version"
-

Best to try and avoid unnecessary changes like this. IIRC we try to keep a newline between any prometheus org imports and other external imports, so if anything this newline should still exist but be between lines 34 and 35.

secat

comment created time in 4 days

PullRequestReviewEvent

pull request comment grafana/loki

Rewrite lambda-promtail to use subscription filters.

XXL tag is due to vendoring changes.

cstyan

comment created time in 5 days

push event cstyan/loki

Callum Styan

commit sha 1aba5bd26e2e3f701b55c66f2871fb36f696aa5a

Update lambda library vendoring.

view details

push time in 5 days

pull request comment grafana/loki

Improve error message for stream rate limit.

Rebased off main, but tests are still failing. @owen-d, if the error message improvement is still relevant I'll fix the test issues; otherwise we can close.

cstyan

comment created time in 5 days

push event cstyan/loki

Callum Styan

commit sha 522ca2e4a103a783894d606c6dbee62ad8cbf477

Rewrite lambda-promtail to use subscription filters. Signed-off-by: Callum Styan <callumstyan@gmail.com>

view details

push time in 5 days

push event cstyan/loki

Callum Styan

commit sha 5ac66c435a6ca74cd4730c20a06a82739001983a

Rewrite lambda-promtail to use subscription filters. Signed-off-by: Callum Styan <callumstyan@gmail.com>

view details

push time in 5 days

PR opened grafana/loki

Reviewers
Rewrite lambda-promtail to use subscription filters.

Okay, I finally have the terraform/cloudformation/docker image all working (minus the removal of the AWS resource names I was using from the files I'm pushing here).

Probably still some README improvements/initial details that can be added but this works :tm:

Signed-off-by: Callum Styan callumstyan@gmail.com

+2270 -97

0 comment

10 changed files

pr created time in 5 days

create branch cstyan/loki

branch : aws-promtail-improvements

created branch time in 5 days

push event cstyan/loki

Karsten Jeschkies

commit sha e25587bfd7896c12cc225bf0a1d54104d8f6f0ea

Document known Docker driver issues. (#4190) * Document known Docker driver issues. * Update docs/sources/clients/docker-driver/_index.md Co-authored-by: Karen Miller <84039272+KMiller-Grafana@users.noreply.github.com> * Update docs/sources/clients/docker-driver/_index.md Co-authored-by: Karen Miller <84039272+KMiller-Grafana@users.noreply.github.com> * Update docs/sources/clients/docker-driver/_index.md Co-authored-by: Karen Miller <84039272+KMiller-Grafana@users.noreply.github.com> Co-authored-by: Karen Miller <84039272+KMiller-Grafana@users.noreply.github.com>

view details

Mateusz Gozdek

commit sha b3df9f5dfd70bb363c22cb952628bf78321bbea8

cmd/logcli: add --follow flag as an alias for --tail (#4203) When using 'logcli query --tail', '--tail' behaves similar to the 'tail' command, which uses '--follow' flag, if you want to continuously follow the appended data to the file. I believe '--follow' flag is more natural for system administrators to use rather than '--tail' if one wants to "follow" the incoming logs, so this commit adds one, as an alias for '--tail'. Closes #3570 Signed-off-by: Mateusz Gozdek <mgozdekof@gmail.com>

view details

Benoît Knecht

commit sha 488a21bff77eb876353c6915fecdff6319b14389

pkg/storage/chunk/aws: Add s3.http.ca-file option (#4211) * pkg/storage/chunk/aws: Fix insecure-skip-verify documentation The documentation claimed that the `s3.http.insecure-skip-verify` option needed to be set to `false` in order to skip verifying certificates, but it needs to be set to `true`. Signed-off-by: Benoît Knecht <bknecht@protonmail.ch> * pkg/storage/chunk/aws: Add s3.http.ca-file option This option lets users set a custom CA file for connections to S3, in case they use a local S3 instance with an internal PKI. Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>

view details

Owen Diehl

commit sha abcf4d083f8ab7358754ee7cb111861d5c91a3ee

WAL replay discard metrics (#4212) * adds replay discard metrics * lint

view details

Callum Styan

commit sha 1e998899b0bd406a600d3b11f64a09ce13aa65a0

Update tanka installation docs to refer to tanka section about `jb` (#4208) installation instead of providing a cli command that may not work. Signed-off-by: Callum Styan <callumstyan@gmail.com>

view details

Karsten Jeschkies

commit sha 29d23949624039055348a42e59df9c00b6295d8b

Link Kubernetes service discovery configuration. (#4206) * Link Kubernetes service discovery configuration. * Use `configuration`.

view details

So Koide

commit sha b72d8abfbc844113dae03562510ab6e073f99a43

Authc/z: Enable grpc_client_config to allow mTLS (#4176) * feat: enable gRPC client config * Update pkg/ingester/client/client.go Co-authored-by: Owen Diehl <ow.diehl@gmail.com>

view details

Owen Diehl

commit sha 668622c8130256b7e8ef90ce04e243c2bafe616f

Refactor per stream rate limit (#4213) * uses new StreamRateLimiter * sets burst before limiter, uses requested ts * creates new non infinite rate limiters when configs change * time/rate testware * less indirection & protect rate limit access from race conditions * removes old comment * always recreate stream limiter after config change

view details

Ed Welch

commit sha de580b923fbb22791b3720bfce22baf2911d73d4

add backport workflow for automating PR creation (#4220)

view details

Nick Pillitteri

commit sha 79a7bdbabcbf337002db72b6eb94c94c61610193

Chore: Use services and modules from grafana/dskit (#4196) * Update Cortex vendor to bcf15611345da65e03604a08a0213aaaaac49e54 Signed-off-by: Nick Pillitteri <nick.pillitteri@grafana.com> * Use services and modules packages from grafana/dskit Signed-off-by: Nick Pillitteri <nick.pillitteri@grafana.com> * Update promtail test to set default weaveworks server network type Signed-off-by: Nick Pillitteri <nick.pillitteri@grafana.com> * Punctuation Signed-off-by: Nick Pillitteri <nick.pillitteri@grafana.com>

view details

Arve Knudsen

commit sha 034e2a63cf2e245d21734c159a48aa3768fccbae

Makefile: Add format target (#4226) Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>

view details

Arve Knudsen

commit sha cfb4fc1f55d9184891c0198e7f5dde2dadd070ec

Use flagext from dskit (#4225) Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>

view details

Owen Diehl

commit sha 79d53317dd34d32140fd7067fe68ad1c1c0ab26a

bumps per stream default rate limits (#4228) After some testing, this seems a more reasonable balance between protecting ingesters and good defaults out of the box

view details

Alexandre Vallarino

commit sha b0646e7156d1aae0f0428631f1bbc3a0e6597330

fix: typo on loki-external-labels for labels (#4231)

view details

Kaviraj

commit sha b36bc5ab3272dbbfbdab00f23afa98e8abc9dbfe

Add metrics for gcplog scrape. (#4235) * Add metrics for gcplog scrape. Also fix the Ready() method of target * Fix typo with help message

view details

李国忠

commit sha 6f78ffe33cf8f0d6534db5fe8acd663d790e9020

[fix] distributor: fix goroutine leak (#4238) Signed-off-by: fuling <fuling.lgz@alibaba-inc.com>

view details

Bryan Boreham

commit sha 13a3df8826d87c07e683cdc1049c1b64f443fc95

Simplify Distributor.push (#4240) Instead of creating another goroutine to wait for the background `sendSamples()` to finish, give the channels it is sending on a buffer so that they can send without a listener. Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

view details
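The technique in that commit, giving a channel enough buffer that the background send completes without a dedicated listener goroutine, in miniature (names here are illustrative, not the actual Prometheus code):

```go
package main

import "fmt"

// sendResult emulates a background sender. With an unbuffered channel it
// would block until a receiver is listening, which previously required a
// goroutine spawned just to drain the channel; a buffer sized for the
// pending sends lets it run to completion immediately instead.
func sendResult(results chan<- int) {
	results <- 42
}

func main() {
	results := make(chan int, 1) // buffered: the send below cannot block
	sendResult(results)          // completes with no receiver attached yet
	fmt.Println(<-results)       // 42
}
```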

Karen Miller

commit sha 11a0d28b611f834c81eee206e26a66b057e443a2

Docs: first draft, Loki accepts out-of-order writes (#4237) * Docs: first draft, Loki accepts out-of-order writes * Update docs/sources/architecture/_index.md Co-authored-by: Owen Diehl <ow.diehl@gmail.com>

view details

Arve Knudsen

commit sha a5790f332ef7fa0d363f9cce2cb2806ccda008ca

Chore: Use runtimeconfig from dskit (#4227) * Use runtimeconfig from dskit Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com> * runtimeconfig: Use cortex_ prefix for metrics Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>

view details

Jordan Rushing

commit sha a58aea1bf456280aa06fb2f2d09d2d005eefa6ed

Update limits_config docs to include querier.max_query_lookback flag (#4244)

view details

push time in 6 days

Pull request review comment prometheus/prometheus

WIP: Add initial support for exemplar to the remote write receiver endpoint

 func (h *writeHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 	w.WriteHeader(http.StatusNoContent)
 }

+func (h *writeHandler) checkAppendExemplarError(err error, e exemplar.Exemplar, outOfOrderExemplarErrs *int) error {

Either pkg/exemplar or storage/ makes sense to me, assuming you can avoid a circular import

secat

comment created time in 9 days

PullRequestReviewEvent

Pull request review comment prometheus/prometheus

WIP: Add initial support for exemplar to the remote write receiver endpoint

 func (h *writeHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 	w.WriteHeader(http.StatusNoContent)
 }

+func (h *writeHandler) checkAppendExemplarError(err error, e exemplar.Exemplar, outOfOrderExemplarErrs *int) error {

Instead of replicating this function from the scrape loop we should move it somewhere both scrape and write_handler can use it.

secat

comment created time in 10 days

Pull request review comment prometheus/prometheus

WIP: Add initial support for exemplar to the remote write receiver endpoint

 func (h *writeHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 	w.WriteHeader(http.StatusNoContent)
 }

+func (h *writeHandler) checkAppendExemplarError(err error, e exemplar.Exemplar, outOfOrderExemplarErrs *int) error {

We could probably do the same with the actual call to this function, and with the error-message logging later on (lines 109-114), as well. I don't have a suggestion off the top of my head for where to move the code for reuse, though.

secat

comment created time in 10 days

PullRequestReviewEvent