Dieter Plaetinck Dieterbe @grafana Malta http://dieter.plaetinck.be building open source monitoring at Grafana Labs

dgryski/go-tsz 404

Time series compression algorithm from Facebook's Gorilla paper

Dieterbe/anthracite 294

an event / change logging/management app

Dieterbe/comma 15

Comma: a super simple comment server in Go. Great for statically generated web sites.

Dieterbe/aif-configs 7

community-contributed configs for the AIF automatic procedure

Dieterbe/awmenu 5

A GTK text entry widget with awesomebar-style completion

Dieterbe/dataprocessexp 5

performance experiments for different data structures/approaches for a graphite-style data processing library

Dieterbe/ddm 5

DDM (Distributed Data Manager) is a bash script that manages your data, distributed over multiple (*nix) systems. You tell it how you want to work with your data and ddm will do the copying, deleting, syncing, committing, updating, ... for you

Dieterbe/arch-configs 4

scripts which help with merging config files on Arch Linux

Dieterbe/cake 4

My changes to CakePHP core

Dieterbe/dautostart 4

A standalone freedesktop-compliant (xdg based) application starter

issue comment grafana/metrictank

Helm chart and sharding

Why can't we set the same partition for different nodes, like with kafka-mdm-in "partitions"?

IIRC you can. That said, I would advise against running a cluster with carbon input. To me, cluster implies HA, but the problem with carbon is that as soon as an MT instance needs to restart, it needs to replay data to fill its chunks, and you can't do that with carbon. Kafka as a WAL solves that problem.

kennux75

comment created time in 3 hours

pull request comment grafana/metrictank

[WIP] Implements function movingWindow and child functions

Strange; according to "movingWindow": {NewMovingWindow, true} it should work. The other reason why it might proxy is if you have this:

# proxy to graphite when metrictank considers the request bad
proxy-bad-requests = true

AND metrictank considers the query invalid (e.g. bad parameters not matching the signature). The logs should say
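For illustration, here is a minimal, generic sketch of the "validate locally, otherwise proxy to Graphite" fallback described above, using only the Go standard library. The URL, port, and validation rule are made up; this is not metrictank's actual handler code.

package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// hypothetical upstream graphite-web address, standing in for the real config value
	graphiteURL, err := url.Parse("http://graphite:8080")
	if err != nil {
		panic(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(graphiteURL)

	// placeholder validation: a real implementation would check the render
	// expression's function signature and parameters
	valid := func(r *http.Request) bool {
		return r.URL.Query().Get("target") != ""
	}

	http.HandleFunc("/render", func(w http.ResponseWriter, r *http.Request) {
		if !valid(r) {
			// the spirit of proxy-bad-requests = true: hand the query
			// to graphite instead of returning an error
			proxy.ServeHTTP(w, r)
			return
		}
		w.Write([]byte("handled locally\n"))
	})
	http.ListenAndServe(":6060", nil)
}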

vaguiar

comment created time in 2 days

issue comment VSCodeVim/Vim

Disable cursor positioning with mouse in normal mode

Strangely, the behavior requested in #5212 is not how vim works at all... the behavior asked for here, however, is.

nicholascannon1

comment created time in 2 days

delete branch Dieterbe/i3status-rust

delete branch : music-on-click

delete time in 2 days

push event grafana/metrictank

Robert Milan

commit sha 2a63113c9dac459f6c58b1617ba4385dfe11305d

add capability to process bigtable indices and re-organize code

view details

Robert Milan

commit sha 0d11ff7533f4efe26fcfec0cbcc43aa9c460c88c

Update cmd/mt-index-cat/main.go Co-authored-by: Dieter Plaetinck <dieter@grafana.com>

view details

Robert Milan

commit sha c43c831b67b7f765fb2b59d737286210587b33e0

Update cmd/mt-index-cat/main.go Co-authored-by: Dieter Plaetinck <dieter@grafana.com>

view details

Robert Milan

commit sha 21d0eca23e2965bd1780a8738331fde3abdd78b8

print messages to stderr instead of stdout

view details

Robert Milan

commit sha f954578bf095f731aaa368e752eb111b70c723cd

update tools documentation

view details

Dieter Plaetinck

commit sha 813f78690219a31502aaae269d603ae6a865fdee

Merge pull request #1909 from grafana/add-bt-capability-to-index-cat add capability to process bigtable indices with mt-index-cat and re-organize code

view details

push time in 3 days

delete branch grafana/metrictank

delete branch : add-bt-capability-to-index-cat

delete time in 3 days

PR merged grafana/metrictank

add capability to process bigtable indices with mt-index-cat and re-organize code

This adds the long-needed ability to use mt-index-cat on Bigtable.

+174 -52

0 comment

3 changed files

robert-milan

pr closed time in 3 days

PullRequestReviewEvent

Pull request review comment grafana/metrictank

add capability to process bigtable indices with mt-index-cat and re-organize code

 func main() {
 		}
 	}
 
-	var defs []schema.MetricDefinition
-	if len(partitions) == 0 {
-		defs = idx.Load(nil, time.Now())
-	} else {
-		defs = idx.LoadPartitions(partitions, nil, time.Now())
-	}
-	// set this after doing the query, to assure age can't possibly be negative unless if clocks are misconfigured.
-	out.QueryTime = time.Now().Unix()
-	total := len(defs)
-	shown := 0
-
-	for _, d := range defs {
-		// note that prefix and substr can be "", meaning filter disabled.
-		// the conditions handle this fine as well.
-		if !strings.HasPrefix(d.Name, prefix) {
-			continue
-		}
-		if !strings.HasSuffix(d.Name, suffix) {
-			continue
-		}
-		if !strings.Contains(d.Name, substr) {
-			continue
-		}
-		if tags == "none" && len(d.Tags) != 0 {
-			continue
-		}
-		if tags == "some" && len(d.Tags) == 0 {
-			continue
+	// if partitionStr is set to all (*) and we are using bigtable then we must
+	// ensure that we know the total number of partitions
+	if partitionStr == "*" && btI > 0 {
+		if btTotalPartitions == -1 {
+			log.Println("When selecting all partitions with bigtable you must specify the total number of partitions for the instance")
+			flag.Usage()
+			os.Exit(-1)
+		} else {
+			for i := 0; i < btTotalPartitions; i++ {
+				partitions = append(partitions, int32(i))
+			}
 		}
-		if regex != nil && !regex.MatchString(d.Name) {
-			continue
+	}
+
+	var total int
+	var shown int
+
+	processDefs := func(defs []schema.MetricDefinition) {
+		total += len(defs)
+		if shown >= limit && limit > 0 {
+			log.Infof("Limit (%d) reached while processing Metric Definitions", limit)

Also, this check here is redundant, since you already check after every shown definition, and we only call processDefs once.
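To make the point concrete, here is a small self-contained sketch of the pattern under discussion (hypothetical names, not the PR's code): the limit is already enforced after each shown entry inside the loop, so an extra check at the top of the function adds nothing when the function is called only once.

package main

import "fmt"

func main() {
	defs := []string{"metric.a", "metric.b", "metric.c", "metric.d"}
	limit := 2
	shown := 0

	processDefs := func(defs []string) {
		// no need to check the limit here: the check below runs after every
		// shown entry, and processDefs is only called once
		for _, d := range defs {
			fmt.Println(d)
			shown++
			if limit > 0 && shown >= limit {
				return
			}
		}
	}

	processDefs(defs)
	fmt.Println("shown:", shown)
}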

robert-milan

comment created time in 3 days

PullRequestReviewEvent
PullRequestReviewEvent

Pull request review comment grafana/metrictank

add capability to process bigtable indices with mt-index-cat and re-organize code

 func main() {
 		}
 	}
 
-	var defs []schema.MetricDefinition
-	if len(partitions) == 0 {
-		defs = idx.Load(nil, time.Now())
-	} else {
-		defs = idx.LoadPartitions(partitions, nil, time.Now())
-	}
-	// set this after doing the query, to assure age can't possibly be negative unless if clocks are misconfigured.
-	out.QueryTime = time.Now().Unix()
-	total := len(defs)
-	shown := 0
-
-	for _, d := range defs {
-		// note that prefix and substr can be "", meaning filter disabled.
-		// the conditions handle this fine as well.
-		if !strings.HasPrefix(d.Name, prefix) {
-			continue
-		}
-		if !strings.HasSuffix(d.Name, suffix) {
-			continue
-		}
-		if !strings.Contains(d.Name, substr) {
-			continue
-		}
-		if tags == "none" && len(d.Tags) != 0 {
-			continue
-		}
-		if tags == "some" && len(d.Tags) == 0 {
-			continue
+	// if partitionStr is set to all (*) and we are using bigtable then we must
+	// ensure that we know the total number of partitions
+	if partitionStr == "*" && btI > 0 {
+		if btTotalPartitions == -1 {
+			log.Println("When selecting all partitions with bigtable you must specify the total number of partitions for the instance")
+			flag.Usage()
+			os.Exit(-1)
+		} else {
+			for i := 0; i < btTotalPartitions; i++ {
+				partitions = append(partitions, int32(i))
+			}
 		}
-		if regex != nil && !regex.MatchString(d.Name) {
-			continue
+	}
+
+	var total int
+	var shown int
+
+	processDefs := func(defs []schema.MetricDefinition) {
+		total += len(defs)
+		if shown >= limit && limit > 0 {
+			log.Infof("Limit (%d) reached while processing Metric Definitions", limit)

Note that by printing to stdout here, and below, we break the output format. A common use case is to pipe the output of mt-index-cat into other tools, so I'm hesitant to add these new log messages. (Note that all other/pre-existing log calls are error messages or help output.)
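As a rough illustration of the concern (not the actual mt-index-cat code), a tool whose stdout is meant to be piped should keep status messages on stderr:

package main

import (
	"fmt"
	"os"
)

func main() {
	names := []string{"metric.a", "metric.b"}

	// machine-readable output: safe to pipe into other tools
	for _, name := range names {
		fmt.Fprintln(os.Stdout, name)
	}

	// human-oriented status: goes to stderr so it never pollutes the pipe
	fmt.Fprintf(os.Stderr, "processed %d definitions\n", len(names))
}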

robert-milan

comment created time in 3 days

PullRequestReviewEvent

Pull request review comment grafana/metrictank

add capability to process bigtable indices with mt-index-cat and re-organize code

 func main() {
 		fmt.Println("     'valid'   only show metrics whose tags (if any) are valid")
 		fmt.Println("     'invalid' only show metrics that have one or more invalid tags")
 		fmt.Println()
-		fmt.Printf("idxtype: only 'cass' supported for now\n\n")
+		fmt.Printf("\n\n")
		fmt.Printf("idxtype: 'cass' (cassandra) or 'bt' (bigtable)\n\n")
robert-milan

comment created time in 3 days

PullRequestReviewEvent

Pull request review comment grafana/metrictank

add capability to process bigtable indices with mt-index-cat and re-organize code

 func main() {
 	var verbose bool
 	var limit int
 	var partitionStr string
+	var btTotalPartitions int
 
 	globalFlags := flag.NewFlagSet("global config flags", flag.ExitOnError)
 	globalFlags.StringVar(&addr, "addr", "http://localhost:6060", "graphite/metrictank address")
 	globalFlags.StringVar(&prefix, "prefix", "", "only show metrics that have this prefix")
 	globalFlags.StringVar(&substr, "substr", "", "only show metrics that have this substring")
 	globalFlags.StringVar(&suffix, "suffix", "", "only show metrics that have this suffix")
 	globalFlags.StringVar(&partitionStr, "partitions", "*", "only show metrics from the comma separated list of partitions or * for all")
+	globalFlags.IntVar(&btTotalPartitions, "bt-total-partitions", -1, "when using bigtable you must set this to the total number of partitions for the instance if you do not specify partitions with the 'partitions' setting")
	globalFlags.IntVar(&btTotalPartitions, "bt-total-partitions", -1, "total number of partitions (when using bigtable and partitions='*')")
robert-milan

comment created time in 3 days

PullRequestReviewEvent

push event grafana/metrictank

Mauro Stettler

commit sha 5d1d837a20f10af265ffd891e50b6aaa312a6049

export find response helper methods

view details

Dieter Plaetinck

commit sha af6a3409199d7ce67bbcc0ee3e2fcb32ab0bdf29

Merge pull request #1908 from grafana/export_find_response_helpers export find response helper methods

view details

push time in 3 days

delete branch grafana/metrictank

delete branch : export_find_response_helpers

delete time in 3 days

PR merged grafana/metrictank

export find response helper methods

I would like to import and use these functions somewhere else; would you mind if we export them?

+7 -7

0 comment

1 changed file

replay

pr closed time in 3 days

PullRequestReviewEvent

push event grafana/metrictank

Dieter Plaetinck

commit sha 7594948a3a5c0fbb52181b920a31253b9e18612b

add mt-write-delay-schema-explain tool

view details

Dieter Plaetinck

commit sha 66fa4f67d810ed2fbb1af80342c754473125931c

fix gitignore

view details

Dieter Plaetinck

commit sha d489df42ccdcfa5a25df5fab0d4804e2e46dad61

cleanup

view details

Dieter Plaetinck

commit sha c73a2d7ef3ed7b1a799377a7dae6e1b3525834b8

Merge pull request #1895 from grafana/mt-write-delay-schema-explain add mt-write-delay-schema-explain tool

view details

push time in 3 days

delete branch grafana/metrictank

delete branch : mt-write-delay-schema-explain

delete time in 3 days

PR merged grafana/metrictank

add mt-write-delay-schema-explain tool

see docs for explanation :)

+127 -0

1 comment

3 changed files

Dieterbe

pr closed time in 3 days

PullRequestReviewEvent

pull request comment grafana/metrictank

update go version to 1.15.2

Oops, I'm sorry, I was too trigger happy. That was not expected; actually the builds have been broken since last week, but yeah, let's fix it!

robert-milan

comment created time in 5 days

push event grafana/metrictank

Robert Milan

commit sha 572888570efed6d84098c0dcdd7d34e9b00360e3

update go version to 1.15.2

view details

Dieter Plaetinck

commit sha e3203ebf1046a56a9de7519aa58239ae364d5c1f

Merge pull request #1906 from grafana/upgrade-mt-to-latest-go-version update go version to 1.15.2

view details

push time in 5 days

delete branch grafana/metrictank

delete branch : upgrade-mt-to-latest-go-version

delete time in 5 days

PullRequestReviewEvent

Pull request review comment grafana/metrictank

ensure query mode ignores input, store, and idx settings

 func main() {
 	}
 	cluster.Init(*instance, version, startupTime, scheme, int(port))
 
+	// while in query mode, ensure all input, store, and idx plugins are turned off
+	if cluster.Mode == cluster.ModeQuery {
+		inCarbon.Enabled = false
+		inKafkaMdm.Enabled = false
+		memory.Enabled = false
+		cassandra.CliConfig.Enabled = false
+		bigtable.CliConfig.Enabled = false
+		cassandraStore.CliConfig.Enabled = false
+		bigtableStore.CliConfig.Enabled = false
+	}
+

If you do this, you should clarify, at least in the config files, that these settings will be ignored in query mode.

robert-milan

comment created time in 5 days

PullRequestReviewEvent

pull request comment grafana/metrictank

update qa to use cimg/go:1.12

Let's just update everything (tests + build) to the latest Go version.

robert-milan

comment created time in 5 days

issue comment grafana/carbon-relay-ng

CPU Issue with v0.13.0 version

Is it possible to let it run without the aggregator? Reviewing the changelog, it seems most changes in the 0.13.0 release were related to aggregators. I suspect either that, or something to do with matchers (anything that has a prefix/substring/regex condition on it, such as a route), but more likely the aggregators.

krishnaindani

comment created time in 6 days

issue comment grafana/metrictank

Short pauses cause kafka priority to be misreported

ping @shanson7 your thoughts on the above?

shanson7

comment created time in 7 days

issue opened grafana/metrictank

assure that no index method holds the lock too long

e.g. /metrics/index.json

created time in 7 days

issue comment rickyrockrat/parcellite

Attempt to unlock mutex that was not locked

Hi, I also have this problem with version 1.2.1. Out of the blue, parcellite just refuses to start. When I start it, I get errors like this:

<stuff i copied from primary selection buffer>No magic! Assume no history.
Attempt to unlock mutex that was not locked
[1]    316828 abort (core dumped)  parcellite

and this:

<stuff i copied from primary selection buffer>No magic! Assume no history.
Attempt to unlock mutex that was not locked
[1]    316828 abort (core dumped)  parcellite
No magic! Assume no history.
Attempt to unlock mutex that was not locked
[1]    317334 abort (core dumped)  parcellite
Massimo-B

comment created time in 7 days

delete branch grafana/cloud-graphite-scripts

delete branch : oddlittlebird-patch-1

delete time in 11 days

push event grafana/cloud-graphite-scripts

Diana Payton

commit sha aca8053838493e7879eae75ab6013572f8d2aaf5

Update walk_metrics.py

view details

Dieter Plaetinck

commit sha c8733627a383336a6e4cd9a157c225c2a12f9877

Merge pull request #10 from grafana/oddlittlebird-patch-1 Update walk_metrics.py

view details

push time in 11 days

PullRequestReviewEvent

issue comment greshake/i3status-rust

displaying wireless signal strength breaks the whole statusbar if wifi disconnects

Strange, I can't reproduce the issue anymore with format = "{ssid} {signal_strength} {speed_up} {speed_down}" either. For both format strings I disconnect from wifi using NetworkManager and it all seems to work fine. When the issue originally appeared it wasn't after simply manually disconnecting; I was having wifi issues where I lost connectivity due to a still undiagnosed problem.

Dieterbe

comment created time in 12 days

issue comment greshake/i3status-rust

displaying wireless signal strength breaks the whole statusbar if wifi disconnects

Hi Jason, with that format string it seems to work fine.

Dieterbe

comment created time in 12 days

issue comment greshake/i3status-rust

displaying wireless signal strength breaks the whole statusbar if wifi disconnects

Interesting. I have

[[block]]
block = "net"
device = "wlp3s0"
format = "{ssid} {signal_strength} {speed_up} {speed_down}"
interval = 5
use_bits = false

I note you have a signal_strength setting, which per https://github.com/greshake/i3status-rust/blob/master/blocks.md#deprecated-options is deprecated.

Which version do you use? I have it with latest master.

Dieterbe

comment created time in 12 days

issue opened greshake/i3status-rust

displaying wireless signal strength breaks the whole statusbar if wifi disconnects

Error in block 'net': Signal strength is only available for connected wireless devices.


We probably just need a small error-handling tweak.

created time in 12 days

pull request comment grafana/metrictank

Make it called "Grafana Metrictank", not just Metrictank.

This PR is basically how I wanted #1743 to be all along :)

tomwilkie

comment created time in 13 days

PullRequestReviewEvent

issue opened grafana/metrictank

document difference in consolidation behavior between MT and graphite

Update graphite.md to clarify how we pick the consolidator, which takes storage-aggregation into account.

While we're at it, I think we can mention the response metadata as well.

created time in 13 days

started ofabry/go-callvis

started time in 17 days

issue comment grafana/grafana

graphite annotations hardcoded maxDataPoints 100

Seems like a very niche problem. Let's close.

Dieterbe

comment created time in 19 days

pull request comment gosuri/uilive

Writer.Flush: properly compute line count for buffers longer than 2 lines

Hello @gosuri, thanks for this nice library. What do you think of this PR? It fixes a bug for me and seems to work well. Thanks.

fkaleo

comment created time in 21 days

started samhogan/Minecraft-Unity3D

started time in 21 days

pull request comment greshake/i3status-rust

add on_click for music block + add `id` field for rotating text

Hi @ammgws, I have addressed the feedback. Thanks!

Dieterbe

comment created time in 22 days

push event Dieterbe/i3status-rust

Dieter Plaetinck

commit sha 3fd80ed4bb7cf6b0c003d5e1026996490265146d

tweaks based on jason feedback

view details

push time in 22 days

push event grafana/metrictank

Dieter Plaetinck

commit sha d489df42ccdcfa5a25df5fab0d4804e2e46dad61

cleanup

view details

push time in 22 days

issue opened golang/go

x/tools/cmd/guru implements can take extreme amount of resources (e.g. when run from vscode) depending on working directory

Hello, first of all, thanks for the work that's been put into guru. It's certainly been quite helpful to me over the years. I know guru is no longer actively maintained (and I don't expect a resolution), but considering gopls is still in alpha, I thought it might be helpful to others (or to my future self) to document this issue.

What version of Go are you using (go version)?

go version go1.15.1 linux/amd64

Does this issue reproduce with the latest release?

yes

What operating system and processor architecture are you using (go env)?

go env Output:

go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/dieter/.cache/go-build"
GOENV="/home/dieter/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GOMODCACHE="/home/dieter/go/pkg/mod"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/dieter/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/lib/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/dieter/go-build095063144=/tmp/go-build -gno-record-gcc-switches"

I found out that, depending on the working directory from which guru implements is run, it can take an extreme amount of CPU and a very long time.

In particular, consider these 3 types of runs:

1) cd /home/dieter && \
    guru -json implements '/home/dieter/go/src/github.com/prometheus/prometheus/storage/interface.go:#1503'
2) cd /home/dieter/go/src/github.com/prometheus/prometheus && \
    guru -json implements /home/dieter/go/src/github.com/prometheus/prometheus/storage/interface.go:#1503
3) cd /home/dieter/go/src/github.com/prometheus/prometheus && \
    guru -json -scope github.com/prometheus/prometheus/... implements '/home/dieter/go/src/github.com/prometheus/prometheus/storage/interface.go:#1503'

Only 1) returns in a reasonable amount of time. 2) is how Visual Studio Code invokes it (when I'm inside the prometheus project), and 3) is my attempt to fix the problem with 2) by constraining the scope.

More info about each run:

Note: I moved guru to guru-real and replaced guru with this script:

$ cat /home/dieter/go/bin/guru
#!/bin/bash
file="/home/dieter/guru-$(date '+%F_%T')"
echo "guru $@" > $file
env >> $file
guru-real "$@"

This allows me to track the exact environment of the process each time it is run (particularly when the parent is vscode and not my terminal). It turns out the issue was perfectly reproducible from a terminal, so at this point the wrapper is no longer useful, but it does explain why there are mentions of "guru-real" in the pstree output below.

run 1

time /home/dieter/go/bin/guru -json implements '/home/dieter/go/src/github.com/prometheus/prometheus/storage/interface.go:#1503'
{
	"type": {
		"name": "github.com/prometheus/prometheus/storage.Storage",
		"pos": "/home/dieter/go/src/github.com/prometheus/prometheus/storage/interface.go:43:6",
		"kind": "interface"
	},
	"to": [
		{
			"name": "*github.com/prometheus/prometheus/cmd/prometheus.readyStorage",
			"pos": "/home/dieter/go/src/github.com/prometheus/prometheus/cmd/prometheus/main.go:898:6",
			"kind": "pointer"
		},
		{
			"name": "*github.com/prometheus/prometheus/storage.fanout",
			"pos": "/home/dieter/go/src/github.com/prometheus/prometheus/storage/fanout.go:33:6",
			"kind": "pointer"
		},
		{
			"name": "*github.com/prometheus/prometheus/storage/remote.Storage",
			"pos": "/home/dieter/go/src/github.com/prometheus/prometheus/storage/remote/storage.go:48:6",
			"kind": "pointer"
		},
		{
			"name": "*github.com/prometheus/prometheus/tsdb.DB",
			"pos": "/home/dieter/go/src/github.com/prometheus/prometheus/tsdb/db.go:120:6",
			"kind": "pointer"
		},
		{
			"name": "github.com/prometheus/prometheus/storage/fanout.errStorage",
			"pos": "/home/dieter/go/src/github.com/prometheus/prometheus/storage/fanout/fanout_test.go:152:6",
			"kind": "struct"
		},
		{
			"name": "github.com/prometheus/prometheus/util/teststorage.testStorage",
			"pos": "/home/dieter/go/src/github.com/prometheus/prometheus/util/teststorage/storage.go:46:6",
			"kind": "struct"
		}
	],
	"from": [
		{
			"name": "github.com/cortexproject/cortex/vendor/github.com/prometheus/common/expfmt.Closer",
			"pos": "/home/dieter/go/src/github.com/cortexproject/cortex/vendor/github.com/prometheus/common/expfmt/encode.go:40:6",
			"kind": "interface"
		},
		{
			"name": "github.com/prometheus/prometheus/storage.Appendable",
			"pos": "/home/dieter/go/src/github.com/prometheus/prometheus/storage/interface.go:35:6",
			"kind": "interface"
		},
		{
			"name": "github.com/prometheus/prometheus/storage.Queryable",
			"pos": "/home/dieter/go/src/github.com/prometheus/prometheus/storage/interface.go:56:6",
			"kind": "interface"
		},
		{
			"name": "github.com/prometheus/prometheus/vendor/github.com/prometheus/common/expfmt.Closer",
			"pos": "/home/dieter/go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/common/expfmt/encode.go:40:6",
			"kind": "interface"
		},
		{
			"name": "io.Closer",
			"pos": "/usr/lib/go/src/io/io.go:98:6",
			"kind": "interface"
		}
	]
}
/home/dieter/go/bin/guru -json implements   95.38s user 26.11s system 733% cpu 16.554 total
pstree -a -t -l | less
  |   |   `-guru /home/dieter/go/bin/guru -json implements /home/dieter/go/src/github.com/prometheus/prometheus/storage/interface.go:#1503
  |   |       `-guru-real -json implements /home/dieter/go/src/github.com/prometheus/prometheus/storage/interface.go:#1503
  |   |           `-13*[{guru-real}]

run 2

I didn't let it complete. I killed it after it ran for hours.

pstree -a -t -l | less
  |   |   `-guru /home/dieter/go/bin/guru -json implements /home/dieter/go/src/github.com/prometheus/prometheus/storage/interface.go:#1503
  |   |       `-guru-real -json implements /home/dieter/go/src/github.com/prometheus/prometheus/storage/interface.go:#1503
  |   |           |-go list -e -compiler=gc -tags= -installsuffix= -f={{.Dir}}\012{{.ImportPath}}\012{{.Root}}\012{{.Goroot}}\012{{if .Error}}{{.Error}}{{end}}\012 -- code.google.com/p/rog-go/cmd/share
  |   |           |   `-11*[{go}]
  |   |           |-go list -e -compiler=gc -tags= -installsuffix= -f={{.Dir}}\012{{.ImportPath}}\012{{.Root}}\012{{.Goroot}}\012{{if .Error}}{{.Error}}{{end}}\012 -- code.google.com/p/rog-go/cmd/hello
  |   |           |   `-10*[{go}]
  |   |           |-go list -e -compiler=gc -tags= -installsuffix= -f={{.Dir}}\012{{.ImportPath}}\012{{.Root}}\012{{.Goroot}}\012{{if .Error}}{{.Error}}{{end}}\012 -- code.google.com/p/rog-go/cmd/shape
  |   |           |   `-10*[{go}]
  |   |           |-go list -e -compiler=gc -tags= -installsuffix= -f={{.Dir}}\012{{.ImportPath}}\012{{.Root}}\012{{.Goroot}}\012{{if .Error}}{{.Error}}{{end}}\012 -- code.google.com/p/rog-go/cmd/peter-rabbit
  |   |           |   `-10*[{go}]
  |   |           |-go list -e -compiler=gc -tags= -installsuffix= -f={{.Dir}}\012{{.ImportPath}}\012{{.Root}}\012{{.Goroot}}\012{{if .Error}}{{.Error}}{{end}}\012 -- code.google.com/p/rog-go/cmd/showdeps
  |   |           |   `-10*[{go}]
  |   |           |-go list -e -compiler=gc -tags= -installsuffix= -f={{.Dir}}\012{{.ImportPath}}\012{{.Root}}\012{{.Goroot}}\012{{if .Error}}{{.Error}}{{end}}\012 -- code.google.com/p/rog-go/cmd/share2
  |   |           |   `-10*[{go}]
  |   |           |-go list -e -compiler=gc -tags= -installsuffix= -f={{.Dir}}\012{{.ImportPath}}\012{{.Root}}\012{{.Goroot}}\012{{if .Error}}{{.Error}}{{end}}\012 -- code.google.com/p/go.tools/godoc/redirect
  |   |           |   `-11*[{go}]
  |   |           |-go list -e -compiler=gc -tags= -installsuffix= -f={{.Dir}}\012{{.ImportPath}}\012{{.Root}}\012{{.Goroot}}\012{{if .Error}}{{.Error}}{{end}}\012 -- code.google.com/p/go.tools/godoc/static
  |   |           |   `-10*[{go}]
  |   |           |-go list -e -compiler=gc -tags= -installsuffix= -f={{.Dir}}\012{{.ImportPath}}\012{{.Root}}\012{{.Goroot}}\012{{if .Error}}{{.Error}}{{end}}\012 -- code.google.com/p/gogoprotobuf/plugin/unsafeunmarshaler
  |   |           |   `-10*[{go}]
  |   |           |-go list -e -compiler=gc -tags= -installsuffix= -f={{.Dir}}\012{{.ImportPath}}\012{{.Root}}\012{{.Goroot}}\012{{if .Error}}{{.Error}}{{end}}\012 -- code.google.com/p/gogoprotobuf/plugin/marshalto
  |   |           |   `-10*[{go}]
  |   |           |-go list -e -compiler=gc -tags= -installsuffix= -f={{.Dir}}\012{{.ImportPath}}\012{{.Root}}\012{{.Goroot}}\012{{if .Error}}{{.Error}}{{end}}\012 -- code.google.com/p/gogoprotobuf/plugin/populate
  |   |           |   `-10*[{go}]
  |   |           |-go list -e -compiler=gc -tags= -installsuffix= -f={{.Dir}}\012{{.ImportPath}}\012{{.Root}}\012{{.Goroot}}\012{{if .Error}}{{.Error}}{{end}}\012 -- code.google.com/p/gogoprotobuf/plugin/stringer
  |   |           |   `-10*[{go}]
  |   |           |-go list -e -compiler=gc -tags= -installsuffix= -f={{.Dir}}\012{{.ImportPath}}\012{{.Root}}\012{{.Goroot}}\012{{if .Error}}{{.Error}}{{end}}\012 -- code.google.com/p/gogoprotobuf/plugin/unmarshal
  |   |           |   `-10*[{go}]
  |   |           |-go list -e -compiler=gc -tags= -installsuffix= -f={{.Dir}}\012{{.ImportPath}}\012{{.Root}}\012{{.Goroot}}\012{{if .Error}}{{.Error}}{{end}}\012 -- k8s.io/kubernetes/cluster/addons/kube-proxy
  |   |           |   `-10*[{go}]
  |   |           |-go list -e -compiler=gc -tags= -installsuffix= -f={{.Dir}}\012{{.ImportPath}}\012{{.Root}}\012{{.Goroot}}\012{{if .Error}}{{.Error}}{{end}}\012 -- k8s.io/kubernetes/cluster/gce/manifests
  |   |           |   `-10*[{go}]
  |   |           |-go list -e -compiler=gc -tags= -installsuffix= -f={{.Dir}}\012{{.ImportPath}}\012{{.Root}}\012{{.Goroot}}\012{{if .Error}}{{.Error}}{{end}}\012 -- k8s.io/kubernetes/cluster/juju/prereqs
  |   |           |   `-10*[{go}]
  |   |           |-go list -e -compiler=gc -tags= -installsuffix= -f={{.Dir}}\012{{.ImportPath}}\012{{.Root}}\012{{.Goroot}}\012{{if .Error}}{{.Error}}{{end}}\012 -- k8s.io/kubernetes/cluster/kubemark/pre-existing
  |   |           |   `-10*[{go}]
  |   |           |-go list -e -compiler=gc -tags= -installsuffix= -f={{.Dir}}\012{{.ImportPath}}\012{{.Root}}\012{{.Goroot}}\012{{if .Error}}{{.Error}}{{end}}\012 -- k8s.io/kubernetes/docs/yaml/kubectl
  |   |           |   `-10*[{go}]
  |   |           |-go list -e -compiler=gc -tags= -installsuffix= -f={{.Dir}}\012{{.ImportPath}}\012{{.Root}}\012{{.Goroot}}\012{{if .Error}}{{.Error}}{{end}}\012 -- k8s.io/kubernetes/cmd/kube-apiserver/app
  |   |           |   `-11*[{go}]
  |   |           |-go list -e -compiler=gc -tags= -installsuffix= -f={{.Dir}}\012{{.ImportPath}}\012{{.Root}}\012{{.Goroot}}\012{{if .Error}}{{.Error}}{{end}}\012 -- k8s.io/kubernetes/cmd/kube-controller-manager/app
  |   |           |   `-11*[{go}]
  |   |           `-35*[{guru-real}]

run 3

time /home/dieter/go/bin/guru -json -scope github.com/prometheus/prometheus/... implements '/home/dieter/go/src/github.com/prometheus/prometheus/storage/interface.go:#1503'
{
	"type": {
		"name": "github.com/prometheus/prometheus/storage.Storage",
		"pos": "/home/dieter/go/src/github.com/prometheus/prometheus/storage/interface.go:43:6",
		"kind": "interface"
	},
	"to": [
		{
			"name": "*github.com/prometheus/prometheus/cmd/prometheus.readyStorage",
			"pos": "/home/dieter/go/src/github.com/prometheus/prometheus/cmd/prometheus/main.go:898:6",
			"kind": "pointer"
		},
		{
			"name": "*github.com/prometheus/prometheus/storage.fanout",
			"pos": "/home/dieter/go/src/github.com/prometheus/prometheus/storage/fanout.go:33:6",
			"kind": "pointer"
		},
		{
			"name": "*github.com/prometheus/prometheus/storage/remote.Storage",
			"pos": "/home/dieter/go/src/github.com/prometheus/prometheus/storage/remote/storage.go:48:6",
			"kind": "pointer"
		},
		{
			"name": "*github.com/prometheus/prometheus/tsdb.DB",
			"pos": "/home/dieter/go/src/github.com/prometheus/prometheus/tsdb/db.go:120:6",
			"kind": "pointer"
		},
		{
			"name": "github.com/prometheus/prometheus/storage/fanout.errStorage",
			"pos": "/home/dieter/go/src/github.com/prometheus/prometheus/storage/fanout/fanout_test.go:152:6",
			"kind": "struct"
		},
		{
			"name": "github.com/prometheus/prometheus/util/teststorage.testStorage",
			"pos": "/home/dieter/go/src/github.com/prometheus/prometheus/util/teststorage/storage.go:46:6",
			"kind": "struct"
		}
	],
	"from": [
		{
			"name": "github.com/prometheus/common/expfmt.Closer",
			"pos": "/home/dieter/go/pkg/mod/github.com/prometheus/common@v0.10.0/expfmt/encode.go:40:6",
			"kind": "interface"
		},
		{
			"name": "github.com/prometheus/prometheus/storage.Appendable",
			"pos": "/home/dieter/go/src/github.com/prometheus/prometheus/storage/interface.go:35:6",
			"kind": "interface"
		},
		{
			"name": "github.com/prometheus/prometheus/storage.Queryable",
			"pos": "/home/dieter/go/src/github.com/prometheus/prometheus/storage/interface.go:56:6",
			"kind": "interface"
		},
		{
			"name": "github.com/prometheus/prometheus/vendor/github.com/prometheus/common/expfmt.Closer",
			"pos": "/home/dieter/go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/common/expfmt/encode.go:40:6",
			"kind": "interface"
		},
		{
			"name": "io.Closer",
			"pos": "/usr/lib/go/src/io/io.go:98:6",
			"kind": "interface"
		}
	]
}
/home/dieter/go/bin/guru -json -scope github.com/prometheus/prometheus/...    5478.10s user 886.50s system 246% cpu 42:57.99 total

created time in 22 days

started felixge/fgprof

started time in 23 days

push event grafana/metrictank

Dieter Plaetinck

commit sha cb451d94e11ac71d053732ef9671a8e77dd4f484

cleanup

view details

push time in 23 days

push event grafana/metrictank

Dieter Plaetinck

commit sha 66fa4f67d810ed2fbb1af80342c754473125931c

fix gitignore

view details

push time in 24 days

push event grafana/metrictank

Dieter Plaetinck

commit sha ad4108bb64dda64fcabe4fa89abca587a19a725c

clarifications for mt-kafka-mdm-sniff-out-of-order-clarifications

view details

Dieter Plaetinck

commit sha d5e75eb2000d137bcb268e2f90a1bb7dc81b3d96

Merge pull request #1899 from grafana/mt-kafka-mdm-sniff-out-of-order-clarifications Mt kafka mdm sniff out of order clarifications

view details

push time in 24 days

push event grafana/metrictank

replay

commit sha ee47bbaf37ed8bd7725366f4d1b9fe43404e14e9

implement /metrics/expand endpoint

view details

replay

commit sha e9684182900017892a1e6ab08a93511facfd8804

use errgroup

view details

replay

commit sha 4434106ad82062537be9efc935150b5918514a7f

document /metrics/expand

view details

Dieter Plaetinck

commit sha c53ded50b7792ad83d8792032fd6a5d469527362

clarify that pcre is not supported

view details

Sean Hanson

commit sha 34c6dea299a09c79c4578ce652f7862c39387cca

Add timeShift func and basic tests

view details

Sean Hanson

commit sha f8a0a12cd6443684c21de602e3709aef46d1efcf

timeShift stable

view details

Sean Hanson

commit sha 6d96417efea0020ed22bdb3ffa6caf309414873a

Allocate array up front if needed

view details

Sean Hanson

commit sha 6d1d39d7d503f814c522c5730c75546c4bb97b7d

Fix test cases

view details

Sean Hanson

commit sha 0933414a782a90141a366c35609b4626c596ce9e

More concise code in validator

view details

Sean Hanson

commit sha 97b785009594fa2a1fbe2dc534abe5a17a889d91

use equalOutput in test

view details

Sean Hanson

commit sha b2c19e9ff76ef9b9fc6fe01b62e2a230909377ec

Add tests - zero series input, does not modify input

view details

Sean Hanson

commit sha d3a6dc2fb0b81c4891e49df3864e1363302b05e7

Fix consolidation bug

view details

Sean Hanson

commit sha 76fe3744aef2ed094863bd872fe28da1a783771b

Fix function name

view details

Sean Hanson

commit sha d2dc4edfa434f0c6caba7da6782e16b911ed81b2

Add timeshift to docs

view details

Dieter Plaetinck

commit sha 3f5666bcbf07659193cbcc4b364a2ed89469bb79

exclude customer-impacting tickets from stalebot

view details

Dieter Plaetinck

commit sha 7dbb43518e73391a6bcfad014073d90adeb178e0

add logo. fix #1488

view details

Robert Milan

commit sha c3d3e953ddfdc855a23f35c34d36f34f6d4358a8

Merge pull request #1877 from grafana/add-logo add logo. fix #1488

view details

Dieter Plaetinck

commit sha 415743b40297440993dddc787425e6aa7bb90c60

make logo smaller

view details

Dieter Plaetinck

commit sha 5a389ef1a2a0bdc48c569e8492922a6b889f388c

attempt 2

view details

Dieter Plaetinck

commit sha b11bdde7772426e6e9f2e274c4c044558b1ed234

tweak

view details

push time in 24 days

issue comment grafana/grafana

Show only active series under the cursor in tooltip

Once again, feature asked from years ... still no implementation

Out of all the items of work needed to achieve an implementation, the asking for it part is fairly minor.

If you want this to happen, you can make a concrete proposal or implementation, or pay someone (e.g. Grafana Labs) to do it for you.

nvartolomei

comment created time in 24 days

Pull request review comment greshake/i3status-rust

add on_click for music block + add `id` field for rotating text

 Key | Values | Required | Default
 `separator` | String to insert between artist and title | No | `" - "`
 `buttons` | Array of control buttons to be displayed. Options are prev (previous title), play (play/pause) and next (next title) | No | `[]`
 `on_collapsed_click` | Shell command to run when the music block is clicked while collapsed. | No | None
+`on_click` | Command to execute when the button is clicked while not collapsed. | No | None

ok, will change

Dieterbe

comment created time in 25 days

PullRequestReviewEvent
PullRequestReviewEvent

Pull request review comment greshake/i3status-rust

add on_click for music block + add `id` field for rotating text

 Key | Values | Required | Default
 `separator` | String to insert between artist and title | No | `" - "`
 `buttons` | Array of control buttons to be displayed. Options are prev (previous title), play (play/pause) and next (next title) | No | `[]`
 `on_collapsed_click` | Shell command to run when the music block is clicked while collapsed. | No | None
+`on_click` | Command to execute when the button is clicked while not collapsed. | No | None

Not sure. I see that block and button are used interchangeably in this file; e.g. for the cpu usage block, on_click also refers to a button. Should it be block everywhere consistently? I could do that in a follow-up PR.

Dieterbe

comment created time in 25 days

PullRequestReviewEvent

push event Dieterbe/i3status-rust

Dieter Plaetinck

commit sha 1354f9a536f92c2f9ef572ad8d5eea2bd64f88db

apply PR feedback from Stuart / themadprofessor

view details

push time in a month

Pull request review comment greshake/i3status-rust

net block: fix IW SSID regex parsing

 fn get_iw_ssid(dev: &NetworkDevice) -> Result<Option<String>> {
     }
 
     let raw = raw.unwrap();
-    let result = raw
-        .stdout
-        .split(|c| *c == b'\n')
-        .filter_map(|x| IW_SSID_REGEX.find(x))
-        .next();
+    let bytes = raw.stdout.split(|c| *c == b'\n').next().unwrap();
+    let cap = IW_SSID_REGEX.captures(bytes);
+    if cap.is_none() {
+        return Ok(None);
+    }
+    let match_obj = cap.unwrap().get(1);
+    if match_obj.is_none() {

I couldn't find much in the style guide about this specific tradeoff of using filter_map + next for processing Options; but since @themadprofessor's suggestion definitely seems more in line with the rest of the code in general, I've applied the tweak in a new commit. Does this look good to merge now?

Dieterbe

comment created time in a month

PullRequestReviewEvent

push event Dieterbe/i3status-rust

Dieter Plaetinck

commit sha ce43eedbc382339f7c6f7909fa0509b7b3ce0f3e

apply PR feedback from Stuart / themadprofessor

view details

push time in a month

Pull request review comment greshake/i3status-rust

net block: fix IW SSID regex parsing

 fn get_iw_ssid(dev: &NetworkDevice) -> Result<Option<String>> {
     }
 
     let raw = raw.unwrap();
-    let result = raw
-        .stdout
-        .split(|c| *c == b'\n')
-        .filter_map(|x| IW_SSID_REGEX.find(x))
-        .next();
+    let bytes = raw.stdout.split(|c| *c == b'\n').next().unwrap();
+    let cap = IW_SSID_REGEX.captures(bytes);
+    if cap.is_none() {
+        return Ok(None);
+    }
+    let match_obj = cap.unwrap().get(1);
+    if match_obj.is_none() {

Are you fine with the current code in this PR, or with @themadprofessor's version? Aside from "fine", is there a consensus on which is "better"? Also @ammgws, it seems you're a de facto maintainer; perhaps you should be added to https://github.com/greshake/i3status-rust/blob/master/CONTRIBUTING.md#maintainership?

Dieterbe

comment created time in a month

PullRequestReviewEvent

pull request comment grafana/metrictank

only generate metric definition filter if it will be used

I think the people best suited to review this are @robert-milan and @shanson7 ?

replay

comment created time in a month

Pull request review comment greshake/i3status-rust

net block: fix IW SSID regex parsing

 fn get_iw_ssid(dev: &NetworkDevice) -> Result<Option<String>> {
     }
 
     let raw = raw.unwrap();
-    let result = raw
-        .stdout
-        .split(|c| *c == b'\n')
-        .filter_map(|x| IW_SSID_REGEX.find(x))
-        .next();
+    let bytes = raw.stdout.split(|c| *c == b'\n').next().unwrap();
+    let cap = IW_SSID_REGEX.captures(bytes);
+    if cap.is_none() {
+        return Ok(None);
+    }
+    let match_obj = cap.unwrap().get(1);
+    if match_obj.is_none() {

@themadprofessor very interesting, thanks for sharing. So, basically: use filter_map as a way to "absorb" intermediate None values and turn them into a "final" None value through next().

Interestingly, in the Go world we tend to favor simple code even when it's a bit more verbose, but if the prevalent style in Rust is to favor a bit more magic to create more concise code, that's fine with me too. I'm happy to add a commit that changes it to your style. Is that what @ammgws would prefer?

Dieterbe

comment created time in a month

PullRequestReviewEvent
PullRequestReviewEvent

Pull request review comment greshake/i3status-rust

net block: fix IW SSID regex parsing

 fn get_iw_ssid(dev: &NetworkDevice) -> Result<Option<String>> {
     }
 
     let raw = raw.unwrap();
-    let result = raw
-        .stdout
-        .split(|c| *c == b'\n')
-        .filter_map(|x| IW_SSID_REGEX.find(x))
-        .next();
+    let bytes = raw.stdout.split(|c| *c == b'\n').next().unwrap();
+    let cap = IW_SSID_REGEX.captures(bytes);
+    if cap.is_none() {
+        return Ok(None);
+    }
+    let match_obj = cap.unwrap().get(1);
+    if match_obj.is_none() {

That results in an error on the line below:

error[E0599]: no method named `as_bytes` found for struct `regex::re_bytes::Captures<'_>` in the current scope
   --> src/blocks/net.rs:999:41
    |
999 |     maybe_ssid_convert(result.map(|x| x.as_bytes()))
    |                                         ^^^^^^^^ method not found in `regex::re_bytes::Captures<'_>`
Dieterbe

comment created time in a month

delete branch grafana/metrictank

delete branch : enable-idx-write-queue

delete time in a month

push event grafana/metrictank

Dieter Plaetinck

commit sha 18844a5f65e9f83525ac3c38f574e11cb10b9dd8

enable index write queue by default (except for unit tests) it's proven to be beneficial

view details

Dieter Plaetinck

commit sha c532c3ff137547aef100e94f75b36f7e4b5cc76e

make default write-queue delay much more reasonable

view details

Dieter Plaetinck

commit sha 11504f612651d8b9bc04cd32f9290119cadc8cb3

Merge pull request #1891 from grafana/enable-idx-write-queue enable index write queue by default

view details

push time in a month

PR merged grafana/metrictank

enable index write queue by default

it's proven to be beneficial

+20 -20

0 comment

10 changed files

Dieterbe

pr closed time in a month

pull request comment grafana/metrictank

WIP / add indexdump rules analyzer

Might make sense to mention somewhere what format of the dump it expects

As the doc and the help text say:

reads metric names from stdin

Dieterbe

comment created time in a month

Pull request review comment grafana/metrictank

Implements function movingWindow and child functions

+package expr++import (+	"fmt"+	"math"+	"strings"++	"github.com/grafana/metrictank/batch"+	"github.com/grafana/metrictank/errors"+	"github.com/grafana/metrictank/schema"+	log "github.com/sirupsen/logrus"++	"github.com/grafana/metrictank/api/models"+	"github.com/grafana/metrictank/consolidation"+	"github.com/raintank/dur"+)++type FuncMovingWindow struct {+	in           GraphiteFunc+	windowSize   string+	fn           string+	xFilesFactor float64++	shiftOffset uint32+}++// NewMovingWindowConstructor takes an agg string and returns a constructor function+func NewMovingWindowConstructor(name string) func() GraphiteFunc {+	return func() GraphiteFunc {+		return &FuncMovingWindow{fn: name}+	}+}++func NewMovingWindow() GraphiteFunc {+	return &FuncMovingWindow{fn: "average", xFilesFactor: 0}+}++func (s *FuncMovingWindow) Signature() ([]Arg, []Arg) {+	return []Arg{+			ArgSeriesList{val: &s.in},+			ArgString{key: "windowSize", val: &s.windowSize, validator: []Validator{IsSignedIntervalString}},+			ArgString{key: "func", opt: true, val: &s.fn, validator: []Validator{IsConsolFunc}},+			ArgFloat{key: "xFilesFactor", opt: true, val: &s.xFilesFactor, validator: []Validator{WithinZeroOneInclusiveInterval}},+		}, []Arg{+			ArgSeriesList{},+		}+}++func (s *FuncMovingWindow) Context(context Context) Context {+	var err error+	s.shiftOffset, err = s.getWindowSeconds()+	if err != nil {+		// shouldn't happen, validated above+		log.Warnf("movingWindow: encountered error in getWindowSeconds, %s", err)+		return context+	}++	// Adjust to fetch from the shifted context.from with+	// context.to unchanged+	context.from -= s.shiftOffset+	return context+}++func (s *FuncMovingWindow) Exec(dataMap DataMap) ([]models.Series, error) {+	series, err := s.in.Exec(dataMap)+	if err != nil {+		return nil, err+	}++	aggFunc := consolidation.GetAggFunc(consolidation.FromConsolidateBy(s.fn))+	aggFuncName := fmt.Sprintf("moving%s", strings.Title(s.fn))+	formatStr := fmt.Sprintf("%s(%%s,\"%s\")", aggFuncName, s.windowSize)++	newName := func(oldName string) string {+		return fmt.Sprintf(formatStr, oldName)+	}++	outputs := make([]models.Series, 0, len(series))+	for _, serie := range series {++		from, points, err := aggregateOnWindow(serie, aggFunc, s.shiftOffset, s.xFilesFactor)+		if err != nil {+			return nil, err+		}++		serie.Target = newName(serie.Target)+		serie.QueryPatt = newName(serie.QueryPatt)+		serie.Tags = serie.CopyTagsWith(aggFuncName, s.windowSize)+		serie.QueryFrom = from+		serie.Datapoints = points++		outputs = append(outputs, serie)+	}++	dataMap.Add(Req{}, outputs...)+	return outputs, nil+}++func aggregateOnWindow(serie models.Series, aggFunc batch.AggFunc, interval uint32, xFilesFactor float64) (uint32, []schema.Point, error) {+	out := pointSlicePool.Get().([]schema.Point)++	numPoints := len(serie.Datapoints)+	serieStart, serieEnd := serie.QueryFrom, serie.QueryTo+	queryStart := uint32(serieStart + interval)++	if queryStart < serieStart || queryStart > serieEnd {+		return 0, nil, errors.NewInternalf("query start %d doesn't lie within serie's intervals [%d,%d] ", queryStart, serieStart, serieEnd)

It could be as simple as "queryStart needs to be between serieStart and serieEnd for the loop below to work (this should always be the case)". At least that's my interpretation; maybe you can explain it better.

vaguiar

comment created time in a month

PullRequestReviewEvent

Pull request review comment grafana/metrictank

Implements function movingWindow and child functions

+package expr++import (+	"fmt"+	"math"+	"strings"++	"github.com/grafana/metrictank/batch"+	"github.com/grafana/metrictank/errors"+	"github.com/grafana/metrictank/schema"+	log "github.com/sirupsen/logrus"++	"github.com/grafana/metrictank/api/models"+	"github.com/grafana/metrictank/consolidation"+	"github.com/raintank/dur"+)++type FuncMovingWindow struct {+	in           GraphiteFunc+	windowSize   string+	fn           string+	xFilesFactor float64++	shiftOffset uint32+}++// NewMovingWindowConstructor takes an agg string and returns a constructor function+func NewMovingWindowConstructor(name string) func() GraphiteFunc {+	return func() GraphiteFunc {+		return &FuncMovingWindow{fn: name}+	}+}++func NewMovingWindow() GraphiteFunc {+	return &FuncMovingWindow{fn: "average", xFilesFactor: 0}+}++func (s *FuncMovingWindow) Signature() ([]Arg, []Arg) {+	return []Arg{+			ArgSeriesList{val: &s.in},+			ArgString{key: "windowSize", val: &s.windowSize, validator: []Validator{IsSignedIntervalString}},+			ArgString{key: "func", opt: true, val: &s.fn, validator: []Validator{IsConsolFunc}},+			ArgFloat{key: "xFilesFactor", opt: true, val: &s.xFilesFactor, validator: []Validator{WithinZeroOneInclusiveInterval}},+		}, []Arg{+			ArgSeriesList{},+		}+}++func (s *FuncMovingWindow) Context(context Context) Context {+	var err error+	s.shiftOffset, err = s.getWindowSeconds()+	if err != nil {+		// shouldn't happen, validated above+		log.Warnf("movingWindow: encountered error in getWindowSeconds, %s", err)+		return context+	}++	// Adjust to fetch from the shifted context.from with+	// context.to unchanged+	context.from -= s.shiftOffset+	return context+}++func (s *FuncMovingWindow) Exec(dataMap DataMap) ([]models.Series, error) {+	series, err := s.in.Exec(dataMap)+	if err != nil {+		return nil, err+	}++	aggFunc := consolidation.GetAggFunc(consolidation.FromConsolidateBy(s.fn))+	aggFuncName := fmt.Sprintf("moving%s", strings.Title(s.fn))+	formatStr := fmt.Sprintf("%s(%%s,\"%s\")", aggFuncName, s.windowSize)++	newName := func(oldName string) string {+		return fmt.Sprintf(formatStr, oldName)+	}++	outputs := make([]models.Series, 0, len(series))+	for _, serie := range series {++		from, points, err := aggregateOnWindow(serie, aggFunc, s.shiftOffset, s.xFilesFactor)+		if err != nil {+			return nil, err+		}++		serie.Target = newName(serie.Target)+		serie.QueryPatt = newName(serie.QueryPatt)+		serie.Tags = serie.CopyTagsWith(aggFuncName, s.windowSize)+		serie.QueryFrom = from+		serie.Datapoints = points++		outputs = append(outputs, serie)+	}++	dataMap.Add(Req{}, outputs...)+	return outputs, nil+}++func aggregateOnWindow(serie models.Series, aggFunc batch.AggFunc, interval uint32, xFilesFactor float64) (uint32, []schema.Point, error) {+	out := pointSlicePool.Get().([]schema.Point)++	numPoints := len(serie.Datapoints)+	serieStart, serieEnd := serie.QueryFrom, serie.QueryTo+	queryStart := uint32(serieStart + interval)++	if queryStart < serieStart || queryStart > serieEnd {+		return 0, nil, errors.NewInternalf("query start %d doesn't lie within serie's intervals [%d,%d] ", queryStart, serieStart, serieEnd)

I see. Well, given how we adjust context.from in FuncMovingWindow.Context() (we reduce from), serieStart will always be before queryStart. But OK, I have no objection to making sure. I do think these lines of code need some clarification about why they're checking this, what the concern is, how things might break if the condition is not met, etc.
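A tiny self-contained sketch of that invariant (illustrative numbers, not the PR's code): Context() widens the fetch window by shiftOffset, and Exec() adds the same offset back, so queryStart can never fall before the start of the fetched series.

package main

import "fmt"

func main() {
	var from, to, shiftOffset uint32 = 1000, 2000, 300

	fetchFrom := from - shiftOffset       // what Context() requests (serieStart)
	queryStart := fetchFrom + shiftOffset // what Exec() computes; equals the original from

	fmt.Println(fetchFrom <= queryStart && queryStart <= to) // true
}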

vaguiar

comment created time in a month

PullRequestReviewEvent

Pull request review comment grafana/metrictank

Implements function movingWindow and child functions

+package expr++import (+	"fmt"+	"math"+	"strings"++	"github.com/grafana/metrictank/batch"+	"github.com/grafana/metrictank/errors"+	"github.com/grafana/metrictank/schema"+	log "github.com/sirupsen/logrus"++	"github.com/grafana/metrictank/api/models"+	"github.com/grafana/metrictank/consolidation"+	"github.com/raintank/dur"+)++type FuncMovingWindow struct {+	in           GraphiteFunc+	windowSize   string+	fn           string+	xFilesFactor float64++	shiftOffset uint32+}++// NewMovingWindowConstructor takes an agg string and returns a constructor function+func NewMovingWindowConstructor(name string) func() GraphiteFunc {+	return func() GraphiteFunc {+		return &FuncMovingWindow{fn: name}+	}+}++func NewMovingWindow() GraphiteFunc {+	return &FuncMovingWindow{fn: "average", xFilesFactor: 0}+}++func (s *FuncMovingWindow) Signature() ([]Arg, []Arg) {+	return []Arg{+			ArgSeriesList{val: &s.in},+			ArgString{key: "windowSize", val: &s.windowSize, validator: []Validator{IsSignedIntervalString}},

So, the fact that we only allow a window size as an interval string, not as an integer number of points, is an exception to document. If, that aside, everything else behaves exactly like graphite does - to your knowledge - then yes, you can mark it as stable.

vaguiar

comment created time in a month

PullRequestReviewEvent

Pull request review comment grafana/metrictank

Implements function movingWindow and child functions

+package expr++import (+	"fmt"+	"math"+	"strings"++	"github.com/grafana/metrictank/batch"+	"github.com/grafana/metrictank/errors"+	"github.com/grafana/metrictank/schema"+	log "github.com/sirupsen/logrus"++	"github.com/grafana/metrictank/api/models"+	"github.com/grafana/metrictank/consolidation"+	"github.com/raintank/dur"+)++type FuncMovingWindow struct {+	in           GraphiteFunc+	windowSize   string+	fn           string+	xFilesFactor float64++	shiftOffset uint32+}++// NewMovingWindowConstructor takes an agg string and returns a constructor function+func NewMovingWindowConstructor(name string) func() GraphiteFunc {+	return func() GraphiteFunc {+		return &FuncMovingWindow{fn: name}+	}+}++func NewMovingWindow() GraphiteFunc {+	return &FuncMovingWindow{fn: "average", xFilesFactor: 0}+}++func (s *FuncMovingWindow) Signature() ([]Arg, []Arg) {+	return []Arg{+			ArgSeriesList{val: &s.in},+			ArgString{key: "windowSize", val: &s.windowSize, validator: []Validator{IsSignedIntervalString}},+			ArgString{key: "func", opt: true, val: &s.fn, validator: []Validator{IsConsolFunc}},+			ArgFloat{key: "xFilesFactor", opt: true, val: &s.xFilesFactor, validator: []Validator{WithinZeroOneInclusiveInterval}},

Yes, but the Signature() function is what is used to validate the query (function arguments) provided by the user. If we already set the func at construction time, we shouldn't look for an input parameter for it. See NewAggregate() / FuncAggregate.Signature(); it's the same idea.

vaguiar

comment created time in a month

PullRequestReviewEvent

Pull request review comment greshake/i3status-rust

net block: fix IW SSID regex parsing

 fn get_iw_ssid(dev: &NetworkDevice) -> Result<Option<String>> {
     }
 
     let raw = raw.unwrap();
-    let result = raw
-        .stdout
-        .split(|c| *c == b'\n')
-        .filter_map(|x| IW_SSID_REGEX.find(x))
-        .next();
+    let bytes = raw.stdout.split(|c| *c == b'\n').next().unwrap();
+    let cap = IW_SSID_REGEX.captures(bytes);
+    if cap.is_none() {

see below

Dieterbe

comment created time in a month

PullRequestReviewEvent

Pull request review comment greshake/i3status-rust

net block: fix IW SSID regex parsing

 fn get_iw_ssid(dev: &NetworkDevice) -> Result<Option<String>> {
     }
 
     let raw = raw.unwrap();
-    let result = raw
-        .stdout
-        .split(|c| *c == b'\n')
-        .filter_map(|x| IW_SSID_REGEX.find(x))
-        .next();
+    let bytes = raw.stdout.split(|c| *c == b'\n').next().unwrap();
+    let cap = IW_SSID_REGEX.captures(bytes);
+    if cap.is_none() {
+        return Ok(None);
+    }
+    let match_obj = cap.unwrap().get(1);
+    if match_obj.is_none() {

I couldn't find a way to do it more elegantly. I needed to convert the match object to a &[u8], so the best way I saw was to retrieve it out of the Option and call as_bytes() on it. And to be able to do that, I similarly had to pull the captures out of the Option.

I did verify, though, that just calling the same processing chain using unwrap(), removing the is_none() checks, and passing the result to maybe_ssid_convert leads to a panic when the output from iw does not match the expected format.

Dieterbe

comment created time in a month

PullRequestReviewEvent

PR opened greshake/i3status-rust

net block: fix IW SSID regex parsing
  1. Don't include the extraneous 'SSID: ' prefix. See #814.
  2. An SSID is allowed to contain any character; update the regex accordingly. In particular, mine is of the form foo-1234 and was reported as 'foo'.

I'm a Rust beginner, so any feedback on the code is welcome. I think more can be fixed in this block (e.g. adjusting the other regexes), but one small step at a time :)

+12 -7

0 comment

1 changed file

pr created time in a month

create branch Dieterbe/i3status-rust

branch : net-fix-iw-ssid-regex

created branch time in a month

push event grafana/metrictank

Dieter Plaetinck

commit sha a0c3323705bf21c08a6c60d73fb2b7a8a5d04196

run plan.Clean() also if there was an error. fix #719

view details

Dieter Plaetinck

commit sha 108ffb9ea16eb92bcfe0bd16002c64fc8b14fe0b

make 1 single pointslicepool type that has a GetMin() method while we're at it, add some syntactic sugar

view details

Dieter Plaetinck

commit sha c2cc3fbd6f532ca73e2bcfcbe06498bbcba504f4

typo

view details

Dieter Plaetinck

commit sha 3754a85ac9d91e441e3bde908bc467d03b0c7b98

avoid possibly allocating a too small, garbage pointslice

view details

Dieter Plaetinck

commit sha c8e946cef4e86fdaa14df33120fb93e09801fb1b

Merge pull request #1858 from grafana/run-plan-clean-upon-error-too Run plan.Clean() upon error too + refactor pointslicepool

view details

push time in a month
