If you are wondering where the data on this site comes from, please visit https://api.github.com/users/discordianfish/events. GitMemory does not store any data itself; it only uses NGINX to cache upstream responses for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Johannes 'fish' Ziemke (discordianfish), Berlin, https://5pi.de. Freelance Cloud Native Consultant, Founder of @prometheus node_exporter.

discordianfish/blackbox_prober 21

Export availability, request latencies and size for remote services

discordianfish/blackbox-exporter-lambda 10

Run the Prometheus Blackbox Exporter as AWS Lambda

discordianfish/check_graphite.r 8

holt-winters forecast nagios check based on graphite

discordianfish/alpine-armhf-docker 3

Alpine Docker base images for ARM

discordianfish/banksman 3

Render iPXE from collins attributes

discordianfish/alpine-armhf-docker-dumb-init 1

Alpine armhf base image with dumb-init installed

discordianfish/bootylicious 1

One-file weblog on Mojo steroids!

discordianfish/bootylicious-plugin-top_pages 1

plugin to render a "page" on each blog page

Pull request review comment: prometheus/node_exporter

Add a new ethtool stats collector

+// Copyright 2021 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// +build !noethtool
+
+// The hard work of collecting data from the kernel via the ethtool interfaces is done by
+// https://github.com/safchain/ethtool/
+// by Sylvain Afchain. Used under the Apache license.
+
+package collector
+
+import (
+	"errors"
+	"fmt"
+	"os"
+	"regexp"
+	"sort"
+
+	"github.com/go-kit/kit/log"
+	"github.com/go-kit/kit/log/level"
+	"github.com/prometheus/client_golang/prometheus"
+	"github.com/prometheus/procfs/sysfs"
+	"github.com/safchain/ethtool"
+)
+
+var (
+	receivedRegex    = regexp.MustCompile(`_rx_`)
+	transmittedRegex = regexp.MustCompile(`_tx_`)
+)
+
+type EthtoolStats interface {
+	Stats(string) (map[string]uint64, error)
+}
+
+type ethtoolStats struct {
+}
+
+func (e *ethtoolStats) Stats(intf string) (map[string]uint64, error) {
+	return ethtool.Stats(intf)
+}
+
+type ethtoolCollector struct {
+	fs      sysfs.FS
+	entries map[string]*prometheus.Desc
+	logger  log.Logger
+	stats   EthtoolStats
+}
+
+// makeEthtoolCollector is the internal constructor for EthtoolCollector.
+// This allows NewEthtoolTestCollector to override its .stats interface
+// for testing.
+func makeEthtoolCollector(logger log.Logger) (*ethtoolCollector, error) {
+	fs, err := sysfs.NewFS(*sysPath)
+	if err != nil {
+		return nil, fmt.Errorf("failed to open sysfs: %w", err)
+	}
+
+	// Pre-populate some common ethtool metrics.
+	return &ethtoolCollector{
+		fs:    fs,
+		stats: &ethtoolStats{},
+		entries: map[string]*prometheus.Desc{
+			"rx_bytes": prometheus.NewDesc(
+				"node_ethtool_received_bytes_total",
+				"Network interface bytes received",
+				[]string{"device"}, nil,
+			),
+			"rx_dropped": prometheus.NewDesc(
+				"node_ethtool_received_dropped_total",
+				"Number of received frames dropped",
+				[]string{"device"}, nil,
+			),
+			"rx_errors": prometheus.NewDesc(
+				"node_ethtool_received_errors_total",
+				"Number of received frames with errors",
+				[]string{"device"}, nil,
+			),
+			"rx_packets": prometheus.NewDesc(
+				"node_ethtool_received_packets_total",
+				"Network interface packets received",
+				[]string{"device"}, nil,
+			),
+			"tx_bytes": prometheus.NewDesc(
+				"node_ethtool_transmitted_bytes_total",
+				"Network interface bytes sent",
+				[]string{"device"}, nil,
+			),
+			"tx_errors": prometheus.NewDesc(
+				"node_ethtool_transmitted_errors_total",
+				"Number of sent frames with errors",
+				[]string{"device"}, nil,
+			),
+			"tx_packets": prometheus.NewDesc(
+				"node_ethtool_transmitted_packets_total",
+				"Network interface packets sent",
+				[]string{"device"}, nil,
+			),
+		},
+		logger: logger,
+	}, nil
+}
+
+func init() {
+	registerCollector("ethtool", defaultDisabled, NewEthtoolCollector)
+}
+
+// NewEthtoolCollector returns a new Collector exposing ethtool stats.
+func NewEthtoolCollector(logger log.Logger) (Collector, error) {
+	return makeEthtoolCollector(logger)
+}
+
+func (c *ethtoolCollector) Update(ch chan<- prometheus.Metric) error {
+	netClass, err := c.fs.NetClass()
+	if err != nil {
+		if errors.Is(err, os.ErrNotExist) || errors.Is(err, os.ErrPermission) {
+			level.Debug(c.logger).Log("msg", "Could not read netclass file", "err", err)
+			return ErrNoData
+		}
+		return fmt.Errorf("could not get net class info: %w", err)
+	}
+
+	if len(netClass) == 0 {
+		return fmt.Errorf("no network devices found")
+	}
+
+	for device := range netClass {
+		var stats map[string]uint64
+		var err error
+
+		stats, err = c.stats.Stats(device)
+		if err != nil {

Ok I've added logging for various kinds of errors and expanded the test coverage to include these new branches.

ventifus

comment created time in 12 hours

Pull request review comment: prometheus/node_exporter

Add a new ethtool stats collector


Stats() mostly just passes back what unix.Syscall returns. I think the cases you're talking about are unix.EOPNOTSUPP and possibly unix.EPERM, aka os.IsPermission(err). I'll have it log if the error is not EOPNOTSUPP. I don't think an opaque errors_total counter would be that useful, and I definitely don't want to give up on stats entirely just because one of the system's network interfaces returned a weird error.

ventifus

comment created time in 13 hours

started discordianfish/docker-backup

started time in a day

Pull request review comment: prometheus/node_exporter

Add a new ethtool stats collector


In all cases, if there's an error there are also no stats. I don't think we'd want to do more than skip the interface and continue on, but what I could do is log the error like this:

	var err error
	stats, err = c.stats.Stats(device)
	if err != nil {
		level.Debug(c.logger).Log("msg", "Ethtool stats error", "err", err, "device", device)
	}

	if len(stats) < 1 {
		// No stats returned; device does not support ethtool stats.
		continue
	}

What do you think?

ventifus

comment created time in 2 days

Pull request review comment: prometheus/node_exporter

Add a new ethtool stats collector

+// Copyright 2021 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package collector
+
+import (
+	"bufio"
+	"fmt"
+	"os"
+	"path/filepath"
+	"strconv"
+	"strings"
+	"syscall"
+	"testing"
+
+	"github.com/go-kit/kit/log"
+	"github.com/prometheus/client_golang/prometheus"
+)
+
+type EthtoolFixture struct {
+	fixturePath string
+}
+
+func (e *EthtoolFixture) Stats(intf string) (map[string]uint64, error) {
+	res := make(map[string]uint64)
+
+	fixtureFile, err := os.Open(filepath.Join(e.fixturePath, intf))
+	if e, ok := err.(*os.PathError); ok && e.Err == syscall.ENOENT {
+		// The fixture for this interface doesn't exist. That's OK because it replicates
+		// an interface that doesn't support ethtool.
+		return res, nil
+	}
+	if err != nil {
+		return res, err
+	}
+	defer fixtureFile.Close()
+
+	scanner := bufio.NewScanner(fixtureFile)
+	for scanner.Scan() {
+		line := scanner.Text()
+		if strings.HasPrefix(line, "#") {
+			continue
+		}
+		if strings.HasPrefix(line, "NIC statistics:") {
+			continue
+		}
+		line = strings.Trim(line, " ")
+		items := strings.Split(line, ": ")
+		val, err := strconv.ParseUint(items[1], 10, 64)
+		if err != nil {
+			return res, err
+		}
+		res[items[0]] = val
+	}
+
+	return res, err
+}
+
+func NewEthtoolTestCollector(logger log.Logger) (Collector, error) {
+	collector, err := makeEthtoolCollector(logger)
+	collector.stats = &EthtoolFixture{
+		fixturePath: "fixtures/ethtool/",
+	}
+	if err != nil {
+		return nil, err
+	}
+	return collector, nil
+}
+
+func TestEthtoolCollector(t *testing.T) {
+	testcases := []string{
+		prometheus.NewDesc("node_ethtool_align_errors", "Network interface align_errors", []string{"device"}, nil).String(),

This one is an example of a driver-specific metric (in this case from the r8152 driver). The kernel doesn't tell us whether it's a counter or a gauge. Some "well known" metrics are pre-populated in makeEthtoolCollector() and do have the _total suffix. Later, when we get support for standardized metrics in kernel 5.13+, we'll be in a position to name and type them appropriately.

ventifus

comment created time in 2 days

push event: prometheus/node_exporter

David Leadbeater

commit sha 0387f55a9b9bc5ed2e01911afc6432da1d077d38

Make node_exporter print usage to STDOUT

Matches updated behaviour in Prometheus and other tools.

Signed-off-by: David Leadbeater <dgl@dgl.cx>

view details

David Leadbeater

commit sha f29590e0a8288ad9d713461b5082f0ea3fb4dfe2

Merge pull request #2039 from dgl/usage-stdout

view details

push time in 2 days

PR merged prometheus/node_exporter

Make node_exporter print usage to STDOUT

Matches updated behaviour in Prometheus and other tools.

Signed-off-by: David Leadbeater <dgl@dgl.cx>

+1 -0

0 comments

1 changed file

dgl

pr closed time in 2 days

delete branch discordianfish/blog

delete branch : dependabot/npm_and_yarn/lodash-4.17.21

delete time in 4 days

delete branch discordianfish/blog

delete branch : dependabot/npm_and_yarn/hosted-git-info-2.8.9

delete time in 4 days

PR opened prometheus/node_exporter

Make node_exporter print usage to STDOUT

Matches updated behaviour in Prometheus and other tools.

Signed-off-by: David Leadbeater <dgl@dgl.cx>

+1 -0

0 comments

1 changed file

pr created time in 4 days

push event: prometheus/node_exporter

Hu Shuai

commit sha 5ee20043a71fed7404a7b70bced546d8ecb1afc3

Fix golint issue caused by typo

Signed-off-by: Hu Shuai <hus.fnst@cn.fujitsu.com>

view details

Ben Kochie

commit sha c3d8fc6051eeaaa8224be255da4cc4aad35b4ba2

Merge pull request #2038 from hs0210/work

Fix golint issue caused by typo

view details

push time in 4 days

PR merged prometheus/node_exporter

Fix golint issue caused by typo

Signed-off-by: Hu Shuai <hus.fnst@cn.fujitsu.com>

+1 -1

0 comments

1 changed file

hs0210

pr closed time in 4 days

PR opened prometheus/node_exporter

Fix golint issue caused by typo

Signed-off-by: Hu Shuai <hus.fnst@cn.fujitsu.com>

+1 -1

0 comments

1 changed file

pr created time in 4 days

Pull request review comment: prometheus/node_exporter

Multicluster support to node-exporter mixins

-local g = import 'github.com/grafana/jsonnet-libs/grafana-builder/grafana.libsonnet';
+local grafana = import 'github.com/grafana/grafonnet-lib/grafonnet/grafana.libsonnet';
+local dashboard = grafana.dashboard;
+local row = grafana.row;
+local prometheus = grafana.prometheus;
+local template = grafana.template;
+local graphPanel = grafana.graphPanel;
+
+local c = import '../config.libsonnet';
+
+local datasourceTemplate = {
+  current: {
+    text: 'Prometheus',
+    value: 'Prometheus',
+  },
+  hide: 0,
+  label: null,
+  name: 'datasource',
+  options: [],
+  query: 'prometheus',
+  refresh: 1,
+  regex: '',
+  type: 'datasource',
+};
+
+local clusterTemplate =
+  template.new(
+    name='cluster',
+    datasource='$datasource',
+    query='label_values(node_cpu_seconds_total, %s)' % c._config.clusterLabel,
+    current='',
+    hide=if c._config.showMultiCluster then '' else '2',
+    refresh=2,
+    includeAll=false,
+    sort=1
+  );
+
+local CPUUtilisation =
+  graphPanel.new(
+    'CPU Utilisation',
+    datasource='$datasource',
+    span=6,
+    format='percentunit',
+    stack=true,
+    fill=10,
+    legend_show=false,
+  );
+
+local CPUSaturation =
+  // TODO: Is this a useful panel? At least there should be some explanation how load
+  // average relates to the "CPU saturation" in the title.
+  graphPanel.new(
+    'CPU Saturation (Load1 per CPU)',
+    datasource='$datasource',
+    span=6,
+    format='percentunit',
+    stack=true,
+    fill=10,
+    legend_show=false,
+  );
+
+local memoryUtilisation =
+  graphPanel.new(
+    'Memory Utilisation',
+    datasource='$datasource',
+    span=6,
+    format='percentunit',
+    stack=true,
+    fill=10,
+    legend_show=false,
+  );
+
+local memorySaturation =
+  graphPanel.new(
+    'Memory Saturation (Major Page Faults)',
+    datasource='$datasource',
+    span=6,
+    format='rds',
+    stack=true,
+    fill=10,
+    legend_show=false,
+  );
+
+local networkUtilisation =
+  graphPanel.new(
+    'Network Utilisation (Bytes Receive/Transmit)',
+    datasource='$datasource',
+    span=6,
+    format='Bps',
+    stack=true,
+    fill=10,
+    legend_show=false,
+  )
+  .addSeriesOverride({ alias: '/Receive/', stack: 'A' })
+  .addSeriesOverride({ alias: '/Transmit/', stack: 'B', transform: 'negative-Y' });
+
+local networkSaturation =
+  graphPanel.new(
+    'Network Saturation (Drops Receive/Transmit)',
+    datasource='$datasource',
+    span=6,
+    format='Bps',
+    stack=true,
+    fill=10,
+    legend_show=false,
+  )
+  .addSeriesOverride({ alias: '/ Receive/', stack: 'A' })
+  .addSeriesOverride({ alias: '/ Transmit/', stack: 'B', transform: 'negative-Y' });
+
+local diskIOUtilisation =
+  graphPanel.new(
+    'Disk IO Utilisation',
+    datasource='$datasource',
+    span=6,
+    format='percentunit',
+    stack=true,
+    fill=10,
+    legend_show=false,
+  );
+
+local diskIOSaturation =
+  graphPanel.new(
+    'Disk IO Saturation',
+    datasource='$datasource',
+    span=6,
+    format='percentunit',
+    stack=true,
+    fill=10,
+    legend_show=false,
+  );
+
+local diskSpaceUtilisation =
+  graphPanel.new(
+    'Disk Space Utilisation',
+    datasource='$datasource',
+    span=12,
+    format='percentunit',
+    stack=false,
+    fill=5,
+    legend_show=true,
+    legend_alignAsTable=true,
+    legend_current=true,
+    legend_avg=true,
+    legend_rightSide=true,
+    legend_sortDesc=true,
+  );
 
 {
   grafanaDashboards+:: {
-    'node-cluster-rsrc-use.json':
-      local legendLink = '%s/dashboard/file/node-rsrc-use.json' % $._config.grafana_prefix;
+                         'node-rsrc-use.json':
-      g.dashboard('USE Method / Cluster')
-      .addRow(
-        g.row('CPU')
-        .addPanel(
-          g.panel('CPU Utilisation') +
-          g.queryPanel(|||
-            (
-              instance:node_cpu_utilisation:rate%(rateInterval)s{%(nodeExporterSelector)s}
-            *
-              instance:node_num_cpu:sum{%(nodeExporterSelector)s}
-            )
-            / scalar(sum(instance:node_num_cpu:sum{%(nodeExporterSelector)s}))
-          ||| % $._config, '{{instance}}', legendLink) +
-          g.stack +
-          { yaxes: g.yaxes({ format: 'percentunit', max: 1 }) },
-        )
-        .addPanel(
-          // TODO: Is this a useful panel? At least there should be some explanation how load
-          // average relates to the "CPU saturation" in the title.
-          g.panel('CPU Saturation (load1 per CPU)') +
-          g.queryPanel(|||
-            instance:node_load1_per_cpu:ratio{%(nodeExporterSelector)s}
-            / scalar(count(instance:node_load1_per_cpu:ratio{%(nodeExporterSelector)s}))
-          ||| % $._config, '{{instance}}', legendLink) +
-          g.stack +
-          // TODO: Does `max: 1` make sense? The stack can go over 1 in high-load scenarios.
-          { yaxes: g.yaxes({ format: 'percentunit', max: 1 }) },
-        )
-      )
-      .addRow(
-        g.row('Memory')
-        .addPanel(
-          g.panel('Memory Utilisation') +
-          g.queryPanel(|||
-            instance:node_memory_utilisation:ratio{%(nodeExporterSelector)s}
-            / scalar(count(instance:node_memory_utilisation:ratio{%(nodeExporterSelector)s}))
-          ||| % $._config, '{{instance}}', legendLink) +
-          g.stack +
-          { yaxes: g.yaxes({ format: 'percentunit', max: 1 }) },
-        )
-        .addPanel(
-          g.panel('Memory Saturation (Major Page Faults)') +
-          g.queryPanel('instance:node_vmstat_pgmajfault:rate%(rateInterval)s{%(nodeExporterSelector)s}' % $._config, '{{instance}}', legendLink) +
-          g.stack +
-          { yaxes: g.yaxes('rps') },
-        )
-      )
-      .addRow(
-        g.row('Network')
-        .addPanel(
-          g.panel('Net Utilisation (Bytes Receive/Transmit)') +
-          g.queryPanel(
-            [
-              'instance:node_network_receive_bytes_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s}' % $._config,
-              'instance:node_network_transmit_bytes_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s}' % $._config,
-            ],
-            ['{{instance}} Receive', '{{instance}} Transmit'],
-            legendLink,
-          ) +
-          g.stack +
-          {
-            yaxes: g.yaxes({ format: 'Bps', min: null }),
-            seriesOverrides: [
-              {
-                alias: '/ Receive/',
-                stack: 'A',
-              },
-              {
-                alias: '/ Transmit/',
-                stack: 'B',
-                transform: 'negative-Y',
-              },
-            ],
-          },
-        )
-        .addPanel(
-          g.panel('Net Saturation (Drops Receive/Transmit)') +
-          g.queryPanel(
-            [
-              'instance:node_network_receive_drop_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s}' % $._config,
-              'instance:node_network_transmit_drop_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s}' % $._config,
-            ],
-            ['{{instance}} Receive', '{{instance}} Transmit'],
-            legendLink,
-          ) +
-          g.stack +
-          {
-            yaxes: g.yaxes({ format: 'rps', min: null }),
-            seriesOverrides: [
-              {
-                alias: '/ Receive/',
-                stack: 'A',
-              },
-              {
-                alias: '/ Transmit/',
-                stack: 'B',
-                transform: 'negative-Y',
-              },
-            ],
-          },
-        )
-      )
-      .addRow(
-        g.row('Disk IO')
-        .addPanel(
-          g.panel('Disk IO Utilisation') +
-          // Full utilisation would be all disks on each node spending an average of
-          // 1 second per second doing I/O, normalize by metric cardinality for stacked charts.
-          // TODO: Does the partition by device make sense? Using the most utilized device per
-          // instance might make more sense.
-          g.queryPanel(|||
-            instance_device:node_disk_io_time_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s}
-            / scalar(count(instance_device:node_disk_io_time_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s}))
-          ||| % $._config, '{{instance}} {{device}}', legendLink) +
-          g.stack +
-          { yaxes: g.yaxes({ format: 'percentunit', max: 1 }) },
-        )
-        .addPanel(
-          g.panel('Disk IO Saturation') +
-          g.queryPanel(|||
-            instance_device:node_disk_io_time_weighted_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s}
-            / scalar(count(instance_device:node_disk_io_time_weighted_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s}))
-          ||| % $._config, '{{instance}} {{device}}', legendLink) +
-          g.stack +
-          { yaxes: g.yaxes({ format: 'percentunit', max: 1 }) },
-        )
-      )
-      .addRow(
-        g.row('Disk Space')
-        .addPanel(
-          g.panel('Disk Space Utilisation') +
-          g.queryPanel(|||
-            sum without (device) (
-              max without (fstype, mountpoint) (
-                node_filesystem_size_bytes{%(nodeExporterSelector)s, %(fsSelector)s} - node_filesystem_avail_bytes{%(nodeExporterSelector)s, %(fsSelector)s}
-              )
-            )
-            / scalar(sum(max without (fstype, mountpoint) (node_filesystem_size_bytes{%(nodeExporterSelector)s, %(fsSelector)s})))
-          ||| % $._config, '{{instance}}', legendLink) +
-          g.stack +
-          { yaxes: g.yaxes({ format: 'percentunit', max: 1 }) },
-        ),
-      ),
+                           dashboard.new(
+                             '%sUSE Method / Node' % $._config.dashboardNamePrefix,
+                             time_from='now-1h',
+                             tags=($._config.dashboardTags),
+                             timezone='utc',
+                             refresh='30s',
+                             graphTooltip='shared_crosshair'
+                           )
+                           .addTemplate(datasourceTemplate)
+                           .addTemplate(clusterTemplate)
+                           .addTemplate(
+                             template.new(
+                               'instance',
+                               '$datasource',
+                               'label_values(node_exporter_build_info{%(nodeExporterSelector)s, %(clusterLabel)s="$cluster"}, instance)' % $._config,
+                               refresh='time',
+                               sort=1
+                             )
+                           )
+                           .addRow(
+                             row.new('CPU')
+                             .addPanel(CPUUtilisation.addTarget(prometheus.target('instance:node_cpu_utilisation:rate%(rateInterval)s{%(nodeExporterSelector)s, instance="$instance", %(clusterLabel)s="$cluster"}' % $._config, legendFormat='Utilisation')))

Could you split it into multiple lines?

ArthurSens

comment created time in 4 days

Pull request review comment: prometheus/node_exporter

Multicluster support to node-exporter mixins

     fsSpaceAvailableCriticalThreshold: 5,
     fsSpaceAvailableWarningThreshold: 3,
 
-    grafana_prefix: '',
-
     rateInterval: '5m',
+    // Opt-in for multi-cluster support.
+    showMultiCluster: false,
+    clusterLabel: 'cluster',
+
+    dashboardNamePrefix: 'Node Exporter / ',
+    dashboardTags: ['node-exporter-mixin'],

WDYT about doing 2 tags (node-exporter and mixin) instead of one? The mixin tag would fit nicely with the same tag set in kubernetes-mixin dashboards.

ArthurSens

comment created time in 4 days

Pull request review comment: prometheus/node_exporter

Multicluster support to node-exporter mixins


Why is this panel different from the others? This one has a legend, doesn't stack series, and uses a 50% background fill, whereas the other panels set those options in a different but uniform way. Styling a single panel differently breaks the visual consistency established by the previous panels and can cause unnecessary confusion.
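For comparison, this is roughly what the panel would look like if it reused the shared styling of the other panels (a hypothetical sketch for illustration, not a change the PR actually makes):

```jsonnet
// Hypothetical: Disk Space Utilisation styled like the other panels
// (stacked, fill=10, legend hidden), keeping only the wider span.
local diskSpaceUtilisation =
  graphPanel.new(
    'Disk Space Utilisation',
    datasource='$datasource',
    span=12,
    format='percentunit',
    stack=true,
    fill=10,
    legend_show=false,
  );
```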

ArthurSens

comment created 4 days ago

Pull request review comment on prometheus/node_exporter

Multicluster support to node-exporter mixins

```diff
+    legend_alignAsTable=true,
+    legend_current=true,
+    legend_avg=true,
+    legend_rightSide=true,
+    legend_sortDesc=true,
+  );
 
 {
   grafanaDashboards+:: {
-    'node-cluster-rsrc-use.json':
-      local legendLink = '%s/dashboard/file/node-rsrc-use.json' % $._config.grafana_prefix;
+                         'node-rsrc-use.json':
-      g.dashboard('USE Method / Cluster')
-      .addRow(
-        g.row('CPU')
-        .addPanel(
-          g.panel('CPU Utilisation') +
-          g.queryPanel(|||
-            (
-              instance:node_cpu_utilisation:rate%(rateInterval)s{%(nodeExporterSelector)s}
-            *
-              instance:node_num_cpu:sum{%(nodeExporterSelector)s}
-            )
-            / scalar(sum(instance:node_num_cpu:sum{%(nodeExporterSelector)s}))
-          ||| % $._config, '{{instance}}', legendLink) +
-          g.stack +
-          { yaxes: g.yaxes({ format: 'percentunit', max: 1 }) },
-        )
-        .addPanel(
-          // TODO: Is this a useful panel? At least there should be some explanation how load
-          // average relates to the "CPU saturation" in the title.
-          g.panel('CPU Saturation (load1 per CPU)') +
-          g.queryPanel(|||
-            instance:node_load1_per_cpu:ratio{%(nodeExporterSelector)s}
-            / scalar(count(instance:node_load1_per_cpu:ratio{%(nodeExporterSelector)s}))
-          ||| % $._config, '{{instance}}', legendLink) +
-          g.stack +
-          // TODO: Does `max: 1` make sense? The stack can go over 1 in high-load scenarios.
-          { yaxes: g.yaxes({ format: 'percentunit', max: 1 }) },
-        )
-      )
-      .addRow(
-        g.row('Memory')
-        .addPanel(
-          g.panel('Memory Utilisation') +
-          g.queryPanel(|||
-            instance:node_memory_utilisation:ratio{%(nodeExporterSelector)s}
-            / scalar(count(instance:node_memory_utilisation:ratio{%(nodeExporterSelector)s}))
-          ||| % $._config, '{{instance}}', legendLink) +
-          g.stack +
-          { yaxes: g.yaxes({ format: 'percentunit', max: 1 }) },
-        )
-        .addPanel(
-          g.panel('Memory Saturation (Major Page Faults)') +
-          g.queryPanel('instance:node_vmstat_pgmajfault:rate%(rateInterval)s{%(nodeExporterSelector)s}' % $._config, '{{instance}}', legendLink) +
-          g.stack +
-          { yaxes: g.yaxes('rps') },
-        )
-      )
-      .addRow(
-        g.row('Network')
-        .addPanel(
-          g.panel('Net Utilisation (Bytes Receive/Transmit)') +
-          g.queryPanel(
-            [
-              'instance:node_network_receive_bytes_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s}' % $._config,
-              'instance:node_network_transmit_bytes_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s}' % $._config,
-            ],
-            ['{{instance}} Receive', '{{instance}} Transmit'],
-            legendLink,
-          ) +
-          g.stack +
-          {
-            yaxes: g.yaxes({ format: 'Bps', min: null }),
-            seriesOverrides: [
-              {
-                alias: '/ Receive/',
-                stack: 'A',
-              },
-              {
-                alias: '/ Transmit/',
-                stack: 'B',
-                transform: 'negative-Y',
-              },
-            ],
-          },
-        )
-        .addPanel(
-          g.panel('Net Saturation (Drops Receive/Transmit)') +
-          g.queryPanel(
-            [
-              'instance:node_network_receive_drop_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s}' % $._config,
-              'instance:node_network_transmit_drop_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s}' % $._config,
-            ],
-            ['{{instance}} Receive', '{{instance}} Transmit'],
-            legendLink,
-          ) +
-          g.stack +
-          {
-            yaxes: g.yaxes({ format: 'rps', min: null }),
-            seriesOverrides: [
-              {
-                alias: '/ Receive/',
-                stack: 'A',
-              },
-              {
-                alias: '/ Transmit/',
-                stack: 'B',
-                transform: 'negative-Y',
-              },
-            ],
-          },
-        )
-      )
-      .addRow(
-        g.row('Disk IO')
-        .addPanel(
-          g.panel('Disk IO Utilisation') +
-          // Full utilisation would be all disks on each node spending an average of
-          // 1 second per second doing I/O, normalize by metric cardinality for stacked charts.
-          // TODO: Does the partition by device make sense? Using the most utilized device per
-          // instance might make more sense.
-          g.queryPanel(|||
-            instance_device:node_disk_io_time_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s}
-            / scalar(count(instance_device:node_disk_io_time_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s}))
-          ||| % $._config, '{{instance}} {{device}}', legendLink) +
-          g.stack +
-          { yaxes: g.yaxes({ format: 'percentunit', max: 1 }) },
-        )
-        .addPanel(
-          g.panel('Disk IO Saturation') +
-          g.queryPanel(|||
-            instance_device:node_disk_io_time_weighted_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s}
-            / scalar(count(instance_device:node_disk_io_time_weighted_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s}))
-          ||| % $._config, '{{instance}} {{device}}', legendLink) +
-          g.stack +
-          { yaxes: g.yaxes({ format: 'percentunit', max: 1 }) },
-        )
-      )
-      .addRow(
-        g.row('Disk Space')
-        .addPanel(
-          g.panel('Disk Space Utilisation') +
-          g.queryPanel(|||
-            sum without (device) (
-              max without (fstype, mountpoint) (
-                node_filesystem_size_bytes{%(nodeExporterSelector)s, %(fsSelector)s} - node_filesystem_avail_bytes{%(nodeExporterSelector)s, %(fsSelector)s}
-              )
-            )
-            / scalar(sum(max without (fstype, mountpoint) (node_filesystem_size_bytes{%(nodeExporterSelector)s, %(fsSelector)s})))
-          ||| % $._config, '{{instance}}', legendLink) +
-          g.stack +
-          { yaxes: g.yaxes({ format: 'percentunit', max: 1 }) },
-        ),
-      ),
+                           dashboard.new(
+                             '%sUSE Method / Node' % $._config.dashboardNamePrefix,
+                             time_from='now-1h',
+                             tags=($._config.dashboardTags),
+                             timezone='utc',
+                             refresh='30s',
+                             graphTooltip='shared_crosshair'
+                           )
+                           .addTemplate(datasourceTemplate)
+                           .addTemplate(clusterTemplate)
+                           .addTemplate(
+                             template.new(
+                               'instance',
+                               '$datasource',
+                               'label_values(node_exporter_build_info{%(nodeExporterSelector)s, %(clusterLabel)s="$cluster"}, instance)' % $._config,
+                               refresh='time',
+                               sort=1
+                             )
+                           )
+                           .addRow(
+                             row.new('CPU')
+                             .addPanel(CPUUtilisation.addTarget(prometheus.target('instance:node_cpu_utilisation:rate%(rateInterval)s{%(nodeExporterSelector)s, instance="$instance", %(clusterLabel)s="$cluster"}' % $._config, legendFormat='Utilisation')))
+                             .addPanel(CPUSaturation.addTarget(prometheus.target('instance:node_load1_per_cpu:ratio{%(nodeExporterSelector)s, instance="$instance", %(clusterLabel)s="$cluster"}' % $._config, legendFormat='Saturation')))
```

Same as above.

ArthurSens

comment created 4 days ago

Pull request review comment on prometheus/node_exporter

Introduce node-exporter dashboard prefixes and tags

```diff
     fsSpaceAvailableCriticalThreshold: 5,
     fsSpaceAvailableWarningThreshold: 3,
 
-    grafana_prefix: '',
+    dashboardNamePrefix: 'Node Exporter / ',
+    dashboardTags: ['node-exporter-mixin'],
```

WDYT about using two tags (node-exporter and mixin) instead of one? The mixin tag would fit nicely with the same tag set on the kubernetes-mixin dashboards.
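Concretely, the suggestion would change the config to something like this (an illustrative sketch, not the code in the PR):

```jsonnet
{
  _config+:: {
    dashboardNamePrefix: 'Node Exporter / ',
    // Two tags: "node-exporter" for this mixin, plus the generic "mixin"
    // tag shared with kubernetes-mixin dashboards.
    dashboardTags: ['node-exporter', 'mixin'],
  },
}
```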

ArthurSens

comment created 4 days ago

created branch in discordianfish/blog

branch : dependabot/npm_and_yarn/lodash-4.17.21

branch created 4 days ago

PR opened discordianfish/blog

Bump lodash from 4.17.19 to 4.17.21

Bumps lodash from 4.17.19 to 4.17.21.

Commits:

  • f299b52 Bump to v4.17.21
  • c4847eb Improve performance of toNumber, trim and trimEnd on large input strings
  • 3469357 Prevent command injection through _.template's variable option
  • ded9bc6 Bump to v4.17.20.
  • 63150ef Documentation fixes.
  • 00f0f62 test.js: Remove trailing comma.
  • 846e434 Temporarily use a custom fork of lodash-cli.
  • 5d046f3 Re-enable Travis tests on 4.17 branch.
  • aa816b3 Remove /npm-package.
  • See full diff: https://github.com/lodash/lodash/compare/4.17.19...4.17.21

Maintainer changes: This version was pushed to npm by bnjmnt4n, a new releaser for lodash since your current version.

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
  • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
  • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
  • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the Security Alerts page.


+4 -4

0 comments

2 changed files

PR created 4 days ago

Pull request review comment on prometheus/node_exporter

Multicluster support to node-exporter mixins

The diff context is identical to the hunk in the review above; the comment below is anchored on the instance template variable:

```diff
+                           .addTemplate(
+                             template.new(
+                               'instance',
+                               '$datasource',
+                               'label_values(node_exporter_build_info{%(nodeExporterSelector)s, %(clusterLabel)s="$cluster"}, instance)' % $._config,
```

Not setting %(nodeExporterSelector)s in this query would enable switching between node_exporter data from different jobs without causing any problems, since the node_exporter_build_info metric should be unique per node.
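In other words, the suggestion is to drop the job selector from the variable query, roughly like this (a sketch based on the template.new call in the diff above; the remaining label set is an assumption):

```jsonnet
template.new(
  'instance',
  '$datasource',
  // No %(nodeExporterSelector)s here: node_exporter_build_info should be
  // unique per node, so the variable also covers node_exporter instances
  // scraped under other job names.
  'label_values(node_exporter_build_info{%(clusterLabel)s="$cluster"}, instance)' % $._config,
  refresh='time',
  sort=1
)
```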

ArthurSens

comment created 4 days ago

PR opened discordianfish/blog

Bump hosted-git-info from 2.8.5 to 2.8.9

Bumps hosted-git-info from 2.8.5 to 2.8.9.

Changelog (sourced from hosted-git-info's changelog):

2.8.9 (2021-04-07), bug fixes:

  • backport regex fix from #76 (29adfe5), closes #84

2.8.8 (2020-02-29), bug fixes:

  • #61 & #65 addressing issues w/ url.URL implmentation which regressed node 6 support (5038b18), closes #66

2.8.7 (2020-02-26), bug fixes:

  • Do not attempt to use url.URL when unavailable (2d0bb66), closes #61 #62
  • Do not pass scp-style URLs to the WhatWG url.URL (f2cdfcf), closes #60

2.8.6 (2020-02-25)

Commits:

  • 8d4b369 chore(release): 2.8.9
  • 29adfe5 fix: backport regex fix from #76
  • afeaefd chore(release): 2.8.8
  • 5038b18 fix: #61 & #65 addressing issues w/ url.URL implmentation which regressed nod...
  • 7440afa chore(release): 2.8.7
  • 2d0bb66 fix: Do not attempt to use url.URL when unavailable
  • f2cdfcf fix: Do not pass scp-style URLs to the WhatWG url.URL
  • e1b83df chore(release): 2.8.6
  • ff259a6 Ensure passwords in hosted Git URLs are correctly escaped
  • See full diff: https://github.com/npm/hosted-git-info/compare/v2.8.5...v2.8.9

Maintainer changes: This version was pushed to npm by nlf, a new releaser for hosted-git-info since your current version.

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


<details> <summary>Dependabot commands and options</summary> <br />

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
  • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
  • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
  • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the Security Alerts page.


+22 -7

0 comments

1 changed file

PR created 4 days ago

deleted branch dependabot/npm_and_yarn/url-parse-1.5.1 in discordianfish/blog

deleted 4 days ago

deleted branch dependabot/npm_and_yarn/ua-parser-js-0.7.28 in discordianfish/blog

deleted 4 days ago

pull request comment: prometheus/node_exporter

Multicluster support to node-exporter mixins

Feedback from contributor office hours: replace scalar() with other aggregation functions.

The reason is that scalar() will always return something, even for an erroneous query. No results is better than wrong results.
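To see why: in PromQL, scalar() over a vector that does not contain exactly one element evaluates to NaN, so a division by it still emits (meaningless) samples, whereas a vector match simply drops samples with no partner. A toy JavaScript model of that difference (simplified semantics for illustration, not a real PromQL evaluator):

```javascript
// Toy model of "x / scalar(sum(y))" versus "x / on(instance) y".
// scalar() yields NaN when its inner query returns no samples, so the
// division still produces values; a vector match drops unmatched samples.
function divideByScalar(series, denomSamples) {
  // scalar(): NaN unless the inner vector has exactly one element
  const denom = denomSamples.length === 1 ? denomSamples[0] : NaN;
  return series.map(s => ({ instance: s.instance, value: s.value / denom }));
}

function divideWithMatching(series, denomByInstance) {
  // vector matching: samples with no matching denominator are dropped
  return series
    .filter(s => denomByInstance.has(s.instance))
    .map(s => ({ instance: s.instance, value: s.value / denomByInstance.get(s.instance) }));
}

const cpu = [{ instance: 'a', value: 0.5 }, { instance: 'b', value: 1.0 }];

// An erroneous inner query (e.g. a bad label selector) returns no samples:
console.log(divideByScalar(cpu, []));            // every value is NaN -- wrong results
console.log(divideWithMatching(cpu, new Map())); // [] -- no results
```

With a vector match, a broken selector makes the panel go empty, which is easy to notice; with scalar() it keeps rendering NaN-derived garbage.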

ArthurSens

comment created 5 days ago

Pull request review comment: prometheus/node_exporter

Multicluster support to node-exporter mixins

-local g = import 'github.com/grafana/jsonnet-libs/grafana-builder/grafana.libsonnet';+local grafana = import 'github.com/grafana/grafonnet-lib/grafonnet/grafana.libsonnet';+local dashboard = grafana.dashboard;+local row = grafana.row;+local prometheus = grafana.prometheus;+local template = grafana.template;+local graphPanel = grafana.graphPanel;++local c = import '../config.libsonnet';++local datasourceTemplate = {+  current: {+    text: 'Prometheus',+    value: 'Prometheus',+  },+  hide: 0,+  label: null,+  name: 'datasource',+  options: [],+  query: 'prometheus',+  refresh: 1,+  regex: '',+  type: 'datasource',+};++local clusterTemplate =+  template.new(+    name='cluster',+    datasource='$datasource',+    query='label_values(node_cpu_seconds_total, %s)' % c._config.clusterLabel,+    current='',+    hide=if c._config.showMultiCluster then '' else '2',+    refresh=2,+    includeAll=false,+    sort=1+  );++local CPUUtilisation =+  graphPanel.new(+    'CPU Utilisation',+    datasource='$datasource',+    span=6,+    format='percentunit',+    stack=true,+    fill=10,+    legend_show=false,+  );++local CPUSaturation =+  // TODO: Is this a useful panel? 
At least there should be some explanation how load+  // average relates to the "CPU saturation" in the title.+  graphPanel.new(+    'CPU Saturation (Load1 per CPU)',+    datasource='$datasource',+    span=6,+    format='percentunit',+    stack=true,+    fill=10,+    legend_show=false,+  );++local memoryUtilisation =+  graphPanel.new(+    'Memory Utilisation',+    datasource='$datasource',+    span=6,+    format='percentunit',+    stack=true,+    fill=10,+    legend_show=false,+  );++local memorySaturation =+  graphPanel.new(+    'Memory Saturation (Major Page Faults)',+    datasource='$datasource',+    span=6,+    format='rds',+    stack=true,+    fill=10,+    legend_show=false,+  );++local networkUtilisation =+  graphPanel.new(+    'Network Utilisation (Bytes Receive/Transmit)',+    datasource='$datasource',+    span=6,+    format='Bps',+    stack=true,+    fill=10,+    legend_show=false,+  )+  .addSeriesOverride({ alias: '/Receive/', stack: 'A' })+  .addSeriesOverride({ alias: '/Transmit/', stack: 'B', transform: 'negative-Y' });++local networkSaturation =+  graphPanel.new(+    'Network Saturation (Drops Receive/Transmit)',+    datasource='$datasource',+    span=6,+    format='Bps',+    stack=true,+    fill=10,+    legend_show=false,+  )+  .addSeriesOverride({ alias: '/ Receive/', stack: 'A' })+  .addSeriesOverride({ alias: '/ Transmit/', stack: 'B', transform: 'negative-Y' });++local diskIOUtilisation =+  graphPanel.new(+    'Disk IO Utilisation',+    datasource='$datasource',+    span=6,+    format='percentunit',+    stack=true,+    fill=10,+    legend_show=false,+  );++local diskIOSaturation =+  graphPanel.new(+    'Disk IO Saturation',+    datasource='$datasource',+    span=6,+    format='percentunit',+    stack=true,+    fill=10,+    legend_show=false,+  );++local diskSpaceUtilisation =+  graphPanel.new(+    'Disk Space Utilisation',+    datasource='$datasource',+    span=12,+    format='percentunit',+    stack=false,+    fill=5,+    legend_show=true,+    
legend_alignAsTable=true,+    legend_current=true,+    legend_avg=true,+    legend_rightSide=true,+    legend_sortDesc=true,+  );  {   grafanaDashboards+:: {-    'node-cluster-rsrc-use.json':-      local legendLink = '%s/dashboard/file/node-rsrc-use.json' % $._config.grafana_prefix;+                         'node-rsrc-use.json': -      g.dashboard('USE Method / Cluster')-      .addRow(-        g.row('CPU')-        .addPanel(-          g.panel('CPU Utilisation') +-          g.queryPanel(|||-            (-              instance:node_cpu_utilisation:rate%(rateInterval)s{%(nodeExporterSelector)s}-            *-              instance:node_num_cpu:sum{%(nodeExporterSelector)s}-            )-            / scalar(sum(instance:node_num_cpu:sum{%(nodeExporterSelector)s}))-          ||| % $._config, '{{instance}}', legendLink) +-          g.stack +-          { yaxes: g.yaxes({ format: 'percentunit', max: 1 }) },-        )-        .addPanel(-          // TODO: Is this a useful panel? At least there should be some explanation how load-          // average relates to the "CPU saturation" in the title.-          g.panel('CPU Saturation (load1 per CPU)') +-          g.queryPanel(|||-            instance:node_load1_per_cpu:ratio{%(nodeExporterSelector)s}-            / scalar(count(instance:node_load1_per_cpu:ratio{%(nodeExporterSelector)s}))-          ||| % $._config, '{{instance}}', legendLink) +-          g.stack +-          // TODO: Does `max: 1` make sense? 
The stack can go over 1 in high-load scenarios.-          { yaxes: g.yaxes({ format: 'percentunit', max: 1 }) },-        )-      )-      .addRow(-        g.row('Memory')-        .addPanel(-          g.panel('Memory Utilisation') +-          g.queryPanel(|||-            instance:node_memory_utilisation:ratio{%(nodeExporterSelector)s}-            / scalar(count(instance:node_memory_utilisation:ratio{%(nodeExporterSelector)s}))-          ||| % $._config, '{{instance}}', legendLink) +-          g.stack +-          { yaxes: g.yaxes({ format: 'percentunit', max: 1 }) },-        )-        .addPanel(-          g.panel('Memory Saturation (Major Page Faults)') +-          g.queryPanel('instance:node_vmstat_pgmajfault:rate%(rateInterval)s{%(nodeExporterSelector)s}' % $._config, '{{instance}}', legendLink) +-          g.stack +-          { yaxes: g.yaxes('rps') },-        )-      )-      .addRow(-        g.row('Network')-        .addPanel(-          g.panel('Net Utilisation (Bytes Receive/Transmit)') +-          g.queryPanel(-            [-              'instance:node_network_receive_bytes_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s}' % $._config,-              'instance:node_network_transmit_bytes_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s}' % $._config,-            ],-            ['{{instance}} Receive', '{{instance}} Transmit'],-            legendLink,-          ) +-          g.stack +-          {-            yaxes: g.yaxes({ format: 'Bps', min: null }),-            seriesOverrides: [-              {-                alias: '/ Receive/',-                stack: 'A',-              },-              {-                alias: '/ Transmit/',-                stack: 'B',-                transform: 'negative-Y',-              },-            ],-          },-        )-        .addPanel(-          g.panel('Net Saturation (Drops Receive/Transmit)') +-          g.queryPanel(-            [-              
'instance:node_network_receive_drop_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s}' % $._config,-              'instance:node_network_transmit_drop_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s}' % $._config,-            ],-            ['{{instance}} Receive', '{{instance}} Transmit'],-            legendLink,-          ) +-          g.stack +-          {-            yaxes: g.yaxes({ format: 'rps', min: null }),-            seriesOverrides: [-              {-                alias: '/ Receive/',-                stack: 'A',-              },-              {-                alias: '/ Transmit/',-                stack: 'B',-                transform: 'negative-Y',-              },-            ],-          },-        )-      )-      .addRow(-        g.row('Disk IO')-        .addPanel(-          g.panel('Disk IO Utilisation') +-          // Full utilisation would be all disks on each node spending an average of-          // 1 second per second doing I/O, normalize by metric cardinality for stacked charts.-          // TODO: Does the partition by device make sense? 
Using the most utilized device per-          // instance might make more sense.-          g.queryPanel(|||-            instance_device:node_disk_io_time_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s}-            / scalar(count(instance_device:node_disk_io_time_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s}))-          ||| % $._config, '{{instance}} {{device}}', legendLink) +-          g.stack +-          { yaxes: g.yaxes({ format: 'percentunit', max: 1 }) },-        )-        .addPanel(-          g.panel('Disk IO Saturation') +-          g.queryPanel(|||-            instance_device:node_disk_io_time_weighted_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s}-            / scalar(count(instance_device:node_disk_io_time_weighted_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s}))-          ||| % $._config, '{{instance}} {{device}}', legendLink) +-          g.stack +-          { yaxes: g.yaxes({ format: 'percentunit', max: 1 }) },-        )-      )-      .addRow(-        g.row('Disk Space')-        .addPanel(-          g.panel('Disk Space Utilisation') +-          g.queryPanel(|||-            sum without (device) (-              max without (fstype, mountpoint) (-                node_filesystem_size_bytes{%(nodeExporterSelector)s, %(fsSelector)s} - node_filesystem_avail_bytes{%(nodeExporterSelector)s, %(fsSelector)s}-              )-            ) -            / scalar(sum(max without (fstype, mountpoint) (node_filesystem_size_bytes{%(nodeExporterSelector)s, %(fsSelector)s})))-          ||| % $._config, '{{instance}}', legendLink) +-          g.stack +-          { yaxes: g.yaxes({ format: 'percentunit', max: 1 }) },-        ),-      ),+                           dashboard.new(+                             '%sUSE Method / Node' % $._config.dashboardNamePrefix,+                             time_from='now-1h',+                             tags=($._config.dashboardTags),+                             timezone='utc',+                             
refresh='30s',+                             graphTooltip='shared_crosshair'+                           )+                           .addTemplate(datasourceTemplate)+                           .addTemplate(clusterTemplate)+                           .addTemplate(+                             template.new(+                               'instance',+                               '$datasource',+                               'label_values(node_exporter_build_info{%(nodeExporterSelector)s, %(clusterLabel)s="$cluster"}, instance)' % $._config,+                               refresh='time',+                               sort=1+                             )+                           )+                           .addRow(+                             row.new('CPU')+                             .addPanel(CPUUtilisation.addTarget(prometheus.target('instance:node_cpu_utilisation:rate%(rateInterval)s{%(nodeExporterSelector)s, instance="$instance", %(clusterLabel)s="$cluster"}' % $._config, legendFormat='Utilisation')))+                             .addPanel(CPUSaturation.addTarget(prometheus.target('instance:node_load1_per_cpu:ratio{%(nodeExporterSelector)s, instance="$instance", %(clusterLabel)s="$cluster"}' % $._config, legendFormat='Saturation')))+                           )+                           .addRow(+                             row.new('Memory')+                             .addPanel(memoryUtilisation.addTarget(prometheus.target('instance:node_memory_utilisation:ratio{%(nodeExporterSelector)s, instance="$instance", %(clusterLabel)s="$cluster"}' % $._config, legendFormat='Utilisation')))+                             .addPanel(memorySaturation.addTarget(prometheus.target('instance:node_vmstat_pgmajfault:rate%(rateInterval)s{%(nodeExporterSelector)s, instance="$instance", %(clusterLabel)s="$cluster"}' % $._config, legendFormat='Major page Faults')))+                           )+                           .addRow(+                             row.new('Network')+   
                          .addPanel(+                               networkUtilisation+                               .addTarget(prometheus.target('instance:node_network_receive_bytes_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s, instance="$instance", %(clusterLabel)s="$cluster"}' % $._config, legendFormat='Receive'))+                               .addTarget(prometheus.target('instance:node_network_transmit_bytes_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s, instance="$instance", %(clusterLabel)s="$cluster"}' % $._config, legendFormat='Transmit'))+                             )+                             .addPanel(+                               networkSaturation+                               .addTarget(prometheus.target('instance:node_network_receive_drop_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s, instance="$instance", %(clusterLabel)s="$cluster"}' % $._config, legendFormat='Receive'))+                               .addTarget(prometheus.target('instance:node_network_transmit_drop_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s, instance="$instance", %(clusterLabel)s="$cluster"}' % $._config, legendFormat='Transmit'))+                             )+                           )+                           .addRow(+                             row.new('Disk IO')+                             .addPanel(diskIOUtilisation.addTarget(prometheus.target('instance_device:node_disk_io_time_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s, instance="$instance", %(clusterLabel)s="$cluster"}' % $._config, legendFormat='{{device}}')))+                             .addPanel(diskIOSaturation.addTarget(prometheus.target('instance_device:node_disk_io_time_weighted_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s, instance="$instance", %(clusterLabel)s="$cluster"}' % $._config, legendFormat='{{device}}')))+                           )+                           .addRow(+                             row.new('Disk 
Space')+                             .addPanel(+                               diskSpaceUtilisation.addTarget(prometheus.target(+                                 |||+                                   sort_desc(1 -+                                     (+                                      max without (mountpoint, fstype) (node_filesystem_avail_bytes{%(nodeExporterSelector)s, fstype!="", instance="$instance", %(clusterLabel)s="$cluster"})+                                      /+                                      max without (mountpoint, fstype) (node_filesystem_size_bytes{%(nodeExporterSelector)s, fstype!="", instance="$instance", %(clusterLabel)s="$cluster"})+                                     )+                                   )+                                 ||| % $._config, legendFormat='{{device}}'+                               ))+                             )+                           ), -    'node-rsrc-use.json':-      g.dashboard('USE Method / Node')-      .addTemplate('instance', 'up{%(nodeExporterSelector)s}' % $._config, 'instance')-      .addRow(-        g.row('CPU')-        .addPanel(-          g.panel('CPU Utilisation') +-          g.queryPanel('instance:node_cpu_utilisation:rate%(rateInterval)s{%(nodeExporterSelector)s, instance="$instance"}' % $._config, 'Utilisation') +-          {-            yaxes: g.yaxes('percentunit'),-            legend+: { show: false },-          },-        )-        .addPanel(-          // TODO: Is this a useful panel? 
At least there should be some explanation how load-          // average relates to the "CPU saturation" in the title.-          g.panel('CPU Saturation (Load1 per CPU)') +-          g.queryPanel('instance:node_load1_per_cpu:ratio{%(nodeExporterSelector)s, instance="$instance"}' % $._config, 'Saturation') +-          {-            yaxes: g.yaxes('percentunit'),-            legend+: { show: false },-          },-        )-      )-      .addRow(-        g.row('Memory')-        .addPanel(-          g.panel('Memory Utilisation') +-          g.queryPanel('instance:node_memory_utilisation:ratio{%(nodeExporterSelector)s, %(nodeExporterSelector)s, instance="$instance"}' % $._config, 'Memory') +-          { yaxes: g.yaxes('percentunit') },-        )-        .addPanel(-          g.panel('Memory Saturation (Major Page Faults)') +-          g.queryPanel('instance:node_vmstat_pgmajfault:rate%(rateInterval)s{%(nodeExporterSelector)s, instance="$instance"}' % $._config, 'Major page faults') +-          {-            yaxes: g.yaxes('short'),-            legend+: { show: false },-          },-        )-      )-      .addRow(-        g.row('Net')-        .addPanel(-          g.panel('Net Utilisation (Bytes Receive/Transmit)') +-          g.queryPanel(-            [-              'instance:node_network_receive_bytes_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s, instance="$instance"}' % $._config,-              'instance:node_network_transmit_bytes_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s, instance="$instance"}' % $._config,-            ],-            ['Receive', 'Transmit'],-          ) +-          {-            yaxes: g.yaxes({ format: 'Bps', min: null }),-            seriesOverrides: [-              {-                alias: '/Receive/',-                stack: 'A',-              },-              {-                alias: '/Transmit/',-                stack: 'B',-                transform: 'negative-Y',-              },-            ],-          },-       
 )-        .addPanel(-          g.panel('Net Saturation (Drops Receive/Transmit)') +-          g.queryPanel(-            [-              'instance:node_network_receive_drop_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s, instance="$instance"}' % $._config,-              'instance:node_network_transmit_drop_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s, instance="$instance"}' % $._config,-            ],-            ['Receive drops', 'Transmit drops'],-          ) +-          {-            yaxes: g.yaxes({ format: 'rps', min: null }),-            seriesOverrides: [-              {-                alias: '/Receive/',-                stack: 'A',-              },-              {-                alias: '/Transmit/',-                stack: 'B',-                transform: 'negative-Y',-              },-            ],-          },-        )-      )-      .addRow(-        g.row('Disk IO')-        .addPanel(-          g.panel('Disk IO Utilisation') +-          g.queryPanel('instance_device:node_disk_io_time_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s, instance="$instance"}' % $._config, '{{device}}') +-          { yaxes: g.yaxes('percentunit') },-        )-        .addPanel(-          g.panel('Disk IO Saturation') +-          g.queryPanel('instance_device:node_disk_io_time_weighted_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s, instance="$instance"}' % $._config, '{{device}}') +-          { yaxes: g.yaxes('percentunit') },-        )-      )-      .addRow(-        g.row('Disk Space')-        .addPanel(-          g.panel('Disk Space Utilisation') +-          g.queryPanel(|||-            1 --            (-              max without (mountpoint, fstype) (node_filesystem_avail_bytes{%(nodeExporterSelector)s, %(fsSelector)s, instance="$instance"})-            /-              max without (mountpoint, fstype) (node_filesystem_size_bytes{%(nodeExporterSelector)s, %(fsSelector)s, instance="$instance"})-            )-          ||| % $._config, 
'{{device}}') +-          {-            yaxes: g.yaxes('percentunit'),-            legend+: { show: false },-          },-        ),-      ),-  },+                         'node-cluster-rsrc-use.json':+                           dashboard.new(+                             '%sUSE Method / Cluster' % $._config.dashboardNamePrefix,+                             time_from='now-1h',+                             tags=($._config.dashboardTags),+                             timezone='utc',+                             refresh='30s',+                             graphTooltip='shared_crosshair'+                           )+                           .addTemplate(datasourceTemplate)+                           .addTemplate(clusterTemplate)+                           .addRow(+                             row.new('CPU')+                             .addPanel(+                               CPUUtilisation+                               .addTarget(prometheus.target(+                                 |||+                                   (+                                     instance:node_cpu_utilisation:rate%(rateInterval)s{%(nodeExporterSelector)s, %(clusterLabel)s="$cluster"}+                                   *+                                     instance:node_num_cpu:sum{%(nodeExporterSelector)s, %(clusterLabel)s="$cluster"}+                                   )+                                   / scalar(sum(instance:node_num_cpu:sum{%(nodeExporterSelector)s, %(clusterLabel)s="$cluster"}))+                                 ||| % $._config, legendFormat='{{ instance }}'+                               ))+                             )+                             .addPanel(+                               CPUSaturation+                               .addTarget(prometheus.target(+                                 |||+                                   instance:node_load1_per_cpu:ratio{%(nodeExporterSelector)s, %(clusterLabel)s="$cluster"}+                                   / 
scalar(count(instance:node_load1_per_cpu:ratio{%(nodeExporterSelector)s, %(clusterLabel)s="$cluster"}))+                                 ||| % $._config, legendFormat='{{instance}}'+                               ))+                             )+                           )+                           .addRow(+                             row.new('Memory')+                             .addPanel(+                               memoryUtilisation+                               .addTarget(prometheus.target(+                                 |||+                                   instance:node_memory_utilisation:ratio{%(nodeExporterSelector)s, %(clusterLabel)s="$cluster"}+                                   / scalar(count(instance:node_memory_utilisation:ratio{%(nodeExporterSelector)s, %(clusterLabel)s="$cluster"}))+                                 ||| % $._config, legendFormat='{{instance}}',+                               ))+                             )+                             .addPanel(memorySaturation.addTarget(prometheus.target('instance:node_vmstat_pgmajfault:rate%(rateInterval)s{%(nodeExporterSelector)s, %(clusterLabel)s="$cluster"}' % $._config, legendFormat='{{instance}}')))+                           )+                           .addRow(+                             row.new('Network')+                             .addPanel(+                               networkUtilisation+                               .addTarget(prometheus.target('instance:node_network_receive_bytes_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s, %(clusterLabel)s="$cluster"}' % $._config, legendFormat='{{instance}} Receive'))+                               .addTarget(prometheus.target('instance:node_network_transmit_bytes_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s, %(clusterLabel)s="$cluster"}' % $._config, legendFormat='{{instance}} Transmit'))+                             )+                             .addPanel(+                               
networkSaturation+                               .addTarget(prometheus.target('instance:node_network_receive_drop_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s, %(clusterLabel)s="$cluster"}' % $._config, legendFormat='{{instance}} Receive'))+                               .addTarget(prometheus.target('instance:node_network_transmit_drop_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s, %(clusterLabel)s="$cluster"}' % $._config, legendFormat='{{instance}} Transmit'))+                             )+                           )+                           .addRow(+                             row.new('Disk IO')+                             .addPanel(+                               diskIOUtilisation+                               .addTarget(prometheus.target(+                                 |||+                                   instance_device:node_disk_io_time_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s, %(clusterLabel)s="$cluster"}+                                   / scalar(count(instance_device:node_disk_io_time_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s, %(clusterLabel)s="$cluster"}))+                                 ||| % $._config, legendFormat='{{instance}} {{device}}'+                               ))+                             )+                             .addPanel(+                               diskIOSaturation+                               .addTarget(prometheus.target(+                                 |||+                                   instance_device:node_disk_io_time_weighted_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s, %(clusterLabel)s="$cluster"}+                                   / scalar(count(instance_device:node_disk_io_time_weighted_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s, %(clusterLabel)s="$cluster"}))+                                 ||| % $._config, legendFormat='{{instance}} {{device}}'+                               ))+                             )+                 
          )+                           .addRow(+                             row.new('Disk Space')+                             .addPanel(+                               diskSpaceUtilisation+                               .addTarget(prometheus.target(+                                 |||+                                   sum without (device) (+                                     max without (fstype, mountpoint) (+                                       node_filesystem_size_bytes{%(nodeExporterSelector)s, %(fsSelector)s, %(clusterLabel)s="$cluster"}+                                       -+                                       node_filesystem_avail_bytes{%(nodeExporterSelector)s, %(fsSelector)s, %(clusterLabel)s="$cluster"}+                                     )+                                   )+                                   / scalar(sum(max without (fstype, mountpoint) (node_filesystem_size_bytes{%(nodeExporterSelector)s, %(fsSelector)s, %(clusterLabel)s="$cluster"})))+                                 ||| % $._config, legendFormat='{{instance}}'+                               ))+                             )+                           ),+                       } ++                       if $._config.showMultiCluster then {+                         'node-multicluster-rsrc-use.json':+                           dashboard.new(+                             '%sUSE Method / Multi-cluster' % $._config.dashboardNamePrefix,+                             time_from='now-1h',+                             tags=($._config.dashboardTags),+                             timezone='utc',+                             refresh='30s',+                             graphTooltip='shared_crosshair'+                           )+                           .addTemplate(datasourceTemplate)+                           .addRow(+                             row.new('CPU')+                             .addPanel(+                               CPUUtilisation+                               
.addTarget(prometheus.target(+                                 |||+                                   sum(+                                     (+                                       instance:node_cpu_utilisation:rate%(rateInterval)s{%(nodeExporterSelector)s}+                                     *+                                       instance:node_num_cpu:sum{%(nodeExporterSelector)s}+                                     )+                                     / scalar(sum(instance:node_num_cpu:sum{%(nodeExporterSelector)s}))+                                   ) by (%(clusterLabel)s)+                                 ||| % $._config, legendFormat='{{%(clusterLabel)s}}' % $._config+                               ))+                             )+                             .addPanel(+                               CPUSaturation+                               .addTarget(prometheus.target(+                                 |||+                                   sum(+                                     instance:node_load1_per_cpu:ratio{%(nodeExporterSelector)s}+                                     / scalar(count(instance:node_load1_per_cpu:ratio{%(nodeExporterSelector)s}))+                                   ) by (%(clusterLabel)s)+                                 ||| % $._config, legendFormat='{{%(clusterLabel)s}}' % $._config+                               ))+                             )+                           )+                           .addRow(+                             row.new('Memory')+                             .addPanel(+                               memoryUtilisation+                               .addTarget(prometheus.target(+                                 |||+                                   sum(+                                       instance:node_memory_utilisation:ratio{%(nodeExporterSelector)s}+                                       / scalar(count(instance:node_memory_utilisation:ratio{%(nodeExporterSelector)s}))+                       
) by (%(clusterLabel)s)
+            ||| % $._config, legendFormat='{{%(clusterLabel)s}}' % $._config
+          ))
+        )
+        .addPanel(
+          memorySaturation
+          .addTarget(prometheus.target(
+            |||
+              sum(
+                  instance:node_vmstat_pgmajfault:rate%(rateInterval)s{%(nodeExporterSelector)s}
+              ) by (%(clusterLabel)s)
+            ||| % $._config, legendFormat='{{%(clusterLabel)s}}' % $._config
+          ))
+        )
+      )
+      .addRow(
+        row.new('Network')
+        .addPanel(
+          networkUtilisation
+          .addTarget(prometheus.target(
+            |||
+              sum(
+                  instance:node_network_receive_bytes_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s}
+              ) by (%(clusterLabel)s)
+            ||| % $._config, legendFormat='{{%(clusterLabel)s}} Receive' % $._config
+          ))
+          .addTarget(prometheus.target(
+            |||
+              sum(
+                  instance:node_network_transmit_bytes_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s}
+              ) by (%(clusterLabel)s)
+            ||| % $._config, legendFormat='{{%(clusterLabel)s}} Transmit' % $._config
+          ))
+        )
+        .addPanel(
+          networkSaturation
+          .addTarget(prometheus.target(
+            |||
+              sum(
+                  instance:node_network_receive_drop_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s}
+              ) by (%(clusterLabel)s)
+            ||| % $._config, legendFormat='{{%(clusterLabel)s}} Receive' % $._config
+          ))
+          .addTarget(prometheus.target(
+            |||
+              sum(
+                  instance:node_network_transmit_drop_excluding_lo:rate%(rateInterval)s{%(nodeExporterSelector)s}
+              ) by (%(clusterLabel)s)
+            ||| % $._config, legendFormat='{{%(clusterLabel)s}} Transmit' % $._config
+          ))
+        )
+      )
+      .addRow(
+        row.new('Disk IO')
+        .addPanel(
+          diskIOUtilisation
+          .addTarget(prometheus.target(
+            |||
+              sum(
+                  instance_device:node_disk_io_time_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s}
+                  / scalar(count(instance_device:node_disk_io_time_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s}))
+              ) by (%(clusterLabel)s, device)
+            ||| % $._config, legendFormat='{{%(clusterLabel)s}} {{device}}' % $._config
+          ))
+        )
+        .addPanel(
+          diskIOSaturation
+          .addTarget(prometheus.target(
+            |||
+              sum(
+                  instance_device:node_disk_io_time_weighted_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s}
+                  / scalar(count(instance_device:node_disk_io_time_weighted_seconds:rate%(rateInterval)s{%(nodeExporterSelector)s}))
+              ) by (%(clusterLabel)s, device)
+            ||| % $._config, legendFormat='{{%(clusterLabel)s}} {{device}}' % $._config
+          ))
+        )
+      )
+      .addRow(
+        row.new('Disk Space')
+        .addPanel(
+          diskSpaceUtilisation
+          .addTarget(prometheus.target(
+            |||
+              sum without (device) (

Is this query right?
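For context on the `sum without (device) (` at the end of the diff hunk above: in PromQL, aggregating `without` drops only the listed labels and keeps all others, whereas `by` keeps only the listed labels. A minimal illustrative sketch — the metric and label names here are assumptions for the example, not taken from the diff:

```promql
# One output series per remaining label set (e.g. instance, mountpoint):
# only the `device` label is aggregated away.
sum without (device) (node_filesystem_size_bytes - node_filesystem_avail_bytes)

# One output series per instance: every label except `instance` is dropped.
sum by (instance) (node_filesystem_size_bytes - node_filesystem_avail_bytes)
```

So whether the query "is right" comes down to which labels the dashboard panel needs to keep on the resulting series.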

ArthurSens

comment created time in 5 days

issue closed: prometheus/node_exporter

node_exporter

<!-- Please note: GitHub issues should only be used for feature requests and bug reports. For general usage/help/discussions, please refer to one of:

- #prometheus on freenode
- the Prometheus Users list: https://groups.google.com/forum/#!forum/prometheus-users

Before filing a bug report, note that running node_exporter in Docker is
not recommended, for the reasons detailed in the README:

https://github.com/prometheus/node_exporter#using-docker

Finally, also note that node_exporter is focused on *NIX kernels, and the
WMI exporter should be used instead on Windows.

For bug reports, please fill out the below fields and provide as much detail
as possible about your issue.  For feature requests, you may omit the
following template.

-->

Host operating system: output of uname -a

node_exporter version: output of node_exporter --version

<!-- If building from source, run make first. -->

node_exporter command line flags

<!-- Please list all of the command line flags -->

Are you running node_exporter in Docker?

<!-- Please note the warning above. -->

What did you do that produced an error?

What did you expect to see?

What did you see instead?

I did not find a node_exporter config file to change the default port. Can anyone please help me?

closed time in 6 days

Chandandhani

issue comment: prometheus/node_exporter

node_exporter

Thanks for your report. It looks as if this is actually a question about usage and not development.

It makes more sense to ask questions like this on the prometheus-users mailing list or our forums rather than in a GitHub issue. In our community channels, more people are available to potentially respond to your question, and the whole community can benefit from the answers provided.
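For reference — this detail comes from node_exporter's documented flags, not from the thread itself — there is no config file for the port: node_exporter is configured entirely through command-line flags, and the listen address (default `:9100`) is set with `--web.listen-address`. A minimal sketch of overriding it:

```shell
# node_exporter takes its settings from command-line flags, not a config file.
# The default listen address is :9100; --web.listen-address overrides it.
# Here we only construct the command string; the binary is not actually run.
PORT=9200
CMD="node_exporter --web.listen-address=:${PORT}"
echo "$CMD"
```

Running the resulting command would serve metrics on `http://<host>:9200/metrics` instead of port 9100.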

Chandandhani

comment created time in 6 days