
micrometer-metrics/micrometer 2154

An application metrics facade for the most popular monitoring tools. Think SLF4J, but for metrics.

openzipkin-attic/zipkin-sparkstreaming 72

A streaming alternative to Zipkin's collector

shakuzen/aggregate-child-update-sample 0

Spring Data REST issue repro project

shakuzen/ansible-modules-core 0

Ansible modules - these modules ship with ansible

shakuzen/app-autoscaler 0

Auto Scaling for CF Applications

shakuzen/b3-propagation 0

Repository that describes and sometimes implements B3 propagation

shakuzen/bitbucket-branch-source-plugin 0

Bitbucket Branch Source Plugin

shakuzen/bobobot 0

bobobot

issue opened micrometer-metrics/micrometer

Support JOOQ 3.13+

I've opened https://github.com/micrometer-metrics/micrometer/pull/1853 to pin the version to 3.12.x, which fixes the build. I'd propose doing that as a breakfix, and then opening another issue to add support for JOOQ 3.13+

Originally posted by @schmidt-galen-heb in https://github.com/micrometer-metrics/micrometer/issues/1852#issuecomment-586352909

created time in 10 days

push event micrometer-metrics/micrometer

Galen S

commit sha fa8476c6db6a0042358e283c42d0ea95d4a2774d

Pin JOOQ to 3.12.x (resolves #1852) (#1853)

push time in 10 days

issue closed micrometer-metrics/micrometer

Build fails because of JOOQ 3.13.0

JOOQ released version 3.13.0 on February 12th, 2020.

Micrometer is incompatible with this release, and because the JOOQ dependency is pinned to latest.release, the build is now broken:

$ ./gradlew clean compileJava
... snip ...
> Task :micrometer-core:compileJava FAILED
/Users/0000000/git/micrometer-metrics/micrometer/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/db/MetricsDSLContext.java:63: error: MetricsDSLContext is not abstract and does not override abstract method dropTemporaryTableIfExists(Table<?>) in DSLContext
public class MetricsDSLContext implements DSLContext {
       ^
... snip ...

20 errors
47 warnings

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':micrometer-core:compileJava'.
> Compilation failed; see the compiler error output for details.
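The root cause is the dynamic latest.release version: every build re-resolves JOOQ to whatever is newest. A minimal sketch of the proposed breakfix, pinning to the 3.12 line instead (Gradle notation; the configuration name here is an assumption, the actual change is in PR #1853):

```groovy
dependencies {
    // Pin JOOQ to the 3.12.x line instead of 'latest.release',
    // which started resolving to the incompatible 3.13.0
    implementation 'org.jooq:jooq:3.12.+'
}
```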

closed time in 10 days

schmidt-galen-heb

PR merged micrometer-metrics/micrometer

Use Elasticsearch 7.6.0 Docker image for integration tests (label: type: task)

This PR changes the integration tests to use the latest Elasticsearch version (7.6.0) Docker image.

+1 -1

0 comments

1 changed file

izeye

pr closed time in 15 days

push event micrometer-metrics/micrometer

Johnny Lim

commit sha f6a2687f13f2b9fd7351ce9126a9f04238da79dd

Use Elasticsearch 7.6.0 Docker image for integration tests (#1849)

push time in 15 days

push event micrometer-metrics/micrometer

Johnny Lim

commit sha 46244975a24877d62cb9e6c0c0bf475b441dd765

Add version to deprecated comments in StatsdMetrics (#1848)

push time in 15 days

PR merged micrometer-metrics/micrometer

Add version to deprecated comments in StatsdMetrics

This PR adds the version to the deprecated comments in StatsdMetrics.

+3 -3

0 comments

2 changed files

izeye

pr closed time in 15 days

Pull request review comment micrometer-metrics/micrometer

Kafka binder without JMX

[Review diff context omitted: flattened source of io.micrometer.core.instrument.binder.kafka.KafkaMetrics, covering the license header, imports, class Javadoc, and the constructor overloads for Producer, Consumer, and KafkaStreams. The comment below is anchored on the KafkaMetrics(KafkaStreams) constructor.]

I think we'll have to split this out into another class. Compilation will fail when using KafkaMetrics without kafka-streams on the compilation classpath.

jeqo

comment created time in 16 days

Pull request review comment micrometer-metrics/micrometer

Kafka binder without JMX

[Review diff context omitted: the same flattened KafkaMetrics source, here showing checkAndBindMetrics (meter registration, and removal of meters registered with fewer tags) and toMetricValue, where metric.metricValue() is checked with instanceof Double. The comment below is anchored on that value-type check.]

The advice in there seems to be: do not register a metric if it does not have a numeric value. That seems like good advice. Let's not register metrics with non-numeric values. Then, assuming the value type cannot change for a given metric, we don't need to handle the non-numeric case in the ToDoubleFunction.
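The filtering this comment proposes can be sketched with plain collections (illustrative only; NumericFilterDemo and its method are hypothetical stand-ins, not Micrometer code):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class NumericFilterDemo {
    // Register only metrics whose value is numeric and skip the rest up front,
    // so the value-extraction function never has to handle a non-numeric case.
    static List<Double> registerNumericOnly(Map<String, ?> metrics) {
        List<Double> registered = new ArrayList<>();
        for (Object value : metrics.values()) {
            if (value instanceof Number) {
                registered.add(((Number) value).doubleValue());
            }
        }
        return registered;
    }

    public static void main(String[] args) {
        Map<String, Object> metrics = new LinkedHashMap<>();
        metrics.put("records-consumed-total", 42.0);
        metrics.put("version", "2.4.0"); // non-numeric metadata: skipped
        System.out.println(registerNumericOnly(metrics)); // prints [42.0]
    }
}
```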

jeqo

comment created time in 16 days

Pull request review comment micrometer-metrics/micrometer

Kafka binder without JMX

[Review diff context omitted: the same flattened KafkaMetrics source, here showing the start of checkAndBindMetrics, where metrics in the "app-info" group (client metadata such as the version) are filtered out. The comment below is anchored on that filtering.]

version as a tag name can be problematic because it is likely to be used as a common tag by users (possibly referring to the version of the application). Maybe a separate metric for the Kafka client version is better, after all.
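The collision this comment describes can be sketched with a simple tag merge (hypothetical; this is not Micrometer's actual common-tag logic):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TagCollisionDemo {
    // Sketch of applying registry-wide common tags on top of a meter's own tags.
    // A user-defined common tag named "version" collides with the Kafka client's
    // "version" tag, and one of the two values is silently lost.
    static Map<String, String> withCommonTags(Map<String, String> meterTags,
                                              Map<String, String> commonTags) {
        Map<String, String> merged = new LinkedHashMap<>(meterTags);
        merged.putAll(commonTags); // common tags win in this sketch
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> meterTags = Map.of("version", "2.4.0");       // Kafka client version
        Map<String, String> commonTags = Map.of("version", "my-app-1.0"); // user's app version
        System.out.println(withCommonTags(meterTags, commonTags)); // prints {version=my-app-1.0}
    }
}
```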

jeqo

comment created time in 16 days

push event jeqo/micrometer

Tommy Ludwig

commit sha d580cb7249bc3b4a27b8b8b6e2c5e3e78c9d00b2

Note ganglia registry's dependency's licensing

See #1354

Tommy Ludwig

commit sha 90119b9c73a4d373d8086b9b76f23600d8fe22eb

Merge branch '1.1.x' into 1.3.x

Tommy Ludwig

commit sha 9356d5bdad23d9b532a401eaab3b5e5c6c0d08c9

Merge branch '1.3.x'

Tommy Ludwig

commit sha adba350a8bd42cff83a86ccdd88f2eed8405a41e

Prevent importing other nullability annotations

IDEs may suggest other projects' nullability annotations in auto-complete, and these may be mistakenly used. It is easy to miss this in code review, so add the packages for these annotations to the Checkstyle IllegalImport check.

Resolves #1845

Tommy Ludwig

commit sha 57f85721ef6c03eb4124fabb785f2a8ff39e11b2

Merge branch '1.1.x' into 1.3.x

Tommy Ludwig

commit sha 7373b89349870171ffd5373c944a6a9c96b7f9c7

Merge branch '1.3.x'

Tommy Ludwig

commit sha 4caf3e8ad32f057998ebd90cd32e219494daea11

Remove unnecessary `@NotNull` annotation from test code

Tommy Ludwig

commit sha 9f99c216d0b43aa3781e9e4d3bf0e3e2592d5fe9

Merge branch 'master' into kafka-binder

push time in 16 days

push event micrometer-metrics/micrometer

Tommy Ludwig

commit sha 4caf3e8ad32f057998ebd90cd32e219494daea11

Remove unnecessary `@NotNull` annotation from test code

push time in 16 days

push event micrometer-metrics/micrometer

Tommy Ludwig

commit sha adba350a8bd42cff83a86ccdd88f2eed8405a41e

Prevent importing other nullability annotations

IDEs may suggest other projects' nullability annotations in auto-complete, and these may be mistakenly used. It is easy to miss this in code review, so add the packages for these annotations to the Checkstyle IllegalImport check.

Resolves #1845

Tommy Ludwig

commit sha 57f85721ef6c03eb4124fabb785f2a8ff39e11b2

Merge branch '1.1.x' into 1.3.x

Tommy Ludwig

commit sha 7373b89349870171ffd5373c944a6a9c96b7f9c7

Merge branch '1.3.x'

push time in 16 days

issue closed micrometer-metrics/micrometer

Prevent accidental import of wrong nullability annotations

We have confused ourselves a couple of times by using the wrong nullability annotations, which is very easy to do (is it @NotNull? is it @NonNull? which one?). We should be using the io.micrometer package annotations, and we can help ensure other ones do not sneak in by mistake with a Checkstyle import ban rule.


Should be io.micrometer.core.lang.NonNull instead. But actually, all of the return values are considered @NonNull due to the @NonNullApi annotation. We should add a Checkstyle rule to ban the jetbrains package import, because this isn't the first time things have gotten mixed up.

Originally posted by @shakuzen in https://github.com/render_node/MDIzOlB1bGxSZXF1ZXN0UmV2aWV3VGhyZWFkMjMzMzI5Njc2OnYy/pull_request_review_threads/discussion
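The import ban described above can be expressed with Checkstyle's IllegalImport check; a minimal sketch (the banned package list here is an assumption — the actual rule landed in commit adba350):

```xml
<module name="IllegalImport">
    <!-- Ban non-Micrometer nullability annotations so that only
         the io.micrometer.core.lang.* annotations are used -->
    <property name="illegalPkgs" value="org.jetbrains.annotations"/>
</module>
```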

closed time in 16 days

shakuzen

push event micrometer-metrics/micrometer

Tommy Ludwig

commit sha adba350a8bd42cff83a86ccdd88f2eed8405a41e

Prevent importing other nullability annotations

IDEs may suggest other projects' nullability annotations in auto-complete, and these may be mistakenly used. It is easy to miss this in code review, so add the packages for these annotations to the Checkstyle IllegalImport check.

Resolves #1845

Tommy Ludwig

commit sha 57f85721ef6c03eb4124fabb785f2a8ff39e11b2

Merge branch '1.1.x' into 1.3.x

push time in 16 days

push event micrometer-metrics/micrometer

Tommy Ludwig

commit sha adba350a8bd42cff83a86ccdd88f2eed8405a41e

Prevent importing other nullability annotations

IDEs may suggest other projects' nullability annotations in auto-complete, and these may be mistakenly used. It is easy to miss this in code review, so add the packages for these annotations to the Checkstyle IllegalImport check.

Resolves #1845

push time in 16 days

issue opened micrometer-metrics/micrometer

Prevent accidental import of wrong nullability annotations

We have confused ourselves a couple of times by using the wrong nullability annotations, which is very easy to do (is it @NotNull? is it @NonNull? which one?). We should be using the io.micrometer package annotations, and we can help ensure other ones do not sneak in by mistake with a Checkstyle import ban rule.


Should be io.micrometer.core.lang.NonNull instead. But actually, all of the return values are considered @NonNull due to the @NonNullApi annotation. We should add a Checkstyle rule to ban the jetbrains package import, because this isn't the first time things have gotten mixed up.

Originally posted by @shakuzen in https://github.com/render_node/MDIzOlB1bGxSZXF1ZXN0UmV2aWV3VGhyZWFkMjMzMzI5Njc2OnYy/pull_request_review_threads/discussion

created time in 16 days

Pull request review comment micrometer-metrics/micrometer

Kafka binder without JMX

<!-- Imports -->
<module name="IllegalImportCheck" >
-    <property name="illegalPkgs" value="com.google.common.(?!cache).*,org.apache.commons.text.*,org.slf4j.*"/>
+    <property name="illegalPkgs" value="com.google.common.(?!cache).*,org.apache.commons.text.*,org.slf4j.*,org.jetbrains.*"/>

Thanks for this. I'll commit this change to the maintenance branches and merge it into master so it has a separate audit trail from this pull request.

jeqo

comment created time in 16 days

Pull request review comment micrometer-metrics/micrometer

Kafka binder without JMX

[Review diff context omitted: a repeat of the flattened KafkaMetrics checkAndBindMetrics hunk shown earlier; the feed is truncated at this point.]
metric, metricName, extraTags);+        } else if (metricName.endsWith("min")+                || metricName.endsWith("max")+                || metricName.endsWith("avg")) {+            meter = registerGauge(registry, metric, metricName, extraTags);+        } else if (metricName.endsWith("rate")) {+            meter = registerTimeGauge(registry, metric, metricName, extraTags);+        } else {+            meter = registerGauge(registry, metric, metricName, extraTags);+        }+        return meter;+    }++    private TimeGauge registerTimeGauge(MeterRegistry registry, Metric metric, String metricName, Iterable<Tag> extraTags) {+        return TimeGauge.builder(metricName, metric, TimeUnit.SECONDS, toMetricValue(registry))+            .tags(metricTags(metric))+            .tags(extraTags)+            .description(metric.metricName().description())+            .register(registry);+    }++    private Gauge registerGauge(MeterRegistry registry, Metric metric, String metricName, Iterable<Tag> extraTags) {+        return Gauge.builder(metricName, metric, toMetricValue(registry))+            .tags(metricTags(metric))+            .tags(extraTags)+            .description(metric.metricName().description())+            .register(registry);+    }++    private FunctionCounter registerCounter(MeterRegistry registry, Metric metric, String metricName, Iterable<Tag> extraTags) {+        return FunctionCounter.builder(metricName, metric, toMetricValue(registry))+            .tags(metricTags(metric))+            .tags(extraTags)+            .description(metric.metricName().description())+            .register(registry);+    }++    private ToDoubleFunction<Metric> toMetricValue(MeterRegistry registry) {+        return metric -> {+            //Double-check if new metrics are registered; if not (common scenario)+            //it only adds metrics count validation+            checkAndBindMetrics(registry);+            if (metric.metricValue() instanceof Double) {+                return 
(double) metric.metricValue();+            } else {+                return 0.0;+            }+        };+    }++    private List<Tag> metricTags(Metric metric) {+        return metric.metricName().tags()+            .entrySet()+            .stream()+            .map(entry -> Tag.of(entry.getKey(), entry.getValue()))+            .collect(Collectors.toList());+    }++    private String metricName(Metric metric) {+        String value =+            METRIC_NAME_PREFIX + metric.metricName().group() + "." + metric.metricName().name();

I was also thinking about compatibility with the previous KafkaConsumerMetrics, since Micrometer users may already have dashboards built around those metrics. It would be best if the consumer metrics could continue to be used as-is with the new implementation.

jeqo

comment created time in 16 days
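The dashboard-compatibility concern above could be addressed with a name mapping from the new binder's meter names back to the old ones. A minimal, self-contained sketch follows; the class, method, and name pair are illustrative only (not the real mapping between the two binders), and in Micrometer such a mapping would typically be applied through a MeterFilter:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: map names produced by the new binder back to the names
// the old JMX-based KafkaConsumerMetrics used, so existing dashboards keep
// working. The name pair below is illustrative, not either binder's actual output.
public class LegacyKafkaNames {
    private static final Map<String, String> RENAMES = new HashMap<>();
    static {
        RENAMES.put("kafka.consumer.fetch.manager.records.consumed.total",
                    "kafka.consumer.records.consumed.total"); // illustrative pair
    }

    // Returns the legacy name if one is known, otherwise the name unchanged.
    public static String toLegacyName(String newName) {
        return RENAMES.getOrDefault(newName, newName);
    }
}
```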

Pull request review comment micrometer-metrics/micrometer

Kafka binder without JMX

[diff from KafkaMetrics.java quoted again by the review page; the quote ends at the "app-info" metadata filtering discussed below:]

    //Filter out metrics from group "app-info", that includes metadata
    if (METRIC_GROUP_APP_INFO.equals(name.group())) {

A commit ID doesn't seem as useful unless snapshot versions are used, but maybe we could add the Kafka client version as a tag to all the Kafka metrics?

jeqo

comment created time in 16 days

Pull request review comment micrometer-metrics/micrometer

Kafka binder without JMX

[diff from KafkaMetrics.java quoted again by the review page; the quote ends at the increment discussed below:]

    if (METRIC_GROUP_APP_INFO.equals(name.group())) {
        currentSize.incrementAndGet();

Why is the currentSize incremented here? Isn't the metric already accounted for in the size of the Map (L177 currentSize.set(metrics.size());)?

jeqo

comment created time in 17 days

Pull request review comment micrometer-metrics/micrometer

Kafka binder without JMX

[diff from KafkaMetrics.java quoted again by the review page; the quote ends at the value-conversion code discussed below:]

    private ToDoubleFunction<Metric> toMetricValue(MeterRegistry registry) {
        return metric -> {
            //Double-check if new metrics are registered; if not (common scenario)
            //it only adds metrics count validation
            checkAndBindMetrics(registry);
            if (metric.metricValue() instanceof Double) {

Do we know what other kinds of values are expected from Kafka metrics? If it were, say, a String, returning 0.0 could be confusing. I wonder if NaN is better when the value isn't a double.

jeqo

comment created time in 17 days
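The NaN suggestion above could look like the following minimal sketch. The class and method names are hypothetical (not part of the PR), and broadening the check from Double to Number is my own choice here:

```java
// Hypothetical sketch: return NaN rather than 0.0 when a Kafka metric value is
// not numeric, so dashboards can tell "no numeric value" apart from a real zero.
public class MetricValueSketch {
    public static double toDouble(Object metricValue) {
        if (metricValue instanceof Number) {
            return ((Number) metricValue).doubleValue();
        }
        return Double.NaN; // non-numeric, e.g. a String from an "app-info" metric
    }
}
```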

Pull request review comment micrometer-metrics/micrometer

Kafka binder without JMX

[diff from KafkaMetrics.java quoted again by the review page; the quote ends at the import discussed below:]

    import org.jetbrains.annotations.NotNull;

This should be io.micrometer.core.lang.NonNull instead. But actually, all of the return values are already considered @NonNull due to the @NonNullApi annotation. We should add a Checkstyle rule to ban the org.jetbrains package import, because this isn't the first time these have gotten mixed up.

jeqo

comment created time in 17 days
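A sketch of the suggested Checkstyle rule, assuming a conventional checkstyle.xml with a TreeWalker module; Checkstyle's IllegalImport check supports an illegalPkgs property for exactly this:

```xml
<!-- Minimal sketch: fail the build on any import from org.jetbrains.annotations,
     so the io.micrometer.core.lang annotations are used instead. -->
<module name="TreeWalker">
    <module name="IllegalImport">
        <property name="illegalPkgs" value="org.jetbrains.annotations"/>
    </module>
</module>
```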

Pull request review comment micrometer-metrics/micrometer

Kafka binder without JMX

```java
/**
 * Copyright 2020 Pivotal Software, Inc.
 * <p>
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 * <p>
 * https://www.apache.org/licenses/LICENSE-2.0
 * <p>
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.micrometer.core.instrument.binder.kafka;

import io.micrometer.core.annotation.Incubating;
import io.micrometer.core.instrument.FunctionCounter;
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Tag;
import io.micrometer.core.instrument.TimeGauge;
import io.micrometer.core.instrument.binder.MeterBinder;
import io.micrometer.core.lang.NonNullApi;
import io.micrometer.core.lang.NonNullFields;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;
import java.util.function.ToDoubleFunction;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.streams.KafkaStreams;
import org.jetbrains.annotations.NotNull;

import static java.util.Collections.emptyList;

/**
 * Kafka metrics binder.
 * <p>
 * It is based on the {@code metrics()} method returning a {@link Metric} map, exposed by the
 * client and streams interfaces.
 *
 * @author Jorge Quilcate
 * @see <a href="https://docs.confluent.io/current/kafka/monitoring.html">Kafka monitoring
 * documentation</a>
 * @since 1.4.0
 */
@Incubating(since = "1.4.0")
@NonNullApi
@NonNullFields
public class KafkaMetrics implements MeterBinder {
    static final String METRIC_NAME_PREFIX = "kafka.";

    static final String METRIC_GROUP_APP_INFO = "app-info";

    private final Supplier<Map<MetricName, ? extends Metric>> metricsSupplier;
    private final Iterable<Tag> extraTags;

    /**
     * Keeps track of the current number of metrics. When this value changes, metrics are bound again.
     */
    private AtomicInteger currentSize = new AtomicInteger(0);

    /**
     * Kafka {@link Producer} metrics binder
     *
     * @param kafkaProducer producer instance to be instrumented
     * @param tags          additional tags
     */
    public KafkaMetrics(Producer<?, ?> kafkaProducer, Iterable<Tag> tags) {
        this(kafkaProducer::metrics, tags);
    }

    /**
     * Kafka {@link Producer} metrics binder
     *
     * @param kafkaProducer producer instance to be instrumented
     */
    public KafkaMetrics(Producer<?, ?> kafkaProducer) {
        this(kafkaProducer::metrics);
    }

    /**
     * Kafka {@link Consumer} metrics binder
     *
     * @param kafkaConsumer consumer instance to be instrumented
     * @param tags          additional tags
     */
    public KafkaMetrics(Consumer<?, ?> kafkaConsumer, Iterable<Tag> tags) {
        this(kafkaConsumer::metrics, tags);
    }

    /**
     * Kafka {@link Consumer} metrics binder
     *
     * @param kafkaConsumer consumer instance to be instrumented
     */
    public KafkaMetrics(Consumer<?, ?> kafkaConsumer) {
        this(kafkaConsumer::metrics);
    }

    /**
     * {@link KafkaStreams} metrics binder
     *
     * @param kafkaStreams instance to be instrumented
     * @param tags         additional tags
     */
    public KafkaMetrics(KafkaStreams kafkaStreams, Iterable<Tag> tags) {
        this(kafkaStreams::metrics, tags);
    }

    /**
     * {@link KafkaStreams} metrics binder
     *
     * @param kafkaStreams instance to be instrumented
     */
    public KafkaMetrics(KafkaStreams kafkaStreams) {
        this(kafkaStreams::metrics);
    }

    /**
     * Kafka {@link AdminClient} metrics binder
     *
     * @param adminClient instance to be instrumented
     * @param tags        additional tags
     */
    public KafkaMetrics(AdminClient adminClient, Iterable<Tag> tags) {
        this(adminClient::metrics, tags);
    }

    /**
     * Kafka {@link AdminClient} metrics binder
     *
     * @param adminClient instance to be instrumented
     */
    public KafkaMetrics(AdminClient adminClient) {
        this(adminClient::metrics);
    }

    KafkaMetrics(Supplier<Map<MetricName, ? extends Metric>> metricsSupplier) {
        this(metricsSupplier, emptyList());
    }

    KafkaMetrics(Supplier<Map<MetricName, ? extends Metric>> metricsSupplier,
            Iterable<Tag> extraTags) {
        this.metricsSupplier = metricsSupplier;
        this.extraTags = extraTags;
    }

    @Override
    public void bindTo(MeterRegistry registry) {
        checkAndBindMetrics(registry);
    }

    /**
     * Gather metrics from the Kafka metrics API and register Meters.
     * <p>
     * As this is a one-off execution when binding a Kafka client, Meters include a call to this
     * validation to double-check new metrics when returning values. This should only add the cost
     * of validating the registered meter count when no new meters are present.
     */
    void checkAndBindMetrics(MeterRegistry registry) {
        Map<MetricName, ? extends Metric> metrics = metricsSupplier.get();
        // Only happens the first time the number of metrics changes
        if (currentSize.get() != metrics.size()) {
            currentSize.set(metrics.size());
            Map<String, Set<Meter>> boundMeters = new HashMap<>();
            // Register meters
            metrics.forEach((name, metric) -> {
                // Filter out metrics from group "app-info", which contains metadata
                if (METRIC_GROUP_APP_INFO.equals(name.group())) {
                    currentSize.incrementAndGet();
                    return;
                }
                Meter meter = bindMeter(registry, metric);
                // Collect metrics with the same name to validate the number of labels
                Set<Meter> meters = boundMeters.get(metric.metricName().name());
                if (meters == null) meters = new HashSet<>();
                meters.add(meter);
                boundMeters.put(metric.metricName().name(), meters);
            });

            // Remove meters with a lower number of tags
            boundMeters.forEach((metricName, meters) -> {
                if (meters.size() > 1) {
                    // Find the largest number of tags
                    int maxTagsSize = 0;
                    for (Meter meter : meters) {
                        int size = meter.getId().getTags().size();
                        if (maxTagsSize < size) maxTagsSize = size;
                    }
                    // Remove meters with a lower number of tags
                    for (Meter meter : meters) {
                        if (meter.getId().getTags().size() < maxTagsSize) registry.remove(meter);
                    }
                }
            });
        }
    }

    @NotNull private Meter bindMeter(MeterRegistry registry, Metric metric) {
        String metricName = metricName(metric);
        Meter meter;
        if (metricName.endsWith("total") || metricName.endsWith("count")) {
            meter = registerCounter(registry, metric, metricName, extraTags);
        } else if (metricName.endsWith("min")
                || metricName.endsWith("max")
                || metricName.endsWith("avg")) {
            meter = registerGauge(registry, metric, metricName, extraTags);
        } else if (metricName.endsWith("rate")) {
            meter = registerTimeGauge(registry, metric, metricName, extraTags);
        } else {
            meter = registerGauge(registry, metric, metricName, extraTags);
        }
        return meter;
    }

    private TimeGauge registerTimeGauge(MeterRegistry registry, Metric metric, String metricName, Iterable<Tag> extraTags) {
        return TimeGauge.builder(metricName, metric, TimeUnit.SECONDS, toMetricValue(registry))
            .tags(metricTags(metric))
            .tags(extraTags)
            .description(metric.metricName().description())
            .register(registry);
    }

    private Gauge registerGauge(MeterRegistry registry, Metric metric, String metricName, Iterable<Tag> extraTags) {
        return Gauge.builder(metricName, metric, toMetricValue(registry))
            .tags(metricTags(metric))
            .tags(extraTags)
            .description(metric.metricName().description())
            .register(registry);
    }

    private FunctionCounter registerCounter(MeterRegistry registry, Metric metric, String metricName, Iterable<Tag> extraTags) {
        return FunctionCounter.builder(metricName, metric, toMetricValue(registry))
            .tags(metricTags(metric))
            .tags(extraTags)
            .description(metric.metricName().description())
            .register(registry);
    }

    private ToDoubleFunction<Metric> toMetricValue(MeterRegistry registry) {
        return metric -> {
            // Double-check if new metrics are registered; if not (the common scenario)
            // this only adds a metrics-count validation
            checkAndBindMetrics(registry);
            if (metric.metricValue() instanceof Double) {
                return (double) metric.metricValue();
            } else {
                return 0.0;
            }
        };
    }

    private List<Tag> metricTags(Metric metric) {
        return metric.metricName().tags()
            .entrySet()
            .stream()
            .map(entry -> Tag.of(entry.getKey(), entry.getValue()))
            .collect(Collectors.toList());
    }

    private String metricName(Metric metric) {
        String value =
            METRIC_NAME_PREFIX + metric.metricName().group() + "." + metric.metricName().name();
```

I didn't check which part it is coming from, but I noticed "metrics" is included in every metric name. This feels superfluous and just makes the name longer. What do you think about getting rid of the "metrics" part of the name? Unless of course there are some Kafka metrics about metrics. Or if Kafka users would expect the name with the "metrics" part in it, then maybe it is better to keep it. What do you think?
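For context, the binder builds names as METRIC_NAME_PREFIX + group + "." + name, and Kafka client metric groups conventionally end in "-metrics" (e.g. "producer-metrics"), which is presumably where the repeated "metrics" comes from. A minimal sketch of that naming logic (the group and metric names here are illustrative, not an exhaustive list):

```java
public class MetricNameSketch {
    static final String METRIC_NAME_PREFIX = "kafka.";

    // Mirrors the naming logic of the binder's metricName(Metric) method,
    // with the group and name passed in directly for illustration.
    static String metricName(String group, String name) {
        return METRIC_NAME_PREFIX + group + "." + name;
    }

    public static void main(String[] args) {
        // Kafka client groups typically end in "-metrics", so "metrics"
        // shows up in every resulting meter name.
        System.out.println(metricName("producer-metrics", "record-send-total"));
        // kafka.producer-metrics.record-send-total
    }
}
```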

jeqo

comment created time in 17 days

Pull request review comment micrometer-metrics/micrometer

Kafka binder without JMX

```java
/**
 * Copyright 2020 Pivotal Software, Inc.
 * <p>
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 * <p>
 * https://www.apache.org/licenses/LICENSE-2.0
 * <p>
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.micrometer.core.instrument.binder.kafka;

import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

@Testcontainers
@Tag("docker")
class KafkaClientMetricsIT {
    @Container
    private KafkaContainer kafkaContainer = new KafkaContainer();

    @Test
    void shouldManageProducerAndConsumerMetrics() {
        SimpleMeterRegistry registry = new SimpleMeterRegistry();

        assertEquals(0, registry.getMeters().size());

        Properties producerConfigs = new Properties();
        producerConfigs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                kafkaContainer.getBootstrapServers());
        Producer<String, String> producer = new KafkaProducer<>(
                producerConfigs, new StringSerializer(), new StringSerializer());

        new KafkaMetrics(producer).bindTo(registry);

        int producerMetrics = registry.getMeters().size();
        assertTrue(producerMetrics > 0);

        Properties consumerConfigs = new Properties();
        consumerConfigs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
                kafkaContainer.getBootstrapServers());
        consumerConfigs.put(ConsumerConfig.GROUP_ID_CONFIG, "test");
        Consumer<String, String> consumer = new KafkaConsumer<>(
                consumerConfigs, new StringDeserializer(), new StringDeserializer());

        new KafkaMetrics(consumer).bindTo(registry);

        int producerAndConsumerMetrics = registry.getMeters().size();
        assertTrue(producerAndConsumerMetrics > producerMetrics);

        String topic = "test";
        producer.send(new ProducerRecord<>(topic, "key", "value"));
        producer.flush();

        registry.getMeters()
                .forEach(meter -> System.out.println(meter.getId() + " => " + meter.measure()));
```

This is nice for checking registered metrics and values when running the test locally. 👍

jeqo

comment created time in 17 days

Pull request review comment micrometer-metrics/micrometer

Kafka binder without JMX

(Diff context: duplicate of the KafkaMetrics source shown above; this comment is on the "Remove meters with lower number of tags" pass in checkAndBindMetrics.)

Could we prevent both from being registered at the same time? That is, remove the meters with the lower number of tags before adding the meter with more tags? As it is, there is a race condition where meters could be retrieved (published/polled/scraped) with more than one meter registered with the same name but different tags.
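A stdlib-only sketch of the ordering the reviewer suggests (the string IDs and set-based registry are hypothetical stand-ins, not Micrometer types): remove the stale, lower-tag meter before registering its replacement, so no intermediate state exposes two meters with the same name.

```java
import java.util.HashSet;
import java.util.Set;

public class MeterSwapSketch {
    // Swap a meter for a higher-tag replacement without an intermediate
    // state in which both are registered: remove first, then add.
    static Set<String> swap(Set<String> registry, String oldId, String newId) {
        registry.remove(oldId);
        registry.add(newId);
        return registry;
    }

    public static void main(String[] args) {
        // Stand-in for a meter registry keyed by "name{tags}" strings.
        Set<String> registry = new HashSet<>();
        registry.add("records-lag{client-id=c1}"); // existing meter, fewer tags

        swap(registry, "records-lag{client-id=c1}",
                "records-lag{client-id=c1,topic=t,partition=0}");

        // Exactly one meter with this name remains at all times.
        System.out.println(registry.size()); // 1
    }
}
```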

jeqo

comment created time in 17 days

Pull request review comment micrometer-metrics/micrometer

Kafka binder without JMX

(Diff context: duplicate of the KafkaMetrics source shown above; this comment is on the line "Filter out metrics from group \"app-info\", that includes metadata" in checkAndBindMetrics.)

Just for my education, is there some information on this in the Kafka documentation?

jeqo

comment created time in 17 days

Pull request review comment micrometer-metrics/micrometer

Kafka binder without JMX

(Diff context: duplicate of the KafkaMetrics source shown above; this comment is on the size check "if (currentSize.get() != metrics.size())" in checkAndBindMetrics.)

Will metrics only ever be added to the map returned by the Kafka client? If metrics may be removed, it seems like a change that removes and adds the same number of metrics would go unnoticed by this logic.
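The reviewer's point can be shown with plain maps (the metric keys here are illustrative, not real Kafka MetricName objects): a removal plus an addition leaves the size unchanged, so a size-based check misses the change, while a key-set comparison would catch it.

```java
import java.util.Map;

public class SizeCheckSketch {
    // The binder's current style of check: re-bind only when the map size changes.
    static boolean sizeChanged(Map<String, Double> before, Map<String, Double> after) {
        return before.size() != after.size();
    }

    // A check that would also catch a simultaneous add + remove.
    static boolean keysChanged(Map<String, Double> before, Map<String, Double> after) {
        return !before.keySet().equals(after.keySet());
    }

    public static void main(String[] args) {
        Map<String, Double> before = Map.of("records-lag", 1.0, "records-consumed-total", 2.0);
        // One metric removed and a different one added: same size, different contents.
        Map<String, Double> after = Map.of("records-lag", 1.0, "bytes-consumed-total", 3.0);

        System.out.println(sizeChanged(before, after)); // false
        System.out.println(keysChanged(before, after)); // true
    }
}
```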

jeqo

comment created time in 17 days

issue comment micrometer-metrics/micrometer

gmetric4j license issue

I have added a note to the micrometer-registry-ganglia module's README in d580cb7. I have also added the note to the documentation site with https://github.com/micrometer-metrics/micrometer-docs/issues/118. I am not a lawyer, but I think it is up to users of micrometer-registry-ganglia to determine whether what is documented prevents them from using it or not based on their situation. As far as I can tell, our usage does not force the LGPL license on Micrometer. Popular libraries like Hibernate are LGPL licensed, and we provide optional integration with them as well without being LGPL licensed ourselves.

srdo

comment created time in 17 days

issue closedmicrometer-metrics/micrometer-docs

Document gmetric4j usage by ganglia registry implementation

See https://github.com/micrometer-metrics/micrometer/issues/1354

closed time in 17 days

shakuzen

push eventmicrometer-metrics/micrometer-docs

Tommy Ludwig

commit sha dd9a549b0fb57e28d09ef4e12490ebe1bd0a5c7f

Note Ganglia registry's gmetric4j usage Closes #118

view details

Tommy Ludwig

commit sha 794f74af016eb988dd62535d18a9e330aa24b989

Merge branch 'master' of github.com:micrometer-metrics/micrometer-docs

view details

push time in 17 days

issue openedmicrometer-metrics/micrometer-docs

Document licensing implications of ganglia registry

See https://github.com/micrometer-metrics/micrometer/issues/1354

created time in 17 days

push eventmicrometer-metrics/micrometer

Tommy Ludwig

commit sha d580cb7249bc3b4a27b8b8b6e2c5e3e78c9d00b2

Note ganglia registry's dependency's licensing See #1354

view details

Tommy Ludwig

commit sha 90119b9c73a4d373d8086b9b76f23600d8fe22eb

Merge branch '1.1.x' into 1.3.x

view details

Tommy Ludwig

commit sha 9356d5bdad23d9b532a401eaab3b5e5c6c0d08c9

Merge branch '1.3.x'

view details

push time in 17 days

push eventmicrometer-metrics/micrometer

Tommy Ludwig

commit sha d580cb7249bc3b4a27b8b8b6e2c5e3e78c9d00b2

Note ganglia registry's dependency's licensing See #1354

view details

Tommy Ludwig

commit sha 90119b9c73a4d373d8086b9b76f23600d8fe22eb

Merge branch '1.1.x' into 1.3.x

view details

push time in 17 days

push eventmicrometer-metrics/micrometer

Tommy Ludwig

commit sha d580cb7249bc3b4a27b8b8b6e2c5e3e78c9d00b2

Note ganglia registry's dependency's licensing See #1354

view details

push time in 17 days

issue commentmicrometer-metrics/micrometer

Not sending metrics after stop and start of the StatsdMeterRegistry

@uchandroth could you try again with the latest snapshots now that a fix has been merged for the NoClassDefFoundError?

uchandroth

comment created time in 20 days

push eventmicrometer-metrics/micrometer

Tommy Ludwig

commit sha 375f5b78093c252b4ff7bb4f1772038b3c975b82

Move comments about pinned versions to dependencies.gradle These comments lack the context of what version is being pinned when left in the build files, which usually no longer specify a version, deferring to the dependencies.gradle file instead.

view details

push time in 20 days

push eventmicrometer-metrics/micrometer

Anuraag Agrawal

commit sha d50027b38fa992957d072d67d3775cb0daaaa8af

Fix shadowJar compile classpath to use the modern compileClasspath instead of deprecated compile. (#1844)

view details

push time in 20 days

PR merged micrometer-metrics/micrometer

Fix shadowJar compile classpath to use the modern compileClasspath in…

…stead of deprecated compile.

We moved all dependencies from compile to api or implementation - compileClasspath is the one configuration to rule them all

Fixes #1843

+5 -3

0 comment

1 changed file

anuraaga

pr closed time in 20 days

issue closedmicrometer-metrics/micrometer

Shaded dependencies missing in snapshots

micrometer-registry-statsd 1.4.0-SNAPSHOT with Spring Boot 2.2.0.RELEASE or 2.2.4.RELEASE throws java.lang.NoClassDefFoundError: io/micrometer/shaded/reactor/core/publisher/FluxSink. The FluxSink class was part of the micrometer-registry-statsd 1.3.0 jar.

Originally posted by @uchandroth in https://github.com/micrometer-metrics/micrometer/issues/1676#issuecomment-582821986


Looks like the shaded classes are not included in the produced JAR artifact anymore. Possibly related to build changes made recently in master.

closed time in 20 days

shakuzen

push eventmicrometer-metrics/micrometer

Tommy Ludwig

commit sha 364bbefb25ac54064e2494cff51fd00834cbea35

Polish

view details

push time in 21 days

issue commentmicrometer-metrics/micrometer

Integration test against supported Elasticsearch versions

It took a while, but we finally got this set up and can ensure compatibility going forward. Thanks for the work on this, @izeye

shakuzen

comment created time in 21 days

push eventmicrometer-metrics/micrometer

Johnny Lim

commit sha 0fcd8eea182741b020ae0ff9a6955c4f711a8bf0

Add integration tests for Elasticsearch meter registry (#1434) Tests the ElasticMeterRegistry against a running Elasticsearch instance in a docker container using testcontainers. Currently tested against Elasticsearch 6 and 7. See gh-1837 Closes gh-1429 Co-authored-by: Adrian Cole <adriancole@users.noreply.github.com> Co-authored-by: Tommy Ludwig <8924140+shakuzen@users.noreply.github.com>

view details

push time in 21 days

PR merged micrometer-metrics/micrometer

Add integration tests for Elasticsearch meter registry registry: elastic

This PR adds integration tests on Elasticsearch meter registry for Elasticsearch 5 and 6. We can add one for Elasticsearch 7 once #1428 has been merged.

Closes gh-1429

+194 -0

10 comments

5 changed files

izeye

pr closed time in 21 days

issue closedmicrometer-metrics/micrometer

Integration test against supported Elasticsearch versions

Let's set up some integration tests (probably with testcontainers) so we can ensure via CI that this works with all the versions we say? https://www.testcontainers.org/modules/elasticsearch/

Originally posted by @shakuzen in https://github.com/micrometer-metrics/micrometer/pull/1428#issuecomment-493867872

closed time in 21 days

shakuzen

issue commentmicrometer-metrics/micrometer

Not sending metrics after stop and start of the StatsdMeterRegistry

Thanks for trying things out. Indeed it looks like the shaded dependencies are missing from the latest snapshots. I've opened #1843 to fix that. Once that is fixed, if you could try again, things should work.

uchandroth

comment created time in 21 days

issue commentmicrometer-metrics/micrometer

Shaded dependencies missing in snapshots

Any ideas on this one @anuraaga? I'll try taking a look later but I figured you might be faster than me.

shakuzen

comment created time in 21 days

issue openedmicrometer-metrics/micrometer

Shaded dependencies missing in snapshots

micrometer-registry-statsd 1.4.0-SNAPSHOT with Spring Boot 2.2.0.RELEASE or 2.2.4.RELEASE throws java.lang.NoClassDefFoundError: io/micrometer/shaded/reactor/core/publisher/FluxSink. The FluxSink class was part of the micrometer-registry-statsd 1.3.0 jar.

Originally posted by @uchandroth in https://github.com/micrometer-metrics/micrometer/issues/1676#issuecomment-582821986


Looks like the shaded classes are not included in the produced JAR artifact anymore. Possibly related to build changes made recently in master.

created time in 21 days

Pull request review commentmicrometer-metrics/micrometer

Add integration tests for Elasticsearch meter registry

/**
 * Copyright 2019 Pivotal Software, Inc.
 * <p>
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 * <p>
 * https://www.apache.org/licenses/LICENSE-2.0
 * <p>
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.micrometer.elastic;

import com.jayway.jsonpath.JsonPath;
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.ipc.http.HttpSender;
import io.micrometer.core.ipc.http.HttpUrlConnectionSender;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.testcontainers.elasticsearch.ElasticsearchContainer;
import org.testcontainers.junit.jupiter.Container;

import java.time.Duration;
import java.util.concurrent.TimeUnit;

import static org.assertj.core.api.Assertions.assertThat;

/**
 * Base class for integration tests on {@link ElasticMeterRegistry}.
 *
 * @author Johnny Lim
 */
abstract class AbstractElasticsearchMeterRegistryIntegrationTest {

    private static final String USER = "elastic";
    private static final String PASSWORD = "changeme";

    @Container
    private final ElasticsearchContainer elasticsearch = new ElasticsearchContainer(getDockerImageName(getVersion()));

    private final HttpSender httpSender = new HttpUrlConnectionSender();

    private String host;
    private ElasticMeterRegistry registry;

    protected abstract String getVersion();

    @BeforeEach
    void setUp() {
        host = "http://" + elasticsearch.getHttpHostAddress();

        ElasticConfig config = new ElasticConfig() {
            @Override
            public String get(String key) {
                return null;
            }

            @Override
            public Duration step() {
                return Duration.ofSeconds(10);
            }

            @Override
            public String host() {
                return host;
            }

            @Override
            public String userName() {
                return USER;
            }

            @Override
            public String password() {
                return PASSWORD;
            }
        };
        registry = ElasticMeterRegistry.builder(config).build();
    }

    @Test
    void indexTemplateShouldApply() throws Throwable {
        String response = sendHttpGet(host);
        String versionNumber = JsonPath.parse(response).read("$.version.number");
        assertThat(versionNumber).isEqualTo(getVersion());

        Counter counter = registry.counter("test.counter");
        counter.increment();

        TimeUnit.SECONDS.sleep(20);

        String indexName = registry.indexName();
        String mapping = sendHttpGet(host + "/" + indexName + "/_mapping");
        String countType = JsonPath.parse(mapping).read(getCountTypePath(indexName));
        assertThat(countType).isEqualTo("double");
    }

    protected String getCountTypePath(String indexName) {
        return "$." + indexName + ".mappings.doc.properties.count.type";
    }

    private String sendHttpGet(String uri) throws Throwable {
        return httpSender.get(uri).withBasicAuthentication(USER, PASSWORD).send().body();
    }

    private static String getDockerImageName(String version) {
        return "docker.elastic.co/elasticsearch/elasticsearch:" + version;

We probably want the OSS version due to licensing

        return "docker.elastic.co/elasticsearch/elasticsearch-oss:" + version;
izeye

comment created time in 21 days

Pull request review commentmicrometer-metrics/micrometer

Add integration tests for Elasticsearch meter registry

    @Test
    void indexTemplateShouldApply() throws Throwable {
        String response = sendHttpGet(host);
        String versionNumber = JsonPath.parse(response).read("$.version.number");
        assertThat(versionNumber).isEqualTo(getVersion());

        Counter counter = registry.counter("test.counter");
        counter.increment();

        TimeUnit.SECONDS.sleep(20);

Not critical, but it would be nice to not always have to wait 20 seconds in this test. Could we use Awaitility or some other mechanism to wait until a condition is met or a timeout? We can also manually do a publish without waiting for the step.
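The suggestion can be sketched without the Awaitility library; a hypothetical poll-until-condition helper (names are illustrative) replaces the fixed 20-second sleep with a bounded wait that returns as soon as the condition holds:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public class WaitUntil {
    // Poll until the condition holds or the timeout elapses,
    // instead of an unconditional fixed-length sleep.
    static boolean await(BooleanSupplier condition, long timeoutMillis, long pollMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return false;
            }
            try {
                TimeUnit.MILLISECONDS.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }
}
```

A fast backend then costs milliseconds instead of a fixed 20 seconds, while a slow one still fails within the timeout.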

izeye

comment created time in 21 days

Pull request review commentmicrometer-metrics/micrometer

Add integration tests for Elasticsearch meter registry

/**
 * Copyright 2019 Pivotal Software, Inc.
 * <p>
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 * <p>
 * https://www.apache.org/licenses/LICENSE-2.0
 * <p>
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.micrometer.elastic;

import org.junit.jupiter.api.Tag;
import org.testcontainers.junit.jupiter.Testcontainers;

/**
 * Integration tests on {@link ElasticMeterRegistry} for Elasticsearch 5.
 *
 * @author Johnny Lim
 */
@Testcontainers
@Tag("docker")
class ElasticsearchMeterRegistryElasticsearch5IntegrationTest
        extends AbstractElasticsearchMeterRegistryIntegrationTest {

    @Override
    protected String getVersion() {
        return "5.6.15";

For the record, I agree we shouldn't test this if we don't plan to fix a problem found only in this version. And I say we generally shouldn't fix things only affecting a version no longer maintained.

izeye

comment created time in 22 days

issue commentmicrometer-metrics/micrometer

AWS SDK needs upgrade to v2.10.56 to prevent use of stale IP

I'm assuming you're using the CloudWatch registry which has a dependency on the AWS SDK. You should be able to override the version of the dependency in your application.

tomerzel87

comment created time in 22 days

push eventjeqo/micrometer

Tommy Ludwig

commit sha 9d406fc390e66e4fd0e18d749914230a27456ca7

Update testcontainers dependencies declaration

view details

push time in 23 days

delete branch shakuzen/micrometer

delete branch : yo-dawg-docker

delete time in 24 days

push eventmicrometer-metrics/micrometer

Tommy Ludwig

commit sha 7b5dbfe6373c3c67f3751b416619a86b7aef991f

Support running docker in tests on CircleCI (#1837) Adds a `dockerTest` task to the build that runs the tests tagged with `docker`, which are excluded in the default `test` task. This task will be used in a CircleCI job `docker-test` using a `machine` executor, which is required to use Testcontainers on CircleCI.

view details

push time in 24 days

PR merged micrometer-metrics/micrometer

Support running docker in tests on CircleCI build type: task

We want to use Testcontainers for testing various functionality against real backends. Such tests need to run on a machine-type executor on CircleCI, so this adds a separate CircleCI job for them, using a newly added Gradle task dockerTest that only runs tests tagged with JUnit Jupiter's @Tag("docker") (see the sample test). The default test task does not run the docker-tagged tests.
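A sketch of how such a split can be wired in a Gradle build with the JUnit Platform (the exact task names and build files here are illustrative, not necessarily what the PR contains):

```groovy
// Default test task skips tests tagged with JUnit Jupiter's @Tag("docker")
test {
    useJUnitPlatform {
        excludeTags 'docker'
    }
}

// Separate task that runs only the docker-tagged tests,
// intended for a CI executor where Docker is available
task dockerTest(type: Test) {
    useJUnitPlatform {
        includeTags 'docker'
    }
}
```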

Unblocks things like #1429

Also upgrades to CircleCI's version 2.1 configuration which allows some deduplication of the executor, which was in order as different build images were being used in different jobs (by mistake). Also dedups the almost identical steps used in both the build and docker-test jobs.

+53 -21

4 comments

2 changed files

shakuzen

pr closed time in 24 days

pull request commentmicrometer-metrics/micrometer

Support running docker in tests on CircleCI build

Thanks for the review. Let me know if you have any issues or feedback after rebasing your PRs on this post-merge.

shakuzen

comment created time in 24 days

push eventshakuzen/micrometer

Tommy Ludwig

commit sha 9ea0c472fa2429d7f692f6064a0c6be49aa88040

Remove testcontainers dependencies

view details

push time in 24 days

push eventshakuzen/micrometer

Tommy Ludwig

commit sha 7967ca6dcffe43e8b807c5cae31bcd9cb15babeb

Remove POC test class

view details

push time in 24 days

pull request commentmicrometer-metrics/micrometer

Support running docker in tests on CircleCI build

This seems to be working well. The only thing to maybe look into later is getting the docker-test job to run faster: it takes longer than the build that runs thousands of tests, even though it only runs one test. See https://circleci.com/workflow-run/892f1433-6a71-4b02-bdc6-da968442f3d9

I will update the pull request to remove the placeholder test tagged with @Tag("docker") which was only there to ensure this worked.

Please take a look @izeye and @jeqo. Let me know what you think, since you both have a pull request blocked by not being able to use testcontainers on our CI build.

shakuzen

comment created time in 24 days

push eventshakuzen/micrometer

Tommy Ludwig

commit sha d24244d4f8238de72acc6c3b6d98df891aba4631

Make reusable gradlew-build command

view details

push time in 24 days

push eventshakuzen/micrometer

Tommy Ludwig

commit sha a4290fa7650fdfa305b79a2cf3049ad23b2c9ea7

Make reusable gradlew-build command

view details

push time in 24 days

push eventshakuzen/micrometer

Tommy Ludwig

commit sha 7795bec8c118b9099795654f94900a32eed76c33

Separate out docker tests Adds a `dockerTest` task to the build that runs the tests tagged with `docker`, which are excluded in the default `test` task. This will be used in the CircleCI build from a separate `machine` executor, which is required to use Testcontainers on CircleCI.

view details

push time in 24 days

push eventshakuzen/micrometer

Tommy Ludwig

commit sha b73f10ca97734e46621fab34b8f1af9c2bf6edc6

Separate out docker tests Adds a `dockerTest` task to the build that runs the tests tagged with `docker`, which are excluded in the default `test` task. This will be used in the CircleCI build from a separate `machine` executor, which is required to use Testcontainers on CircleCI.

view details

push time in 24 days

pull request commentmicrometer-metrics/micrometer

Allow docker in docker for CircleCI build

Well, that didn't work at all (as mentioned in https://github.com/testcontainers/testcontainers-java/issues/1014#issuecomment-492872176). I'm going to take a new approach of using a JUnit Jupiter tag for tests that need docker support and excluding them from the default run. I'll make a separate machine executor that runs the docker tests in parallel. This will make it nicer for those building locally that don't have docker installed/running and for parallelizing the running of docker tests, which inherently have some extra overhead.

shakuzen

comment created time in 24 days

push eventshakuzen/micrometer

Tommy Ludwig

commit sha 5aa4f0c416486c3721573b1a8bd7024914bb15e0

Curse you, copyright headers

view details

push time in 24 days

PR opened micrometer-metrics/micrometer

Allow docker in docker for CircleCI build

We want to use Testcontainers for testing various functionality with real backends running. Hopefully the setup_remote_docker command allows this to work.

See https://circleci.com/docs/2.0/building-docker-images/

Also upgrades to CircleCI's version 2.1 configuration which allows some deduplication of the executor, which was in order as different build images were being used in different jobs (by mistake).

+35 -18

0 comment

3 changed files

pr created time in 24 days

create barnchshakuzen/micrometer

branch : yo-dawg-docker

created branch time in 24 days

push eventmicrometer-metrics/prometheus-rsocket-proxy

Tommy Ludwig

commit sha 7839625c7136a4b0167e97ac559c5a772ef9fd8e

Upgrade Gradle Wrapper to 5.6.4 Resolves #32

view details

push time in 24 days

issue closedmicrometer-metrics/prometheus-rsocket-proxy

Upgrade Gradle Wrapper to 5.6.4

Some more build changes are required to upgrade to Gradle 6.x I think, so upgrade to the latest 5.x version in the meantime.

closed time in 24 days

shakuzen

issue openedmicrometer-metrics/prometheus-rsocket-proxy

Upgrade Gradle Wrapper to 5.6.4

Some more build changes are required to upgrade to Gradle 6.x I think, so upgrade to the latest 5.x version in the meantime.

created time in 24 days

push eventmicrometer-metrics/prometheus-rsocket-proxy

Tommy Ludwig

commit sha d928ce0b9552e8caa4309b034f89e900ba0953f0

Use JUnit Jupiter for spring module tests

view details

push time in 24 days

push eventmicrometer-metrics/prometheus-rsocket-proxy

Tommy Ludwig

commit sha b5d946d9a8c1f8e4f2f44abe1f1b910b1d58356b

Upgrade to Spring Boot 2.2.4 Resolves #31

view details

push time in 24 days

push eventmicrometer-metrics/prometheus-rsocket-proxy

Toshiaki Maki

commit sha 69d6ffe15f42dd38d17adcb9e44969157722d78d

Support configuring transport and secure in the starter (#30) Also set `localhost` as the default value of management.metrics.export.prometheus.rsocket.host so that an error doesn't occur when launching an app that just adds this library.

view details

push time in 24 days

PR merged micrometer-metrics/prometheus-rsocket-proxy

Support configuring transport and secure in the starter enhancement

This PR adds

  • management.metrics.export.prometheus.rsocket.transport
  • management.metrics.export.prometheus.rsocket.secure

so that client apps that use the starter can configure the WebSocket protocol.

I have tested it worked with following properties

management.metrics.export.prometheus.rsocket.host=prometheus-proxy-ws.cfapps.io
management.metrics.export.prometheus.rsocket.transport=websocket
management.metrics.export.prometheus.rsocket.port=8443
management.metrics.export.prometheus.rsocket.secure=true

# the endpoint to scrape is https://prometheus-proxy-scrape.cfapps.io/metrics/proxy
+184 -5

0 comment

5 changed files

making

pr closed time in 24 days

push eventmicrometer-metrics/micrometer

Johnny Lim

commit sha b82e16b59bc30ac68f9eac20588ce5910547c5a7

Time remaining selectFrom() methods for jOOQ (#1836)

view details

push time in 24 days

PR merged micrometer-metrics/micrometer

Time remaining selectFrom() methods for jOOQ polish

This PR times the two remaining selectFrom() methods in MetricsDSLContext, as they appear to have been missed.

+2 -2

1 comment

1 changed file

izeye

pr closed time in 24 days

pull request commentmicrometer-metrics/micrometer

Time remaining selectFrom() methods for jOOQ

Thanks, Johnny.

izeye

comment created time in 24 days

issue commentopenzipkin/zipkin

Top logo missing

URL has been updated, so the 404 is fixed for now. Still the question of whether the logo should be packaged with the application or not.

bigon

comment created time in a month

issue closedopenzipkin/zipkin-classic

Zipkin logo is missing on the Classic Zipkin UI

Hi. The Classic Zipkin UI is serving a 404 for the Zipkin logo. The path it is pointing to is https://zipkin.io/public/img/zipkin-logo-200x119.jpg.

See https://github.com/openzipkin/zipkin-classic/blob/master/templates/layout.mustache#L3

closed time in a month

msmsimondean

issue commentopenzipkin/zipkin-classic

Zipkin logo is missing on the Classic Zipkin UI

Resolved by #7

msmsimondean

comment created time in a month

push eventmicrometer-metrics/micrometer

Ivo de Concini

commit sha b6306b2ff6ca87d0cc1bf24371c84336830d873f

Use histogram snapshot when publishing Distribution Summary in ElasticMeterRegistry (#1833) Fixes #1831

view details

Tommy Ludwig

commit sha 296d78a3eac32207779290bcc94983f38ea002d1

Merge branch '1.1.x' into 1.3.x

view details

Tommy Ludwig

commit sha 05e66eba7c00aaab207ea139ccb5d2674547a113

Merge branch '1.3.x'

view details

push time in a month

issue closedmicrometer-metrics/micrometer

ElasticMeterRegistry#writeSummary takes snapshot but then ignores it

I was looking through the code of the ElasticMeterRegistry and noticed, that when a DistributionSummary is published, a snapshot is taken, but then ignored.

Optional<String> writeSummary(DistributionSummary summary) {
        summary.takeSnapshot();
        return Optional.of(writeDocument(summary, builder -> {
            builder.append(",\"count\":").append(summary.count());
            builder.append(",\"sum\":").append(summary.totalAmount());
            builder.append(",\"mean\":").append(summary.mean());
            builder.append(",\"max\":").append(summary.max());
        }));
    }

Probably it should be something like:

Optional<String> writeSummary(DistributionSummary summary) {
        HistogramSnapshot snapshot = summary.takeSnapshot();
        return Optional.of(writeDocument(summary, builder -> {
            builder.append(",\"count\":").append(snapshot.count());
        builder.append(",\"sum\":").append(snapshot.total());
            builder.append(",\"mean\":").append(snapshot.mean());
            builder.append(",\"max\":").append(snapshot.max());
        }));
    }

... to ensure that the metrics are consistent with one another.

If you agree, I would gladly provide a pull request.

Cheers, Ivo

closed time in a month

ideco

push eventmicrometer-metrics/micrometer

Ivo de Concini

commit sha b6306b2ff6ca87d0cc1bf24371c84336830d873f

Use histogram snapshot when publishing Distribution Summary in ElasticMeterRegistry (#1833) Fixes #1831

view details

Tommy Ludwig

commit sha 296d78a3eac32207779290bcc94983f38ea002d1

Merge branch '1.1.x' into 1.3.x

view details

push time in a month

push eventmicrometer-metrics/micrometer

Ivo de Concini

commit sha b6306b2ff6ca87d0cc1bf24371c84336830d873f

Use histogram snapshot when publishing Distribution Summary in ElasticMeterRegistry (#1833) Fixes #1831

view details

push time in a month

PR merged micrometer-metrics/micrometer

Use histogram snapshot when publishing Distribution Summary

in Elastic Meter Registry. Fixes #1831

+6 -5

0 comment

1 changed file

ideco

pr closed time in a month

issue commentmicrometer-metrics/micrometer

Support InfluxDB line protocol over UDP

Looking ahead to InfluxDB 2, I found this in their README (emphasis on UDP is mine):

What is NOT planned?

  • Direct support by InfluxDB for CollectD, StatsD, Graphite, or UDP. ACTION REQUIRED: Leverage Telegraf 1.9+ along with the InfluxDB v2.0 output plugin to translate these protocols/formats.
wei-hai

comment created time in a month

issue commentmicrometer-metrics/micrometer

InfluxMeterRegistry - Support for sending data using UDP

@cmallwitz thank you for offering to help. Let's continue the discussion over on the preexisting issue. For reference, in our StatsD registry, we have a configuration option to select the protocol.

cmallwitz

comment created time in a month

issue closedmicrometer-metrics/micrometer

InfluxMeterRegistry - Support for sending data using UDP

Hi

I would be interested in an option to send data using UDP to InfluxDB and even willing to try to contribute some code. Questions:

  1. as far as I can tell there is no existing Influx UDP backend for micrometer-metrics, is there?
  2. What should the config to trigger this be? I could see "influx.uri=udp://localhost:8089", although udp isn't really a URI scheme...

Thanks Christian
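On the second question, java.net.URI happily parses unregistered schemes, so a udp:// value is workable even though UDP is not a registered URI scheme; a quick check:

```java
import java.net.URI;

public class UdpUriDemo {
    public static void main(String[] args) {
        // java.net.URI accepts any syntactically valid scheme, registered or not,
        // so a value like influx.uri=udp://localhost:8089 can still be parsed normally.
        URI uri = URI.create("udp://localhost:8089");
        System.out.println(uri.getScheme()); // udp
        System.out.println(uri.getHost());   // localhost
        System.out.println(uri.getPort());   // 8089
    }
}
```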

closed time in a month

cmallwitz

issue commentmicrometer-metrics/micrometer

ElasticMeterRegistry#writeSummary takes snapshot but then ignores it

Indeed, this is a bug. Thank you for catching this and offering to send a fix. Would you please target the pull request at the 1.1.x branch so that we can include it in the maintenance releases?

ideco

comment created time in a month

issue commentmicrometer-metrics/micrometer

[Memory leak] Can not garbage collection with Micrometer Objects ?

If meters are continually being created with unique tags, memory usage will continue to increase. It would help to look at what meters exist to determine whether this is happening. Are you creating any meters yourself, or is it only what the auto-configured instrumentation creates? What specific versions of Spring Boot, Spring Batch, and Micrometer are you using? Of course, I would suggest using the latest available to ensure you aren't encountering a bug that's been fixed in a later version.
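The unbounded-growth scenario can be illustrated with a toy model (a plain Map standing in for the registry's meter map; not Micrometer's actual implementation, but the cardinality behavior is the same): each unique tag value creates a distinct meter id.

```java
import java.util.HashMap;
import java.util.Map;

public class MeterIdDemo {
    // Anti-pattern: a unique tag value (e.g. a job or request id) per event
    // registers a new meter every time, so the registry grows with traffic.
    static int distinctMeters(int events) {
        Map<String, Double> meters = new HashMap<>();
        for (int i = 0; i < events; i++) {
            meters.merge("jobs.processed|jobId=" + i, 1.0, Double::sum);
        }
        return meters.size();
    }

    // Bounded alternative: tag with a low-cardinality value such as a status,
    // so the meter count stays fixed no matter how many events occur.
    static int distinctMetersBounded(int events) {
        Map<String, Double> meters = new HashMap<>();
        for (int i = 0; i < events; i++) {
            String status = (i % 2 == 0) ? "ok" : "failed";
            meters.merge("jobs.processed|status=" + status, 1.0, Double::sum);
        }
        return meters.size();
    }
}
```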

If you want to disable metrics entirely, you could exclude the io.micrometer:micrometer-core dependency from the spring-boot-starter-actuator dependency, which would stop auto-configuration of metrics from happening. Disabling the metrics actuator endpoint does not prevent metrics from being collected; it just stops them from being exposed on that endpoint.
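A sketch of that exclusion in a Gradle build (Maven's `<exclusions>` element works similarly):

```groovy
dependencies {
    implementation('org.springframework.boot:spring-boot-starter-actuator') {
        // Without micrometer-core on the classpath, Spring Boot's metrics
        // auto-configuration backs off entirely.
        exclude group: 'io.micrometer', module: 'micrometer-core'
    }
}
```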

ducpm1310

comment created time in a month

push eventmicrometer-metrics/micrometer

Johnny Lim

commit sha b5ba5440a2eddab02a4eb64ff369e02592ff9ce8

Polish Stackdriver contribution (#1829)

view details

push time in a month

PR merged micrometer-metrics/micrometer

Polish Stackdriver contribution polish registry: stackdriver

This PR polishes Stackdriver contribution.

+13 -20

0 comment

3 changed files

izeye

pr closed time in a month

push eventmicrometer-metrics/micrometer

Johnny Lim

commit sha 3d2a7ee7657c6e7a69d3a8c798cfce4c96a7ee64

Add Gradle Wrapper Validation GitHub Action (#1824) See https://github.com/gradle/wrapper-validation-action

view details

Tommy Ludwig

commit sha d7a4f382e5b88148f0f180ef6516e0df77c8916b

Fix more NaN assertions

view details

Tommy Ludwig

commit sha b340f93006b184f26a54b2b5e66f5042fdc85fad

Merge branch '1.1.x' into 1.3.x

view details

Tommy Ludwig

commit sha 4b6f4ca60f1b88d344f41a7940d7e725e07f6b32

Merge branch '1.3.x'

view details

push time in a month

push eventmicrometer-metrics/micrometer

Johnny Lim

commit sha 3d2a7ee7657c6e7a69d3a8c798cfce4c96a7ee64

Add Gradle Wrapper Validation GitHub Action (#1824) See https://github.com/gradle/wrapper-validation-action

view details

Tommy Ludwig

commit sha d7a4f382e5b88148f0f180ef6516e0df77c8916b

Fix more NaN assertions

view details

Tommy Ludwig

commit sha b340f93006b184f26a54b2b5e66f5042fdc85fad

Merge branch '1.1.x' into 1.3.x

view details

push time in a month

push eventmicrometer-metrics/micrometer

Tommy Ludwig

commit sha d7a4f382e5b88148f0f180ef6516e0df77c8916b

Fix more NaN assertions

view details

push time in a month
