If you are wondering where the data on this site comes from, please visit https://api.github.com/users/leonkyneur/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

leonkyneur/packer-qemu 2

Docker image for building qemu images with packer

leonkyneur/acme-dns 0

Limited DNS server with RESTful HTTP API to handle ACME DNS challenges easily and securely.

leonkyneur/bitnami-docker-mariadb 0

Bitnami MariaDB Docker Image

leonkyneur/centos 0

Virtual machine templates for CentOS

leonkyneur/dnsproviders 0

DNS providers adapted for use in Caddy to solve the ACME DNS challenge

leonkyneur/docker 0

Docker - the open-source application container engine

leonkyneur/docker-autodiscover 0

My email Autodiscovery container

leonkyneur/docker-compose-elasticsearch-kibana 0

Docker Compose for Elasticsearch and Kibana

leonkyneur/dotfiles 0

YADR - The best vim,git,zsh plugins and the cleanest vimrc you've ever seen

started TypeStrong/ts-node

started time in 2 hours

started google/crfs

started time in 5 hours

started richardwilkes/cef

started time in 8 hours

pull request comment solarkennedy/puppet-consul

Make home directory location setting optional

@bastelfreak I am not sure I picked the right label here - feel free to change if I goofed

genebean

comment created time in 9 hours

issue comment lsc-project/lsc

Synchronised Updates are not appearing

You made a typo: dn=ca instead of dc=ca
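For illustration only, based on the configuration quoted later in this thread: with the typo corrected, every component of the DN suffix uses DC= (matching the target DN CN=Users,DC=gsacrd,DC=ab,DC=ca seen in the error log), e.g.:

```xml
<mainIdentifier>js:"cn=" + javax.naming.ldap.Rdn.escapeValue(srcBean.getDatasetFirstValueById("cn")) + ",CN=Users,DC=gsacrd,DC=ab,DC=ca"</mainIdentifier>
```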

GSACRD-TECH

comment created time in 11 hours

pull request comment solarkennedy/puppet-consul

consul user: set correct home

Based on #559 I have added the backwards-incompatible label here so the change log will be updated to reflect that this is a breaking change for existing users. A workaround is included in #575

bastelfreak

comment created time in 11 hours

PR opened solarkennedy/puppet-consul

Make home directory location setting optional

Fixes #559

+27 -1

0 comments

3 changed files

pr created time in 11 hours

issue comment lsc-project/lsc

Synchronised Updates are not appearing

Here is the error with the changes you suggested. Not sure why the synchronization is renaming when changeId is set to true? That is why I set it to false, but I understand why it needs to be true.

                       <mainIdentifier>js:"cn=" + javax.naming.ldap.Rdn.escapeValue(srcBean.getDatasetFirstValueById("cn")) + ",CN=Users,DN=gsacrd,DN=ab,dn=ca"</mainIdentifier>
                        <defaultDelimiter>;</defaultDelimiter>
                        <defaultPolicy>FORCE</defaultPolicy>
                        <conditions>
                                <create>true</create>
                                <update>true</update>
                                <delete>true</delete>
                                <changeId>true</changeId>
                        </conditions>

May 17 08:44:45 - ERROR - Error while renaming entry CN=skta-staff,CN=Users,DC=gsacrd,DC=ab,DC=ca in directory :javax.naming.NamingException: [LDAP: error code 80 - 00002089: UpdErr: DSID-031B0DCE, problem 5012 (DIR_ERROR), data 2
]; remaining name 'CN=skta-staff,CN=Users'
May 17 08:44:45 - ERROR - Error while synchronizing ID CN=skta-staff,CN=Users,DC=gsacrd,DC=ab,DC=ca: java.lang.Exception: Technical problem while applying modifications to the destination
# Mon May 17 08:44:45 MDT 2021
dn: CN=skta-staff,CN=Users,DC=gsacrd,DC=ab,DC=ca
changetype: modrdn
newrdn: cn=skta-staff
deleteoldrdn: 1
newsuperior: CN=Users,DN=gsacrd,DN=ab,dn=ca

May 17 08:44:45 - ERROR - All entries: 47, to modify entries: 47, successfully modified entries: 0, errors: 47
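For reference, a corrected version of the modrdn record above: this is the same operation with the DN=/dn= components replaced by DC=, the fix suggested in this thread, not output from a real run:

```
dn: CN=skta-staff,CN=Users,DC=gsacrd,DC=ab,DC=ca
changetype: modrdn
newrdn: cn=skta-staff
deleteoldrdn: 1
newsuperior: CN=Users,DC=gsacrd,DC=ab,DC=ca
```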

GSACRD-TECH

comment created time in 11 hours

started knex/knex

started time in 12 hours

PR opened prometheus/snmp_exporter

added support for aes192c and aes256c privacy protocol

Signed-off-by: Theunis Botha theunisb@jurumani.com

Hi @SuperQ

Following #595, I added support for Cisco's AES192 and AES256 PrivacyProtocol encryptions that were available in the GoSNMP package. We ran into some issues where we weren't able to walk some Cisco devices, and realised they were using Cisco's own implementation, AES256C, instead of AES256.
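As a sketch only (the module name and credentials below are made up, and the exact file layout depends on the snmp_exporter version), the new protocols would be selected through the usual SNMPv3 auth block:

```yaml
modules:
  cisco_device:                  # hypothetical module name
    version: 3
    auth:
      security_level: authPriv
      username: monitor          # placeholder credentials
      password: authsecret
      auth_protocol: SHA
      priv_protocol: AES256C     # Cisco-style AES-256, added by this PR
      priv_password: privsecret
```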

+5 -1

0 comments

1 changed file

pr created time in 13 hours

push event dovecot/core

Markus Valentin

commit sha ad4c2b77a8ca302b80e2d5689b1c92b77f9e3291

acl: Fix broken LIST for shared namespaces. Due to the recent changes in the usage of the acl_ignore_namespace setting, shared namespaces were trying to use fast listing too. This resulted in wrong LIST IMAP command outputs when using the acl plugin. Broken by dc8ecd38a7e54b8bb80ae97712a0d8ad4edcbed3

view details

push time in 15 hours

started buildkite/elastic-ci-stack-for-aws

started time in a day

started notify-rs/notify

started time in a day

issue comment lsc-project/lsc

Synchronised Updates are not appearing

You set the changeId condition to false and your mainIdentifier value does not seem valid; it should contain the full entry DN. If this value is not valid and changeId is disabled, then all other modifications are blocked.

GSACRD-TECH

comment created time in 2 days

issue closed riemann/riemann

Auto-reload the Kafka Consumer

I'm running Riemann 0.3.6, Dockerised, in an AWS ECS task, producing and consuming events from Kafka. Occasionally, for a reason yet to be determined, I see these messages in the logs:

ERROR [2021-01-21 20:58:34,560] clojure-agent-send-off-pool-3 - riemann.kafka - Interrupted consumption
java.net.SocketException: Broken pipe (Write failed)
at java.base/java.net.SocketOutputStream.socketWrite0(Native Method)
at java.base/java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
at java.base/java.net.SocketOutputStream.write(SocketOutputStream.java:155)
at java.base/sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233)
at java.base/sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:312)
at java.base/sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:316)
at java.base/sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:153)
at java.base/java.io.OutputStreamWriter.flush(OutputStreamWriter.java:254)
at riemann.graphite.GraphiteTCPClient.send_lines(graphite.clj:41)
at riemann.graphite$graphite$fn__10762.invoke(graphite.clj:174)
at riemann.core$stream_BANG_$fn__10000.invoke(core.clj:20)
at riemann.core$stream_BANG_.invokeStatic(core.clj:19)
at riemann.core$stream_BANG_.invoke(core.clj:15)
at riemann.kafka$start_kafka_thread$fn__11398.invoke(kafka.clj:86)
at clojure.core$binding_conveyor_fn$fn__5476.invoke(core.clj:2022)
at clojure.lang.AFn.call(AFn.java:18)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:844)

DEBUG [2021-01-21 20:58:34,570] kafka-coordinator-heartbeat-thread | metrics - org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=riemann-consumer-2-252, groupId=metrics] Heartbeat thread has closed

DEBUG [2021-01-21 20:58:34,575] clojure-agent-send-off-pool-3 - org.apache.kafka.common.metrics.Metrics - Removed sensor with name connections-closed:
... (more similar "Removed sensor with name [sensor]" messages)

DEBUG [2021-01-21 20:58:34,585] clojure-agent-send-off-pool-3 - org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=riemann-consumer-2-252, groupId=metrics] Kafka consumer has been closed

WARN [2021-01-22 22:15:06,631] Thread-4 - riemann.core - instrumentation service caught
java.net.SocketException: Connection reset
at java.base/java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:115)
at java.base/java.net.SocketOutputStream.write(SocketOutputStream.java:155)
at java.base/sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233)
at java.base/sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:312)
at java.base/sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:316)
at java.base/sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:153)
at java.base/java.io.OutputStreamWriter.flush(OutputStreamWriter.java:254)
at riemann.graphite.GraphiteTCPClient.send_lines(graphite.clj:41)
at riemann.graphite$graphite$fn__10762.invoke(graphite.clj:174)
at riemann.core$stream_BANG_$fn__10000.invoke(core.clj:20)
at riemann.core$stream_BANG_.invokeStatic(core.clj:19)
at riemann.core$stream_BANG_.invoke(core.clj:15)
at riemann.core$instrumentation_service$measure__10009.invoke(core.clj:59)
at riemann.service.ThreadService$thread_service_runner__6715$fn__6716.invoke(service.clj:101)
at riemann.service.ThreadService$thread_service_runner__6715.invoke(service.clj:100)
at clojure.lang.AFn.run(AFn.java:22)
at java.base/java.lang.Thread.run(Thread.java:844)

At this point, the Kafka consumer has exited and no metrics are being pulled from Kafka. However, other agents such as graphite and the Riemann task itself happily continue executing. As the ECS task is healthy I receive no alerts nor is the task automatically replaced by AWS.

Is there a mechanism to force Riemann to stop if one or all of the Kafka threads exits? Or, alternatively, can the Kafka consumer thread be respawned?

closed time in 2 days

vitorbrandao

issue comment riemann/riemann

Auto-reload the Kafka Consumer

Please re-open if you still have issues.

vitorbrandao

comment created time in 2 days

issue closed riemann/riemann

Questions: prometheus-batch exceptions

Describe the bug: I tried to use prometheus-batch but got "Unable to resolve symbol"; using prometheus works fine. Could any expert give me some advice? https://riemann.io/api/riemann.prometheus.html#var-prometheus-batch

To Reproduce: Steps to reproduce the behavior:

  1. Add prometheus-batch to riemann.conf
; -*- mode: clojure; -*-
; vim: filetype=clojure

(logging/init {:file "/var/log/riemann/riemann.log"})

; Listen on the local interface over TCP (5555), UDP (5555), and websockets
; (5556)
(let [host "127.0.0.1"]
  (tcp-server {:host host})
  (udp-server {:host host})
  (ws-server  {:host host}))

; Expire old events from the index every 5 seconds.
(periodically-expire 5)

(let [index (index)]
  ; Inbound events will be passed to these streams:
  (streams
    (default :ttl 60

;Test Prometheus-batch
        (batch 1000 5 (prometheus-batch {:host "infradev-prometheus-pushgateway.com"
                                 :port 80
                                 :job "test"
        }))

;Test prometheus
;        (prometheus {:host "infradev-prometheus-pushgateway.com"
;                    :port 80
;                     :job "test"
;        })

      ; Index all events immediately.
      index

      ; Log expired events.
      (expired
        (fn [event] (info "expired" event))))))
  2. Start Riemann and observe the exception
[root@centos /]# java -cp /usr/lib/riemann/riemann.jar: riemann.bin start /etc/riemann/riemann.config
INFO [2021-01-14 09:02:03,517] main - riemann.bin - Loading /etc/riemann/riemann.config
INFO [2021-01-14 09:02:03,529] main - riemann.bin - PID 252
ERROR [2021-01-14 09:02:03,651] main - riemann.bin - Couldn't start
clojure.lang.Compiler$CompilerException: java.lang.RuntimeException: Unable to resolve symbol: prometheus-batch in this context, compiling:(/etc/riemann/riemann.config:22:23)
        at clojure.lang.Compiler.analyze(Compiler.java:6792)
        at clojure.lang.Compiler.analyze(Compiler.java:6729)
        at clojure.lang.Compiler$InvokeExpr.parse(Compiler.java:3813)
        at clojure.lang.Compiler.analyzeSeq(Compiler.java:7005)
        at clojure.lang.Compiler.analyze(Compiler.java:6773)
        at clojure.lang.Compiler.analyze(Compiler.java:6729)
        at clojure.lang.Compiler$InvokeExpr.parse(Compiler.java:3881)
        at clojure.lang.Compiler.analyzeSeq(Compiler.java:7005)
        at clojure.lang.Compiler.analyze(Compiler.java:6773)
        at clojure.lang.Compiler.analyze(Compiler.java:6729)
        at clojure.lang.Compiler$InvokeExpr.parse(Compiler.java:3881)
        at clojure.lang.Compiler.analyzeSeq(Compiler.java:7005)
        at clojure.lang.Compiler.analyze(Compiler.java:6773)
        at clojure.lang.Compiler.analyze(Compiler.java:6729)
        at clojure.lang.Compiler$InvokeExpr.parse(Compiler.java:3881)
        at clojure.lang.Compiler.analyzeSeq(Compiler.java:7005)
        at clojure.lang.Compiler.analyze(Compiler.java:6773)
        at clojure.lang.Compiler.analyze(Compiler.java:6729)
        at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:6100)
        at clojure.lang.Compiler$LetExpr$Parser.parse(Compiler.java:6420)
        at clojure.lang.Compiler.analyzeSeq(Compiler.java:7003)
        at clojure.lang.Compiler.analyze(Compiler.java:6773)
        at clojure.lang.Compiler.analyze(Compiler.java:6729)
        at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:6100)
        at clojure.lang.Compiler$FnMethod.parse(Compiler.java:5460)
        at clojure.lang.Compiler$FnExpr.parse(Compiler.java:4022)
        at clojure.lang.Compiler.analyzeSeq(Compiler.java:7001)
        at clojure.lang.Compiler.analyze(Compiler.java:6773)
        at clojure.lang.Compiler.eval(Compiler.java:7059)
        at clojure.lang.Compiler.load(Compiler.java:7514)
        at clojure.lang.Compiler.loadFile(Compiler.java:7452)
        at clojure.lang.RT$3.invoke(RT.java:325)
        at riemann.config$include.invokeStatic(config.clj:466)
        at riemann.config$include.invoke(config.clj:443)
        at riemann.bin$_main$fn__14764.invoke(bin.clj:147)
        at riemann.bin$run_app_BANG_.invokeStatic(bin.clj:131)
        at riemann.bin$run_app_BANG_.invoke(bin.clj:123)
        at riemann.bin$_main.invokeStatic(bin.clj:147)
        at riemann.bin$_main.doInvoke(bin.clj:135)
        at clojure.lang.RestFn.invoke(RestFn.java:425)
        at clojure.lang.AFn.applyToHelper(AFn.java:156)
        at clojure.lang.RestFn.applyTo(RestFn.java:132)
        at riemann.bin.main(Unknown Source)
Caused by: java.lang.RuntimeException: Unable to resolve symbol: prometheus-batch in this context
        at clojure.lang.Util.runtimeException(Util.java:221)
        at clojure.lang.Compiler.resolveIn(Compiler.java:7299)
        at clojure.lang.Compiler.resolve(Compiler.java:7243)
        at clojure.lang.Compiler.analyzeSymbol(Compiler.java:7204)
        at clojure.lang.Compiler.analyze(Compiler.java:6752)
        ... 42 common frames omitted

Expected behavior: Riemann should start normally and prometheus-batch should work normally.

Screenshots: If applicable, add configuration, data, output, or screenshots to help explain your problem. [screenshot attached]

Background (please complete the following information):

  • OS: CentOS Linux release 7.8.2003 (Core)
  • Java/JVM version: java-1.8.0-openjdk-headless-1.8.0.275.b01-0.el7_9.x86_64
  • Riemann version: riemann-0.3.6-1.noarch

Additional context:

Using prometheus is fine. riemann.conf:

; -*- mode: clojure; -*-
; vim: filetype=clojure

(logging/init {:file "/var/log/riemann/riemann.log"})

; Listen on the local interface over TCP (5555), UDP (5555), and websockets
; (5556)
(let [host "127.0.0.1"]
  (tcp-server {:host host})
  (udp-server {:host host})
  (ws-server  {:host host}))

; Expire old events from the index every 5 seconds.
(periodically-expire 5)

(let [index (index)]
  ; Inbound events will be passed to these streams:
  (streams
    (default :ttl 60

;Test Prometheus-batch
;        (batch 1000 5 (prometheus-batch {:host "infradev-prometheus-pushgateway.com"
;                                 :port 80
;                                 :job "test"
;        }))

;Test prometheus
        (prometheus {:host "infradev-prometheus-pushgateway.com"
                    :port 80
                     :job "test"
        })

      ; Index all events immediately.
      index

      ; Log expired events.
      (expired
        (fn [event] (info "expired" event))))))

Run riemann

[root@centos /]# java -cp /usr/lib/riemann/riemann.jar: riemann.bin start /etc/riemann/riemann.config
INFO [2021-01-14 08:59:05,966] main - riemann.bin - Loading /etc/riemann/riemann.config
INFO [2021-01-14 08:59:05,976] main - riemann.bin - PID 181
INFO [2021-01-14 08:59:06,190] clojure-agent-send-off-pool-3 - riemann.transport.udp - UDP server 127.0.0.1 5555 16384 -1 online
INFO [2021-01-14 08:59:06,208] clojure-agent-send-off-pool-1 - riemann.transport.websockets - Websockets server 127.0.0.1 5556 online
INFO [2021-01-14 08:59:06,220] clojure-agent-send-off-pool-2 - riemann.transport.tcp - TCP server 127.0.0.1 5555 online
INFO [2021-01-14 08:59:06,222] main - riemann.core - Hyperspace core online

closed time in 2 days

keyboardfann

issue comment riemann/riemann

Questions: prometheus-batch exceptions

Please re-open if you still have issues.

keyboardfann

comment created time in 2 days

issue closed riemann/riemann

`forward` does not reconnect when connection interrupted

This is more of a question about one of the functions, forward.

I was trying to set up a forward of streams between two Riemanns. Riemann 1 forwards events to Riemann 2 using forward. However, if for some reason Riemann 2 restarts, or if Riemann 2 starts up after Riemann 1, the events are not forwarded.

However, if I were to write in Riemann 1 code, say:

(def riemann-client (r/tcp-client {:host "riemann2" :port 5556}))
(batch 5000 1
  (async-queue! :a-queue
                {:queue-size 1e4}
                (partial r/send-events riemann-client)))

In this case Riemann 1 forwards events to Riemann 2 irrespective of whether Riemann 2 starts up after Riemann 1 or Riemann 2 restarts multiple times in between. Basically, the tcp-client is able to reconnect no matter what.

The riemann-clojure-client doc states that the clients will try to auto-reconnect.

But somehow this doesn't seem to be happening with forward. Does it have something to do with the dereferencing happening in forward?

How can forward be made to reconnect (retry) with a client that it has lost connection with?

closed time in 2 days

vipinvkmenon

issue comment riemann/riemann

`forward` does not reconnect when connection interrupted

Please re-open if you still have issues.

vipinvkmenon

comment created time in 2 days

issue closed riemann/riemann

Riemann writes not all messages to InfluxDB

Hello. I installed Riemann version 0.3.1 together with InfluxDB version 1.8.

I receive messages from web servers in Riemann, process them, and write them to InfluxDB. But comparing the number of records in the Riemann log and in the database, I noticed a difference in the number of messages, i.e. there are more entries in the log than in the database.

Please tell me what to look for and what to read. Riemann config below:

(logging/init {:file "/var/log/riemann/riemann.log"})

(let [host "0.0.0.0" port 5555]
  (tcp-server {:host host})
  (udp-server {:host host})
  (ws-server  {:host host}))

(periodically-expire 5)

(def influxdb-creds
  {:host "localhost" :port 8086 :username "username" :password "password" :db "riemann"})

(def influx
  (batch 100 1/10
    (async-queue! :agg {:queue-size 1000 :core-pool-size 1 :max-pool-size 4 :keep-alive-time 60000}
      (influxdb influxdb-creds))))

(require '[cheshire.core :as json])

(defn safe-json [^String s]
  (try (json/parse-string s true) (catch Exception _ nil)))

(defn restore-payload [event]
  (update-in event [:payload] safe-json))

(defn app-event? [e]
  (some-> e restore-payload :payload :application))

(let [index (index)]
  (streams
    (default :ttl 60
      index
      (where (service "nginx_json")
        (smap (fn [ev] (assoc ev :service "RPS" :metric 1 :payload 1))
          (rate 1 influx)
          #(info (:time %) (:host %) (:service %) (:metric %)))))))

closed time in 2 days

Mahabharata871

issue commentriemann/riemann

Riemann writes not all messages to InfluxDB

Please re-open if you still have questions.

Mahabharata871

comment created time in 2 days

push event riemann/riemann

Pradeep Chhetri

commit sha 2d590cfd9064687e149461b6d290f6bcbbc6af3e

InfluxDB v2 support (#996)

view details

push time in 2 days

PR merged riemann/riemann

InfluxDB v2 support

Closes https://github.com/riemann/riemann/issues/993

I have tested it with my dev InfluxDB v2 environment.

+134 -2

1 comment

5 changed files

chhetripradeep

pr closed time in 2 days

issue closed riemann/riemann

Implement InfluxDB 2.0 plugin

Hello! I am evaluating Riemann and realized that InfluxDB 2.0 doesn't appear to be supported yet as a destination to forward events. Do you have any plans to implement it?

closed time in 2 days

bobkocisko

pull request comment riemann/riemann

InfluxDB v2 support

Awesome! Thanks.

chhetripradeep

comment created time in 2 days

PR opened riemann/riemann

InfluxDB v2 support

Closes https://github.com/riemann/riemann/issues/993

I have tested it with my dev InfluxDB v2 environment.

+134 -2

0 comments

5 changed files

pr created time in 2 days

issue comment prometheus/snmp_exporter

EnumAsInfo vs EnumAsStateSet

Yes, I understand the lookups thing, but ifType is an enumeration; when you include it as a lookup, you get the integer value, not the associated string/text.

daenney

comment created time in 2 days

issue comment prometheus/snmp_exporter

EnumAsInfo vs EnumAsStateSet

I think what you're looking for is lookups: https://github.com/prometheus/snmp_exporter/tree/main/generator#file-format and in the generator: https://github.com/prometheus/snmp_exporter/blob/623b659678ad565217e93ac77b74bbc84c40660c/generator/generator.yml#L294-L298

It results in something like this in snmp.yml:

- name: ifHCOutOctets
    oid: 1.3.6.1.2.1.31.1.1.1.10
    type: counter
    help: The total number of octets transmitted out of the interface, including framing
      characters - 1.3.6.1.2.1.31.1.1.1.10
    indexes:
    - labelname: ifIndex
      type: gauge
    lookups:
    - labels:
      - ifIndex
      labelname: ifDescr
      oid: 1.3.6.1.2.1.2.2.1.2
      type: DisplayString
    - labels: []
      labelname: ifIndex
daenney

comment created time in 2 days
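For context, the snmp.yml fragment quoted above is generated output; on the generator side a lookup like that is requested roughly as follows (the module name and walk list here are illustrative):

```yaml
modules:
  if_mib:                        # hypothetical module name
    walk:
      - ifHCOutOctets
    lookups:
      - source_indexes: [ifIndex]
        lookup: ifDescr          # attach ifDescr as a label via the ifIndex index
```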

started gpestana/rdoc

started time in 2 days