If you are wondering where the data on this site comes from, please visit https://api.github.com/users/mnp/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

mnp/dispwatch 9

Emacs - Watch the current display for changes (i.e. plugging in a monitor) and run a hook.

mnp/bacnet-docker 8

Experiment with BACnet stack clients and servers in a single machine docker compose framework

mnp/ds-auto-subtitle-downloader 7

Automatically download subtitles for videos on a Synology Diskstation

mnp/docker-registry-diff 5

Standalone tool to determine if several private docker registries are in sync

mnp/erlang-ntp 5

Simple client demonstrating NTP query and decode

mnp/connwatch 1

Work in progress: report Linux kernel network connections as they occur, by PID.

mnp/garage 1

A raspberry pi application to monitor and actuate garage doors, and other things.

mnp/2048 0

A small clone of 1024 (https://play.google.com/store/apps/details?id=com.veewo.a1024)

mnp/bucardo 0

Bucardo multimaster and master/slave Postgres replication

pull request comment thomseddon/traefik-forward-auth

Add support to forward header with oidc token

Is this PR going to move forward, or should I look at something like a Lua filter if I want to log fields from inside a token?

gcavalcante8808

comment created time in 23 days

started hasura/graphql-engine

started time in 24 days

issue comment fluent/fluent-bit

[Question] Loki plugin out of order error behaviour?

This should not be closed; it's still a concern.

stevehipwell

comment created time in a month

issue opened replicatedhq/kots

Cluster local domain name should be configurable

In a normal, textbook cluster, pods get the expected local search-domain stack with a local resolver, e.g. svc.cluster.local.

somepod$ cat /etc/resolv.conf 
nameserver 10.76.16.10
search mycorp.svc.cluster.local svc.cluster.local cluster.local us-central1-c.c.mycorp-dev-12345.internal c.mycorp-dev-12345.internal google.internal
options ndots:5

However, if a user wants external DNS, such as VPC-scoped DNS (this is probably not a GKE-specific problem), all pods are created with a cluster-themed search-domain stack and an external resolver (here the cluster is named mycorp-test):

somepod$ cat /etc/resolv.conf
nameserver 169.254.169.254
search mycorp.svc.mycorp-test svc.mycorp-test mycorp-test us-central1-c.c.mycorp-dev-12345.internal c.mycorp-dev-12345.internal google.internal
options ndots:5

This is fine until you notice that Kots has some hardcoded dependencies on svc.cluster.local.

299:2021/08/06 19:11:38 unable to connect to api: failed to connect: dial tcp: lookup kotsadm.mycorp.svc.cluster.local on ***HIDDEN***:53: no such host
300:2021/08/06 19:11:40 connecting to api at http://kotsadm.mycorp.svc.cluster.local:3000
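
A quick way to see the mismatch is to resolve both names from inside a pod in the affected namespace; this is a hedged sketch using the hostnames from the logs above and assuming a pod image that ships nslookup:

somepod$ nslookup kotsadm.mycorp.svc.cluster.local    # the hardcoded name: fails on the external resolver
somepod$ nslookup kotsadm.mycorp.svc.mycorp-test      # the cluster-themed name: should resolve via the VPC-scoped DNS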

Maybe this should be configurable somewhere?

created time in a month

issue comment minio/operator

Please document how Helm tenant templating is supposed to be used

Yes, I am still interested in documentation. Thank you.

mnp

comment created time in 2 months

push event mnp/dotfiles

Mitchell Perilstein

commit sha f92674b3d3728ac9487ff29572c77972923697f9

No beep. Highlight mode for yaml and python.

view details

push time in 2 months

issue comment kubernetes/kubernetes

kubectl port-forward broken pipe

I had success with @anthcor's solution: letting kubectl decide the local port.

kubectl  port-forward svc/foo :8080
Forwarding from 127.0.0.1:57708 -> 8080
Forwarding from [::1]:57708 -> 8080
Handling connection for 57708
Handling connection for 57708 ... etc

I ALSO had success by specifying the local address as 127.0.0.1 (I don't need IPv6). The WEIRD thing is that after doing this I can go back to the original form, `kubectl port-forward svc/foo 1234:8080`, and it works again. This smells like a socket reuse issue.
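
For reference, the two workarounds spelled out as commands (svc/foo and the ports are placeholders, not a real service):

kubectl port-forward svc/foo :8080                             # let kubectl pick a free local port
kubectl port-forward --address 127.0.0.1 svc/foo 1234:8080     # bind the local end to IPv4 localhost only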

This is docker-desktop on a mac.

Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:15:20Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

dwbrown2

comment created time in 2 months

push event mnp/dotfiles

Mitchell Perilstein

commit sha 9fc8b50703c42371582e5d28317992ea710c1425

Fixed elpa pull problem and maggit one

view details

push time in 2 months

started alphapapa/emacs-sandbox.sh

started time in 2 months

issue opened fluent/fluent-bit

Fast dummy input causes 'entry out of order' from Loki - rate dependent?

Bug Report

Describe the bug

The dummy input plugin running at a rate of 50/sec, with the Loki output plugin, causes "entry out of order" messages from Loki, as logged by Fluent Bit. Fluent Bit usually exits at that point.

To Reproduce

  • fluent-bit helm chart version 0.15.14
  • fluent-bit helm values:
config:
  inputs: |
    [INPUT]      
        name dummy
        rate 50
        samples 1000000

  customParsers: ""

  filters: ""
    
  outputs: |
    [OUTPUT]
        name loki
        match *
        tenant_id my-loki
        host lok1-loki
        port 3100
        line_format json
        auto_kubernetes_labels off
        labels job=dummy
  • loki helm chart version 2.5.0
  • loki helm values
persistence:
  enabled: true
  • Steps to reproduce the problem:
    • helm install the charts (see the install sketch after this list)
    • tail the fluent-bit log
  • See logs below indicating the out of order response from Loki
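
A minimal sketch of those install and tail steps, assuming the charts come from the public fluent and grafana Helm repositories and that the values above are saved to fluent-bit-values.yaml and loki-values.yaml (release and file names are illustrative):

helm repo add fluent https://fluent.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm install lok1 grafana/loki --version 2.5.0 -f loki-values.yaml              # release "lok1" yields the lok1-loki service
helm install fluent-bit fluent/fluent-bit --version 0.15.14 -f fluent-bit-values.yaml
kubectl logs -f daemonset/fluent-bit                                             # tail the fluent-bit log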

Expected behavior

  • I expected a possible throttling problem at some rate, perhaps a 429 Too Many Requests from Loki.
  • I did not expect to trigger an ordering problem with the dummy input, which should be strictly ordered.
  • I did not expect to encounter this at 50/s, which seems pretty low, hinting perhaps at some nondeterminism somewhere.

Screenshots

Fluent Bit v1.7.8
* Copyright (C) 2019-2021 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2021/06/18 15:57:19] [ info] [engine] started (pid=1)
[2021/06/18 15:57:19] [ info] [storage] version=1.1.1, initializing...
[2021/06/18 15:57:19] [ info] [storage] in-memory
[2021/06/18 15:57:19] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2021/06/18 15:57:19] [ info] [output:loki:loki.0] configured, hostname=lok1-loki:3100
[2021/06/18 15:57:19] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
[2021/06/18 15:57:19] [ info] [sp] stream processor started
[2021/06/18 15:57:44] [error] [output:loki:loki.0] lok1-loki:3100, HTTP status=400
entry with timestamp 2021-06-18 15:57:43.6231467 +0000 UTC ignored, reason: 'entry out of order' for stream: {job="dummy"},
total ignored: 1 out of 50

[2021/06/18 15:57:44] [ warn] [engine] failed to flush chunk '1-1624031863.517629600.flb', retry in 6 seconds: task_id=0, input=dummy.0 > output=loki.0 (out_id=0)
[2021/06/18 15:57:50] [error] [output:loki:loki.0] lok1-loki:3100, HTTP status=400
entry with timestamp 2021-06-18 15:57:43.5176231 +0000 UTC ignored, reason: 'entry out of order' for stream: {job="dummy"},
entry with timestamp 2021-06-18 15:57:43.5379462 +0000 UTC ignored, reason: 'entry out of order' for stream: {job="dummy"},
entry with timestamp 2021-06-18 15:57:43.5583711 +0000 UTC ignored, reason: 'entry out of order' for stream: {job="dummy"},
entry with timestamp 2021-06-18 15:57:43.5776186 +0000 UTC ignored, reason: 'entry out of order' for stream: {job="dummy"},
entry with timestamp 2021-06-18 15:57:43.5981335 +0000 UTC ignored, reason: 'entry out of order' for stream: {job="dummy"},
entry with timestamp 2021-06-18 15:57:43.6177588 +0000 UTC ignored, reason: 'entry out of order' for stream: {job="dummy"},
entry with timestamp 2021-06-18 15:57:43.6377638 +0000 UTC ignored, reason: 'entry out of order' for stream: {job="dummy"},
entry with timestamp 2021-06-18 15:57:43.6231467 +0000 UTC ignored, reason: 'entry out of order' for stream: {job="dummy"},
entry with timestamp 2021-06-18 15:57:43.6432401 +0000 UTC ignored, reason: 'entry out of order' for stream: {job="dummy"},
entry with timestamp 2021-06-18 15:57:43.6635918 +0000 UTC ignored, reason: 'entry out of order' for stream: {job="dummy"},
total ignored: 49 out of 50

[2021/06/18 15:57:50] [ warn] [engine] chunk '1-1624031863.517629600.flb' cannot be retried: task_id=0, input=dummy.0 > output=loki.0

Meanwhile, Loki said...

level=warn ts=2021-06-18T15:57:44.4683349Z caller=grpc_logging.go:38 method=/logproto.Pusher/Push duration=1.2639ms err="rpc error: code = Code(400) desc = entry with timestamp 2021-06-18 15:57:43.6231467 +0000 UTC ignored, reason: 'entry out of order' for stream: {job=\"dummy\"},\ntotal ignored: 1 out of 50" msg="gRPC\n"
level=warn ts=2021-06-18T15:57:50.4672932Z caller=grpc_logging.go:38 method=/logproto.Pusher/Push duration=957.6µs err="rpc error: code = Code(400) desc = entry with timestamp 2021-06-18 15:57:43.5176231 +0000 UTC ignored, reason: 'entry out of order' for stream: {job=\"dummy\"},\nentry with timestamp 2021-06-18 15:57:43.5379462 +0000 UTC ignored, reason: 'entry out of order' for stream: {job=\"dummy\"},\nentry with timestamp 2021-06-18 15:57:43.5583711 +0000 UTC ignored, reason: 'entry out of order' for stream: {job=\"dummy\"},\nentry with timestamp 2021-06-18 15:57:43.5776186 +0000 UTC ignored, reason: 'entry out of order' for stream: {job=\"dummy\"},\nentry with timestamp 2021-06-18 15:57:43.5981335 +0000 UTC ignored, reason: 'entry out of order' for stream: {job=\"dummy\"},\nentry with timestamp 2021-06-18 15:57:43.6177588 +0000 UTC ignored, reason: 'entry out of order' for stream: {job=\"dummy\"},\nentry with timestamp 2021-06-18 15:57:43.6377638 +0000 UTC ignored, reason: 'entry out of order' for stream: {job=\"dummy\"},\nentry with timestamp 2021-06-18 15:57:43.6231467 +0000 UTC ignored, reason: 'entry out of order' for stream: {job=\"dummy\"},\nentry with timestamp 2021-06-18 15:57:43.6432401 +0000 UTC ignored, reason: 'entry out of order' for stream: {job=\"dummy\"},\nentry with timestamp 2021-06-18 15:57:43.6635918 +0000 UTC ignored, reason: 'entry out of order' for stream: {job=\"dummy\"},\ntotal ignored: 49 out of 50" msg="gRPC\n"

Your Environment

  • Version used: fluent bit 1.7.8
  • Configuration: the effective fluent-bit config looks like this after Helm renders it:
custom_parsers.conf:
----

fluent-bit.conf:
----
[SERVICE]
    Flush 1
    Daemon Off
    Log_Level info
    Parsers_File parsers.conf
    Parsers_File custom_parsers.conf
    HTTP_Server On
    HTTP_Listen 0.0.0.0
    HTTP_Port 2020

[INPUT]
    name dummy
    rate 50
    samples 1000000

[OUTPUT]
    name loki
    match *
    tenant_id my-loki
    host lok1-loki
    port 3100
    line_format json
    auto_kubernetes_labels off
    labels job=dummy
  • Environment name and version (e.g. Kubernetes? What version?): docker-desktop 3.3.1
  • Server type and version: kubernetes 1.19.7
  • Operating System and version: x86 mac, Catalina
  • Filters and plugins: see config above

Additional context

When I run my own app logging quickly, I see this same out-of-order (OOO) behavior, but I'm presenting the dummy case here because it shows the OOO problem is independent of the log content and dependent on the rate.

I want the opposite of Issue #3082: I'm trying to preserve all logs even if they're out of order. I'm okay with using collection time instead of log time. While experimenting with time_keep, I realized even the dummy input will generate "out of order" messages!

So this is a showstopper for our use case.

created time in 3 months

issue comment fluent/fluent-bit

[Question] Loki plugin out of order error behaviour?

We are seeing this also and it's a showstopper for us as well.

Our use case is the opposite of the OP's: in our case we DON'T want ANYTHING discarded, ever, but we don't care about the log time being precise; collection time is good enough as long as the original line passes through somehow. To achieve that, we're tailing the JSON logs of Kubernetes stdout containers and have Time_Keep off in the parser.

custom_parsers.conf:
----
[PARSER]
    Name docker_no_time
    Format json
    Time_Keep off

fluent-bit.conf:
----
[SERVICE]
    Flush 1
    Daemon Off
    Log_Level info
    Parsers_File parsers.conf
    Parsers_File custom_parsers.conf
    HTTP_Server On
    HTTP_Listen 0.0.0.0
    HTTP_Port 2020

[INPUT]
    Name tail
    Path /var/log/containers/*.log
    Parser docker_no_time
    Tag kube.*
    Mem_Buf_Limit 5MB
    Skip_Long_Lines On
[INPUT]
    Name systemd
    Tag host.*
    Systemd_Filter _SYSTEMD_UNIT=kubelet.service
    Read_From_Tail On

[FILTER]
    Name kubernetes
    Match *
    Merge_Log Off
    Keep_Log Off
    K8S-Logging.Parser On
    K8S-Logging.Exclude On

[OUTPUT]
    name loki
    match kube.*
    tenant_id my-loki
    host lok1-loki
    port 3100
    line_format json
    auto_kubernetes_labels off

Is that the right way to ask for collection time instead of log time?

Anyway, we still get out-of-order errors and dropped chunks.

[2021/06/17 20:31:19] [ warn] [engine] chunk '1-1623961610.585590800.flb' cannot be retried: task_id=43, input=tail.0 > output=loki.0
[2021/06/17 20:31:24] [error] [output:loki:loki.0] lok1-loki:3100, HTTP status=400
entry with timestamp 2021-06-17 20:27:02.5381778 +0000 UTC ignored, reason: 'entry out of order' for stream: {job="fluent-bit"},
entry with timestamp 2021-06-17 20:27:02.5381858 +0000 UTC ignored, reason: 'entry out of order' for stream: {job="fluent-bit"},
entry with timestamp 2021-06-17 20:27:02.5381926 +0000 UTC ignored, reason: 'entry out of order' for stream: {job="fluent-bit"},
entry with timestamp 2021-06-17 20:27:02.5382005 +0000 UTC ignored, reason: 'entry out of order' for stream: {job="fluent-bit"},
entry with timestamp 2021-06-17 20:27:02.5382069 +0000 UTC ignored, reason: 'entry out of order' for stream: {job="fluent-bit"},
entry with timestamp 2021-06-17 20:27:02.5382131 +0000 UTC ignored, reason: 'entry out of order' for stream: {job="fluent-bit"},
entry with timestamp 2021-06-17 20:27:02.538361 +0000 UTC ignored, reason: 'entry out of order' for stream: {job="fluent-bit"},
total ignored: 7 out of 7
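
One way to cross-check which entries actually landed is to query Loki directly; a minimal sketch, assuming curl and jq are available somewhere that can reach lok1-loki:3100 (e.g. via a port-forward), with the endpoint, tenant, and label taken from the config and logs above:

curl -G -s 'http://lok1-loki:3100/loki/api/v1/query_range' \
     -H 'X-Scope-OrgID: my-loki' \
     --data-urlencode 'query={job="fluent-bit"}' \
     --data-urlencode 'limit=20' | jq '.data.result[].values | length'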

Thanks for letting me tag along.

stevehipwell

comment created time in 3 months