Matt Mendick mattmendick Grafana Labs Rochester, NY

mattmendick/cortex
A horizontally scalable, highly available, multi-tenant, long-term Prometheus.

mattmendick/doorman
Doorman: Global Distributed Client Side Rate Limiting.

mattmendick/goofys
A high-performance, POSIX-ish Amazon S3 file system written in Go.

mattmendick/pg_tail
'tail -f' your PostgreSQL tables.

mattmendick/TiModules
A collection of JavaScript modules for Titanium mobile.

mattmendick/yakyak
Desktop chat client for Google Hangouts.

Issue opened in grafana/agent

Potential inefficiency retrieving configurations from consul in scraping service mode

Per an offline conversation with @mattdurham and @jdbaldry, there may be an inefficiency in how the agent's scraping service mode retrieves scrape configs from consul. Currently the agent performs one request to obtain the list of keys only, then iterates over the keys, issuing a separate request to retrieve each value. As the agent has more and more configurations loaded, load on consul grows linearly, eventually reaching a breaking point where consul can no longer handle the requests.

Relevant code in consul's API client:

This would be more efficient if we could request the keys and values from consul together, so that all the configs are retrieved with a single call (or at least with fewer than the n requests needed for n configurations today).
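The request-count difference can be sketched with a small mock. The types and functions below (mockKV, fetchPerKey, fetchAll) are hypothetical stand-ins, not agent code: Keys/Get/List mirror the shape of the methods on the KV client in github.com/hashicorp/consul/api, where each call corresponds to one HTTP request against consul. The per-key pattern costs 1+n requests for n configs; a single recursive list costs 1.

```go
package main

import "fmt"

// mockKV simulates consul's KV store. Each method call stands in for
// one HTTP request against consul; requests counts them.
type mockKV struct {
	data     map[string]string
	requests int
}

// Keys returns every key under a prefix in one request.
func (kv *mockKV) Keys(prefix string) []string {
	kv.requests++
	var keys []string
	for k := range kv.data {
		keys = append(keys, k)
	}
	return keys
}

// Get fetches a single value: one request per key.
func (kv *mockKV) Get(key string) string {
	kv.requests++
	return kv.data[key]
}

// List fetches all keys and values under a prefix in one request,
// the cheaper pattern the issue suggests.
func (kv *mockKV) List(prefix string) map[string]string {
	kv.requests++
	out := make(map[string]string, len(kv.data))
	for k, v := range kv.data {
		out[k] = v
	}
	return out
}

// fetchPerKey is the current pattern: one request for the key list,
// then one request per configuration, 1+n in total.
func fetchPerKey(kv *mockKV) map[string]string {
	out := make(map[string]string)
	for _, k := range kv.Keys("configs/") {
		out[k] = kv.Get(k)
	}
	return out
}

// fetchAll is the proposed pattern: a single request for everything.
func fetchAll(kv *mockKV) map[string]string {
	return kv.List("configs/")
}

func main() {
	data := map[string]string{
		"configs/a": "scrape a",
		"configs/b": "scrape b",
		"configs/c": "scrape c",
	}

	kv := &mockKV{data: data}
	fetchPerKey(kv)
	fmt.Println("per-key requests:", kv.requests) // 1 + 3 = 4

	kv = &mockKV{data: data}
	fetchAll(kv)
	fmt.Println("single-call requests:", kv.requests) // 1
}
```

Both patterns return the same configs; only the number of round trips to consul differs, which is why the gap matters as n grows.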

Created a month ago
