profile
Lucas Servén Marín (squat) · Red Hat · Berlin · https://squat.ai · working on Kubernetes, Prometheus, and Thanos

coreos/terraform-aws-kubernetes 116

Install a Kubernetes cluster the CoreOS Tectonic Way: HA, self-hosted, RBAC, etcd Operator, and more

coreos/issue-sync 95

A tool for synchronizing issue tracking between GitHub and JIRA

prometheus-community/promql-langserver 76

PromQL language server

coreos/terraform-azurerm-kubernetes 22

Install a Kubernetes cluster the CoreOS Tectonic Way: HA, self-hosted, RBAC, etcd Operator, and more

redhat-developer/vscode-promql 20

This is supposed to become a PromQL extension for VS Code.

squat/drae 20

A RESTful API for el Diccionario de la Real Academia Española

observatorium/up 8

a simple Prometheus and Loki API testing tool

squat/configmap-to-disk 3

configmap-to-disk synchronizes a ConfigMap from the Kubernetes API to disk

squat/darkapi 2

An API for Darknet image detection neural networks like YOLO

Pull request review comment observatorium/observatorium

Add a test/run-local.sh script run via make run for development

+#!/bin/bash
+
+# Runs a minimal setup with local binaries locally.
+# This should be most helpful when working on Observatorium API specific features.
+
+set -euo pipefail
+
+result=1
+trap 'kill $(jobs -p); exit $result' EXIT
+
+(./tmp/bin/dex serve ./test/config/dex.yaml) &
+
+(
+  ./observatorium \
+    --web.listen=0.0.0.0:8443 \
+    --web.internal.listen=0.0.0.0:8448 \
+    --web.healthchecks.url=https://localhost:8443 \
+    --tls.server.cert-file=./tmp/certs/server.pem \
+    --tls.server.key-file=./tmp/certs/server.key \
+    --tls.healthchecks.server-ca-file=./tmp/certs/ca.pem \
+    --logs.read.endpoint=http://127.0.0.1:3100 \
+    --logs.tail.endpoint=http://127.0.0.1:3100 \
+    --logs.write.endpoint=http://127.0.0.1:3100 \
+    --metrics.read.endpoint=http://127.0.0.1:9091 \
+    --metrics.write.endpoint=http://127.0.0.1:19291 \
+    --rbac.config=./test/config/rbac.yaml \
+    --tenants.config=./test/config/tenants.yaml \
+    --log.level=debug
+) &
+
+(
+  ./tmp/bin/thanos receive \
+    --receive.hashrings-file=./test/config/hashrings.json \
+    --receive.local-endpoint=127.0.0.1:10901 \
+    --receive.default-tenant-id="1610b0c3-c509-4592-a256-a1871353dbfa" \
+    --grpc-address=127.0.0.1:10901 \
+    --http-address=127.0.0.1:10902 \
+    --remote-write.address=127.0.0.1:19291 \
+    --log.level=error \
+    --tsdb.path="$(mktemp -d)"
+) &
+
+(
+  ./tmp/bin/thanos query \
+    --grpc-address=127.0.0.1:10911 \
+    --http-address=127.0.0.1:9091 \
+    --store=127.0.0.1:10901 \
+    --log.level=error \
+    --web.external-prefix=/ui/metrics/v1

I think this external prefix is not quite correct. I am finding that the static assets aren't loading correctly when I run this

metalmatze

comment created time in 2 days


Pull request review comment observatorium/observatorium

Remove ksonnet from Kubernetes objects

 GOLANGCILINT ?= $(FIRST_GOPATH)/bin/golangci-lint
 GOLANGCILINT_VERSION ?= v1.21.0
 EMBEDMD ?= $(BIN_DIR)/embedmd
 JSONNET ?= $(BIN_DIR)/jsonnet
-JSONNET_BUNDLER ?= $(BIN_DIR)/jb

Can we also remove the jsonnet bundler installation target on line 222?

metalmatze

comment created time in 2 days


PR opened observatorium/observatorium

Local Development Improvements for Dex

This PR fixes the callback URLs for local development using the OIDC authentication method.

It also adds the bare minimum HTML to support password login and to thus be able to use the authorization code flow with Dex locally.

cc @observatorium/maintainers
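For context, a minimal Dex configuration supporting password login and the authorization code flow might look like the sketch below. Every value here — issuer, client ID, secret, redirect URI, credentials — is an illustrative assumption, not the PR's actual config:

```yaml
# Hypothetical dex.yaml for local development; all values are examples.
issuer: http://127.0.0.1:5556/dex
storage:
  type: memory
web:
  http: 127.0.0.1:5556
staticClients:
- id: test
  name: test
  secret: local-dev-secret
  redirectURIs:
  # Must match the callback URL that the API advertises locally.
  - https://localhost:8443/oidc/test/callback
# Enable the built-in password database so the login form works
# without an external identity provider.
enablePasswordDB: true
staticPasswords:
- email: admin@example.com
  # bcrypt hash of the password; generate one with e.g. htpasswd -bnBC 10 "" <pw>
  hash: "$2a$10$2b2cU8CPhOTaGrs1HRQuAueS7JTT5ZHsHSzYiFPm1leZck7Mc8T4W"
  username: admin
  userID: 08a8684b-db88-4b73-90a9-3cd1661f5466
```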

+45 -3

0 comments

4 changed files

pr created time in 2 days

create branch squat/observatorium

branch : local-dev-oidc

created branch time in 2 days

push event squat/kilo

Lucas Servén Marín

commit sha 5e970d8b42c088a67f19306a8e5e58b8fadaf0b5

pkg/mesh: small change for clarity Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

push time in 2 days

issue comment squat/kilo

How to connect two clusters, networking and kubectl?

Yes, exactly. And please share the generated YAML files here so we can inspect them :)

tetricky

comment created time in 2 days

issue comment squat/kilo

How to connect two clusters, networking and kubectl?

Yes, there is definitely an issue with the fact that the private IPs of the VMs (192.168.122.101, 192.168.122.102, 192.168.122.103) are also the same as the "public" IP, i.e. 192.168.122.101. Note that the generated config uses 192.168.122.101 as the IP for the endpoint, but ALSO tries to route 192.168.122.101 via WireGuard. This will not work.

Question for you: are the private IPs for the VMs (192.168.122.101, 192.168.122.102, 192.168.122.103) actually reachable from the KVM host?

If not, then we cannot specify the endpoint here; we'll need the VMs to connect to the KVM host via WireGuard and for the endpoint to be added automatically and for the connection to be kept open via WireGuard's persistent keepalive.

If yes, then the private IPs do not need to be specified in the allowed IPs section of the configuration, since we don't need to route connections to those addresses over the VPN for connectivity.
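To make the conflict concrete, here is a hypothetical WireGuard peer section exhibiting the problem (keys and ports invented):

```ini
[Peer]
PublicKey = <peer-public-key>
# Packets to the peer's endpoint must travel over the physical network...
Endpoint = 192.168.122.101:51820
# ...but 192.168.122.101 is also listed here, telling WireGuard to route
# traffic for that address *through* the tunnel - so the tunnel would have
# to carry the very packets that establish it.
AllowedIPs = 10.4.0.1/32, 192.168.122.101/32
```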

The thing that we are more interested in is why the VM cluster's API became inaccessible. The best thing to do to determine the cause would be to recreate the cluster (sorry :/) and then, rather than generating the Peer configurations and applying them directly using the snippet in the doc, simply generate them and share them here so that we can inspect further. In other words:

# Perform a dry-run of registering the nodes in cluster1 as peers of cluster2.
for n in $(kubectl --kubeconfig $KUBECONFIG1 get no -o name | cut -d'/' -f2); do
    # Specify the service CIDR as an extra IP range that should be routable.
    # Save the generated configurations to disk rather than applying them directly to the cluster.
    kgctl --kubeconfig $KUBECONFIG1 showconf node $n --as-peer -o yaml --allowed-ips $SERVICECIDR1 > $n.yaml
done
tetricky

comment created time in 2 days

issue comment squat/kilo

How to connect two clusters, networking and kubectl?

Ack, the clusters look good 👍 Yes, the problem in the first set of logs was that the SERVICECIDRX environment variables were not set.

Just to be sure: is everything directly above the terminal output from running the second snippet, i.e. registering the VMs with the KVM host?

Did any of the VMs successfully register? Perhaps only the first VM registered and the rest are causing issues in kgctl?

It seems like something caused the KVM cluster's API server to become unusable. Can you still use kubectl against it, e.g. kubectl get ns?

tetricky

comment created time in 2 days


issue comment squat/kilo

How to connect two clusters, networking and kubectl?

Hi @tetricky, the doc mentions this in the second paragraph:

https://github.com/squat/kilo/blob/72f5107979281076a32fb1639d135db6bf89ac80/docs/multi-cluster-services.md#L8

Please let me know if as a user this isn't clear enough and maybe how it could be improved! I'm always very happy for any help and contributions :)

Regarding modifying the Kilo CIDR: the Kilo binary that runs on all your nodes has a --subnet flag that can be used to modify the WireGuard CIDR: https://github.com/squat/kilo/blob/master/cmd/kg/main.go#L97. These command-line options definitely need a reference document; I'll work on this now :)
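As a hedged sketch only, passing that flag to the node agent might look like this hypothetical excerpt from the Kilo DaemonSet (names, paths, and the other args are assumptions; the subnet values just echo the 10.4/16 vs 10.5/16 example from the doc):

```yaml
# Hypothetical DaemonSet excerpt: override the WireGuard subnet per cluster.
containers:
- name: kilo
  image: squat/kilo
  args:
  - --kubeconfig=/etc/kubernetes/kubeconfig
  - --hostname=$(NODE_NAME)
  - --subnet=10.5.0.0/16   # the other cluster would keep e.g. 10.4.0.0/16
```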

tetricky

comment created time in 2 days

issue comment squat/kilo

How to connect two clusters, networking and kubectl?

Hi @tetricky, that's a lot of questions; I'll try my best to answer them all:

  • yes, Kilo will need to be installed on both clusters;
  • as the doc mentions, the Kilo CIDR must be different on the clusters: e.g. one can be 10.4/16 and the other 10.5/16;
  • the Service CIDRs for the clusters (10.43/16 by default on k3s) must be different if you are connecting from one cluster to another cluster's Service IP
  • the Pod CIDRs for the clusters (10.42/16 by default on k3s) must be different if you are connecting from one cluster to another cluster's Pod IP
  • SERVICECIDR refers to the cluster's Service CIDR
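As a sketch under stated assumptions, the variables referenced by the doc's snippets could be set like this before running them (the CIDRs and kubeconfig paths are examples only; use your clusters' real values):

```shell
# Example only: the kgctl/kubectl snippets in the doc expect these to be set.
export SERVICECIDR1=10.43.0.0/16   # cluster1's Service CIDR (k3s default)
export SERVICECIDR2=10.143.0.0/16  # cluster2's Service CIDR; must not overlap
export KUBECONFIG1=$HOME/.kube/cluster1.yaml
export KUBECONFIG2=$HOME/.kube/cluster2.yaml
```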
tetricky

comment created time in 3 days

delete branch squat/kilo

delete branch : peer-dns-names

delete time in 3 days

push event squat/kilo

Lucas Servén Marín

commit sha 116fb7337ab1b2617b6f76b8f8be7f45d1748f97

pkg/k8s: enable peers to use DNS names This commit enables peers defined using the Peer CRD to declare their endpoints using DNS names. Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

Lucas Servén Marín

commit sha ac7fa37fd0bbc084bd029cc618adef551cfc1c45

Merge pull request #42 from squat/peer-dns-names pkg/k8s: enable peers to use DNS names

view details

push time in 3 days

PR merged squat/kilo

pkg/k8s: enable peers to use DNS names

This commit enables peers defined using the Peer CRD to declare their endpoints using DNS names.

Signed-off-by: Lucas Servén Marín lserven@gmail.com

+102 -15

0 comments

6 changed files

squat

pr closed time in 3 days

pull request comment openshift/telemeter

jsonnet/telemeter: Use `host_type` in recording rule join

/retitle Bug 1879965: jsonnet/telemeter: Use host_type in recording rule join

smarterclayton

comment created time in 3 days

push event squat/kilo

Lucas Servén Marín

commit sha 77d0863cccc2e25c06c3aff99ec277e574960fc9

vendor: bump to go 1.14 Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

Lucas Servén Marín

commit sha 79a131572a7a86524c894fd5286410d7717bbd9e

Merge pull request #47 from squat/go114 vendor: bump to go 1.14

view details

Lucas Servén Marín

commit sha 968d13148fe6b9b1e1876eee4afcd16c11ad722b

pkg/mesh: update persistent keepalive on change Previously, when udpdating the persistent keepalive of a node via annotations, the node's WireGuard configuration was not updated. This corrects the behavior. Fixes: #54 Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

Lucas Servén Marín

commit sha b188abf0b6be3b62f45951a6cb26cc6e107d3dd3

manifests: ensure ip6tables kernel module can load Fixes: #55 Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

Lucas Servén Marín

commit sha 9b19bbe69c1528ed5a81a9edbe0c9ef4a9b489af

pkg/iptables: remove nil rules from list on error Previously, when `deleteFromIndex` exited early due to an error, nil rules would be left in the controller's list of rules, which could provoke a panic on the next reconciliation. This commit ensures that nil rules are removed before an early exit. Fixes: #51 Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

Lucas Servén Marín

commit sha 82c819659d8c63a2159e7ee2812f4af2cde53ea8

pkg/mesh: introduce kilo_leader guage metric This commit introduces a new Prometheus metric to detect if the node is a leader of its location, from its own point of view. Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

Lucas Servén Marín

commit sha ddab6930d86e0e712fa3f743842c78cd2b0249dc

Dockerfile: change Alpine pkg CDN The current Alpine package CDN is timing out for aarch64. This commit updates it to another mirror. This commit also changes the channel Alpine channel from edge to v3.12. Note: the Dockerfile overrides the Alpine CDN settings to ensure that a mirror with support for TLS is used. Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

dependabot[bot]

commit sha 88327cd6573c5c09d2d4076dbb843d0c33180473

build(deps): bump websocket-extensions from 0.1.3 to 0.1.4 in /website Bumps [websocket-extensions](https://github.com/faye/websocket-extensions-node) from 0.1.3 to 0.1.4. - [Release notes](https://github.com/faye/websocket-extensions-node/releases) - [Changelog](https://github.com/faye/websocket-extensions-node/blob/master/CHANGELOG.md) - [Commits](https://github.com/faye/websocket-extensions-node/compare/0.1.3...0.1.4) Signed-off-by: dependabot[bot] <support@github.com>

view details

Lucas Servén Marín

commit sha bc0ba422899161f1b7ff129c57610b457fd076e3

Merge pull request #59 from squat/dependabot/npm_and_yarn/website/websocket-extensions-0.1.4 build(deps): bump websocket-extensions from 0.1.3 to 0.1.4 in /website

view details

Lucas Servén Marín

commit sha dc8fb2dd466667c1efbf5b56e0d1b6bac34858e4

website: update dependencies Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

Eddie Wang

commit sha a3bc74d27f6f696ba363cdd4e398fa1b026d63e9

add notes for k3s setup

view details

Eddie Wang

commit sha b6461181460c99f89514c4df4cf61d92c87920be

fix typo and add to k3s-flannel yaml

view details

Lucas Servén Marín

commit sha 3948f5e97a90a32766b03aaae2a495a3bc1d5263

Merge pull request #61 from eddiewang/rancher-usage-notes Add quick note for k3s setup

view details

Ruben Vermeersch

commit sha 858502744bb5bde5b3795962f581c91bc051f47b

Fix typo

view details

Lucas Servén Marín

commit sha ab8df1306eff509b716d5282e1fbc5df8760011c

Merge pull request #65 from rubenv/patch-1 Fix typo

view details

Lucas Servén Marín

commit sha b5cadfe3de3fab5e0595e4b4e05f524f7f2c182c

.travis.yml: only tag latest image if not git tag If we tag a release for, e.g. 0.1.1, after we've already cut a 0.2.0 tag, then CI would tag the 0.1.1 image as `latest`, which is confusing. This commit ensures that we only tag the `latest` image when building from master. Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details
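The rule described in the `.travis.yml` commit above can be sketched as follows. This is a guess at the logic using Travis CI's standard variables, not the actual `.travis.yml` (for tag builds, TRAVIS_BRANCH is set to the tag name, hence the extra TRAVIS_TAG check):

```shell
# Sketch only: decide the Docker tag so that only master builds move `latest`.
TRAVIS_BRANCH="master"
TRAVIS_TAG=""
if [ "$TRAVIS_BRANCH" = "master" ] && [ -z "$TRAVIS_TAG" ]; then
  DOCKER_TAG="latest"    # a build of master that is not a git tag
else
  DOCKER_TAG="$TRAVIS_TAG"   # tag builds publish only their own version
fi
echo "$DOCKER_TAG"
```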

Lucas Servén Marín

commit sha 5d7fb962749bfb3e454a4f7d4767e8c3d72065cb

website/yarn.lock: bump npm deps Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

Lucas Servén Marín

commit sha 7750a08019d94326963b23af22b4931f62000495

website: update syntax for new docusaurus version Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

Lucas Servén Marín

commit sha d3492a72cbc81ac41eba87714117037d6ab466f2

website: add dependency resolutions Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

Lucas Servén Marín

commit sha e3cb7d795891bb25093a284222bb5b5d8d158097

.travis.yml: only tag latest images on master Ensure that only images built from the master branch get tagged with `latest`. Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

push time in 3 days

push event squat/kilo

Lucas Servén Marín

commit sha e3cb7d795891bb25093a284222bb5b5d8d158097

.travis.yml: only tag latest images on master Ensure that only images built from the master branch get tagged with `latest`. Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

push time in 3 days

issue comment squat/kilo

How to connect two clusters, networking and kubectl?

Hi @tetricky this is definitely the right place to ask :))

It sounds like what you're trying to do is to create multi-cluster services, i.e. run a Kubernetes Service in one cluster and make use of it from a Pod running in another cluster. This is totally possible with Kilo and in fact I run this setup all of the time :))

There's a doc in the Kilo repo that explains how to do this: https://github.com/squat/kilo/blob/master/docs/multi-cluster-services.md

Please take a look at it and see if it answers your questions. Note: the scripts in the doc assume you have kgctl installed on your local machine, e.g. the laptop from which you run kubectl, but kgctl is not required on any of the cluster machines.

tetricky

comment created time in 3 days

push event squat/kilo

Lucas Servén Marín

commit sha 8f89b3dfc617172957af8f54ee98b0f4260f30ce

pkg/k8s: enable peers to use DNS names This commit enables peers defined using the Peer CRD to declare their endpoints using DNS names. Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

push time in 3 days

push event squat/kilo

Lucas Servén Marín

commit sha d3492a72cbc81ac41eba87714117037d6ab466f2

website: add dependency resolutions Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

push time in 3 days

pull request comment squat/kilo

build(deps): bump prismjs from 1.20.0 to 1.21.0 in /website

@dependabot rebase

dependabot[bot]

comment created time in 3 days

push event squat/kilo

Lucas Servén Marín

commit sha 5d7fb962749bfb3e454a4f7d4767e8c3d72065cb

website/yarn.lock: bump npm deps Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

Lucas Servén Marín

commit sha 7750a08019d94326963b23af22b4931f62000495

website: update syntax for new docusaurus version Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

push time in 3 days

push event thanos-io/thanos

Raphael Noriode

commit sha 727a351b5a1e0b817c7d894edf8dd9b7cf5e73e1

fixed the broken links that leads to 404 error, Signed-off-by:raphlbrume@gmail.com (#3182) Signed-off-by: Oghenebrume50 <raphlbrume@gmail.com>

view details

push time in 3 days

PR merged thanos-io/thanos

fixed the broken links that leads to 404 error, Signed-off-by:raphlbrume@gmail.com

Signed-off-by: Oghenebrume50 raphlbrume@gmail.com


  • [ ] I added CHANGELOG entry for this change.
  • [x] Change is not relevant to the end user.

Changes

Following issue #3135, I discovered that the broken link (https://thanos.io/tip/tracing.md/#configuration) also appears in other parts of the documentation. I changed it to https://thanos.io/tip/thanos/tracing.md/#configuration, which works correctly.

This PR doesn't fix #3135 but it was inspired by it

Verification

I ran make web-serve and the correct link shows up properly.

+33 -33

0 comments

9 changed files

Oghenebrume50

pr closed time in 3 days

issue closed thanos-io/thanos

404 errors for links in CLI flag text


Thanos, Prometheus and Golang version used: Thanos v0.15.0

Object Storage Provider: N/A

What happened: While working on the thanos helm-charts I noticed the links provided in the flag descriptions were returning 404s, e.g. https://thanos.io/tip/tracing.md/#configuration should be https://thanos.io/v0.15/thanos/tracing.md/#configuration

What you expected to happen: Proper linking to help onboarding and doc referencing :smile:

How to reproduce it (as minimally and precisely as possible): Confirmed with thanos query --help; I will verify the other components today as I work on the charts.

Full logs to relevant components: N/A

Anything else we need to know:


closed time in 3 days

spencergilbert

issue comment observatorium/operator

Pod affinity Support

Hmm, I don't see any pods using the deprecated topology keys; I am pretty sure we made a push to update them several months ago

clyang82

comment created time in 3 days

push event observatorium/observatorium

clyang82

commit sha 4e0818557a4f55699ce15791a31c5a873043585a

Create sa instead of use default

view details

clyang82

commit sha 3fe5941f980ca06282fb4b7268d7a9efd82887f6

make generate

view details

Lucas Servén Marín

commit sha 62b04aa21710b81038c0ae33ceceaf6b7938d56d

Merge pull request #94 from clyang82/sa Create sa instead of use default

view details

push time in 3 days

PR merged observatorium/observatorium

Create sa instead of use default

Pod analysis shows that most observability pods use the default service account, which has cluster-admin access. Each pod should instead use a separate service account bound to only the privileges it needs. None of them should use the default service account or the wildcard privileges [map[apiGroups:[*] resources:[*] verbs:[*]] map[nonResourceURLs:[*] verbs:[*]]] that the default service account currently has.
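The principle described here can be sketched with a minimal, hypothetical manifest (the names, namespace, and rule list are illustrative assumptions, not the PR's actual objects): a dedicated ServiceAccount bound via a Role to only the verbs and resources the pod needs.

```yaml
# Hypothetical minimal replacement for the default SA; all names assumed.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: observatorium-api
  namespace: observatorium
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: observatorium-api
  namespace: observatorium
rules:
# Only the exact privileges the pod needs - no wildcards.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: observatorium-api
  namespace: observatorium
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: observatorium-api
subjects:
- kind: ServiceAccount
  name: observatorium-api
  namespace: observatorium
```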

+31 -0

3 comments

5 changed files

clyang82

pr closed time in 3 days


push event observatorium/operator

clyang82

commit sha c35112ba3b1d110bd92044fd233f6b9ac3a9bafc

Update API and deployments dependency

view details

Lucas Servén Marín

commit sha d861409997eecf76c31db90e16e6ad464c56fddf

Merge pull request #9 from clyang82/api_crds Update API and deployments dependency

view details

push time in 3 days

PR merged observatorium/operator

Update API and deployments dependency

Some fixes in this PR:

  1. absorb the latest deployments dependency to have https://github.com/observatorium/deployments/pull/342
  2. update the API to replace QueryCache with QueryFrontend
  3. remove APIQuery which is not used anymore
  4. correct default-config.libsonnet location

/assign @metalmatze @squat

+22 -71

0 comments

8 changed files

clyang82

pr closed time in 3 days


Pull request review comment thanos-io/thanos

Save allocations using yoloString in label unmarshal; kill storepb.Labels usage.

+// Package containing Zero Copy Labels adapter.
+
+package zcpylabels
+
+import (
+	"fmt"
+	"io"
+	"strings"
+	"unsafe"
+
+	"github.com/prometheus/prometheus/pkg/labels"
+)
+
+func noAllocString(buf []byte) string {
+	return *((*string)(unsafe.Pointer(&buf)))
+}
+
+// LabelsFromPromLabels converts Thanos proto no alloc labels to Prometheus labels in type unsafe manner.
+// It reuses the same memory. Caller should abort using passed labels.Labels.
+func LabelsFromPromLabels(lset labels.Labels) []Label {
+	return *(*[]Label)(unsafe.Pointer(&lset))
+}
+
+// LabelsToPromLabels converts Prometheus labels to Thanos proto no alloc labels in type unsafe manner.
+// It reuses the same memory. Caller should abort using passed []NoAllocLabels

nit: there is no type called NoAllocLabels

brancz

comment created time in 4 days


PR opened thanos-io/thanos

website/layour: fix TOC title

The correct term is Table of Contents in plural. This commit fixes the title for the TOC.

Signed-off-by: Lucas Servén Marín lserven@gmail.com

  • [x] Change is not relevant to the end user.

cc @thisisobate

+2 -2

0 comments

1 changed file

pr created time in 4 days

push event squat/thanos

Lucas Servén Marín

commit sha eee604fad1c33051a1b489a84600fcb3a44dec4a

layouts: small fixes to website index (#3124) Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

Simon Pasquier

commit sha 6c2e7728c5478b23c4cc82bfb00478c78c1dd077

pkg/reloader: improve detection of directory changes (#3136) * pkg/reloader: improve detection of directory changes When watching for changes in directories, the reloader used to rely only on the watch interval and not the inotify events. This commit implements a more efficient detection of changes for watched directories. The change also adds a new `DelayInterval` option that allows to delay the config reload after no additional event are received. Finally a new metric, `thanos_sidecar_reloader_config_apply_operations_total`, is added and `thanos_sidecar_reloader_config_apply_errors_total` has been renamed to `thanos_sidecar_reloader_config_apply_operations_failed_total` for consistency. Signed-off-by: Simon Pasquier <spasquie@redhat.com> * Updates after Kemal's review Signed-off-by: Simon Pasquier <spasquie@redhat.com>

view details

Ben Ye

commit sha e0b7f7b32e9c1071586cb5fb1b4ed4f134235824

querier: Support store matchers and time range filter on labels API (#3133) * support store matchers on labels API Signed-off-by: Ben Ye <yb532204897@gmail.com> Add more unit tests in proxy store Signed-off-by: Ben Ye <yb532204897@gmail.com> * Add changelog Signed-off-by: Ben Ye <yb532204897@gmail.com> * address review comments Signed-off-by: Ben Ye <yb532204897@gmail.com> Co-authored-by: Giedrius Statkevičius <giedriuswork@gmail.com>

view details

Uchechukwu Obasi

commit sha e31e93be758fc4c1e5befff5aa2e3f5ca2340062

fixed search to filter results based on the version content pages (#3153) Signed-off-by: thisisobate <obasiuche62@gmail.com>

view details

Kunal Kushwaha

commit sha 787cd80a9794730af22a528c99bd99ae75d1bb05

added key prop in Navigation (#3161) Signed-off-by: Kunal Kushwaha <kunalkushwaha453@gmail.com>

view details

Simon Pasquier

commit sha b1fdb6385550fc6706690f833d2eb8d5c8d6919f

pkg/gate: Prefix gate metrics for selects (#3154) * pkg/thanos: prefix gate metrics for concurrent selects Thanos query exposed `gate_queries_in_flight` and `gate_duration_seconds_bucket` metrics for concurrent selects. These metrics are now prefixed by `thanos_query_concurrent_selects_`. Signed-off-by: Simon Pasquier <spasquie@redhat.com> * *: refactor instrumentation of the gate package This change deprecates the gate.(*Keeper) struct. When Keeper is used to create several gates, the metric tracking the number of in-flight metric isn't meaningful because it is hard to say whether requests are being blocked or not. As such the `thanos_query_concurrent_selects_gate_queries_in_flight` is removed. The following metrics have been added to record the maximum number of concurrent requests per gate: * `thanos_query_gate_queries_max` * `thanos_bucket_store_series_gate_queries_max`, previously known as `thanos_bucket_store_queries_concurrent_max.` * `thanos_memcached_getmulti_gate_queries_max` Signed-off-by: Simon Pasquier <spasquie@redhat.com> * Decompose registry wrapping for concurrent selects Signed-off-by: Simon Pasquier <spasquie@redhat.com> * Rename gateFn to gateProviderFn Signed-off-by: Simon Pasquier <spasquie@redhat.com>

view details

Kemal Akkoyun

commit sha a0ef9821508631de8f6f507cb5d9493d0ee4c7c5

Fix broken link in release notes (#3165) Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

view details

Simon Pasquier

commit sha c19f24b2dd4fb5fe394b46f48750b72997919abe

CHANGELOG.md: include #3154 changes (#3167) Signed-off-by: Simon Pasquier <spasquie@redhat.com>

view details

Bartlomiej Plotka

commit sha dcd703a499fb519acc870323f40b420702a471ad

Extending GH issues staleness to kick in only after 2mo. (#3159) Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

view details

Daksh Sagar

commit sha 24a8b584d14bf1c7849a23df1d59504dd95b9ab0

React-UI: Replace Alert components with UncontrolledAlert (#3169) * replace Alert component with UncontrolledAlert Signed-off-by: Daksh Sagar <daksh.sagar@yahoo.com> * Fix necessary test suites Signed-off-by: Daksh Sagar <daksh.sagar@yahoo.com> * run >> deleting asset file >> writing assets make[1]: Entering directory '/home/daksh/go/src/github.com/thanos-io/thanos' >> formatting go code >> formatting shell scripts make[1]: Leaving directory '/home/daksh/go/src/github.com/thanos-io/thanos' Signed-off-by: Daksh Sagar <daksh.sagar@yahoo.com>

view details

Simon Pasquier

commit sha 1078fe701ac2c58c06342de91dad23ca7963728a

*: improve latency when streaming series from Prometheus (#3146) I've found that when requesting many series (in the order of ten thousands), the Thanos sidecar spends half of its time computing the number of received series. To calculate the number of series, it needs to build a label-based identifier for each chunked series and compare it with the previous identifier. Eventually this number is only used for logging and tracing so it doesn't feel like it's worth the penalty. This change adds an histogram metric, `thanos_sidecar_prometheus_store_received_frames`, which tracks the number of frames per request received from the Prometheus remote read API (buckets: 10, 100, 1000, 10000, 100000). It can be used to evaluate whether expensive Series requests are being performed. Signed-off-by: Simon Pasquier <spasquie@redhat.com>

view details

Uchechukwu Obasi

commit sha e3ac46868fe85a561e3ff81573f0d63872942fda

fixed TOC to contain h1 to h4 tags (#3175) Signed-off-by: thisisobate <obasiuche62@gmail.com>

view details

Lucas Servén Marín

commit sha 6cc130a65a15d8b1ebac674041b1ebf766413c17

website/layour: fix TOC title The correct term is `Table of Contents` in plural. This commit fixes the title for the TOC. Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

push time in 4 days

create branch squat/thanos

branch : toc

created branch time in 4 days

issue comment squat/kilo

How to achieve install of Kilo on k3s cluster, behind NAT?

I'm very glad everything worked out! Thanks for the update :tada:

tetricky

comment created time in 4 days

pull request comment observatorium/deployments

Make loki optional

ack, I think we use this pattern more frequently in the repo

clyang82

comment created time in 4 days


issue closed squat/kilo

Tag a release

Hi Squat!

Could you tag releases with the updates/breaking changes?

Pulling from latest is pretty risky as it could take down a node if there is a bug. At the moment I pull from a sha hash instead but it becomes hard to figure out which version it corresponds to and whether there are any important changes/fixes I need to update to get.

closed time in 5 days

SerialVelocity

issue comment squat/kilo

Tag a release

wooo! the tag is on dockerhub :))) https://hub.docker.com/layers/squat/kilo/0.1.0/images/sha256-4fd831af093a047e6ed16b0a6cdf09afcb56ee9069864b710fe354bc28855f3e?context=explore

I'll close this as resolved for now

SerialVelocity

comment created time in 5 days

push event squat/kilo

Lucas Servén Marín

commit sha b5cadfe3de3fab5e0595e4b4e05f524f7f2c182c

.travis.yml: only tag latest image if not git tag If we tag a release for, e.g. 0.1.1, after we've already cut a 0.2.0 tag, then CI would tag the 0.1.1 image as `latest`, which is confusing. This commit ensures that we only tag the `latest` image when building from master. Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

view details

push time in 5 days


issue comment squat/kilo

Tag a release

I just cut release 0.1.0 :tada: https://github.com/squat/kilo/releases/tag/0.1.0. As a followup, I'll make sure that CI creates images for tags.

SerialVelocity

comment created time in 5 days

release squat/kilo

0.1.0

released time in 5 days

created tag squat/kilo

tag 0.1.0

Kilo is a multi-cloud network overlay built on WireGuard and designed for Kubernetes (k8s + wg = kg)

created time in 5 days

create branch squat/kilo

branch : release-0.1

created branch time in 5 days

delete tag squat/kilo

delete tag : 0.1.0

delete time in 5 days

delete branch squat/kilo

delete branch : 0.1

delete time in 5 days

created tag squat/kilo

tag 0.1.0

Kilo is a multi-cloud network overlay built on WireGuard and designed for Kubernetes (k8s + wg = kg)

created time in 5 days

create branch squat/kilo

branch : 0.1

created branch time in 5 days

issue comment squat/kilo

Tag a release

Hi @SerialVelocity this sounds very good to me. Kilo has been fairly stable for a while and has advanced a lot since last year. I think it's time to at least tag a v0.1.0. Does that SGTY?

SerialVelocity

comment created time in 5 days

issue comment squat/kilo

How to achieve install of Kilo on k3s cluster, behind NAT?

Hi, thanks for starting this issue :) this looks like something is corrupt in the kubeconfig you copied from one node to the other. Specifically, the Kilo Pod on the master is starting perfectly fine but only the Pods on the worker nodes are failing, so there must be a difference in the file. Can you please share a diff of the files on each node? I think that would help us track down the problem.

tetricky

comment created time in 5 days


push event observatorium/operator

clyang82

commit sha e9d7ea7657d1cebbd4d8edf617f622307167f25a

Support configure replicas for each component

view details

clyang82

commit sha 6b75cccdf283c00bb3d66b0ff1daa614b4317f3f

update deployments dependency

view details

Lucas Servén Marín

commit sha 5016f31848d38789b36bf6c836f718533e5e4d38

Merge pull request #5 from clyang82/replicas Support configure replicas for each component

view details

push time in 5 days

created repository observatorium/opentelemetry-generate-span-java

created time in 5 days


issue comment squat/kilo

k3s kilo pods crashlooping

Ok, that's clear to me now :) Sounds like you have a path forward. If you run into more issues or need help, let's follow up in a new issue particular to this special use case, or we can chat in the Kubernetes Slack :)

ibalajiarun

comment created time in 5 days

issue comment squat/kilo

k3s kilo pods crashlooping

Aha, I see. Are these all VMs on the same machine? If so, then localhost or 127.0.0.1 is likely OK. Kilo doesn't care whether the API endpoint is 127.0.0.1, a DNS name, or anything else. The only important thing is that the endpoint (host+port) be accessible from every worker node before the VPN is established; this is why the endpoint should typically be a public IP, as the nodes are often in different cloud providers or networks.
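The reachability requirement above can be illustrated with a small helper that extracts the host+port from a kubeconfig `server:` URL, so each worker can then probe it (e.g. with `nc -z host port`) before the VPN is up. This is a hypothetical sketch; the function name and the example address are placeholders, not part of Kilo.

```shell
# Hypothetical helper: reduce a kubeconfig server URL to host:port.
# Workers could then check reachability of that endpoint before Kilo runs.
api_endpoint() {
  # $1: a URL like https://host:6443
  echo "$1" | sed -E 's#^[a-z]+://##; s#/.*$##'
}

api_endpoint "https://k3s.example.com:6443"  # -> k3s.example.com:6443
```

A 127.0.0.1 endpoint would pass this extraction but fail any reachability check run from another machine, which is exactly the failure mode described above.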

ibalajiarun

comment created time in 5 days

pull request comment observatorium/operator

Support configure replicas for each component

Hm, btw, if the vendor directory had these weird changes, we should really modify the build test so that this would fail. Otherwise, we will almost certainly merge changes like this in the future.

clyang82

comment created time in 5 days

issue comment squat/kilo

k3s kilo pods crashlooping

Hi @tetricky, in the future let's tackle these in a new issue :)) Regarding the problem you're experiencing, please take note of the comment in the k3s manifest: https://github.com/squat/kilo/blob/master/manifests/kilo-k3s.yaml#L169-L172

We need to copy the k3s.yaml file from the master to the same place on the worker nodes. In your case, that means you should:

  1. delete the Kilo daemonset;
  2. delete the /etc/rancher/k3s/k3s.yaml from the worker nodes;
  3. make sure the k3s.yaml file from the master node uses a server address that is routable from the internet, e.g. an externally resolvable DNS name that resolves to a public IP address;
  4. copy the file to the worker nodes; and then
  5. reapply the kilo manifests to the cluster.
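The steps above can be sketched as a dry run. This is a hedged illustration only: `run` merely echoes each command, `worker1` and the example address are placeholders, and the manifest path stands in for the kilo-k3s manifest from the repo.

```shell
# Dry-run sketch of the repair steps; `run` echoes commands instead of
# executing them, so nothing here touches a real cluster.
run() { echo "+ $*"; }

run kubectl delete daemonset kilo -n kube-system             # step 1
run ssh worker1 "sudo rm /etc/rancher/k3s/k3s.yaml"          # step 2
# step 3: on the master, make sure /etc/rancher/k3s/k3s.yaml uses a
# routable address, e.g.  server: https://k3s.example.com:6443
run scp /etc/rancher/k3s/k3s.yaml worker1:/etc/rancher/k3s/  # step 4
run kubectl apply -f kilo-k3s.yaml                           # step 5
```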
ibalajiarun

comment created time in 5 days

delete branch observatorium/operator

delete branch : kakkoyun-patch-1

delete time in 5 days

push event observatorium/operator

Kemal Akkoyun

commit sha 9f8e9dbf5652ef306560112b038486b4f0e6e065

Create README.md

view details

Kemal Akkoyun

commit sha 20e6916a60f6974ca209ded44f93414e03fa354b

Fix directory

view details

Kemal Akkoyun

commit sha abea01a40c2d5a3baaee1124551a5e5c3c69d663

Update README.md Co-authored-by: Lucas Servén Marín <lserven@gmail.com>

view details

Kemal Akkoyun

commit sha bfd2fb4146de822958d9510da6bbe547f42f719b

Update README.md Co-authored-by: Lucas Servén Marín <lserven@gmail.com>

view details

Lucas Servén Marín

commit sha b94b238ecf5a5fbf438ceae9000dcd30cca37b2b

Merge pull request #6 from observatorium/kakkoyun-patch-1 Create README.md

view details

push time in 5 days

PR merged observatorium/operator

Create README.md
+11 -0

2 comments

1 changed file

kakkoyun

pr closed time in 5 days


Pull request review comment observatorium/operator

Create README.md

+# Observatorium Operator
+
+[![Build Status](https://circleci.com/gh/observatorium/operator.svg?style=svg)](https://circleci.com/gh/observatorium/operator)
+
+Operator deploying the Observatorium project.
+Currently, this includes:
+
+0. operator itself.

not exactly, this is the result of

0. x
1. y
2. z
  1. x
  2. y
  3. z

See that it becomes zero-indexed

kakkoyun

comment created time in 5 days


Pull request review comment observatorium/operator

Create README.md

+# Observatorium Operator
+
+[![Build Status](https://circleci.com/gh/observatorium/operator.svg?style=svg)](https://circleci.com/gh/observatorium/operator)
+
+Operator deploying the Observatorium project.
+Currently, this includes:
+
+0. operator itself.
+0. example configuration for deploying Observatorium via the Observatorium Operator.
1. example configuration for deploying Observatorium via the Observatorium Operator.
kakkoyun

comment created time in 5 days

Pull request review comment observatorium/operator

Create README.md

+# Observatorium Operator
+
+[![Build Status](https://circleci.com/gh/observatorium/operator.svg?style=svg)](https://circleci.com/gh/observatorium/operator)
+
+Operator deploying the Observatorium project.
+Currently, this includes:
+
+0. operator itself.
1. the operator itself; and
kakkoyun

comment created time in 5 days


Pull request review comment observatorium/operator

Create README.md

+# Observatorium Operator
+
+[![Build Status](https://circleci.com/gh/observatorium/operator.svg?style=svg)](https://circleci.com/gh/observatorium/operator)
+
+Operator deploying the Observatorium project.
+Currently, this includes:
+
+0. operator itself.

I don't have a strong opinion here, but this makes the list zero-indexed, should we make it start with a 1?

kakkoyun

comment created time in 5 days


pull request comment openshift/telemeter

Bug 1879009: jsonnet/telemeter: Fix cluster_subscribed recording rule

/bugzilla refresh

smarterclayton

comment created time in 5 days

pull request comment openshift/telemeter

Bug 1879009: jsonnet/telemeter: Fix cluster_subscribed recording rule

/bugzilla refresh

smarterclayton

comment created time in 5 days

pull request comment openshift/telemeter

jsonnet/telemeter: Fix cluster_subscribed recording rule

/retitle Bug 1879009: jsonnet/telemeter: Fix cluster_subscribed recording rule

smarterclayton

comment created time in 5 days

pull request comment openshift/telemeter

jsonnet/telemeter: Fix cluster_subscribed recording rule

/title Bug 1879009: jsonnet/telemeter: Fix cluster_subscribed recording rule

smarterclayton

comment created time in 5 days

push event observatorium/deployments

clyang82

commit sha 08d872ccdceb13188ec3eb5742f65e639d1544b4

Support replicas and update api version

view details

Lucas Servén Marín

commit sha 0f3097bded073e691bdd799f0edd12d531cc3821

Merge pull request #339 from clyang82/replicas Support replicas and update api version

view details

push time in 5 days


Pull request review comment observatorium/deployments

Support replicas and update api version

     retentionResolutionRaw: '14d',
     retentionResolution5m: '1s',
     retentionResolution1h: '1s',
+    replicas: 1,

looks good with https://github.com/observatorium/operator/pull/5 :+1:

clyang82

comment created time in 5 days


pull request comment openshift/telemeter

jsonnet/telemeter: Fix cluster_subscribed recording rule

/retest

smarterclayton

comment created time in 6 days

pull request comment openshift/telemeter

jsonnet/telemeter: Fix cluster_subscribed recording rule

Semi-urgent, since the recording rules are kind of expensive right now and we don't want to unnecessarily abuse the ruler.

smarterclayton

comment created time in 6 days

pull request comment openshift/telemeter

jsonnet/telemeter: Fix cluster_subscribed recording rule

Let's fix this in prod now and we can make it pretty after.

smarterclayton

comment created time in 6 days

pull request comment observatorium/deployments

Remove the Operator after moving to separate repository

This is looking pretty good. It looks like the build test is still doing something related to the operator and thus failing.

metalmatze

comment created time in 6 days

delete branch thanos-io/thanos

delete branch : fix_broken_link

delete time in 6 days
