If you are wondering where this site's data comes from, see https://api.github.com/users/linki/events. GitMemory does not store any data itself; it only uses NGINX to cache the upstream data for a short period. The idea behind GitMemory is simply to give users a better reading experience.
Martin Linkhorst linki @zalando Berlin

helm/charts 15079

⚠️(OBSOLETE) Curated applications for Kubernetes

linki/chaoskube 1349

chaoskube periodically kills random pods in your Kubernetes cluster.

Eneco/landscaper 340

Deprecated. Takes a set of Helm Chart references with values (a desired state), and realizes this in a Kubernetes cluster

linki/cloudformation-operator 93

A Kubernetes operator for managing CloudFormation stacks via a CustomResource

linki/cryptoprom 17

CryptoProm is a Prometheus metrics exporter for Cryptocurrency market prices.

linki/0x-go 7

[Experimental] A collection of tools relating to Ethereum's 0xProject (v2)

linki/armor-ingress-controller 6

A Kubernetes Ingress Controller for @labstack's Armor

linki/Android_Pusher 1

A simple pusher implementation and activity for Android!

linki/dm-polymorphic 1

Enables ActiveRecord-style polymorphism for DataMapper. This fork has been completely integrated into hassox/dm-polymorphic.

linki/0x-mesh 0

A peer-to-peer network for sharing 0x orders

created repository tamalsaha/kube-objectref

created time in 2 hours

issue comment graphprotocol/graph-node

Unexpected RPC error, error: Transport("Unexpected response status code: 502 Bad Gateway"), component: BlockStream

This is an issue with our BSC provider; we are working with them to get it resolved, but it might take a little longer before a fix is available.

What is the status of this issue? I'm hitting the same problem and just want to know when it will be fixed, or whether there is any workaround.

mohamed-nasir

comment created time in 3 hours

created repository wandrs/gitea

created time in 4 hours

Pull request review comment graphprotocol/graph-node

instance manager: Abstract over `Blockchain::TriggersAdapter`

 impl TriggersAdapterTrait<Chain> for TriggersAdapter {
             }))
         }))
     }
+
+    async fn parent_ptr(&self, block: &BlockPtr) -> Result<BlockPtr, Error> {
+        use futures::stream::Stream;
+        use graph::prelude::LightEthereumBlockExt;
+
+        let blocks = self
+            .eth_adapter
+            .load_blocks(
+                self.logger.cheap_clone(),
+                self.chain_store.cheap_clone(),
+                HashSet::from_iter(Some(block.hash_as_h256())),
+            )
+            .collect()
+            .compat()
+            .await?;
+        assert_eq!(blocks.len(), 1);
+        Ok(blocks[0]
+            .parent_ptr()
+            .expect("genesis block cannot be reverted"))

It's just that the method is called parent_ptr. Maybe add a comment to it noting that it will never be called for the genesis block (and that it's therefore OK to panic in that case), and a comment next to the expect saying that it's only used for reverts.
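Purely as an illustration of that suggestion (this is not code from the PR, just the hunk above with the proposed comments added), it could read along these lines:

    /// Returns the pointer to this block's parent.
    ///
    /// Note: this is only used when reverting blocks, so it is never called
    /// for the genesis block and panicking in that case is acceptable.
    async fn parent_ptr(&self, block: &BlockPtr) -> Result<BlockPtr, Error> {
        use futures::stream::Stream;
        use graph::prelude::LightEthereumBlockExt;

        let blocks = self
            .eth_adapter
            .load_blocks(
                self.logger.cheap_clone(),
                self.chain_store.cheap_clone(),
                HashSet::from_iter(Some(block.hash_as_h256())),
            )
            .collect()
            .compat()
            .await?;
        assert_eq!(blocks.len(), 1);
        Ok(blocks[0]
            .parent_ptr()
            // Only reached during reverts; the genesis block has no parent.
            .expect("genesis block cannot be reverted"))
    }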

leoyvens

comment created time in 4 hours

Pull request review comment graphprotocol/graph-node

instance manager: Abstract over `Blockchain::TriggersAdapter`

 impl TriggersAdapterTrait<Chain> for TriggersAdapter {
     async fn triggers_in_block(
         &self,
+        logger: &Logger,

urgh .. I am having a really hard time understanding which loggers are decorated how, but it's a good enough reason.

leoyvens

comment created time in 4 hours

issue comment graphprotocol/graph-node

Running local graph node with ganache-cli results in `(node:10343) UnhandledPromiseRejectionWarning: Error: Incompatible EIP155-based V 0 and chain id 1. See the second parameter of the Transaction constructor to set the chain id.`

I think ganache may inherit the chainId from ALCHEMY. Do you get the same error if you pass a localhost chainId (e.g. 1337)?

Same error when passing --chainId 1337 too.
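For reference, the workaround being tested in this thread amounts to starting ganache with an explicit local chain id (only --chainId 1337 comes from the comments above; the port flag is illustrative):

ganache-cli --chainId 1337 --port 8545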

n1punp

comment created time in 5 hours

pull request comment kubernetes-sigs/external-dns

Ingress class filtering

/kind feature

dsalisbury

comment created time in 6 hours

PR opened graphprotocol/graph-node

Fix failpoints

The data-source-revert integration test wasn't really doing what it should: the feature wasn't set, so the failpoints! macro was a no-op.
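For context, this is the usual pattern with Rust failpoints (graph-node's exact wiring may differ; the crate version, feature name, and function below are illustrative): the macro expands to nothing unless the corresponding cargo feature is enabled, so a test that relies on it silently tests nothing.

// Cargo.toml (illustrative):
//   [dependencies]
//   fail = "0.5"
//   [features]
//   failpoints = ["fail/failpoints"]
//
// Run the test with: cargo test --features failpoints

use fail::fail_point;

fn store_block() -> Result<(), String> {
    // Without the "failpoints" feature this macro is a no-op,
    // so the error below can never be injected.
    fail_point!("store_block", |_| Err("injected failure".to_string()));
    Ok(())
}

#[cfg(test)]
mod tests {
    #[test]
    fn injected_failure_is_observed() {
        // Configure the failpoint to run its closure (the "return" action).
        fail::cfg("store_block", "return").unwrap();
        assert!(super::store_block().is_err());
    }
}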

+1 -3

0 comments

3 changed files

pr created time in 7 hours

create branch graphprotocol/graph-node

branch : leo/fix-failpoints

created branch time in 7 hours

push event graphprotocol/graph-node

tilacog

commit sha 0a4cf2ec4603cff50b9950ddbccda253e30d9539

leo's comments - pt 3

view details

push time in 7 hours

issue comment graphprotocol/graph-node

Out of gas tx triggers subgraph call handler

Got it, thanks for the explanation!

fnanni-0

comment created time in 7 hours

Pull request review comment graphprotocol/graph-node

instance manager: Abstract over `Blockchain::TriggersAdapter`

 impl TriggersAdapterTrait<Chain> for TriggersAdapter {
     async fn triggers_in_block(
         &self,
+        logger: &Logger,

It differs because of https://github.com/graphprotocol/graph-node/blob/6e1dfbd31f777d7f9b1d3cf6e199570dd1870f7a/core/src/subgraph/instance_manager.rs#L708-L711

leoyvens

comment created time in 7 hours

Pull request review comment graphprotocol/graph-node

instance manager: Abstract over `Blockchain::TriggersAdapter`

 impl TriggersAdapterTrait<Chain> for TriggersAdapter {
             }))
         }))
     }
+
+    async fn parent_ptr(&self, block: &BlockPtr) -> Result<BlockPtr, Error> {
+        use futures::stream::Stream;
+        use graph::prelude::LightEthereumBlockExt;
+
+        let blocks = self
+            .eth_adapter
+            .load_blocks(
+                self.logger.cheap_clone(),
+                self.chain_store.cheap_clone(),
+                HashSet::from_iter(Some(block.hash_as_h256())),
+            )
+            .collect()
+            .compat()
+            .await?;
+        assert_eq!(blocks.len(), 1);
+        Ok(blocks[0]
+            .parent_ptr()
+            .expect("genesis block cannot be reverted"))

This is getting at parent_ptr being used only for reverts, and for that reason it will never be called for genesis.

leoyvens

comment created time in 7 hours

push event graphprotocol/graph-node

tilacog

commit sha bd06f660552d111854ccb2e44840b3777c017a32

leo's comments - pt 1

view details

tilacog

commit sha fb2bbd9bf9b729d575475c1172840b776b66b0a6

revert

view details

tilacog

commit sha c7cac222141edf0279c87aac43944936495f66c8

leo's comments - pt 2

view details

push time in 8 hours

Pull request review comment graphprotocol/graph-node

instance manager: Abstract over `Blockchain::TriggersAdapter`

 impl TriggersAdapterTrait<Chain> for TriggersAdapter {
             }))
         }))
     }
+
+    async fn parent_ptr(&self, block: &BlockPtr) -> Result<BlockPtr, Error> {
+        use futures::stream::Stream;
+        use graph::prelude::LightEthereumBlockExt;
+
+        let blocks = self
+            .eth_adapter
+            .load_blocks(
+                self.logger.cheap_clone(),
+                self.chain_store.cheap_clone(),
+                HashSet::from_iter(Some(block.hash_as_h256())),
+            )
+            .collect()
+            .compat()
+            .await?;
+        assert_eq!(blocks.len(), 1);
+        Ok(blocks[0]
+            .parent_ptr()
+            .expect("genesis block cannot be reverted"))

That error should probably say the genesis block does not have a parent

leoyvens

comment created time in 8 hours

Pull request review comment graphprotocol/graph-node

instance manager: Abstract over `Blockchain::TriggersAdapter`

 impl TriggersAdapterTrait<Chain> for TriggersAdapter {
     async fn triggers_in_block(
         &self,
+        logger: &Logger,

Do we need the explicit logger? How does that differ from self.logger?

leoyvens

comment created time in 8 hours

Pull request review comment graphprotocol/graph-node

instance manager: Abstract over `Blockchain::TriggersAdapter`

 where
                     // First, load the block in order to get the parent hash.
                     if let Err(e) = ctx
                         .inputs
-                        .eth_adapter
-                        .load_blocks(
-                            logger.cheap_clone(),
-                            ctx.inputs.chain_store.cheap_clone(),
-                            HashSet::from_iter(Some(subgraph_ptr.hash_as_h256())),
-                        )
-                        .collect()
-                        .compat()
+                        .triggers_adapter
+                        .parent_ptr(&subgraph_ptr)
                         .await
-                        .map(|blocks| {
-                            assert_eq!(blocks.len(), 1);
-                            blocks.into_iter().next().unwrap()
-                        })
-                        .and_then(|block| {
-                            // Produce pointer to parent block (using parent hash).
-                            let parent_ptr = block
-                                .parent_ptr()
-                                .expect("genesis block cannot be reverted");
-
+                        .and_then(|parent_ptr| {

Nice simplification!

leoyvens

comment created time in 8 hours

PR opened graphprotocol/graph-node

instance manager: Abstract over `Blockchain::TriggersAdapter`

Part of the multiblockchain refactor.

+101 -166

0 comments

7 changed files

pr created time in 8 hours

create branch graphprotocol/graph-node

branch : leo/instance-manager-triggers-adapter

created branch time in 8 hours

issue comment kubernetes-sigs/external-dns

Unable to get external-dns to work with CoreDNS and etcd

I was able to figure out the issue. The example deployment manifest for external-dns is set to watch ingress, so I needed to change that to service. I had also missed the part on the homepage explaining that the DNS entries are controlled by annotations. After adding the annotation external-dns.alpha.kubernetes.io/hostname: my-nginx.default.svc.example.org to the service, I could see the A record being created and stored in etcd, and CoreDNS then replied with the correct IP.
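In manifest form, the fix described above amounts to changing --source=ingress to --source=service in the external-dns Deployment args and annotating the Service, roughly like this (the annotation value is the one quoted above; the rest of the Service spec is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: my-nginx.default.svc.example.org
spec:
  type: LoadBalancer
  selector:
    app: my-nginx
  ports:
  - port: 80
    targetPort: 80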

eroji

comment created time in 9 hours

issue closed kubernetes-sigs/external-dns

Unable to get external-dns to work with CoreDNS and etcd

I followed the instructions at https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/coredns.md for setting up CoreDNS with an etcd backend. The instructions are quite out of date, but I believe I have it configured properly.

The etcd cluster is deployed via Helm 3 and etcd-operator.

# helm install etcd --set customResources.createEtcdClusterCRD=true stable/etcd-operator
WARNING: This chart is deprecated
W0510 16:08:34.384993   30971 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
NAME: etcd
LAST DEPLOYED: Mon May 10 16:08:34 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Watch etcd cluster start
  kubectl get pods -l etcd_cluster=etcd-cluster --namespace default -w

2. Confirm etcd cluster is healthy
  $ kubectl run --rm -i --tty --env="ETCDCTL_API=3" --env="ETCDCTL_ENDPOINTS=http://etcd-cluster-client:2379" --namespace default etcd-test --image quay.io/coreos/etcd --restart=Never -- /bin/sh -c 'watch -n1 "etcdctl  member list"'

3. Interact with the cluster!
  $ kubectl run --rm -i --tty --env ETCDCTL_API=3 --namespace default etcd-test --image quay.io/coreos/etcd --restart=Never -- /bin/sh
  / # etcdctl --endpoints http://etcd-cluster-client:2379 put foo bar
  / # etcdctl --endpoints http://etcd-cluster-client:2379 get foo
  OK
  (ctrl-D to exit)
  
4. Optional
  Check the etcd-operator logs
  export POD=$(kubectl get pods -l app=etcd-etcd-operator-etcd-operator --namespace default --output name)
  kubectl logs $POD --namespace=default

etcd cluster comes up healthy

/ # etcdctl cluster-health
member 465ee2db5eb50225 is healthy: got healthy result from http://etcd-cluster-fc9rn6q6sg.etcd-cluster.default.svc:2379
member 70439035f5b6d55f is healthy: got healthy result from http://etcd-cluster-fps7vjhj85.etcd-cluster.default.svc:2379
member 7123bb086e4f0231 is healthy: got healthy result from http://etcd-cluster-l4cjm7kkfg.etcd-cluster.default.svc:2379
cluster is healthy

etcd services are also online

etcd-cluster            ClusterIP      None            <none>         2379/TCP,2380/TCP   29m
etcd-cluster-client     ClusterIP      10.43.251.108   <none>         2379/TCP            29m
etcd-restore-operator   ClusterIP      10.43.141.142   <none>         19999/TCP           30m

Following which I deploy CoreDNS with Helm 3 and the following modified values.yaml

# Default values for coredns.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

image:
  repository: coredns/coredns
  tag: "1.7.1"
  pullPolicy: IfNotPresent

replicaCount: 1

resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi

## Create HorizontalPodAutoscaler object.
##
# autoscaling:
#   minReplicas: 1
#   maxReplicas: 10
#   metrics:
#   - type: Resource
#     resource:
#       name: cpu
#       targetAverageUtilization: 60
#   - type: Resource
#     resource:
#       name: memory
#       targetAverageUtilization: 60

rollingUpdate:
  maxUnavailable: 1
  maxSurge: 25%

# Under heavy load it takes more than standard time to remove Pod endpoint from a cluster.
# This will delay termination of our pod by `preStopSleep`. To make sure kube-proxy has
# enough time to catch up.
# preStopSleep: 5
terminationGracePeriodSeconds: 30

podAnnotations: {}
#  cluster-autoscaler.kubernetes.io/safe-to-evict: "false"

serviceType: "ClusterIP"

prometheus:
  service:
    enabled: false
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9153"
  monitor:
    enabled: false
    additionalLabels: {}
    namespace: ""

service:
# clusterIP: ""
# loadBalancerIP: ""
# externalIPs: []
# externalTrafficPolicy: ""
  annotations: {}

serviceAccount:
  create: false
  # The name of the ServiceAccount to use
  # If not set and create is true, a name is generated using the fullname template
  name:

rbac:
  # If true, create & use RBAC resources
  create: true
  # If true, create and use PodSecurityPolicy
  pspEnable: false
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  # name:

# isClusterService specifies whether chart should be deployed as cluster-service or normal k8s app.
isClusterService: true

# Optional priority class to be used for the coredns pods. Used for autoscaler if autoscaler.priorityClassName not set.
priorityClassName: ""

# Default zone is what Kubernetes recommends:
# https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns-configmap-options
servers:
- zones:
  - zone: .
  port: 53
  plugins:
  - name: errors
  # Serves a /health endpoint on :8080, required for livenessProbe
  - name: health
    configBlock: |-
      lameduck 5s
  # Serves a /ready endpoint on :8181, required for readinessProbe
  - name: ready
  # Required to query kubernetes API for data
  - name: kubernetes
    parameters: cluster.local in-addr.arpa ip6.arpa
    configBlock: |-
      pods insecure
      fallthrough in-addr.arpa ip6.arpa
      ttl 30
  # Serves a /metrics endpoint on :9153, required for serviceMonitor
  - name: prometheus
    parameters: 0.0.0.0:9153
  - name: forward
    parameters: . /etc/resolv.conf
  - name: cache
    parameters: 30
  - name: loop
  - name: reload
  - name: loadbalance
  - name: etcd
    parameters: example.org
    configBlock: |-
      stubzones
      path /skydns
      endpoint http://10.43.251.108:2379

# Complete example with all the options:
# - zones:                 # the `zones` block can be left out entirely, defaults to "."
#   - zone: hello.world.   # optional, defaults to "."
#     scheme: tls://       # optional, defaults to "" (which equals "dns://" in CoreDNS)
#   - zone: foo.bar.
#     scheme: dns://
#     use_tcp: true        # set this parameter to optionally expose the port on tcp as well as udp for the DNS protocol
#                          # Note that this will not work if you are also exposing tls or grpc on the same server
#   port: 12345            # optional, defaults to "" (which equals 53 in CoreDNS)
#   plugins:               # the plugins to use for this server block
#   - name: kubernetes     # name of plugin, if used multiple times ensure that the plugin supports it!
#     parameters: foo bar  # list of parameters after the plugin
#     configBlock: |-      # if the plugin supports extra block style config, supply it here
#       hello world
#       foo bar

# expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#affinity-v1-core
# for example:
#   affinity:
#     nodeAffinity:
#      requiredDuringSchedulingIgnoredDuringExecution:
#        nodeSelectorTerms:
#        - matchExpressions:
#          - key: foo.bar.com/role
#            operator: In
#            values:
#            - master
affinity: {}

# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}

# expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#toleration-v1-core
# for example:
#   tolerations:
#   - key: foo.bar.com/role
#     operator: Equal
#     value: master
#     effect: NoSchedule
tolerations: []

# https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
podDisruptionBudget: {}

# configure custom zone files as per https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/
zoneFiles: []
#  - filename: example.db
#    domain: example.com
#    contents: |
#      example.com.   IN SOA sns.dns.icann.com. noc.dns.icann.com. 2015082541 7200 3600 1209600 3600
#      example.com.   IN NS  b.iana-servers.net.
#      example.com.   IN NS  a.iana-servers.net.
#      example.com.   IN A   192.168.99.102
#      *.example.com. IN A   192.168.99.102

# optional array of extra volumes to create
extraVolumes: []
# - name: some-volume-name
#   emptyDir: {}
# optional array of mount points for extraVolumes
extraVolumeMounts: []
# - name: some-volume-name
#   mountPath: /etc/wherever

# optional array of secrets to mount inside coredns container
# possible usecase: need for secure connection with etcd backend
extraSecrets: []
# - name: etcd-client-certs
#   mountPath: /etc/coredns/tls/etcd
# - name: some-fancy-secret
#   mountPath: /etc/wherever

# Custom labels to apply to Deployment, Pod, Service, ServiceMonitor. Including autoscaler if enabled.
customLabels: {}

## Alternative configuration for HPA deployment if wanted
#
hpa:
  enabled: false
  minReplicas: 1
  maxReplicas: 2
  metrics: {}

## Configure a cluster-proportional-autoscaler for coredns
# See https://github.com/kubernetes-incubator/cluster-proportional-autoscaler
autoscaler:
  # Enabled the cluster-proportional-autoscaler
  enabled: false

  # Number of cores in the cluster per coredns replica
  coresPerReplica: 256
  # Number of nodes in the cluster per coredns replica
  nodesPerReplica: 16
  # Min size of replicaCount
  min: 0
  # Max size of replicaCount (default of 0 is no max)
  max: 0
  # Whether to include unschedulable nodes in the nodes/cores calculations - this requires version 1.8.0+ of the autoscaler
  includeUnschedulableNodes: false
  # If true does not allow single points of failure to form
  preventSinglePointFailure: true

  image:
    repository: k8s.gcr.io/cluster-proportional-autoscaler-amd64
    tag: "1.8.0"
    pullPolicy: IfNotPresent

  # Optional priority class to be used for the autoscaler pods. priorityClassName used if not set.
  priorityClassName: ""

  # expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#affinity-v1-core
  affinity: {}

  # Node labels for pod assignment
  # Ref: https://kubernetes.io/docs/user-guide/node-selection/
  nodeSelector: {}

  # expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#toleration-v1-core
  tolerations: []

  # resources for autoscaler pod
  resources:
    requests:
      cpu: "20m"
      memory: "10Mi"
    limits:
      cpu: "20m"
      memory: "10Mi"

  # Options for autoscaler configmap
  configmap:
    ## Annotations for the coredns-autoscaler configmap
    # i.e. strategy.spinnaker.io/versioned: "false" to ensure configmap isn't renamed
    annotations: {}

Lastly I deploy external-dns with the following manifest.

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services","endpoints","pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: kube-system
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: k8s.gcr.io/external-dns/external-dns:v0.7.6
        args:
        - --source=ingress
        - --provider=coredns
        - --log-level=debug # debug only
        env:
        - name: ETCD_URLS
          value: http://10.43.251.108:2379

I'm using MetalLB for on-prem L2 LoadBalancer IP assignment. I then create an nginx deployment and a service with a LoadBalancer IP. However, when I query the CoreDNS instance for that record, I get no answer at all.

# k get svc
NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)             AGE
coredns-coredns         ClusterIP      10.43.22.140    <none>         53/UDP,53/TCP       34m
etcd-cluster            ClusterIP      None            <none>         2379/TCP,2380/TCP   39m
etcd-cluster-client     ClusterIP      10.43.251.108   <none>         2379/TCP            39m
etcd-restore-operator   ClusterIP      10.43.141.142   <none>         19999/TCP           39m
kubernetes              ClusterIP      10.43.0.1       <none>         443/TCP             114d
my-nginx                LoadBalancer   10.43.254.89    10.64.10.150   80:31706/TCP        18m
dnstools# dig @10.43.22.140 google.com

; <<>> DiG 9.11.3 <<>> @10.43.22.140 google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9982
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 2048
;; QUESTION SECTION:
;google.com.                    IN      A

;; ANSWER SECTION:
google.com.             30      IN      A       216.58.193.206

;; Query time: 7 msec
;; SERVER: 10.43.22.140#53(10.43.22.140)
;; WHEN: Mon May 10 23:49:40 UTC 2021
;; MSG SIZE  rcvd: 65

dnstools# dig @10.43.22.140 my-nginx.example.org

; <<>> DiG 9.11.3 <<>> @10.43.22.140 my-nginx.example.org
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 22069
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 2048
;; QUESTION SECTION:
;my-nginx.example.org.          IN      A

;; AUTHORITY SECTION:
example.org.            30      IN      SOA     ns.icann.org. noc.dns.icann.org. 2021022335 7200 3600 1209600 3600

;; Query time: 13 msec
;; SERVER: 10.43.22.140#53(10.43.22.140)
;; WHEN: Mon May 10 23:49:51 UTC 2021
;; MSG SIZE  rcvd: 125

closed time in 9 hours

eroji

delete branch graphprotocol/indexer

delete branch : zac/allocation-receipts

delete time in 9 hours

push event graphprotocol/indexer

Zac Burns

commit sha 35898d15c073d8c1a9a5fa662fad291c2a5d7d22

indexer-agent: Set header when getting voucher

view details

Zac Burns

commit sha 4637fde84f4efefa733c22bcc4219578179d2d8d

indexer-native: Build in release

view details

push time in 9 hours

push event graphprotocol/indexer

push time in 9 hours

push event graphprotocol/indexer

Jannis Pohlmann

commit sha 76466153958746016f1b3dd714df4ce415ecde53

indexer-agent: Fix content type of /collect-receipts requests

view details

push time in 9 hours

started dgtlmoon/changedetection.io

started time in 9 hours

started kubernetes-sigs/gateway-api

started time in 10 hours

started ockam-network/ockam

started time in 10 hours

push event graphprotocol/indexer

Zac Burns

commit sha 95375da72f65dc080b70324e45437d514d7ba0fd

indexer-native: Build in release

view details

push time in 10 hours