Jay Wineinger (jwineinger), SPS Commerce, Minneapolis, MN

jwineinger/django-icecast-auth 10

Small Django app which provides models and views to control icecast2 URL authentication

jwineinger/django-inotifier 10

Django app providing a wrapper around pyinotify

jwineinger/django-exportable-admin 8

Provides a custom ModelAdmin which adds CSV export for changelist views

jwineinger/CuteTime 2

CuteTime is a customizable jQuery plugin that automatically converts timestamps into much cuter formats. It can also dynamically re-update and/or automatically update timestamps at a controlled interval.

jwineinger/caffe-open-nsfw 1

Dockerfile for building caffe and the Yahoo OpenNSFW image classifier

jwineinger/django-cms 1

An Advanced Django CMS.

jwineinger/django-multiformset 1

Provides a view that helps you use multiple formsets on the same page

jwineinger/dotvim 1

Vim directory and config files

jwineinger/Kate-Plot 1

First taste of matplotlib to graph my premature daughter's growth against the Fenton 2003 Growth Chart

started ideasman42/nerd-dictation

started time in 2 days

issue closed istio/istio

External tracing.zipkin.address results in requests with host=zipkin

Bug Description

We've been migrating from a cluster.local zipkin collector to one behind an ELB so that, in a multi-cluster scenario, our traces are unified. However, when we made that change, we found that we got no traces from Istio into it. After some debugging, we found in the ELB logs that the host of the request is being set to just "zipkin" instead of the FQDN of the address we configured, so the request was being rejected. ELB log excerpt:

"POST https://zipkin:443/api/v2/spans HTTP/1.1"

https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/trace/v3/zipkin.proto#config-trace-v3-zipkinconfig shows a collector_hostname field with the following description:

(string) Optional hostname to use when sending spans to the collector_cluster. Useful for collectors that require a specific hostname. Defaults to collector_cluster above.

Here is the tracing config from one of our proxies:

"tracing": { 
  "http": { 
    "name": "envoy.tracers.zipkin",
    "typed_config": { 
      "@type": "type.googleapis.com/envoy.config.trace.v3.ZipkinConfig",
      "collector_cluster": "zipkin",
      "collector_endpoint": "/api/v2/spans",
      "trace_id_128bit": true,
      "shared_span_context": false,
      "collector_endpoint_version": "HTTP_JSON"
    } 
  } 
}

As you can see, collector_hostname is not set, so based on the description in the Envoy docs I assume it will just use the collector_cluster as the hostname, which is "zipkin".

I would expect the tracer.zipkin.address to be used as the hostname, or for there to be a configuration option I can use to force that.

To work around this issue temporarily, we had to configure the system ingress behind the ELB to accept a "zipkin" hostname, which feels quite odd.
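
For illustration, here is roughly what we would have expected the tracer config to contain, shown in Envoy's YAML notation; the FQDN below is only a placeholder for whatever address we configure:

tracing:
  http:
    name: envoy.tracers.zipkin
    typed_config:
      "@type": type.googleapis.com/envoy.config.trace.v3.ZipkinConfig
      collector_cluster: zipkin
      # placeholder FQDN, i.e. whatever tracing.zipkin.address points at
      collector_hostname: zipkin.example.com
      collector_endpoint: /api/v2/spans
      trace_id_128bit: true
      shared_span_context: false
      collector_endpoint_version: HTTP_JSON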

Version

$ istioctl version
client version: 1.11.2
control plane version: 1.11.3
data plane version: 1.11.3 (132 proxies)

$ kubectl version --short
Client Version: v1.20.4-dirty
Server Version: v1.20.7-eks-d88609

Additional Information

No response

closed time in 6 days

jwineinger

issue comment istio/istio

External tracing.zipkin.address results in requests with host=zipkin

Yeah, I think so. 1.12 and soon 1.11 will set the collector_hostname, so that seems to satisfy this issue.

jwineinger

comment created time in 6 days

issue comment istio/istio

External tracing.zipkin.address results in requests with host=zipkin

Excellent. I see a PR to cherry-pick it into a 1.11 release as well. I do think we may try otel-collector anyway, because we're evaluating different trace backends and that seems to be an easy way to switch between them.

jwineinger

comment created time in 12 days

started istio-ecosystem/dns-discovery

started time in a month

issue comment istio/istio

External tracing.zipkin.address results in requests with host=zipkin

@zirain sorry for the delay. We changed spec.values.global.tracer.zipkin.address on the IstioOperator resource.
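
For reference, the change was roughly the following; the address below is only a placeholder for our external collector behind the ELB:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      tracer:
        zipkin:
          # placeholder host:port for the collector behind the ELB
          address: zipkin.example.com:443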

jwineinger

comment created time in a month

started kubernetes/community

started time in a month

push event jwineinger/vanee-controller

Jay Wineinger

commit sha c7621cb66c5c4114900b40e700865e7796a0dcfc

Update vanee.service


push time in 2 months

push event jwineinger/vanee-controller

Jay Wineinger

commit sha 250871ba2d9c5c5d5de903034e5d8d7cd7f0b492

Update README.md


push time in 2 months

push event jwineinger/vanee-controller

Jay Wineinger

commit sha 2bed6d876ca441e9364edff3ab1897c8582b48e2

Update README.md


push time in 2 months

push event jwineinger/vanee-controller

Jay Wineinger

commit sha fdf649e81fc4b57d2a61d7d228e37f7857065691

Update README.md


push time in 2 months

push event jwineinger/vanee-controller

Jay Wineinger

commit sha dee37401a268d9e729b2409582edd886c6432277

Update README.md


push time in 2 months

push event jwineinger/vanee-controller

Jay Wineinger

commit sha dbe09a3909fb937c467a23a0cffacff7a54856ce

Update vanee_controller.py


push time in 2 months

push event jwineinger/vanee-controller

Jay Wineinger

commit sha d93f4d4d6ab71d9823d64fa0cee048f113dea862

Create vanee.service


push time in 2 months

push event jwineinger/vanee-controller

Jay Wineinger

commit sha 2ac7ea26044d246142723a06a22deb6e8c3d8c95

Create vanee_controller.py


push time in 2 months

create branch jwineinger/vanee-controller

branch : main

created branch time in 2 months

created repository jwineinger/vanee-controller

Raspberry Pi relay controller for a Vanee air exchanger

created time in 2 months

issue opened istio/istio

External tracing.zipkin.address results in requests with host=zipkin

Bug Description

We've been migrating from a cluster.local zipkin collector to one behind an ELB so that, in a multi-cluster scenario, our traces are unified. However, when we made that change, we found that we got no traces from Istio into it. After some debugging, we found in the ELB logs that the host of the request is being set to just "zipkin" instead of the FQDN of the address we configured, so the request was being rejected.

https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/trace/v3/zipkin.proto#config-trace-v3-zipkinconfig shows a collector_hostname field with the following description:

(string) Optional hostname to use when sending spans to the collector_cluster. Useful for collectors that require a specific hostname. Defaults to collector_cluster above.

Here is the tracing config from one of our proxies:

"tracing": { 
  "http": { 
    "name": "envoy.tracers.zipkin",
    "typed_config": { 
      "@type": "type.googleapis.com/envoy.config.trace.v3.ZipkinConfig",
      "collector_cluster": "zipkin",
      "collector_endpoint": "/api/v2/spans",
      "trace_id_128bit": true,
      "shared_span_context": false,
      "collector_endpoint_version": "HTTP_JSON"
    } 
  } 
}

As you can see, collector_hostname is not set, so based on the description in the Envoy docs I assume it will just use the collector_cluster as the hostname, which is "zipkin".

I would expect the tracer.zipkin.address to be used as the hostname, or for there to be a configuration option I can use to force that.

To work around this issue temporarily, we had to configure the system ingress behind the ELB to accept a "zipkin" hostname, which feels quite odd.

Version

$ istioctl version
client version: 1.11.2
control plane version: 1.11.3
data plane version: 1.11.3 (132 proxies)

$ kubectl version --short
Client Version: v1.20.4-dirty
Server Version: v1.20.7-eks-d88609

Additional Information

No response

Affected product area

  • [ ] Docs
  • [ ] Installation
  • [ ] Networking
  • [ ] Performance and Scalability
  • [X] Extensions and Telemetry
  • [ ] Security
  • [ ] Test and Release
  • [ ] User Experience
  • [ ] Developer Infrastructure
  • [ ] Upgrade
  • [ ] Multi Cluster
  • [ ] Virtual Machine
  • [ ] Control Plane Revisions

Is this the right place to submit this?

  • [X] This is not a security vulnerability
  • [X] This is not a question about how to use Istio

created time in 2 months

issue opened open-policy-agent/gatekeeper

AssignMetadata for multiple labels

We have a use case where pods are created with a single label that is a reference to an ID in another system. We want to take data from that system (based on that ID) and add several (1-20) labels to the pods. Currently, it seems that the AssignMetadata mutation CRD allows only a single label to be set at a time, and it seems suboptimal to manage that many resources for a single pod/deployment. We'd like to describe a set of labels in a single resource.

Currently the AssignMetadata mutation ignores labels that already exist on the pod. One option might be allowing location: metadata.labels and setting properties.assign.value to a mapping of label: value pairs (similar to the last example in the mutation docs, for Assign). The controller could still silently ignore changes to existing labels, as it currently does for single changes.
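
For illustration, a sketch of what that could look like; field names follow the existing AssignMetadata resource shape, the resource name and label keys/values are made up, and the map-valued assign is the proposed part rather than something Gatekeeper currently accepts:

apiVersion: mutations.gatekeeper.sh/v1alpha1
kind: AssignMetadata
metadata:
  name: labels-from-external-system
spec:
  match:
    scope: Namespaced
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  # proposed: target the whole labels map instead of a single key
  location: "metadata.labels"
  parameters:
    assign:
      # proposed: a mapping of label: value pairs (examples only)
      value:
        team: checkout
        cost-center: "1234"
        environment: production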

  • Gatekeeper version: 3.6.0
  • Kubernetes version: 1.20

created time in 3 months

started f1ren/recorder

started time in 3 months
