
carlpett/acs-engine 0

Azure Container Service Engine - a place for community to collaborate and build the best open Docker container infrastructure for Azure.

carlpett/advent_of_code_2017 0

Contribute your solutions to Advent of Code 2017 and be inspired by others.

carlpett/alertmanager 0

Prometheus Alertmanager

carlpett/ark 0

Heptio Ark is a utility for managing disaster recovery, specifically for your Kubernetes cluster resources and persistent volumes. Brought to you by Heptio.

carlpett/azure-rest-api-specs 0

The source for REST API specifications for Microsoft Azure.

carlpett/azure-sdk-for-go 0

Microsoft Azure SDK for Go

carlpett/azure_metrics_exporter 0

Azure metrics exporter for Prometheus

carlpett/beats 0

:tropical_fish: Beats - Lightweight shippers for Elasticsearch & Logstash

carlpett/blackbox_exporter 0

Blackbox prober exporter

carlpett/blackfn_exporter 0

blackbox_exporter for function-as-a-service

issue opened kubernetes/website

shell-demo.yaml not suitable for general use?

This is a Bug Report


Problem: #14892 made shell-demo.yaml, included in the "how to get a shell in the cluster" doc, run with host networking. This seems like an unwise first suggested configuration for someone who needs a shell in a cluster: cluster admins may have forbidden hostNetwork: true in many cases, and where they haven't, a lot of people will open ports to the internet (default managed Kubernetes offerings tend to have public IPs on all nodes). Additionally, the configuration will leave the pod unable to resolve cluster.local names.

And finally, I believe the rationale for adding it in the first place depended on configuration errors on the user's side? Non-host-networked pods should be able to resolve external DNS.

Proposed Solution: Revert #14892

Page to Update: https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/


created time in 19 hours

fork carlpett/website-1

Kubernetes website and documentation repo:

https://kubernetes.io

fork in 19 hours

issue opened hashicorp/terraform-provider-google

Perma-diff container_node_pool if image_type non-canonical casing


Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.


Terraform Version

$ terraform version
Terraform v0.12.29
+ provider.google v3.40.0
+ provider.google-beta v3.40.0

Affected Resource(s)


  • google_container_node_pool

Terraform Configuration Files

If image_type is given in anything other than all upper-case, it results in a perma-diff, since GKE returns the value in all upper-case.

resource "google_container_node_pool" "node_pool" {
  [...]
  node_config {
    [...]
    image_type = "cos_containerd"
    [...]
  }
}

Expected Behavior

Either:

  1. Do not report a diff after the value has been applied, or
  2. Reject lower-case input during validation
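Option 1 is commonly handled provider-side with a case-insensitive diff suppression. As a rough, hypothetical sketch of the comparison involved (plain Go, not the provider's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// suppressCaseDiff mirrors the kind of check a Terraform provider's
// DiffSuppressFunc could use: treat values differing only in case as
// equal so the API's canonicalized form doesn't produce a perma-diff.
// Illustrative sketch only.
func suppressCaseDiff(old, new string) bool {
	return strings.EqualFold(old, new)
}

func main() {
	// GKE canonicalizes to upper-case, so these should not diff.
	fmt.Println(suppressCaseDiff("COS_CONTAINERD", "cos_containerd")) // true
	// Genuinely different values must still produce a diff.
	fmt.Println(suppressCaseDiff("COS_CONTAINERD", "UBUNTU")) // false
}
```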

Actual Behavior

Perma-diff:

[...]
      ~ node_config {
            [...]
          ~ image_type        = "COS_CONTAINERD" -> "cos_containerd"

Steps to Reproduce

  1. terraform apply
  2. terraform plan



created time in 21 hours

issue comment hashicorp/terraform-provider-google

Artifact registry import broken

It might also be noteworthy that the import formats listed in the docs include some that actually produce errors:

1: $ terraform import google_artifact_registry_repository.default projects/{{project}}/locations/{{location}}/repositories/{{repository_id}}
2: $ terraform import google_artifact_registry_repository.default {{project}}/{{location}}/{{repository_id}}
3: $ terraform import google_artifact_registry_repository.default {{location}}/{{repository_id}}
4: $ terraform import google_artifact_registry_repository.default {{repository_id}}

Formats 2 and 3 work (apart from the bug in the above post), but format 1 duplicates projects/, maps the project to the location, and seems to drop the location and repository:

$ terraform import google_artifact_registry_repository.docker-mirror projects/PROJECT_NAME/location/us/repositories/mirror
...
Error: Error when reading or editing ArtifactRegistryRepository "projects/projects/locations/PROJECT_NAME/repositories/": googleapi: Error 403: Permission denied on resource project projects.
Details:
[
  {
    "@type": "type.googleapis.com/google.rpc.Help",
    "links": [
      {
        "description": "Google developer console API key",
        "url": "https://console.developers.google.com/project/projects/apiui/credential"
      }
    ]
  }
]

Format 4 doesn't populate the location or repository:

$ terraform import google_artifact_registry_repository.docker-mirror mirror
google_artifact_registry_repository.docker-mirror: Importing from ID "mirror"...
google_artifact_registry_repository.docker-mirror: Import prepared!
  Prepared google_artifact_registry_repository for import
google_artifact_registry_repository.docker-mirror: Refreshing state... [id=projects/PROJECT_NAME/locations//repositories/]

Error: Error when reading or editing ArtifactRegistryRepository "projects/PROJECT_NAME/locations//repositories/": googleapi: Error 400: Fail to resolve resource 'projects/PROJECT_NAME/locations/'

I'm not sure if those would also have been fixed by the original PR?

carlpett

comment created time in 2 days

issue opened hashicorp/terraform-provider-google

Artifact registry import broken


Terraform Version

$ terraform version
Terraform v0.12.29
+ provider.google v3.40.0
+ provider.google-beta v3.40.0

Affected Resource(s)


  • google_artifact_registry_repository

Terraform Configuration Files


resource "google_artifact_registry_repository" "docker-mirror" {
  provider = google-beta

  location      = "us" # multi-region
  repository_id = "mirror"
  description   = "Images mirrored from third party registries"
  format        = "DOCKER"
}

Debug Output

https://gist.github.com/carlpett/5d18a83adcb3495b8a8f6b5a9ae3ecf0

Expected Behavior

Import should give a correct state.

Actual Behavior

Import gives a mostly blank state:

$ terraform state show google_artifact_registry_repository.docker-mirror
# google_artifact_registry_repository.docker-mirror:
resource "google_artifact_registry_repository" "docker-mirror" {
    id            = "projects/PROJECT_NAME/locations/us/repositories/"
    labels        = {}
    location      = "us"
    project       = "PROJECT_NAME"
    repository_id = "mirror"

    timeouts {}
}

Steps to Reproduce

Create a registry, then import

References

This was supposed to be fixed already, with this issue being closed:

  • #6936

There is a PR which contains a fix, but the two PRs actually merged into terraform-provider-google (#6957) and terraform-provider-google-beta (#2345) don't contain that code. Not sure how that happened?

Pinging @ndmckinley since you seemed to be involved in this last time around.

created time in 2 days


Pull request review comment hashicorp/packer

builder/proxmox FEATURE: split Proxmox into proxmox-iso and proxmox-clone

+package proxmoxclone
+
+import (
+	proxmoxapi "github.com/Telmate/proxmox-api-go/proxmox"
+	"github.com/hashicorp/hcl/v2/hcldec"
+	proxmox "github.com/hashicorp/packer/builder/proxmox/common"
+	"github.com/hashicorp/packer/helper/communicator"
+	"github.com/hashicorp/packer/helper/multistep"
+	"github.com/hashicorp/packer/packer"
+
+	"context"
+	"fmt"
+)
+
+// The unique id for the builder
+const BuilderID = "proxmox.clone"
+
+type Builder struct {
+	config Config
+}
+
+// Builder implements packer.Builder
+var _ packer.Builder = &Builder{}
+
+func (b *Builder) ConfigSpec() hcldec.ObjectSpec { return b.config.FlatMapstructure().HCL2Spec() }
+
+func (b *Builder) Prepare(raws ...interface{}) ([]string, []string, error) {
+	return b.config.Prepare(raws...)
+}
+
+func (b *Builder) Run(ctx context.Context, ui packer.Ui, hook packer.Hook) (packer.Artifact, error) {
+	state := new(multistep.BasicStateBag)
+	state.Put("clone-config", &b.config)
+	state.Put("comm", &b.config.Comm)
+
+	preSteps := []multistep.Step{
+		&StepSshKeyPair{
+			Debug:        b.config.PackerDebug,
+			DebugKeyPath: fmt.Sprintf("%s.pem", b.config.PackerBuildName),
+			Comm:         &b.config.Comm,
+		},
+	}
+	postSteps := []multistep.Step{}
+
+	sb := proxmox.NewSharedBuilder(BuilderID, b.config.Config, preSteps, postSteps, &cloneVMCreator{})
+	return sb.Run(ctx, ui, hook, state)
+}
+
+type cloneVMCreator struct{}
+
+func (*cloneVMCreator) Create(vmRef *proxmoxapi.VmRef, config proxmoxapi.ConfigQemu, state multistep.StateBag) error {
+	client := state.Get("proxmoxClient").(*proxmoxapi.Client)
+	c := state.Get("clone-config").(*Config)
+	comm := state.Get("comm").(*communicator.Config)
+
+	fullClone := 1
+	if c.FullClone == false {
+		fullClone = 0
+	}

This sort of "tri-state boolean" (true/false/not set) is a bit tricky, and I think this probably doesn't behave as you intended. If the user didn't specify full_clone at all, it'll be false, meaning it doesn't match the documented default of true. I don't think mapstructure allows *bool types, sadly. The easier options would be either using a string full/shallow (with corresponding input validation), or switching the naming around to shallow_clone, since the default 0 would then turn into full as you want here.

featheredtoast

comment created time in 2 days
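To make the tri-state point above concrete, here is a minimal stand-alone Go sketch (hypothetical helper, not the Packer code, and note the review's caveat that mapstructure may not support *bool) of how a pointer distinguishes "unset" from an explicit false:

```go
package main

import "fmt"

// cloneMode resolves a tri-state "full clone" option: nil means unset,
// which falls back to the documented default of true (full clone).
// Illustrative only; the builder's actual config decoding differs.
func cloneMode(fullClone *bool) int {
	if fullClone == nil || *fullClone {
		return 1 // full clone (the documented default)
	}
	return 0 // shallow clone, only when explicitly set to false
}

func main() {
	f := false
	fmt.Println(cloneMode(nil)) // 1: unset keeps the documented default
	fmt.Println(cloneMode(&f))  // 0: explicit false
}
```

With a plain bool, the nil branch is impossible, so an omitted option silently becomes false.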

Pull request review comment hashicorp/packer

builder/proxmox FEATURE: split Proxmox into proxmox-iso and proxmox-clone

 func (b *Builder) Run(ctx context.Context, ui packer.Ui, hook packer.Hook) (pack
 	}
 
 	// Set up the state
-	state := new(multistep.BasicStateBag)
 	state.Put("config", &b.config)
 	state.Put("proxmoxClient", b.proxmoxClient)
 	state.Put("hook", hook)
 	state.Put("ui", ui)
 
-	// Build the steps
-	steps := []multistep.Step{
-		&common.StepDownload{
-			Checksum:    b.config.ISOChecksum,
-			Description: "ISO",
-			Extension:   b.config.TargetExtension,
-			ResultKey:   downloadPathKey,
-			TargetPath:  b.config.TargetPath,
-			Url:         b.config.ISOUrls,
-		}}
-
-	for idx := range b.config.AdditionalISOFiles {
-		steps = append(steps, &common.StepDownload{
-			Checksum:    b.config.AdditionalISOFiles[idx].ISOChecksum,
-			Description: "additional ISO",
-			Extension:   b.config.AdditionalISOFiles[idx].TargetExtension,
-			ResultKey:   b.config.AdditionalISOFiles[idx].downloadPathKey,
-			TargetPath:  b.config.AdditionalISOFiles[idx].downloadPathKey,
-			Url:         b.config.AdditionalISOFiles[idx].ISOUrls,
-		})
+	comm := &b.config.Comm
+	if state.Get("comm") != nil {
+		comm = state.Get("comm").(*communicator.Config)

Isn't state.Get("comm") always the same object as &b.config.Comm when it is set (i.e. when building with the clone builder)?

featheredtoast

comment created time in 2 days


push event carlpett/opa

Torin Sandall

commit sha 7559ad0f51c051e56667b5d723f52cb42c979b0e

build: Move VERSION into version/version.go

The decision logger unit tests had to be updated to always set the version.Version value because they are sensitive to changes in the payload sizes.

Signed-off-by: Torin Sandall <torinsandall@gmail.com>

view details

dependabot[bot]

commit sha 724c0bb0be203db7d29f7ccdb6a928431d5fcbfa

build(deps): bump node-fetch in /docs/website/scripts/live-blocks

Bumps [node-fetch](https://github.com/bitinn/node-fetch) from 2.6.0 to 2.6.1.
- [Release notes](https://github.com/bitinn/node-fetch/releases)
- [Changelog](https://github.com/node-fetch/node-fetch/blob/master/docs/CHANGELOG.md)
- [Commits](https://github.com/bitinn/node-fetch/compare/v2.6.0...v2.6.1)

Signed-off-by: dependabot[bot] <support@github.com>

view details

rtfee

commit sha cba68b1dda63aaa9dc3143895ab0e37cbd23f0f8

updated the Scalr integration title

Signed-off-by: rtfee <ryan.fee625@gmail.com>

view details

AlexsJones

commit sha 59a7e430ecc1a532ed88f24f71d39c58ee638c24

chore(spelling): fix spelling in extensions.md and rest-api.md docs

Signed-off-by: AlexsJones <alexsimonjones@gmail.com>

view details

Teemu Koponen

commit sha 5433a93d20d7e83f88953e456381bdd5ba80f194

topdown: Use an underscore instead of a dot in http.send metric name.

Currently the recorded metric name is "timer_rego_builtin_http.send_ns" which causes issues for some implementations.

Signed-off-by: Teemu Koponen <koponen@styra.com>

view details

Ashutosh Narkar

commit sha 8cb34e48c285a4448f8f91a215aba5c05c7ab9b1

topdown: Address negative duration for the current age of http response

The current age of a http response is calculated as the difference between the current time and the value contained in the "Date" response header. There are a couple of scenarios that could lead to the current age being represented as a negative duration.

1. Since the value of the "Date" response header is parsed using Go's time.Parse method, it does not contain a monotonic clock reading. As a result, the time.Sub method uses wall clock readings to determine the difference between the current time and the parsed version of the response time.
2. The server could set a value for the "Date" response header which may not be a true indication of when the response was generated.

This change updates the logic that determines whether a cached response is fresh or not, to treat the response as stale if the current response age is represented as a negative duration.

Signed-off-by: Ashutosh Narkar <anarkar4387@gmail.com>

view details

Torin Sandall

commit sha 7a713a63efd042cbc191db8d9b224658ebe36fa6

build: Disable the 'project' status in codecov

We were seeing too many false positives from the 'project' status check. The 'patch' status check seems fine so leave that one alone for now.

Signed-off-by: Torin Sandall <torinsandall@gmail.com>

view details

Peter Sullivan

commit sha 7f0399b3f2c170c2c19ee60371edb1f888daffc2

docs/content: Update Envoy Authorization Tutorial

Updated envoy version and links. Updated service type and minikube commands to tunnel to the service for consistency across minikube versions and drivers.

Signed-off-by: Peter Sullivan <pvsone@gmail.com>

view details

Calle Pettersson

commit sha 97e4f4bb54bdcaa2cd0ea86c31340e4f1426f338

topdown: Add base64.is_valid builtin

Adds a builtin to check if a string is valid base64.

Fixes: #2690
Signed-off-by: Calle Pettersson <calle@cape.nu>

view details

push time in 2 days

pull request comment open-policy-agent/opa

topdown: Add base64.verify builtin

Ah, didn't see those ones. I'll rename to match!

carlpett

comment created time in 2 days

issue comment prometheus/node_exporter

Collector netclass/bonding leads to scrape timeouts

Noticed that our containers were getting CPU throttled on the affected clusters. Removed the CPU limit to tune for actual usage (which seems higher than in previous releases). @sepich, do you have CPU limits set on your daemonset?

sepich

comment created time in 6 days

issue comment prometheus/node_exporter

Collector netclass/bonding leads to scrape timeouts

I tried building a minimal program that uses the same sysfs package to read netclass and time it. It takes a handful of milliseconds to run on one of the nodes that is affected, so it isn't anything obvious with the actual reading.

sepich

comment created time in 6 days

issue comment prometheus/node_exporter

Collector netclass/bonding leads to scrape timeouts

FWIW I see the same long durations for netclass on some of our Kubernetes clusters (GKE). No errors in the node_exporter logs. Affected clusters are running 1.0.0 on Google COS, kernel 4.19.112+. Unaffected clusters are running 0.18.0 on Google COS, 4.14.174+. I tried bumping one cluster to 1.0.0, and while the average collection time increased from 10 ms to 15 ms, it isn't exactly timing out...

I had a look in /sys/class/net/ on one of the affected nodes, and there isn't an excessive number of interfaces, or anything that seems strange when accessing files in there.

sepich

comment created time in 6 days

created tag carlpett/dekms

tag v0.1

created time in 7 days

release carlpett/dekms

v0.1

released time in 7 days

create branch carlpett/dekms

branch : master

created branch time in 7 days

created repository carlpett/dekms

created time in 7 days

issue opened GoogleContainerTools/kaniko

Cloud build with dir: broken?

Actual behavior

Setting the working directory with Cloud Build seems broken. If I submit this build:

steps:
  - name: gcr.io/kaniko-project/executor:latest
    dir: some-subdir
    args:
      - --destination=gcr.io/$PROJECT_ID/my-image:$SHORT_SHA
      - --cache=true

Where some-subdir/Dockerfile has this content:

FROM alpine AS base
RUN apk add skopeo

FROM golang:1.14 AS builder
WORKDIR /src
ADD go.* .
RUN go mod download
ADD . .
RUN CGO_ENABLED=0 go build -tags containers_image_openpgp

FROM base
COPY --from=builder /src/my-bin /
CMD /my-bin

I get this error when it processes ADD go.* .:

error building image: error building stage: failed to optimize instructions: failed to get files used from context: copy failed: no source files specified

If I remove dir: and use --context=dir://some-subdir, it works.

Expected behavior

Work from within the subdirectory.

To Reproduce Steps to reproduce the behavior:

  1. See above repro

Additional Information

  • Dockerfile: see above
  • Build context: see above
  • Kaniko image (fully qualified with digest): gcr.io/kaniko-project/executor:latest@sha256:e36c9fa99279217c4bb8ee172819b441c3ca8ef946dc0e28b21721eefb2ba70a

Triage Notes for the Maintainers

  • Is this a new feature proposal? No
  • Does the build work in docker but not in kaniko? Yes
  • Is the error seen when using the --cache flag? Yes
  • Is the Dockerfile a multistage Dockerfile? Yes

created time in 8 days

pull request comment prometheus-operator/prometheus-operator

Break the API types out into their own module

Is this by design, or something to improve? Should I, as a downstream consumer, rather do the autogeneration for the client bits myself?

matthiasr

comment created time in 9 days

pull request comment open-policy-agent/opa

topdown: Add base64.verify builtin

Well, we have some policies that error out instead of failing "properly", but I wouldn't say it's the end of the world to wait a while if you believe that solution is better. To some extent I'd argue that a distinct verify function is semantically clearer, though: it makes it very clear that we don't care about the actual decoded value, just that it is valid, whereas base64.decode("...") looks odd without using the result (to me, at least). Of course, naming of the policy and the surrounding context will help reduce the risk of misunderstandings. Whether this is worth having extra functions for is the question, I guess :)

carlpett

comment created time in 9 days
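For context, the validity check such a builtin performs boils down to attempting a decode and discarding the result. A sketch in Go, the language OPA builtins are written in (illustrative only, not the actual builtin's code):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// isValidBase64 reports whether s is well-formed standard base64,
// without keeping the decoded bytes around.
func isValidBase64(s string) bool {
	_, err := base64.StdEncoding.DecodeString(s)
	return err == nil
}

func main() {
	fmt.Println(isValidBase64("aGVsbG8=")) // true: decodes to "hello"
	fmt.Println(isValidBase64("not-base64!"))
}
```

This is exactly the "decode and ignore the value" pattern the comment contrasts with a dedicated validity check.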

pull request comment prometheus-operator/prometheus-operator

Break the API types out into their own module

That's what I thought, but

import (
    "github.com/coreos/prometheus-operator/pkg/apis/monitoring/v1" // Separate module
    "github.com/coreos/prometheus-operator/pkg/client/versioned" // Not under pkg/apis?
)

If I want to eg have a watcher on a ServiceMonitor, I believe I need both those?

matthiasr

comment created time in 9 days

issue comment kubernetes/ingress-gce

Changing class from gce-internal to gce leaves orphaned load balancers

Nice find! I'll clean up, thanks for the hints :)

carlpett

comment created time in 9 days


issue comment kubernetes/ingress-gce

Changing class from gce-internal to gce leaves orphaned load balancers

@spencerhance Just wanted to check if you got my project details last week and had a chance to take a look, or otherwise if you have an approximate idea when? Just so I know when I can do the cleanup, which I guess is going to be manual :)

carlpett

comment created time in 10 days

Pull request review comment hashicorp/packer

Bootcommand Fix For Proxmox Builder

 func TestTypeBootCommand(t *testing.T) {
 			expectedKeysSent:  "shift-hellospcshift-worldspc2dot0fooshift-1barshift-2baz",
 			expectedAction:    multistep.ActionContinue,
 		},
+		{
+			name:              "holding and releasing keys",
+			builderConfig:     &Config{BootConfig: bootcommand.BootConfig{BootCommand: []string{"<leftShiftOn>hello<rightAltOn>world<leftShiftOff><rightAltOff>"}}},
+			expectCallSendkey: true,
+			expectedKeysSent:  "shift-hshift-eshift-lshift-lshift-oshift-alt_r-wshift-alt_r-oshift-alt_r-rshift-alt_r-lshift-alt_r-d",
+			expectedAction:    multistep.ActionContinue,
+		},
+		{
+			name:              "holding multiple alphabetical keys and shift",
+			builderConfig:     &Config{BootConfig: bootcommand.BootConfig{BootCommand: []string{"<cOn><leftShiftOn>n<leftShiftOff><cOff>"}}},
+			expectCallSendkey: true,
+			expectedKeysSent:  "shift-c-n",
+			expectedAction:    multistep.ActionContinue,
+		},

Could we add some test(s) to cover atypical input where things are essentially no-ops? E.g. <cOn><leftShiftOn><cOff>h<leftShiftOff> should, I guess, send H?

paginabianca

comment created time in 10 days


Pull request review comment hashicorp/packer

Bootcommand Fix For Proxmox Builder

 func NewProxmoxDriver(c commandTyper, vmRef *proxmox.VmRef, interval time.Durati
 }
 
 func (p *proxmoxDriver) SendKey(key rune, action bootcommand.KeyAction) error {
-	if special, ok := p.runeMap[key]; ok {
-		return p.send(special)
+	switch action.String() {
+	case "Press":
+		if special, ok := p.runeMap[key]; ok {
+			return p.send(special)
+		}
+		var keys string
+		if unicode.IsUpper(key) {
+			keys = fmt.Sprintf("shift-%c", unicode.ToLower(key))
+		} else {
+			keys = fmt.Sprintf("%c", key)
+		}
+		return p.send(keys)
+	case "On":
+		key := fmt.Sprintf("%c", key)
+		p.normalBuffer = addKeyToBuffer(p.normalBuffer, key)
+	case "Off":
+		key := fmt.Sprintf("%c", key)
+		p.normalBuffer = removeKeyFromBuffer(p.normalBuffer, key)
 	}
-
-	var keys string
-	if unicode.IsUpper(key) {
-		keys = fmt.Sprintf("shift-%c", unicode.ToLower(key))
-	} else {
-		keys = fmt.Sprintf("%c", key)
-	}
-
-	return p.send(keys)
+	return nil
 }
 
 func (p *proxmoxDriver) SendSpecial(special string, action bootcommand.KeyAction) error {
 	keys := special
 	if replacement, ok := p.specialMap[special]; ok {
 		keys = replacement
 	}
-
-	return p.send(keys)
+	switch action.String() {
+	case "Press":
+		return p.send(keys)
+	case "On":
+		p.normalBuffer = addKeyToBuffer(p.normalBuffer, keys)
+	case "Off":
+		p.normalBuffer = removeKeyFromBuffer(p.normalBuffer, keys)
+	}
+	return nil
 }
 
-func (p *proxmoxDriver) send(keys string) error {
-	err := p.client.Sendkey(p.vmRef, keys)
+func (p *proxmoxDriver) send(key string) error {
+	keys := append(p.specialBuffer, p.normalBuffer...)

Ah, okay! Then nice catch, and good with the added test case!

paginabianca

comment created time in 10 days


Pull request review comment hashicorp/packer

Bootcommand Fix For Proxmox Builder

 func NewProxmoxDriver(c commandTyper, vmRef *proxmox.VmRef, interval time.Durati
 }
 
 func (p *proxmoxDriver) SendKey(key rune, action bootcommand.KeyAction) error {
-	if special, ok := p.runeMap[key]; ok {
-		return p.send(special)
+	switch action.String() {
+	case "Press":
+		if special, ok := p.runeMap[key]; ok {
+			return p.send(special)
+		}
+		var keys string
+		if unicode.IsUpper(key) {
+			keys = fmt.Sprintf("shift-%c", unicode.ToLower(key))
+		} else {
+			keys = fmt.Sprintf("%c", key)
+		}
+		return p.send(keys)
+	case "On":
+		key := fmt.Sprintf("%c", key)
+		p.normalBuffer = addKeyToBuffer(p.normalBuffer, key)
+	case "Off":
+		key := fmt.Sprintf("%c", key)
+		p.normalBuffer = removeKeyFromBuffer(p.normalBuffer, key)
 	}
-
-	var keys string
-	if unicode.IsUpper(key) {
-		keys = fmt.Sprintf("shift-%c", unicode.ToLower(key))
-	} else {
-		keys = fmt.Sprintf("%c", key)
-	}
-
-	return p.send(keys)
+	return nil
 }
 
 func (p *proxmoxDriver) SendSpecial(special string, action bootcommand.KeyAction) error {
 	keys := special
 	if replacement, ok := p.specialMap[special]; ok {
 		keys = replacement
 	}
-
-	return p.send(keys)
+	switch action.String() {
+	case "Press":
+		return p.send(keys)
+	case "On":
+		p.normalBuffer = addKeyToBuffer(p.normalBuffer, keys)
+	case "Off":
+		p.normalBuffer = removeKeyFromBuffer(p.normalBuffer, keys)
+	}
+	return nil
 }
 
-func (p *proxmoxDriver) send(keys string) error {
-	err := p.client.Sendkey(p.vmRef, keys)
+func (p *proxmoxDriver) send(key string) error {
+	keys := append(p.specialBuffer, p.normalBuffer...)

Just to check - is there a need for the dual buffers, since it apparently worked without?

paginabianca

comment created time in 12 days
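To make the buffer discussion concrete, here is a hypothetical stand-alone sketch of what addKeyToBuffer/removeKeyFromBuffer plus joining held keys into a Proxmox-style key event could look like (invented implementations; the PR's actual code may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// addKeyToBuffer records a held key, ignoring duplicates.
func addKeyToBuffer(buf []string, key string) []string {
	for _, k := range buf {
		if k == key {
			return buf
		}
	}
	return append(buf, key)
}

// removeKeyFromBuffer releases a held key.
func removeKeyFromBuffer(buf []string, key string) []string {
	out := buf[:0]
	for _, k := range buf {
		if k != key {
			out = append(out, k)
		}
	}
	return out
}

// bufferToKeyEvent joins held keys into the dash-separated form
// Proxmox's sendkey expects (e.g. "shift-c-n").
func bufferToKeyEvent(keys []string) string {
	return strings.Join(keys, "-")
}

func main() {
	buf := addKeyToBuffer(nil, "shift") // <leftShiftOn>
	buf = addKeyToBuffer(buf, "c")      // <cOn>
	fmt.Println(bufferToKeyEvent(append(buf, "n"))) // pressing n: shift-c-n
	buf = removeKeyFromBuffer(buf, "c") // <cOff>
	fmt.Println(bufferToKeyEvent(buf))  // shift
}
```

With a single buffer like this, a second buffer for special keys only matters if special and normal keys must be ordered differently in the event string, which is the question the comment raises.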


issue closed prometheus-operator/prometheus-operator

Go module self-references with broken version

It is not possible to import 0.42 with the new import path due to this line: https://github.com/prometheus-operator/prometheus-operator/blob/675d303ee0b850ec2ac8ac4a07e3e34fbc1da49f/go.mod#L22

Related to #3393, which was supposed to be fixed in 0.42, but appears to have been missed?

closed time in 12 days

carlpett

issue comment prometheus-operator/prometheus-operator

Go module self-references with broken version

Sloppy reading on my part: this is not related to #3393, this is the separation of the types module. Sorry for the noise!

carlpett

comment created time in 12 days

pull request comment prometheus-operator/prometheus-operator

Break the API types out into their own module

I may be missing something here, but as an external consumer of these types, don't I still need to depend on the "main" prometheus-operator module in order to access the Kubernetes client types?

matthiasr

comment created time in 12 days

issue comment prometheus-community/windows_exporter

Upgrading 0.11.1 to 0.13.0 fails

Apparently I forgot to commit the fix 🤦‍♂️ Thanks for testing! @christophe-freijanes, I think you are seeing a different issue. Could you open a new issue, with logs from the installer?

tbiles

comment created time in 12 days

push event prometheus-community/windows_exporter

Calle Pettersson

commit sha d39d5230abbf6e2fd63e8a57f7e64ad2d01c252e

Change upgrade phase

With afterInstallExecute, the old installation is only removed after the new one is finished, which has led to some users seeing the new version failing to start, leading to the install rolling back. afterInstallInitialize, in contrast, uninstalls before the new installation.

Signed-off-by: Calle Pettersson <carlpett@users.noreply.github.com>

view details

push time in 12 days

PR opened prometheus-community/windows_exporter

Change upgrade phase

With afterInstallExecute, the old installation is only removed after the new one is finished, which has led to some users seeing the new version failing to start, leading to the install rolling back. afterInstallInitialize, in contrast, uninstalls before the new installation.

Fixes #558

+1 -1

0 comments

1 changed file

pr created time in 12 days

create branch prometheus-community/windows_exporter

branch : installer-phase

created branch time in 12 days

issue opened prometheus-operator/prometheus-operator

Go module self-references with broken version

It is not possible to import 0.42 with the new import path due to this line: https://github.com/prometheus-operator/prometheus-operator/blob/675d303ee0b850ec2ac8ac4a07e3e34fbc1da49f/go.mod#L22

Related to #3393, which was supposed to be fixed in 0.42, but appears to have been missed?

created time in 12 days

issue comment prometheus-community/windows_exporter

Installation of the service to different directory

Duplicate of #333

Knuspel

comment created time in 12 days

issue comment prometheus-community/windows_exporter

Document needed user permissions

Hi @Knuspel, to the best of our knowledge, there isn't a permission set exposed by Windows that allows querying the APIs we use (primarily WMI and perflib). Perhaps your security team has some deeper knowledge about how to accomplish this; if so, we'd be very happy to have it added to the documentation!

Knuspel

comment created time in 12 days

Pull request review comment hashicorp/packer

Bootcommand Fix For Proxmox Builder

 func NewProxmoxDriver(c commandTyper, vmRef *proxmox.VmRef, interval time.Durati
 }
 
 func (p *proxmoxDriver) SendKey(key rune, action bootcommand.KeyAction) error {
-	if special, ok := p.runeMap[key]; ok {
-		return p.send(special)
+	switch action.String() {
+	case "Press":
+		if special, ok := p.runeMap[key]; ok {
+			return p.send(special)
+		}
+		var keys string
+		if unicode.IsUpper(key) {
+			keys = fmt.Sprintf("shift-%c", unicode.ToLower(key))
+		} else {
+			keys = fmt.Sprintf("%c", key)
+		}
+		return p.send(keys)
+	case "On":
+		key := fmt.Sprintf("%c", key)
+		p.normalBuffer = addKeyToBuffer(p.normalBuffer, key)
+	case "Off":
+		key := fmt.Sprintf("%c", key)
+		p.normalBuffer = removeKeyFromBuffer(p.normalBuffer, key)
 	}
-
-	var keys string
-	if unicode.IsUpper(key) {
-		keys = fmt.Sprintf("shift-%c", unicode.ToLower(key))
-	} else {
-		keys = fmt.Sprintf("%c", key)
-	}
-
-	return p.send(keys)
+	return nil
 }
 
 func (p *proxmoxDriver) SendSpecial(special string, action bootcommand.KeyAction) error {
 	keys := special
 	if replacement, ok := p.specialMap[special]; ok {
 		keys = replacement
 	}
-
-	return p.send(keys)
+	switch action.String() {
+	case "Press":
+		return p.send(keys)
+	case "On":
+		p.normalBuffer = addKeyToBuffer(p.normalBuffer, keys)
+	case "Off":
+		p.normalBuffer = removeKeyFromBuffer(p.normalBuffer, keys)
+	}
+	return nil
 }
 
-func (p *proxmoxDriver) send(keys string) error {
-	err := p.client.Sendkey(p.vmRef, keys)
+func (p *proxmoxDriver) send(key string) error {
+	keys := append(p.specialBuffer, p.normalBuffer...)

specialBuffer never seems to be used?

paginabianca

comment created time in 12 days

Pull request review comment hashicorp/packer

Bootcommand Fix For Proxmox Builder

 func NewProxmoxDriver(c commandTyper, vmRef *proxmox.VmRef, interval time.Durati
 }
 
 func (p *proxmoxDriver) SendKey(key rune, action bootcommand.KeyAction) error {
-	if special, ok := p.runeMap[key]; ok {
-		return p.send(special)
+	switch action.String() {
+	case "Press":
+		if special, ok := p.runeMap[key]; ok {
+			return p.send(special)
+		}
+		var keys string
+		if unicode.IsUpper(key) {
+			keys = fmt.Sprintf("shift-%c", unicode.ToLower(key))
+		} else {
+			keys = fmt.Sprintf("%c", key)
+		}
+		return p.send(keys)
+	case "On":
+		key := fmt.Sprintf("%c", key)
+		p.normalBuffer = addKeyToBuffer(p.normalBuffer, key)
+	case "Off":
+		key := fmt.Sprintf("%c", key)
+		p.normalBuffer = removeKeyFromBuffer(p.normalBuffer, key)
 	}
-
-	var keys string
-	if unicode.IsUpper(key) {
-		keys = fmt.Sprintf("shift-%c", unicode.ToLower(key))
-	} else {
-		keys = fmt.Sprintf("%c", key)
-	}
-
-	return p.send(keys)
+	return nil
 }
 
 func (p *proxmoxDriver) SendSpecial(special string, action bootcommand.KeyAction) error {
 	keys := special
 	if replacement, ok := p.specialMap[special]; ok {
 		keys = replacement
 	}
-
-	return p.send(keys)
+	switch action.String() {
+	case "Press":
+		return p.send(keys)
+	case "On":
+		p.normalBuffer = addKeyToBuffer(p.normalBuffer, keys)
+	case "Off":
+		p.normalBuffer = removeKeyFromBuffer(p.normalBuffer, keys)
+	}
+	return nil
 }
 
-func (p *proxmoxDriver) send(keys string) error {
-	err := p.client.Sendkey(p.vmRef, keys)
+func (p *proxmoxDriver) send(key string) error {
+	keys := append(p.specialBuffer, p.normalBuffer...)
+	keys = append(keys, key)
+	keyEventString := bufferToKeyEvent(keys)
+	err := p.client.Sendkey(p.vmRef, keyEventString)
 	if err != nil {
 		return err
 	}
-
-	time.Sleep(p.interval)

This sleep is important, the VM doesn't always pick up on rapid sequences of input unfortunately.

paginabianca

comment created time in 12 days

Pull request review comment hashicorp/packer

Bootcommand Fix For Proxmox Builder

 import (
 )
 
 type proxmoxDriver struct {
-	client     commandTyper
-	vmRef      *proxmox.VmRef
-	specialMap map[string]string
-	runeMap    map[rune]string
-	interval   time.Duration
+	client        commandTyper
+	vmRef         *proxmox.VmRef
+	specialMap    map[string]string
+	runeMap       map[rune]string
+	interval      time.Duration
+	specialBuffer []string
+	normalBuffer  []string
 }
 
 func NewProxmoxDriver(c commandTyper, vmRef *proxmox.VmRef, interval time.Duration) *proxmoxDriver {
 	// Mappings for packer shorthand to qemu qkeycodes
 	sMap := map[string]string{
-		"spacebar": "spc",
-		"bs":       "backspace",
-		"del":      "delete",
-		"return":   "ret",
-		"enter":    "ret",
-		"pageUp":   "pgup",
-		"pageDown": "pgdn",
+		"spacebar":   "shift-f10",

This seems like a remainder from your original testing? :)

paginabianca

comment created time in 12 days


Pull request review comment hashicorp/packer

builder/proxmox FEATURE: split Proxmox into proxmox-iso and proxmox-clone

+package proxmoxiso
+
+import (
+	"context"
+
+	proxmoxapi "github.com/Telmate/proxmox-api-go/proxmox"
+	"github.com/hashicorp/hcl/v2/hcldec"
+	"github.com/hashicorp/packer/builder/proxmox/common"
+	"github.com/hashicorp/packer/common"
+	"github.com/hashicorp/packer/helper/multistep"
+	"github.com/hashicorp/packer/packer"
+)
+
+// The unique id for the builder
+const BuilderID = "proxmox.clone"

Should be .iso, right?

featheredtoast

comment created time in 12 days

Pull request review comment hashicorp/packer

builder/proxmox FEATURE: split Proxmox into proxmox-iso and proxmox-clone

 var Builders = map[string]packer.Builder{
 	"parallels-iso":       new(parallelsisobuilder.Builder),
 	"parallels-pvm":       new(parallelspvmbuilder.Builder),
 	"profitbricks":        new(profitbricksbuilder.Builder),
-	"proxmox":             new(proxmoxbuilder.Builder),
+	"proxmox-clone":       new(proxmoxclonebuilder.Builder),
+	"proxmox-iso":         new(proxmoxisobuilder.Builder),
+	"proxmox":             new(proxmoxisobuilder.Builder),

@SwampDragons I did this as a backwards-compat thing (letting the old builder string remain as an alias). Good or bad? Is there a deprecation policy for this?

featheredtoast

comment created time in 12 days

Pull request review comment hashicorp/packer

builder/proxmox FEATURE: split Proxmox into proxmox-iso and proxmox-clone

+package proxmoxclone
+
+import (
+	"context"
+        "time"
+        "fmt"
+
+	proxmoxapi "github.com/Telmate/proxmox-api-go/proxmox"
+	"github.com/hashicorp/hcl/v2/hcldec"
+	"github.com/hashicorp/packer/builder/proxmox/common"
+	"github.com/hashicorp/packer/helper/multistep"
+	"github.com/hashicorp/packer/helper/communicator"
+	"github.com/hashicorp/packer/packer"
+)
+
+// The unique id for the builder
+const BuilderID = "proxmox.clone"
+
+type Builder struct {
+	config Config
+}
+
+// Builder implements packer.Builder
+var _ packer.Builder = &Builder{}
+
+var pluginVersion = "1.0.0"
+
+func (b *Builder) ConfigSpec() hcldec.ObjectSpec { return b.config.FlatMapstructure().HCL2Spec() }
+
+func (b *Builder) Prepare(raws ...interface{}) ([]string, []string, error) {
+	return b.config.Prepare(raws...)
+}
+
+func (b *Builder) Run(ctx context.Context, ui packer.Ui, hook packer.Hook) (packer.Artifact, error) {
+	state := new(multistep.BasicStateBag)
+	state.Put("clone-config", &b.config)
+        state.Put("comm", &b.config.Comm)
+
+	preSteps := []multistep.Step{
+                &StepSshKeyPair{
+                        Debug:        b.config.PackerDebug,
+			DebugKeyPath: fmt.Sprintf("%s.pem", b.config.PackerBuildName),
+			Comm:         &b.config.Comm,
+                },
+        }
+	postSteps := []multistep.Step{}
+
+	sb := proxmox.NewSharedBuilder(BuilderID, b.config.Config, preSteps, postSteps, &cloneVMCreator{})
+	return sb.Run(ctx, ui, hook, state)
+}
+
+type cloneVMCreator struct{}
+
+func (*cloneVMCreator) Create(vmRef *proxmoxapi.VmRef, config proxmoxapi.ConfigQemu, state multistep.StateBag) error {
+	client := state.Get("proxmoxClient").(*proxmoxapi.Client)
+        c := state.Get("clone-config").(*Config)
+        comm := state.Get("comm").(*communicator.Config)
+
+        fullClone := 1
+        if c.FullClone {
+                fullClone = 0
+        }

I'm having trouble following this. In the docs you say

full_clone (bool) - Whether to run a full or shallow clone from the base clone_vm. Defaults to true.

But the code appears to set full clone to false if the option is set to true?

featheredtoast

comment created time in 12 days

Pull request review comment hashicorp/packer

builder/proxmox FEATURE: split Proxmox into proxmox-iso and proxmox-clone

+package proxmoxclone
+
+import (
+	"context"
+        "time"
+        "fmt"
+
+	proxmoxapi "github.com/Telmate/proxmox-api-go/proxmox"
+	"github.com/hashicorp/hcl/v2/hcldec"
+	"github.com/hashicorp/packer/builder/proxmox/common"
+	"github.com/hashicorp/packer/helper/multistep"
+	"github.com/hashicorp/packer/helper/communicator"
+	"github.com/hashicorp/packer/packer"
+)
+
+// The unique id for the builder
+const BuilderID = "proxmox.clone"
+
+type Builder struct {
+	config Config
+}
+
+// Builder implements packer.Builder
+var _ packer.Builder = &Builder{}
+
+var pluginVersion = "1.0.0"
+
+func (b *Builder) ConfigSpec() hcldec.ObjectSpec { return b.config.FlatMapstructure().HCL2Spec() }
+
+func (b *Builder) Prepare(raws ...interface{}) ([]string, []string, error) {
+	return b.config.Prepare(raws...)
+}
+
+func (b *Builder) Run(ctx context.Context, ui packer.Ui, hook packer.Hook) (packer.Artifact, error) {
+	state := new(multistep.BasicStateBag)
+	state.Put("clone-config", &b.config)
+        state.Put("comm", &b.config.Comm)
+
+	preSteps := []multistep.Step{
+                &StepSshKeyPair{
+                        Debug:        b.config.PackerDebug,
+			DebugKeyPath: fmt.Sprintf("%s.pem", b.config.PackerBuildName),
+			Comm:         &b.config.Comm,
+                },

I assume this is something you're using during development?

featheredtoast

comment created time in 12 days

Pull request review comment hashicorp/packer

builder/proxmox FEATURE: split Proxmox into proxmox-iso and proxmox-clone

+package proxmoxclone
+
+import (
+	"context"
+        "time"
+        "fmt"
+
+	proxmoxapi "github.com/Telmate/proxmox-api-go/proxmox"
+	"github.com/hashicorp/hcl/v2/hcldec"
+	"github.com/hashicorp/packer/builder/proxmox/common"
+	"github.com/hashicorp/packer/helper/multistep"
+	"github.com/hashicorp/packer/helper/communicator"
+	"github.com/hashicorp/packer/packer"
+)
+
+// The unique id for the builder
+const BuilderID = "proxmox.clone"
+
+type Builder struct {
+	config Config
+}
+
+// Builder implements packer.Builder
+var _ packer.Builder = &Builder{}
+
+var pluginVersion = "1.0.0"
+
+func (b *Builder) ConfigSpec() hcldec.ObjectSpec { return b.config.FlatMapstructure().HCL2Spec() }
+
+func (b *Builder) Prepare(raws ...interface{}) ([]string, []string, error) {
+	return b.config.Prepare(raws...)
+}
+
+func (b *Builder) Run(ctx context.Context, ui packer.Ui, hook packer.Hook) (packer.Artifact, error) {
+	state := new(multistep.BasicStateBag)
+	state.Put("clone-config", &b.config)
+        state.Put("comm", &b.config.Comm)

Looks like a go fmt is needed?

featheredtoast

comment created time in 12 days

Pull request review comment hashicorp/packer

builder/proxmox FEATURE: split Proxmox into proxmox-iso and proxmox-clone

+package proxmoxclone
+
+import (
+	"context"
+        "time"
+        "fmt"
+
+	proxmoxapi "github.com/Telmate/proxmox-api-go/proxmox"
+	"github.com/hashicorp/hcl/v2/hcldec"
+	"github.com/hashicorp/packer/builder/proxmox/common"
+	"github.com/hashicorp/packer/helper/multistep"
+	"github.com/hashicorp/packer/helper/communicator"
+	"github.com/hashicorp/packer/packer"
+)
+
+// The unique id for the builder
+const BuilderID = "proxmox.clone"
+
+type Builder struct {
+	config Config
+}
+
+// Builder implements packer.Builder
+var _ packer.Builder = &Builder{}
+
+var pluginVersion = "1.0.0"
+
+func (b *Builder) ConfigSpec() hcldec.ObjectSpec { return b.config.FlatMapstructure().HCL2Spec() }
+
+func (b *Builder) Prepare(raws ...interface{}) ([]string, []string, error) {
+	return b.config.Prepare(raws...)
+}
+
+func (b *Builder) Run(ctx context.Context, ui packer.Ui, hook packer.Hook) (packer.Artifact, error) {
+	state := new(multistep.BasicStateBag)
+	state.Put("clone-config", &b.config)
+        state.Put("comm", &b.config.Comm)
+
+	preSteps := []multistep.Step{
+                &StepSshKeyPair{
+                        Debug:        b.config.PackerDebug,
+			DebugKeyPath: fmt.Sprintf("%s.pem", b.config.PackerBuildName),
+			Comm:         &b.config.Comm,
+                },
+        }
+	postSteps := []multistep.Step{}
+
+	sb := proxmox.NewSharedBuilder(BuilderID, b.config.Config, preSteps, postSteps, &cloneVMCreator{})
+	return sb.Run(ctx, ui, hook, state)
+}
+
+type cloneVMCreator struct{}
+
+func (*cloneVMCreator) Create(vmRef *proxmoxapi.VmRef, config proxmoxapi.ConfigQemu, state multistep.StateBag) error {
+	client := state.Get("proxmoxClient").(*proxmoxapi.Client)
+        c := state.Get("clone-config").(*Config)
+        comm := state.Get("comm").(*communicator.Config)
+
+        fullClone := 1
+        if c.FullClone {
+                fullClone = 0
+        }
+
+        config.FullClone = &fullClone
+        config.CIuser = comm.SSHUsername
+        config.Sshkeys = string(comm.SSHPublicKey)
+        sourceVmr, err := client.GetVmRefByName(c.CloneVM)
+        if err != nil {
+                return err
+        }
+        err = config.CloneVm(sourceVmr, vmRef, client)
+        if err != nil {
+                return err
+        }
+        err = config.UpdateConfig(vmRef, client)
+        if err != nil {
+                return err
+        }
+        time.Sleep(time.Duration(15) * time.Second)

This rings warning bells :) Why is this needed?

featheredtoast

comment created time in 12 days


issue comment prometheus-community/windows_exporter

Best Practices to deploy windows_exporter for Kubernetes

@keikhara The windows_exporter has a much larger scope than just container information. And the rest of the exporter primarily uses the WMI and Performance Counter subsystems in Windows, which are not available from unprivileged containers.

Gerthum

comment created time in 12 days

PR opened open-policy-agent/opa

topdown: Add base64.verify builtin

Adds a builtin to verify a string is valid base64. Since it just covers base64, this is perhaps more of a start on #2690 than a full fix, for now, to verify (hah!) that I'm on the right track. If everything looks good, I can add base64url, urlquery, json and yaml.

+60 -0

0 comment

4 changed files

pr created time in 12 days

create branch carlpett/opa

branch : add-base64-verify

created branch time in 12 days

fork carlpett/opa

An open source, general-purpose policy engine.

https://www.openpolicyagent.org

fork in 12 days

issue opened open-policy-agent/opa

Add verify functions for encodings (eg base64)

Currently it is not possible to verify input is in some specific encoding, such as base64. The only option I've found is letting the decode function abort with an error, which isn't quite as nice as a policy failure.

Expected Behavior

_ := base64.verify("not-base64") # false
_ := base64.verify("YWN0dWFsbHktYmFzZTY0") # true

Actual Behavior

_ := base64.decode("not-base64") # ERROR

created time in 14 days

issue comment prometheus-community/windows_exporter

Clarification on windows_mssql_sqlstats_batch_requests

Hi, yes, that appears to be a counter of the total number of batch requests, so to get the per-second rate you'd do roughly rate(windows_mssql_sqlstats_batch_requests[5m]) (where the 5m window can be tweaked depending on your needs and setup)

noahjd

comment created time in 14 days

issue opened hashicorp/terraform

repeat function for list generation

Current Terraform Version

v0.12.29

Use-cases

There are situations where lists of repeated elements are required. For example, when using cidrsubnets I frequently want a (sometimes long) list of repeated elements, eg cidrsubnets("10.0.0.0/8", 4, 4, 4, 4, 4, 4, 4, ...).

Attempted Solutions

Presently I get around this with nested ranges like this: [for cidr_block in cidrsubnets(local.cidrs, [for _ in range(16) : 4]...) : cidrsubnets(cidr_block, [for _ in range(7) : 4]...)]

Proposal

Add a new collection function repeat(elem, count), which generates a list of count copies of elem. For example:

> repeat(1, 4)
[
  1,
  1,
  1,
  1,
]
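Combined with the cidrsubnets use-case above, the proposal would collapse the nested-range workaround into something like this (hypothetical, since repeat does not exist yet):

```terraform
> cidrsubnets("10.0.0.0/8", repeat(4, 16)...)
```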

created time in 16 days

issue comment helm/helm

Helm lint throws an error where a warning should be thrown instead

Metadata names: Following the kubectl naming rules is good practice

It looks to me like this is an overly strict reading of the naming docs? While the docs mention three name constraints (DNS subdomain, DNS label and Path segment), Helm's linter appears to only consider the subdomain constraint, which will be overly strict for some types, and too liberal for those that work with the label constraint. As an example of where this is too strict, Kubernetes built-in roles and bindings use colon-separation for organization, which has thus become a pattern repeated in the community. This is now disallowed, breaking several charts.
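For instance, a chart templating a role in the same colon-separated style as the built-ins (the name below is a made-up example) would now fail lint even though Kubernetes accepts it:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # Valid for Kubernetes (built-ins like system:node use the same pattern),
  # but rejected by a DNS-subdomain-only name check
  name: mychart:aggregate-to-view
```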

kofan

comment created time in 16 days

issue comment helm/helm

Helm lint throws an error where a warning should be thrown instead

Is there a metric out there that tells us what versions of Kubernetes are used with Helm?

A decent/better-than-nothing proxy might be what the major managed Kubernetes providers are defaulting to. As of today, GKE defaults to 1.15, EKS 1.17 and AKS 1.17 (I think - I don't have any engagements on Azure ATM, so interpretation from their docs). So basically, the policy outlined above would mean a GKE default cluster is not a supported Helm usage, which seems overly strict?

Another angle would be looking at the recently announced extended support window for Kubernetes itself from 9 to 12 months, which would mean a 4 step version skew.

kofan

comment created time in 16 days

created tag prometheus-community/windows_exporter

tagv0.14.0

Prometheus exporter for Windows machines

created time in 17 days

release prometheus-community/windows_exporter

v0.14.0

released time in 17 days

push event prometheus-community/windows_exporter

Michael Allen

commit sha 24470eb17e84c4ecc2847e5adb39538cabf7f70a

Use perflib to gather metrics in the mssql collector The perflib-based access code replaces WMI and is substantially more efficient and stable, leading to fewer collection timeouts in the mssql collector. Signed-off-by: Michael Allen <MAllen@laserfiche.com>

view details

Michael Allen

commit sha 8d0d7b31b1b90431980d2f01e38289ef23fdd1ab

Add perflib annotations to struct win32PerfRawDataSqlServerTransactions Signed-off-by: Michael Allen <MAllen@laserfiche.com>

view details

Michael Allen

commit sha 3b2ef6287c1948fde1d661e6b0d8fd8843927a3d

Rename MSSQL metrics data structs for clarity The old names were hard to read, but had to be named as such to work with the WMI library. Now that raw performance counter data are used instead of WMI, we are free to name the data structs freely. Signed-off-by: Michael Allen <MAllen@laserfiche.com>

view details

Michael Allen

commit sha a3867b8dbf1e1bf4cc4e72f7aaa7f0f33f357f8b

Correct a typo where "perflib" was misspelled in a struct field tag Signed-off-by: Michael Allen <MAllen@laserfiche.com>

view details

Calle Pettersson

commit sha 922c08b85b60da8a444a57a84126448d7c993307

Merge pull request #592 from mallenLF/mssql-perflib Use perflib to gather metrics in the mssql collector

view details

push time in 17 days

PR merged prometheus-community/windows_exporter

Use perflib to gather metrics in the mssql collector

The perflib-based access code replaces WMI and is substantially more efficient and stable, leading to fewer collection timeouts in the mssql collector.

+1321 -1305

2 comments

2 changed files

mallenLF

pr closed time in 17 days

pull request comment prometheus-community/windows_exporter

Use perflib to gather metrics in the mssql collector

Nice catch! :) Then this seems good to go, thanks a lot!

mallenLF

comment created time in 17 days

issue comment kubernetes/ingress-gce

Changing class from gce-internal to gce leaves orphaned load balancers

Hey! I had actually cleaned it up, but could reproduce it. I'll mail you the details!

For completeness, here are the repro YAMLs:

apiVersion: v1
kind: Service
metadata:
  name: gke-test-ingress
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: foo
spec:
  type: ClusterIP
  selector:
    app: foo
  ports:
  - name: http
    port: 8080
    targetPort: http
    protocol: TCP
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: gke-test-ingress
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: gce-internal
  labels:
    app: foo
spec:
  rules:
  - host: ingress-test.my-domain.tld
    http:
      paths:
      - path: /
        backend:
          serviceName: gke-test-ingress
          servicePort: http
  tls:
  - hosts:
    - ingress-test.my-domain.tld
    secretName: gke-test-ingress-tls
carlpett

comment created time in 18 days

issue opened kubernetes/ingress-gce

Changing class from gce-internal to gce leaves orphaned load balancers

Hey, I created an ingress with ingress.class: gce-internal. After testing a bit, I changed the class to gce to compare the latency. After a while, a new public load balancer was provisioned, however the old internal one still remains, even after removing the ingress object completely. (The new public load balancer didn't seem to work either, just returning Google 404s, but that may be something on my end)

created time in 18 days

issue comment prometheus-community/windows_exporter

Change metric type process_start_time_seconds

Hi @electron2302, I would guess that you built the exporter with another version of the Prometheus client libraries than intended. What version of Go do you have? You need to make sure you are on 1.13 or later, and building with modules enabled, otherwise it'll pull in the absolutely latest version.

suvika17

comment created time in 18 days

issue comment prometheus-community/windows_exporter

How to monitor custom performance counters

This sounds like an unnecessary step producing large files if it runs long enough. One of the reasons we turned to Prometheus is the compression and rotation of stored data.

You'd overwrite the file at each pass, so file size wouldn't be an issue. Or am I misunderstanding you here? Anyway, there's no difference vs doing it inside the exporter from the Prometheus server's point of view, so performance-wise it's completely equivalent.

And sure enough, if I test with Get-CimInstance -ComputerName "localhost" -Class "Win32_PerfRawData_MyApp_MyApp", the name is empty in the output.

The generator currently works with WMI only, not performance counters. In many of the Windows built-in counters, there's more or less a one-to-one mapping from counters to WMI, but this isn't the case if your application writes counters itself.

electron2302

comment created time in 18 days


pull request comment hashicorp/packer

builder/proxmox FEATURE: split Proxmox into proxmox-iso and proxmox-clone

Okay, did a quick spike in a branch here. I think this should have the necessary extension points, while avoiding most of the repetition. What do you think? I'm not super happy about the ProxmoxVMCreator via the statebag, but not sure if there are any great options.

featheredtoast

comment created time in 19 days

create branch carlpett/packer

branch : split-proxmox

created branch time in 19 days

pull request comment hashicorp/packer

builder/proxmox FEATURE: split Proxmox into proxmox-iso and proxmox-clone

Yes, this is definitely still relevant, I just haven't been able to find the time since this is a bit big. Sorry about that!

I started out attempting to source the same builder, but it looked like the ISO config reference is required (from the common ISO config) with no easy way of overriding it to be optional.

In principle, it's not too hard to make the iso config optional, basically just avoid calling c.ISOConfig.Prepare(&c.ctx). I think the crux here would be when to do so. We could simply do something like if c.CloneVM != "" { .. do clone prep .. } else { .. do ISO prep .. }, downside being that it might be a bit confusing to the user that we ignore the ISO config if given when cloning, so we'd probably also have to error out if both are supplied

However, splitting might be a better end user experience and more in line with other builders as you note. But in this case, I think we could do it with less duplication, to avoid having to update many places for every change. The difference is mainly just the "creating a new VM" bits in stepStartVM and some configuration details.

I could try to knock out a proof of concept on how to do this, if that'd help?

featheredtoast

comment created time in 20 days

issue closed prometheus-community/windows_exporter

How to monitor the windows scheduler task status using windows_exporter?

  1. I cannot find how to monitor Windows scheduled task running status; could you please give some advice?
  2. How to monitor the event ID?

closed time in 21 days

xuanyuanaosheng

issue comment prometheus-community/windows_exporter

windows_exporter service failed to start on reboot

@dry4ng It'd be interesting to see if your case is solved with an exception in Windows Defender as mentioned above?

f1-outsourcing

comment created time in 21 days

Pull request review comment prometheus-community/windows_exporter

Use perflib to gather metrics in the mssql collector

 func (c *MSSQLCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric
 // - Win32_PerfRawData_MSSQLSERVER_SQLServerAccessMethods
 //   https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-access-methods-object
 type win32PerfRawDataSQLServerAccessMethods struct {

With the WMI lib, we had to match type names to class names, but this is no longer the case, so we can give these types names without all the WMI-isms. So something like mssqlAccessMethods in this case, maybe? Same pattern for all the other things mentioning perfraw etc.

mallenLF

comment created time in 21 days

Pull request review comment prometheus-community/windows_exporter

Use perflib to gather metrics in the mssql collector

 var (
 
 func registerCollector(name string, builder collectorBuilder, perfCounterNames ...string) {
 	builders[name] = builder
+	addPerfCounterDependencies(name, perfCounterNames)
+}
+
+func addPerfCounterDependencies(name string, perfCounterNames []string) {

I can see why this is necessary, but it is a bit of a layering violation. We should probably - in a separate PR - overhaul the startup order so commandline parsing is done before collector initialization so we don't need to muck around in the internals like this in collectors.

mallenLF

comment created time in 21 days


issue comment hashicorp/packer

Packer sends empty API request

Nice find! I'd say that is a bug. It is only correct to ignore an empty value if the field was previously empty as well. I added a comment on the upstream PR: https://github.com/Telmate/proxmox-api-go/pull/76 I think the appropriate thing here would be to roll back that change upstream.

paginabianca

comment created time in 21 days

pull request comment Telmate/proxmox-api-go

fix proxmox API errors on empty params

@jda @ggongaware This actually causes another bug, where attempting to reset something no longer works (ie something has a value, and we want to remove it). Since the empty string is a valid value in many places in the API, I think this change might be a bit broad?

jda

comment created time in 21 days

Pull request review comment hashicorp/packer

[WIP] Adds ability to specify interfaces for http_directroy and VM for the Proxmox builder

 builder.
   - `iso_checksum` (string) - Checksum of the ISO file.
   - `unmount` (bool) - If true, remove the mounted ISO from the template after finishing. Defaults to `false`.
 
+- `http_interface` - (string) - Name of the network interface that Packer gets

Maybe this should rather go in common.HTTPConfig? There's already an option http_bind_address, so there should be some care taken on the interaction I guess (what happens if both are specified)

paginabianca

comment created time in 21 days

Pull request review comment hashicorp/packer

[WIP] Adds ability to specify interfaces for http_directroy and VM for the Proxmox builder

 func commHost(host string) func(state multistep.StateBag) (string, error) {
 // Reads the first non-loopback interface's IP address from the VM.
 // qemu-guest-agent package must be installed on the VM
 func getVMIP(state multistep.StateBag) (string, error) {
-	c := state.Get("proxmoxClient").(*proxmox.Client)
+	client := state.Get("proxmoxClient").(*proxmox.Client)
+	config := state.Get("Config").(*Config)

I don't think this is case insensitive(?), so you'll need to use the same case as when state.Put is called above (all lowercase)

paginabianca

comment created time in 21 days


issue comment hashicorp/packer

Proxmox Error running boot command: 500 invalid parameter: leftshift

Hi, I had a look deeper into how Packer generates the key sequences, and it looks like I had missed a few pieces. So what I said above isn't correct. It looks like the Proxmox bootcommand driver should handle KeyActions, but doesn't. So from there we'd need to keep track of shift-states and so on. I don't have time to look at this right away, but if you have some time, I believe that is where it needs to happen.

paginabianca

comment created time in 21 days

issue closed carlpett/zookeeper_exporter

no zk_* items

# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0
go_gc_duration_seconds_sum 0
go_gc_duration_seconds_count 0
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 16
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 1.984552e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 1.984552e+06
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.443527e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 484
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 202752
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 1.984552e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 409600
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 3.162112e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 7332
# HELP go_memstats_heap_released_bytes_total Total number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes_total counter
go_memstats_heap_released_bytes_total 0
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 3.571712e+06
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 0
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 39
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 7816
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 6944
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 35112
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 49152
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 4.473924e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 1.042993e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 622592
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 622592
# HELP go_memstats_sys_bytes Number of bytes obtained by system. Sum of all system allocations.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 6.949112e+06
# HELP http_request_duration_microseconds The HTTP request latencies in microseconds.
# TYPE http_request_duration_microseconds summary
http_request_duration_microseconds{handler="prometheus",quantile="0.5"} 49158.369
http_request_duration_microseconds{handler="prometheus",quantile="0.9"} 49158.369
http_request_duration_microseconds{handler="prometheus",quantile="0.99"} 49158.369
http_request_duration_microseconds_sum{handler="prometheus"} 49158.369
http_request_duration_microseconds_count{handler="prometheus"} 1
# HELP http_request_size_bytes The HTTP request sizes in bytes.
# TYPE http_request_size_bytes summary
http_request_size_bytes{handler="prometheus",quantile="0.5"} 213
http_request_size_bytes{handler="prometheus",quantile="0.9"} 213
http_request_size_bytes{handler="prometheus",quantile="0.99"} 213
http_request_size_bytes_sum{handler="prometheus"} 213
http_request_size_bytes_count{handler="prometheus"} 1
# HELP http_requests_total Total number of HTTP requests made.
# TYPE http_requests_total counter
http_requests_total{code="200",handler="prometheus",method="get"} 1
# HELP http_response_size_bytes The HTTP response sizes in bytes.
# TYPE http_response_size_bytes summary
http_response_size_bytes{handler="prometheus",quantile="0.5"} 1456
http_response_size_bytes{handler="prometheus",quantile="0.9"} 1456
http_response_size_bytes{handler="prometheus",quantile="0.99"} 1456
http_response_size_bytes_sum{handler="prometheus"} 1456
http_response_size_bytes_count{handler="prometheus"} 1
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 65535
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 10
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 4.780032e+06
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.59909754567e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.4000128e+07
# HELP zk_up Exporter successful
# TYPE zk_up gauge
zk_up 0
# HELP zookeeper_exporter_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which zookeeper_exporter was built.
# TYPE zookeeper_exporter_build_info gauge
zookeeper_exporter_build_info{branch="HEAD",goversion="go1.10.8",revision="0e30f3afdcb8e36be1b2a41fa432f4626012e5c0",version="v1.1.0"} 1

closed time in 21 days

Newbwen

issue comment carlpett/zookeeper_exporter

no zk_* items

Hi @Newbwen, Note the metric zk_up 0. This means that the exporter was unable to reach the Zookeeper server. The logs should mention what went wrong, fixing that will ensure you get all the metrics. I'll close this for now, feel free to re-open if you do not get metrics despite a working connection to the server.
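As a side note, this condition can be detected programmatically from a scrape. Below is a minimal sketch (not part of the exporter) that pulls an unlabelled metric such as zk_up out of Prometheus text exposition format; the sample body is taken from the output above.

```python
# Minimal sketch: extract the value of a plain (unlabelled) metric from
# Prometheus text exposition format. Assumes the scraped body is already
# available as a string.
def metric_value(exposition: str, name: str):
    for line in exposition.splitlines():
        if line.startswith("#"):
            continue  # skip HELP/TYPE comment lines
        parts = line.split()
        if len(parts) == 2 and parts[0] == name:
            return float(parts[1])
    return None

body = "# HELP zk_up Exporter successful\n# TYPE zk_up gauge\nzk_up 0\n"
print(metric_value(body, "zk_up"))  # -> 0.0
```

A value of 0.0 here corresponds exactly to the "exporter could not reach the Zookeeper server" case described in the comment.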

Newbwen

comment created time in 21 days

issue opened cortexproject/cortex

Unexpected(?) memory usage increase after ingester restart

Following a question on Slack: https://cloud-native.slack.com/archives/CCYDASBLP/p1598877918007000. This is to some extent a question of "is this working as intended" - we're just getting started with Cortex and aren't familiar with the behaviours just yet.

We saw a peculiar increase in memory usage after our ingester (currently only running a single replica) was restarted as part of node maintenance in our GKE cluster. Here are container_memory_working_set_bytes and go_memstats_heap_alloc_bytes (the latter available after 13:08): (graph screenshot attached)

A few hours before the restart at 13:25, we had just added our first "real" remote write clients, three small Prometheus servers (~50-60k series each). Immediately after the restart, memory usage increased from ~450 MiB to ~1200 MiB.

The ingester is configured with wal_enabled: true and recover_from_wal: true. The PVC holding the WAL held ~625 MiB at the time of the restart. We are running master-f1b95c68d in order to get some bug fixes with memberlist.
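For reference, the WAL settings mentioned above sit under the ingester block of the config. A sketch (key nesting follows Cortex's chunks-storage WAL config; wal_dir is an assumed mount path, not taken from the issue):

```yaml
ingester:
  walconfig:
    wal_enabled: true        # write incoming samples to the WAL
    recover_from_wal: true   # replay the WAL on startup (the restart path discussed here)
    wal_dir: /wal            # assumed: the PVC mount holding the WAL data
```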

On request, I captured a number of pprof heap profiles, attached here: pprof.cortex.alloc_objects.alloc_space.inuse_objects.inuse_space.001.pb.gz pprof.cortex.alloc_objects.alloc_space.inuse_objects.inuse_space.002.pb.gz pprof.cortex.alloc_objects.alloc_space.inuse_objects.inuse_space.003.pb.gz pprof.cortex.alloc_objects.alloc_space.inuse_objects.inuse_space.004.pb.gz pprof.cortex.alloc_objects.alloc_space.inuse_objects.inuse_space.005.pb.gz

created time in 24 days

issue closed prometheus-community/windows_exporter

Install windows_exporter on many servers with service collector and regexp for service Name

Hi, I use DSC to install windows_exporter on servers and I have a problem starting the installation with the service collector and a regexp for service names. The service collector with a custom query works if I start it on the local machine using & msiexec.exe, but if I try it with the Start-Process cmdlet, like Start-Process msiexec.exe -Wait -ArgumentList ... (which is what DSC uses), it doesn't work. The problem appears to be the --% stop-parsing token: every time I run Start-Process with the --% sequence it just displays the msiexec help. Without the --% it works fine, but then without the regexp. No matter what I try (ArgumentList as an array, a separate string variable, etc.) I can't make it work. Please advise if you're familiar with this issue. My example script:

$Arguments = '/quiet /i C:\Distrib\windows_exporter-0.13.0-amd64.msi ENABLED_COLLECTORS=os,cpu,cs,memory,net,system,logical_disk,service --% EXTRA_FLAGS=--collector.service.services-where="Name=''LRT%''"'
Start-Process msiexec.exe -Wait -ArgumentList $Arguments

windows_exporter - 0.13.0 Windows server 2016
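The layers of quoting in the EXTRA_FLAGS value above can be illustrated with a small sketch (hypothetical, in Python, and not a fix for the Start-Process problem itself; the WQL clause mirrors the one in the script):

```python
# Illustration of the quoting layers: the WQL where-clause contains single
# quotes, a single-quoted PowerShell string doubles any embedded single
# quote, and the flag value itself is wrapped in double quotes.
wql = "Name='LRT%'"                            # clause the service collector receives
flag = f'--collector.service.services-where="{wql}"'
powershell_literal = flag.replace("'", "''")   # escape for a single-quoted PS string
print(powershell_literal)
# -> --collector.service.services-where="Name=''LRT%''"
```

This is why the doubled single quotes (''LRT%'') appear in the $Arguments string above.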

closed time in 24 days

elita26

issue comment prometheus-community/windows_exporter

Install windows_exporter on many servers with service collector and regexp for service Name

''"""' Wow, what quoting. Thanks for posting that, I'm sure it'll save someone a bit of frustration! :heart:

elita26

comment created time in 24 days

pull request comment hashicorp/packer

Allows for the mounting of ISOs when a Proxmox VM is created. Same as …

:tada: Nice work!

paginabianca

comment created time in 24 days

Pull request review comment hashicorp/packer

Allows for the mounting of ISOs when a Proxmox VM is created. Same as …

 func (c *Config) Prepare(raws ...interface{}) ([]string, error) {
 			warnings = append(warnings, isoWarnings...)
 			c.AdditionalISOFiles[idx].shouldUploadISO = true
 		}
-		if c.AdditionalISOFiles[idx].DeviceType == "" {
-			log.Printf("AdditionalISOFile %d DeviceType not set, using default 'ide3'", idx)
-			c.AdditionalISOFiles[idx].DeviceType = "ide3"
+		if c.AdditionalISOFiles[idx].Device == "" {
+			log.Printf("AdditionalISOFile %d Devicenot set, using default 'ide3'", idx)

Typo

			log.Printf("AdditionalISOFile %d Device not set, using default 'ide3'", idx)
paginabianca

comment created time in a month
