If you are wondering where the data on this site comes from, please visit https://api.github.com/users/travisjeffery/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Travis Jeffery travisjeffery Confluent Canada https://travisjeffery.com Worked at Basecamp, Segment, Confluent on Kafka/Cloud. Author of Distributed Services with Go.

grpc/grpc-web 5628

gRPC for Web Clients

travisjeffery/ClangFormat-Xcode 2832

Xcode plug-in to use clang-format from within Xcode and consistently format your code with Clang

tj/should.js 2771

BDD style assertions for node.js -- test framework agnostic

Automattic/monk 1826

The wise MongoDB API

jtrupiano/rack-rewrite 823

A web server agnostic rack middleware for defining and applying rewrite rules. In many cases you can get away with Rack::Rewrite instead of writing Apache mod_rewrite rules.

thoughtbot/shoulda-context 157

Minitest & Test::Unit context framework

rweng/underscore-rails 129

underscore.js asset-pipeline provider/wrapper

travisjeffery/awesome-wm 12

My configuration of the awesome window manager.

travisjeffery/computer-science 5

Data structures, algorithms, and other good stuff.

travisjeffery/certmagic-sqlstorage 3

SQL storage for CertMagic/Caddy TLS data.

issue closed golang/go

runtime: segmentation violation with linkshared

What version of Go are you using (go version)?

$ go version
go version go1.15.1 linux/amd64

Does this issue reproduce with the latest release?

Yes, with every version.

What operating system and processor architecture are you using (go env)?

Linux 16.04.1-Ubuntu x86_64 GNU/Linux

go env Output:

$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/root/.cache/go-build"
GOENV="/root/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GOMODCACHE="/root/go/pkg/mod"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/root/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="/usr/bin/gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build242294695=/tmp/go-build -gno-record-gcc-switches"

What did you do?

Dev machine: created an HTTP application with the linkshared option:

go install -buildmode=shared runtime sync/atomic
go build -linkshared httpapp.go

Source code for httpapp.go:

package main

import (
    "github.com/labstack/echo"
    "net/http"
)

func mainAdmin(c echo.Context) error {
    client := &http.Client{}
    req, err := http.NewRequest("GET", "https://www.geeksforgeeks.org/find-triplets-array-whose-sum-equal-zero", nil)

    if err == nil {
        resp, err2 := client.Do(req)
        if err2 == nil {
            defer resp.Body.Close()
        }
    }
    return c.String(http.StatusOK, "Hello, Main!")
}

func main() {
    e := echo.New()
    e.GET("/main", mainAdmin)
    e.Start(":1328")
}

Copied the app (httpapp) with its dependent libraries (the shared Go runtime) and ran it on another machine (Ubuntu 16.04.1) where Go is not installed. Launched the app and hit it from the browser (localhost:1328/main) multiple times; it worked as expected (printed "Hello, Main!" in the browser window). Hitting it again after a few seconds crashed the app. The crash can be reproduced every time: after launching the application, wait a few seconds. The crash is easily reproduced on any machine where Go is not installed.

What did you expect to see?

The application should not crash.

What did you see instead?

It crashes in the client.Do function. I get the following stack trace:

unexpected fault address 0x154ce0
fatal error: fault
[signal SIGSEGV: segmentation violation code=0x1 addr=0x154ce0 pc=0x154ce0]

goroutine 31 [running]:
runtime.throw(0x7f03b0b95f77, 0x5)
/usr/local/go/src/runtime/panic.go:1116 +0x74 fp=0xc0000ce518 sp=0xc0000ce4e8 pc=0x7f03b0b4b494
runtime.sigpanic()
/usr/local/go/src/runtime/signal_unix.go:727 +0x428 fp=0xc0000ce548 sp=0xc0000ce518 pc=0x7f03b0b68a88
crypto/sha512.blockAMD64(0xc0000ce778, 0xc0000ce7b8, 0x80, 0x80)
/usr/local/go/src/crypto/sha512/sha512block_amd64.s:144 +0x3f15 fp=0xc0000ce550 sp=0xc0000ce548 pc=0x654315
crypto/sha512.block(0xc0000ce778, 0xc0000ce7b8, 0x80, 0x80)
/usr/local/go/src/crypto/sha512/sha512block_amd64.go:23 +0x8e fp=0xc0000ce580 sp=0xc0000ce550 pc=0x6503ae
crypto/sha512.(*digest).Write(0xc0000ce778, 0xc0000ce650, 0x10, 0x80, 0x62, 0x0, 0x0)
/usr/local/go/src/crypto/sha512/sha512.go:262 +0x1df fp=0xc0000ce5d0 sp=0xc0000ce580 pc=0x64fa5f
crypto/sha512.(*digest).checkSum(0xc0000ce778, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/crypto/sha512/sha512.go:310 +0x125 fp=0xc0000ce6e0 sp=0xc0000ce5d0 pc=0x650085
crypto/sha512.(*digest).Sum(0xc000542d20, 0x0, 0x0, 0x0, 0x1, 0x1, 0x0)
/usr/local/go/src/crypto/sha512/sha512.go:282 +0xa5 fp=0xc0000ce868 sp=0xc0000ce6e0 pc=0x64fba5
crypto/hmac.(*hmac).Sum(0xc000541c80, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0)
/usr/local/go/src/crypto/hmac/hmac.go:57 +0x58 fp=0xc0000ce8b8 sp=0xc0000ce868 pc=0x6814b8
golang.org/x/crypto/hkdf.(*hkdf).Read(0xc00050caf0, 0xc00052f340, 0x20, 0x20, 0xc0000a2770, 0xd, 0x10)
/usr/local/go/src/golang.org/x/crypto/hkdf/hkdf.go:63 +0x1d3 fp=0xc0000ce918 sp=0xc0000ce8b8 pc=0x71fb53
crypto/tls.(*cipherSuiteTLS13).expandLabel(0xd32fe0, 0xc00025c240, 0x30, 0x30, 0x88a498, 0x3, 0x0, 0x0, 0x0, 0x20, …)
/usr/local/go/src/crypto/tls/key_schedule.go:46 +0x2a2 fp=0xc0000cea30 sp=0xc0000ce918 pc=0x754b02
crypto/tls.(*cipherSuiteTLS13).trafficKey(0xd32fe0, 0xc00025c240, 0x30, 0x30, 0x30, 0x30, 0x30, 0xc00025c240, 0x30, 0x30)
/usr/local/go/src/crypto/tls/key_schedule.go:77 +0x98 fp=0xc0000ceac0 sp=0xc0000cea30 pc=0x755218
crypto/tls.(*halfConn).setTrafficSecret(0xc0000fe1f0, 0xd32fe0, 0xc00025c240, 0x30, 0x30)
/usr/local/go/src/crypto/tls/conn.go:215 +0x7a fp=0xc0000ceb20 sp=0xc0000ceac0 pc=0x7281da
crypto/tls.(*clientHandshakeStateTLS13).establishHandshakeKeys(0xc0000cedf0, 0x0, 0x0)
/usr/local/go/src/crypto/tls/handshake_client_tls13.go:360 +0x226 fp=0xc0000cec00 sp=0xc0000ceb20 pc=0x73b046
crypto/tls.(*clientHandshakeStateTLS13).handshake(0xc0000cedf0, 0xc0000a2668, 0x4)
/usr/local/go/src/crypto/tls/handshake_client_tls13.go:79 +0x1bb fp=0xc0000cec50 sp=0xc0000cec00 pc=0x73943b
crypto/tls.(*Conn).clientHandshake(0xc0000fe000, 0x0, 0x0)
/usr/local/go/src/crypto/tls/handshake_client.go:209 +0x65d fp=0xc0000ceee0 sp=0xc0000cec50 pc=0x732f5d
crypto/tls.(*Conn).clientHandshake-fm(0x0, 0x0)
/usr/local/go/src/crypto/tls/handshake_client.go:136 +0x2c fp=0xc0000cef08 sp=0xc0000ceee0 pc=0x76e98c
crypto/tls.(*Conn).Handshake(0xc0000fe000, 0x0, 0x0)
/usr/local/go/src/crypto/tls/conn.go:1362 +0xc9 fp=0xc0000cef78 sp=0xc0000cef08 pc=0x731049
net/http.(*persistConn).addTLS.func2(0x0, 0xc0000fe000, 0xc00051f8b0, 0xc000540ea0)
/usr/local/go/src/net/http/transport.go:1509 +0x45 fp=0xc0000cefc0 sp=0xc0000cef78 pc=0x819545
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1374 +0x1 fp=0xc0000cefc8 sp=0xc0000cefc0 pc=0x7f03b0b8cfc1
created by net/http.(*persistConn).addTLS
/usr/local/go/src/net/http/transport.go:1505 +0x17d

goroutine 1 [IO wait]:
internal/poll.runtime_pollWait(0x7f03b0f0df48, 0x72, 0x0)
/usr/local/go/src/runtime/netpoll.go:220 +0x65
internal/poll.(*pollDesc).wait(0xc00014c318, 0x72, 0x0, 0x0, 0x88b2eb)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x47
internal/poll.(*pollDesc).waitRead(…)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Accept(0xc00014c300, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:394 +0x1fc
net.(*netFD).accept(0xc00014c300, 0x1, 0x203000, 0x203000)
/usr/local/go/src/net/fd_unix.go:172 +0x45
net.(*TCPListener).accept(0xc00000e680, 0x29e8d60800, 0x8dffb4, 0xc0000d1d58)
/usr/local/go/src/net/tcpsock_posix.go:139 +0x34
net.(*TCPListener).AcceptTCP(0xc00000e680, 0xca1d52c7009746b2, 0x0, 0x0)
/usr/local/go/src/net/tcpsock.go:248 +0x67
github.com/labstack/echo.tcpKeepAliveListener.Accept(0xc00000e680, 0xc0000d1d58, 0x5ad9c8,   0x601400dd, 0x7f03b0b58970)
/root/go/src/github.com/labstack/echo/echo.go:946 +0x31
net/http.(*Server).Serve(0xc00014e000, 0xba22c0, 0xc00003ec98, 0x0, 0x0)
/usr/local/go/src/net/http/server.go:2937 +0x286
github.com/labstack/echo.(*Echo).serve(0xc000150000, 0xc00014e000, 0x0)
/root/go/src/github.com/labstack/echo/echo.go:789 +0x9b
github.com/labstack/echo.(*Echo).Start(0xc000150000, 0x88a92b, 0x5, 0xb92858, 0x0)
/root/go/src/github.com/labstack/echo/echo.go:663 +0xd7
main.main()
/root/go/src/http2.go:66 +0x10f

goroutine 23 [IO wait]:
internal/poll.runtime_pollWait(0x7f03b0f0dca8, 0x72, 0xb9d200)
/usr/local/go/src/runtime/netpoll.go:220 +0x65
internal/poll.(*pollDesc).wait(0xc000520a18, 0x72, 0xc000539100, 0x1, 0x1)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x47
internal/poll.(*pollDesc).waitRead(…)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc000520a00, 0xc000539151, 0x1, 0x1, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:159 +0x1b1
net.(*netFD).Read(0xc000520a00, 0xc000539151, 0x1, 0x1, 0x0, 0x0, 0x0)
/usr/local/go/src/net/fd_posix.go:55 +0x51
net.(*conn).Read(0xc00003e050, 0xc000539151, 0x1, 0x1, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:182 +0x90
net/http.(*connReader).backgroundRead(0xc000539140)
/usr/local/go/src/net/http/server.go:690 +0x5a
created by net/http.(*connReader).startBackgroundRead
/usr/local/go/src/net/http/server.go:686 +0xd5

goroutine 25 [chan receive]:
net/http.(*persistConn).addTLS(0xc00053c480, 0xc00052ef40, 0x15, 0x0, 0xc00052ef56, 0x3)
/usr/local/go/src/net/http/transport.go:1515 +0x1a6
net/http.(*Transport).dialConn(0xd38ee0, 0xba3300, 0xc0000a20a8, 0x0, 0x89ad45, 0x5,    0xc00052ef40, 0x19, 0x0, 0xc00053c480, …)
/usr/local/go/src/net/http/transport.go:1585 +0x1d67
net/http.(*Transport).dialConnFor(0xd38ee0, 0xc0001c4160)
/usr/local/go/src/net/http/transport.go:1421 +0xc8
created by net/http.(*Transport).queueForDial
/usr/local/go/src/net/http/transport.go:1390 +0x42f

goroutine 22 [select]:
net/http.(*Transport).getConn(0xd38ee0, 0xc000361400, 0x0, 0x89ad45, 0x5, 0xc00052ef40, 0x19, 0x0, 0x0, 0x0, …)
/usr/local/go/src/net/http/transport.go:1347 +0x5a9
net/http.(*Transport).roundTrip(0xd38ee0, 0xc000172900, 0x30, 0xc000539290, 0x7f03877f32f8)
/usr/local/go/src/net/http/transport.go:569 +0x78e
net/http.(*Transport).RoundTrip(0xd38ee0, 0xc000172900, 0xd38ee0, 0x0, 0x0)
/usr/local/go/src/net/http/roundtrip.go:17 +0x37
net/http.send(0xc000172900, 0xb9c2a0, 0xd38ee0, 0x0, 0x0, 0x0, 0xc00003e060, 0x8e0104, 0x1, 0x0)
/usr/local/go/src/net/http/client.go:252 +0x45a
net/http.(*Client).send(0xc000539230, 0xc000172900, 0x0, 0x0, 0x0, 0xc00003e060, 0x0, 0x1, 0xc0000ce850)
/usr/local/go/src/net/http/client.go:176 +0xff
net/http.(*Client).do(0xc000539230, 0xc000172900, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/client.go:718 +0x46f
net/http.(*Client).Do(0xc000539230, 0xc000172900, 0x89ad45, 0x46, 0x0)
/usr/local/go/src/net/http/client.go:586 +0x37
main.mainAdmin(0xba7380, 0xc00011cf00, 0x0, 0x0)
/root/go/src/http2.go:35 +0x1f7
github.com/labstack/echo.(*Echo).add.func1(0xba7380, 0xc00011cf00, 0x0, 0x0)
/root/go/src/github.com/labstack/echo/echo.go:536 +0x64
github.com/labstack/echo.(*Echo).ServeHTTP(0xc000150000, 0xba2980, 0xc00014e2a0, 0xc000172800)
/root/go/src/github.com/labstack/echo/echo.go:646 +0x187
net/http.serverHandler.ServeHTTP(0xc00014e000, 0xba2980, 0xc00014e2a0, 0xc000172800)
/usr/local/go/src/net/http/server.go:2843 +0xa5
net/http.(*conn).serve(0xc000423ea0, 0xba32c0, 0xc000361340)
/usr/local/go/src/net/http/server.go:1925 +0x8b1
created by net/http.(*Server).Serve
/usr/local/go/src/net/http/server.go:2969 +0x394

closed time in 12 minutes

ManojKumarChauhan

issue comment golang/go

runtime: segmentation violation with linkshared

Not reproducible in Go version 1.16.

ManojKumarChauhan

comment created time in 12 minutes

pull request comment olivere/elastic

Add some query parameters to delete indices API

Hi @olivere, it's been a while since I pushed this PR. Is there anything I can do or add to move it forward?

munkyu

comment created time in 16 minutes

issue comment kubernetes/kops

Unable to bootstrap a IPv6 only cluster

Cool thanks @hakman.

kasunt-nixdev

comment created time in 19 minutes

issue comment kubernetes/kops

Unable to bootstrap a IPv6 only cluster

All I can say is that I ran the k8s conformance tests with 5 nodes and it worked, with a few minor unrelated issues. I would suggest deleting the cluster and trying a fresh one to see if you can reproduce it. If you can, I can give it a try too in the next few days.

kasunt-nixdev

comment created time in 27 minutes

issue comment hashicorp/nomad

Java workloads are allocated but cannot start

@c16a It would be nice if you updated the ticket with the actual error you are seeing now; otherwise triagers would be answering something you have already fixed…

c16a

comment created time in 29 minutes

fork rougier/replication_candy_1991

This repository contains code, data, and a manuscript implementing and describing the replication of two publications that apply ordinal regression models to insect phenology data.

fork in 35 minutes

release hotwired/turbo

v7.0.0-beta.7

released time in 36 minutes

issue opened hashicorp/vault

UI: Show date of creation in version history (KV)

Is your feature request related to a problem? Please describe. No

Describe the solution you'd like A field in the version history which shows the date when the version was created.

Describe alternatives you've considered None

Explain any additional use-cases We're using KV for SSL certificates and want to see when new certificates were created.

Additional context None

created time in 38 minutes

pull request comment kubernetes/kops

Set default ClusterCIDR through the PodCIDR

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: hakman (https://github.com/kubernetes/kops/pull/11756#pullrequestreview-682663906)

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.

johngmyers

comment created time in 40 minutes

issue comment confluentinc/confluent-kafka-python

Implement support of Schema Registry's schema references

I faced this issue when working with references in my setup. I managed to work around it, but I would like to make an official contribution to this repo so it can be an added feature.

Do I have to discuss the implementation with anyone before I just fork this repository?

FrancescoPessina

comment created time in 40 minutes

pull request comment kubernetes/kops

Allow unsetting fields from the command line

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: hakman (https://github.com/kubernetes/kops/pull/11745#discussion_r650757359)

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.

johngmyers

comment created time in an hour

Pull request review comment kubernetes/kops

Allow unsetting fields from the command line

 type CreateClusterOptions struct {
 	// SSHPublicKeys is a map of the SSH public keys we should configure; required on AWS, not required on GCE
 	SSHPublicKeys map[string][]byte
-	// Overrides allows settings values direct in the spec
+	// Overrides allows setting values directly in the spec.
 	Overrides []string

Maybe we can rename Overrides to Sets to be consistent. /hold in case you want to do that. /lgtm /approve

johngmyers

comment created time in an hour

pull request comment kubernetes/kops

Enable IPv6 support for Cilium

/lgtm

johngmyers

comment created time in an hour

Pull request review comment kubernetes/kops

Enable IPv6 support for Cilium

 func (tf *TemplateFunctions) AddTo(dest template.FuncMap, secretStore fi.SecretS
 		return strings.Join(labels, ",")
 	}
+	dest["IsIPv6Only"] = tf.IsIPv6Only

Fine by me.

johngmyers

comment created time in an hour

issue comment confluentinc/cp-ansible

4lw Commands are not whitelisted via CP-Ansible

@Fobhep Sounds like a bug then. :). Thanks for the additional details, will add this to the list to investigate. :)

Fobhep

comment created time in an hour

issue comment kubernetes/kops

Unable to bootstrap a IPv6 only cluster

@hakman Thanks. Yup, I've been scratching my head on this for a while. I found a more serious issue than the one above, and I think there might be a bigger network/CNI issue going on.

I've got a 1-master, 2-worker node topology as in the OP above, but only one of the CoreDNS instances in the worker InstanceGroup responds. Any idea why this might be?

dig AAAA cert-manager-webhook.cert-manager.svc.cluster.local +tcp @fd12:3456:789a:1:f451:ce23:2e7e:1398
;; Connection to fd12:3456:789a:1:f451:ce23:2e7e:1398#53(fd12:3456:789a:1:f451:ce23:2e7e:1398) for cert-manager-webhook.cert-manager.svc.cluster.local failed: timed out.
;; Connection to fd12:3456:789a:1:f451:ce23:2e7e:1398#53(fd12:3456:789a:1:f451:ce23:2e7e:1398) for cert-manager-webhook.cert-manager.svc.cluster.local failed: timed out.

; <<>> DiG 9.11.6-P1 <<>> AAAA cert-manager-webhook.cert-manager.svc.cluster.local +tcp @fd12:3456:789a:1:f451:ce23:2e7e:1398
;; global options: +cmd
;; connection timed out; no servers could be reached
;; Connection to fd12:3456:789a:1:f451:ce23:2e7e:1398#53(fd12:3456:789a:1:f451:ce23:2e7e:1398) for cert-manager-webhook.cert-manager.svc.cluster.local failed: timed out.
command terminated with exit code 9
dig AAAA cert-manager-webhook.cert-manager.svc.cluster.local +tcp @fd12:3456:789a:1:f6b:daa7:766f:8596

; <<>> DiG 9.11.6-P1 <<>> AAAA cert-manager-webhook.cert-manager.svc.cluster.local +tcp @fd12:3456:789a:1:f6b:daa7:766f:8596
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 33016
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 39bf8f22503235fe (echoed)
;; QUESTION SECTION:
;cert-manager-webhook.cert-manager.svc.cluster.local. IN	AAAA

;; ANSWER SECTION:
cert-manager-webhook.cert-manager.svc.cluster.local. 30	IN AAAA	fd12:3456:789a::1838

;; Query time: 2 msec
;; SERVER: fd12:3456:789a:1:f6b:daa7:766f:8596#53(fd12:3456:789a:1:f6b:daa7:766f:8596)
;; WHEN: Mon Jun 14 07:48:47 UTC 2021
;; MSG SIZE  rcvd: 171

kasunt-nixdev

comment created time in an hour

issue opened kubernetes/kops

Kops cluster-autoscaler addon not getting deployed

/kind bug

1. What kops version are you running? The command kops version will display this information.

I've tried this with both kops 1.19 and kops 1.20.

2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.

I'm using kubernetes 1.19.9.

3. What cloud provider are you using?

AWS, on EC2.

4. What commands did you run? What is the simplest way to reproduce this issue?

On a cluster provisioned on EC2 using kops with a manually created deployment for cluster-autoscaler, I removed that deployment, then edited the cluster definition to include the cluster-autoscaler example configuration from the documentation. I ran kops update cluster example.k8s.local --yes and then kops rolling-update cluster example.k8s.local --yes.

5. What happened after the commands executed?

The cluster is rolling-updated, but there are no cluster-autoscaler pods and no deployment.

6. What did you expect to happen?

I expected the cluster-autoscaler deployment to be created and the corresponding pod to be up and running in the kube-system namespace.

7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2020-02-13T14:47:28Z"
  generation: 26
  name: example.k8s.local
spec:
  additionalPolicies:
    node: |
      [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeTags",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*"
        }
      ]
  api:
    loadBalancer:
      class: Classic
      type: Internal
  authorization:
    rbac: {}
  channel: stable
  cloudLabels:
    environment: example
  cloudProvider: aws
  clusterAutoscaler:
    enabled: true
    expander: least-waste
    balanceSimilarNodeGroups: false
    scaleDownUtilizationThreshold: "0.5"
    skipNodesWithLocalStorage: true
    skipNodesWithSystemPods: true
    newPodScaleUpDelay: 0s
    scaleDownDelayAfterAdd: 10m0s
    image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.19.1
    cpuRequest: "100m"
    memoryRequest: "300Mi"
  configBase: s3://kops-bucket/example.k8s.local
  docker:
    bridgeIP: 192.168.3.1/24
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: master-region-1
      name: 1
    - instanceGroup: master-region-2
      name: 2
    - instanceGroup: master-region-3
      name: 3
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: master-region-1
      name: 1
    - instanceGroup: master-region-2
      name: 2
    - instanceGroup: master-region-3
      name: 3
    memoryRequest: 100Mi
    name: events
  hooks:
  - before:
    - kubelet.service
    manifest: |
      [Service]
      Type=oneshot
      RemainAfterExit=no
      ExecStart=/bin/sh -c "sed -i -e 's/^pool/#pool/g' -e 's/^# pool: .*$/server 169.254.169.123 prefer iburst/' /etc/ntp.conf"
      ExecStartPost=/bin/systemctl restart ntp.service
    name: change_ntp_server.service
    roles:
    - Node
    - Master
  - before:
    - docker.service
    manifest: |
      [Service]
      Type=oneshot
      RemainAfterExit=no
      ExecStartPre=/bin/mkdir -p /root/.docker
      ExecStart=/usr/bin/wget https://amazon-ecr-credential-helper-releases.s3.region-4.amazonaws.com/0.3.1/linux-amd64/docker-credential-ecr-login -O /bin/docker-credential-ecr-login
      ExecStartPost=/bin/chmod +x /bin/docker-credential-ecr-login
      ExecStartPost=/bin/sh -c "echo '{\n  \"credHelpers\": {\n    \"111111111111.dkr.ecr.us-east-1.amazonaws.com\": \"ecr-login\"\n  }\n}' > /root/.docker/config.json"
    name: setup_ecr_docker.service
    roles:
    - Node
    - Master
  - manifest: |
      [Unit]
      Description=Telegraf Container
      After=docker.service
      Requires=docker.service

      [Service]
      TimeoutStartSec=0
      Restart=always
      ExecStartPre=/bin/sh -lc "docker pull 111111111111.dkr.ecr.us-east-1.amazonaws.com/telegraf"
      ExecStartPre=/bin/sh -lc "/usr/bin/docker rm -f telegraf || echo OK"
      ExecStart=/usr/bin/docker run -p 9126:9126 --rm --name telegraf 111111111111.dkr.ecr.us-east-1.amazonaws.com/telegraf

      [Install]
      WantedBy=multi-user.target
    name: telegraf.service
    roles:
    - Node
    - Master
    useRawManifest: true
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeDNS:
    provider: CoreDNS
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 192.168.6.0/25
  kubernetesVersion: 1.19.9
  masterInternalName: api.internal.example.k8s.local
  masterPublicName: api.example.k8s.local
  networkCIDR: 192.168.0.0/20
  networkID: vpc-11111111
  networking:
    calico:
      crossSubnet: true
      majorVersion: v3
      prometheusMetricsEnabled: true
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 192.168.6.0/25
  subnets:
  - cidr: 192.168.9.0/24
    name: region-3
    type: Private
    zone: region-3
  - cidr: 192.168.10.0/24
    name: region-1
    type: Private
    zone: region-1
  - cidr: 192.168.11.0/24
    name: region-2
    type: Private
    zone: region-2
  - cidr: 192.168.8.0/26
    name: utility-region-3
    type: Utility
    zone: region-3
  - cidr: 192.168.8.64/26
    name: utility-region-1
    type: Utility
    zone: region-1
  - cidr: 192.168.8.128/26
    name: utility-region-2
    type: Utility
    zone: region-2
  topology:
    dns:
      type: Public
    masters: private
    nodes: private

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-02-13T14:47:29Z"
  generation: 8
  labels:
    kops.k8s.io/cluster: example.k8s.local
  name: master-region-3
spec:
  additionalSecurityGroups:
  - sg-11111111
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210119.1
  machineType: m5a.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-region-3
  role: Master
  subnets:
  - region-3

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-02-13T14:47:29Z"
  generation: 8
  labels:
    kops.k8s.io/cluster: example.k8s.local
  name: master-region-1
spec:
  additionalSecurityGroups:
  - sg-11111111
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210119.1
  machineType: m5a.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-region-1
  role: Master
  subnets:
  - region-1

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-02-13T14:47:29Z"
  generation: 8
  labels:
    kops.k8s.io/cluster: example.k8s.local
  name: master-region-2
spec:
  additionalSecurityGroups:
  - sg-11111111
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210119.1
  machineType: m5a.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-region-2
  role: Master
  subnets:
  - region-2

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-02-13T14:47:29Z"
  generation: 11
  labels:
    kops.k8s.io/cluster: example.k8s.local
  name: nodes
spec:
  additionalSecurityGroups:
  - sg-11111111
  cloudLabels:
    example.k8s.local/autoscaler/enabled: "true"
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210119.1
  machineType: m5a.xlarge
  maxSize: 3
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  role: Node
  subnets:
  - region-3
  - region-1
  - region-2

8. Please run the commands with most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here.

Update: https://controlc.com/ca90e288 Rolling-update: https://controlc.com/58c59773

9. Anything else do we need to know?

I've tried forcing a rolling update, but the result is the same: no deployment for cluster-autoscaler. The verbose output for the forced rolling update is too big to paste, but I can send it. I've checked kubelet's logs but found nothing about the cluster-autoscaler manifest.

created time in an hour

issue comment confluentinc/cp-ansible

4lw Commands are not whitelisted via CP-Ansible

@JumaX Zookeeper has been installed with CP-Ansible from the beginning: we originally started with 5.3 or 5.4 (not sure off the top of my head) and then upgraded gradually to 6.1.1.

If relevant, one could check whether this might affect more people who had a similar upgrade path. Once we have solved it we won't really need a support case, but if we do, I will of course open one :)

Fobhep

comment created time in an hour

issue comment golang/go

x/website: Example codes don't have horizontal scroller on mobile devices

cc @dmitshur

inet56

comment created time in an hour

issue comment golang/go

x/website: Example codes don't have horizontal scroller on mobile devices

@seankhliao Do you need more time to make a decision? It's just a two-line change.

inet56

comment created time in an hour

issue opened hashicorp/terraform

getting The given key does not identify an element in this collection value.

Hi, the code below was working previously in Terraform 0.11; now, using it in 0.13, I get an error:

Code (variable definition):

variable "alb" {
  default = {
    listener_https_port     = 443     
    listener_https_protocol = "https"
    listener_http_port      = 80  
    listener_http_protocol  = "http"  

    health_check_healthy_threshold   = 4 
    health_check_unhealthy_threshold = 5 
    health_check_timeout             = 30  
    health_check_target              = "/health"
    health_check_interval            = 120 

    cross_zone_load_balancing = true 

    idle_timeout = 300 

    connection_draining         = true 
    connection_draining_timeout = 100  
  }
}

Calling it in the main file:

  health_check {
    enabled		 = true
    healthy_threshold   = var.alb["health_check_healthy_threshold"]
    unhealthy_threshold = var.alb["health_check_unhealthy_threshold"]
    timeout             = var.alb["health_check_timeout"]
    interval            = var.alb["health_check_interval"]
    path                = var.alb["health_check_target"]
    matcher             = "200-299"
  }

Error:

Error: Invalid index
  on alb.tf line 11, in resource "aws_lb" "cms_alb":
  11:   idle_timeout = var.alb["idle_timeout"]
    |----------------
    | var.alb is object with 1 attribute "matcher"
The given key does not identify an element in this collection value.
Error: Invalid index
  on alb.tf line 35, in resource "aws_lb_target_group" "cms_tg":
  35:   port        = var.alb["listener_http_port"]
    |----------------
    | var.alb is object with 1 attribute "matcher"
The given key does not identify an element in this collection value.
Error: Invalid index
  on alb.tf line 36, in resource "aws_lb_target_group" "cms_tg":
  36:   protocol    = upper(var.alb["listener_http_protocol"])
    |----------------
    | var.alb is object with 1 attribute "matcher"
The given key does not identify an element in this collection value.
Error: Invalid index
  on alb.tf line 41, in resource "aws_lb_target_group" "cms_tg":
  41:     healthy_threshold   = var.alb["health_check_healthy_threshold"]
    |----------------
    | var.alb is object with 1 attribute "matcher"
The given key does not identify an element in this collection value.

created time in an hour

issue comment confluentinc/confluent-kafka-dotnet

High CPU when broker is down/connectivity issues

Hi @mhowlett,

We are on the Linux platform with Ubuntu LTS.

Yes, it's a good rule to keep to the latest version, but we don't push a new production version just to upgrade a dependency.

We can try to update the dependency and try to reproduce it with 1.1.0 and 1.3.0 tomorrow.

Have a nice day

Hi @acesyde, did updating the dependency fix the problem? We are using version 1.1.0 too. Thanks.

ashishbhatia22

comment created time in an hour

issue comment golang/go

database/sql: possible new panic doing reflect-based conversions

Why does v.Type().ConvertibleTo(w.Type()) return true? Because of the language change ([]T is convertible to *[N]T). But the spec also says that some conversions will panic, so this can panic. database/sql handles panics in some places; this should be another one (eliminating choice (b)). The types are convertible; only the length of the slice causes the error, which is a common one, so this should return an error (something like io.ErrShortBuffer, or a new reflect.ErrShortSlice?).

TL;DR: My 2c for (a)

josharian

comment created time in an hour

issue comment confluentinc/cp-ansible

4lw Commands are not whitelisted via CP-Ansible

@Fobhep Are you using an existing Zookeeper quorum, or is this a new one installed by CP-Ansible? I ask because, if it's a new quorum installed by CP-Ansible, we do not need to whitelist anything, as it's accepted by default.

Historically, we've also not recommended running off an existing Zookeeper quorum; however, you may want to check with our support team on this if you have a support contract. I am not sure what the latest stance is.

Fobhep

comment created time in an hour

issue comment kubernetes/kops

Unable to bootstrap a IPv6 only cluster

You can also add the log plugin to the Corefile and see all requests: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#are-dns-queries-being-received-processed

I see that cert-manager-webhook.cert-manager.svc. ends with a ., which suggests it is being looked up as an absolute domain name.

kasunt-nixdev

comment created time in 2 hours

Pull request review comment kubernetes/kops

Set containerd config on nodeup.Config instead of clusterspec

 type Config struct {
 	ConfigServer *ConfigServerOptions `json:"configServer,omitempty"`
 	// AuxConfigHash holds a secure hash of the nodeup.AuxConfig.
 	AuxConfigHash string
+
+	// Containerd config holds the configuration for containerd
+	Containerd *ContainerdConfig `json:"containerd,omitempty"`

But it's just part of the launch template, which is updated together with a new version of nodeup?

olemarkus

comment created time in 2 hours

issue opened confluentinc/kafka-connect-elasticsearch

Suggestions on log printing optimization

Hello, I have a suggestion about log printing. In ElasticsearchSinkTask.java#L199, the method is executed at every logging level; I don't think that's good practice.

A simple solution is to add a log level judgment before calling the method.

created time in 2 hours