If you are wondering where the data of this site comes from, please visit https://api.github.com/users/wking/events. GitMemory does not store any data, but only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

opencontainers/selinux 105

common selinux implementation

spdx/license-list-XML 97

This is the repository for the master files that comprise the SPDX License List

jbenet/depviz 50

dependency visualizer for the web

mndrix/tap-go 22

Test Anything Protocol for Go

adina/boot-camps 2

Software Carpentry boot camp material

LJWilliams/2014-03-17-uw 1

Software Carpentry workshop at U. Washington, March 17-18 2014

wking/angular-validation-match 1

Checks if one input matches another. Useful for confirming passwords, emails, or anything.

wking/awk-lesson 1

test run of a bower workflow for lesson distribution

wking/bes 1

Bulk uploader for Elastic Search, written in Python

Pull request review comment etcd-io/etcd

clientv3: Replace balancer with upstream grpc solution

 func (c *Client) dial(target string, creds grpccredentials.TransportCredentials,
 		defer cancel() // TODO: Is this right for cases where grpc.WithBlock() is not set on the dial options?
 	}
-	conn, err := grpc.DialContext(dctx, target, opts...)
+	conn, err := grpc.DialContext(dctx, c.resolver.Scheme()+":///", opts...)

Am I correct that this endpoint/target isn't what is actually used to connect to the etcd server? It is used purely by gRPC to look up the EtcdManualResolver for this connection at runtime, and EtcdManualResolver then passes the initial Config#Endpoints to the gRPC balancer to establish the TCP connection? Theoretically, we could pass in any URL that starts with etcd-endpoints://, e.g. etcd-endpoints:///localhost or etcd-endpoints:/// (as in this PR)?

// Build returns itself for Resolver, because it's both a builder and a resolver.
func (r *Resolver) Build(target resolver.Target, cc resolver.ClientConn, opts resolver.BuildOptions) (resolver.Resolver, error) {
	r.CC = cc
	if r.bootstrapState != nil {
		r.UpdateState(*r.bootstrapState)
	}
	return r, nil
}

https://github.com/grpc/grpc-go/blob/0e8f1cda01321329950add4362b0fb0180f58b28/resolver/manual/manual.go#L56

func (r EtcdManualResolver) updateState() {
	if r.CC != nil {
		addresses := make([]resolver.Address, len(r.endpoints))
		for i, ep := range r.endpoints {
			addr, serverName := endpoint.Interpret(ep)
			addresses[i] = resolver.Address{Addr: addr, ServerName: serverName}
		}
		state := resolver.State{
			Addresses:     addresses,
			ServiceConfig: r.serviceConfig,
		}
		r.UpdateState(state)
	}
}

https://github.com/etcd-io/etcd/blob/4a1c24556c22f55e75cb5167d555a8c367ad4d2b/client/v3/internal/resolver/resolver.go#L61
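To make the mechanics above concrete, here is a minimal, self-contained sketch of how a gRPC manual resolver behaves under the reading in the question above (my own illustration, not etcd's actual code): the dial target only needs to carry the resolver's scheme, and the addresses gRPC actually connects to come from the resolver's state. The package name, function name, and the use of grpc.WithInsecure are illustrative assumptions.

package resolverdemo

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/resolver"
	"google.golang.org/grpc/resolver/manual"
)

// dialViaManualResolver shows that the part of the target after the scheme is
// not used to reach the server: gRPC only uses "etcd-endpoints" to select this
// resolver, which then supplies the real addresses via its initial state.
func dialViaManualResolver(endpoints []string) (*grpc.ClientConn, error) {
	r := manual.NewBuilderWithScheme("etcd-endpoints")

	addrs := make([]resolver.Address, len(endpoints))
	for i, ep := range endpoints {
		addrs[i] = resolver.Address{Addr: ep}
	}
	r.InitialState(resolver.State{Addresses: addrs})

	// "etcd-endpoints:///" and "etcd-endpoints:///localhost" behave the same
	// here, because the balancer connects to the addresses from InitialState.
	return grpc.Dial(
		r.Scheme()+":///",
		grpc.WithInsecure(), // sketch only; a real client configures credentials
		grpc.WithResolvers(r),
	)
}

If that reading is right, it would also explain why the diff above can drop the original target and dial c.resolver.Scheme()+":///" instead.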

ptabor

comment created time in a minute

push event gentoo/gentoo

Erik Mackdanz

commit sha 188add7b6d682792c45e013103b1a2af3047fe47

media-sound/upmpdcli: Stabilize 1.5.7

Signed-off-by: Erik Mackdanz <stasibear@gentoo.org>
Package-Manager: Portage-3.0.16, Repoman-3.0.2

view details

Erik Mackdanz

commit sha 62d858a22601496d74572a7d48e881339da05ebc

media-sound/upmpdcli: Remove old

Signed-off-by: Erik Mackdanz <stasibear@gentoo.org>
Package-Manager: Portage-3.0.16, Repoman-3.0.2

view details

push time in a minute

pull request comment openshift/release

Only test Servers with support for Table transformation

@hamzy: The following tests failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
ci/rehearse/release-openshift-origin-installer-e2e-remote-libvirt-s390x-4.7 1fa3da324e2fb58ee60398729950ce3684b1fc09 link /test pj-rehearse
ci/rehearse/release-openshift-origin-installer-e2e-remote-libvirt-image-ecosystem-s390x-4.7 1fa3da324e2fb58ee60398729950ce3684b1fc09 link /test pj-rehearse
ci/rehearse/release-openshift-origin-installer-e2e-remote-libvirt-jenkins-e2e-s390x-4.7 1fa3da324e2fb58ee60398729950ce3684b1fc09 link /test pj-rehearse
ci/rehearse/release-openshift-origin-installer-e2e-remote-libvirt-ppc64le-4.7 5bb203636abab13c9472d5645874d64ef2e1acf3 link /test pj-rehearse
ci/prow/pj-rehearse 5bb203636abab13c9472d5645874d64ef2e1acf3 link /test pj-rehearse

Full PR test history. Your PR dashboard.

<details>

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here. </details> <!-- test report -->

hamzy

comment created time in a minute

PR opened gentoo/gentoo

sci-libs/shapely: fixed failing tests with geos 3.9

Related Upstream Issue: https://github.com/Toblerity/Shapely/issues/1079

Closes: https://bugs.gentoo.org/765745
Signed-off-by: Dennis Lamm expeditioneer@gentoo.org
Package-Manager: Portage-3.0.13, Repoman-3.0.2

+102 -0

0 comment

2 changed files

pr created time in 2 minutes

pull request comment kubernetes/kubernetes

Clean unused generators

/lgtm Thanks!

:partying_face:

As a follow-up, I'll open an issue for the following cleanup:

  • pkg/cmd/autoscale - remove deprecated flags
  • pkg/cmd/create/deployment - remove AddGeneratorFlag call
  • Cleanup comments on create commands that used to have generators

soltysh

comment created time in 3 minutes

pull request comment cri-o/cri-o

pinns: Allow pinning mount namespaces

@lack: The following tests failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
ci/kata-jenkins 54fa04613585b328d92159ef186ed7aabd4a30cb link /test kata-containers
ci/prow/e2e-gcp 54fa04613585b328d92159ef186ed7aabd4a30cb link /test e2e-gcp
ci/prow/e2e-agnostic 54fa04613585b328d92159ef186ed7aabd4a30cb link /test e2e-agnostic

Full PR test history. Your PR dashboard.

<details>

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here. </details> <!-- test report -->

lack

comment created time in 5 minutes

push event gentoo/gentoo

Erik Mackdanz

commit sha 30e9806ee5f6d845151e7b18e85adb77dd702307

games-roguelike/stone-soup: stabilizations in 0.25 and 0.26 slots

Signed-off-by: Erik Mackdanz <stasibear@gentoo.org>
Package-Manager: Portage-3.0.16, Repoman-3.0.2

view details

Erik Mackdanz

commit sha 3a18643eb250694f2c08ea3cae2575950337627d

games-roguelike/stone-soup: Remove old

Signed-off-by: Erik Mackdanz <stasibear@gentoo.org>
Package-Manager: Portage-3.0.16, Repoman-3.0.2

view details

push time in 6 minutes

pull request comment gentoo/gentoo

media-libs/lib3mf 2.1.0: version bump

I've nearly caught up with our PR work queue; please ping me in about a week about the act package and I'll take a look.

waebbl

comment created time in 7 minutes

issue opened ipython/ipython

IPython does not show colors when called with embed()

I use the following snippet to drop to an interactive IPython CLI from my code:

from IPython import embed

# do some things

embed()

But no syntax highlighting or any other color shows up.

If, in contrast, I start ipython simply with

ipython3

Then colors show up as expected.

How can I get colors/syntax highlighting when embedding IPython like this?

Systems tested

  • IPython==7.21.0 + python3==3.7.5
  • IPython==7.13.0 + python3==3.6.9
  • IPython==7.16.1 + python3==3.6.9

Note: Also asked this on SO

created time in 9 minutes

pull request comment openshift/origin-aggregated-logging

Update hack/deploy-logging.sh to use CI env vars

@jcantrill: The following test failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
ci/prow/smoke fbcb6e382be7066cf02250e24492283adbbc11f7 link /test smoke

Full PR test history. Your PR dashboard.

<details>

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here. </details> <!-- test report -->

jcantrill

comment created time in 9 minutes

pull request comment cri-o/cri-o

pinns: Allow pinning mount namespaces

@lack: The following tests failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
ci/prow/e2e-agnostic 54fa04613585b328d92159ef186ed7aabd4a30cb link /test e2e-agnostic
ci/kata-jenkins 54fa04613585b328d92159ef186ed7aabd4a30cb link /test kata-containers
ci/prow/e2e-gcp 54fa04613585b328d92159ef186ed7aabd4a30cb link /test e2e-gcp

Full PR test history. Your PR dashboard.

<details>

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here. </details> <!-- test report -->

lack

comment created time in 9 minutes

pull request comment openshift/origin-aggregated-logging

Update hack/deploy-logging.sh to use CI env vars

/retest

Please review the full test history for this PR and help us cut down flakes.

jcantrill

comment created time in 9 minutes

pull request comment openshift/origin

test/extended/router: Add OWNERS

/retest

Please review the full test history for this PR and help us cut down flakes.

Miciah

comment created time in 9 minutes

push event gentoo/gentoo

Thomas Deutschmann

commit sha b910b1b59b91d9fe9eb36777627f8c04b6c51eab

sys-fs/btrfs-progs: x86 stable (bug #774336)

Package-Manager: Portage-3.0.16, Repoman-3.0.2
Signed-off-by: Thomas Deutschmann <whissi@gentoo.org>

view details

Thomas Deutschmann

commit sha 79b41e53ef90670dcb5e723237c3297b2c528377

sys-cluster/drbd-utils: x86 stable (bug #771090)

Package-Manager: Portage-3.0.16, Repoman-3.0.2
Signed-off-by: Thomas Deutschmann <whissi@gentoo.org>

view details

Thomas Deutschmann

commit sha 341ae3f0f69419a9e93e5e80a21eeb3284b1e439

sys-fs/exfatprogs: x86 stable (bug #774339)

Package-Manager: Portage-3.0.16, Repoman-3.0.2
Signed-off-by: Thomas Deutschmann <whissi@gentoo.org>

view details

Thomas Deutschmann

commit sha 20ab4450b4177f575b0a1a4859f150bfb0181937

sys-boot/lilo: x86 stable (bug #764146)

Package-Manager: Portage-3.0.16, Repoman-3.0.2
Signed-off-by: Thomas Deutschmann <whissi@gentoo.org>

view details

Thomas Deutschmann

commit sha 38ce8c692dc5a6e06f23e9c3a022e539e5a6953f

app-misc/mc: x86 stable (bug #774426)

Package-Manager: Portage-3.0.16, Repoman-3.0.2
Signed-off-by: Thomas Deutschmann <whissi@gentoo.org>

view details

Thomas Deutschmann

commit sha d714a359de24d9b5e109549a1b83e19ab4e8174f

app-misc/pax-utils: x86 stable (bug #774423)

Package-Manager: Portage-3.0.16, Repoman-3.0.2
Signed-off-by: Thomas Deutschmann <whissi@gentoo.org>

view details

Thomas Deutschmann

commit sha bbd02c609dd12218ab1bedda1624e90c4439c62b

media-sound/playerctl: x86 stable (bug #774381)

Package-Manager: Portage-3.0.16, Repoman-3.0.2
Signed-off-by: Thomas Deutschmann <whissi@gentoo.org>

view details

Thomas Deutschmann

commit sha 2cf240677e679b104d9796f7bec675f7c6818da1

app-emulation/qemu: x86 stable (bug #774420)

Package-Manager: Portage-3.0.16, Repoman-3.0.2
Signed-off-by: Thomas Deutschmann <whissi@gentoo.org>

view details

Thomas Deutschmann

commit sha 3b933de3cb6314c89d92f8b86f6f340e4fd0d6b1

sys-fs/quota: x86 stable (bug #774342)

Package-Manager: Portage-3.0.16, Repoman-3.0.2
Signed-off-by: Thomas Deutschmann <whissi@gentoo.org>

view details

Thomas Deutschmann

commit sha f7435bc7b93efc9ee68e75e9bdce5f11bdd61054

sci-geosciences/routino: x86 stable (bug #771345)

Package-Manager: Portage-3.0.16, Repoman-3.0.2
Signed-off-by: Thomas Deutschmann <whissi@gentoo.org>

view details

Thomas Deutschmann

commit sha 4badcd615144c63ce0ada5aa05ab42f560ec15b9

media-fonts/terminus-font: x86 stable (bug #774429)

Package-Manager: Portage-3.0.16, Repoman-3.0.2
Signed-off-by: Thomas Deutschmann <whissi@gentoo.org>

view details

push time in 10 minutes

push event gentoo/sci

Andrew Ammerlaan

commit sha 9c5606756d2151e2aa0f5a0c18cb5cfb356a5104

sci-biology/tmhmm: EAPI bump

Signed-off-by: Andrew Ammerlaan <andrewammerlaan@riseup.net>

view details

Andrew Ammerlaan

commit sha 042270d463acd89a4617b1fbb20f447cad728d22

dev-python/python-igraph: version bump, fix build

Package-Manager: Portage-3.0.16, Repoman-3.0.2
Signed-off-by: Andrew Ammerlaan <andrewammerlaan@riseup.net>

view details

Andrew Ammerlaan

commit sha bc37c11b916f4db2d7135345be34e02ff828eb38

sci-biology/trans-abyss: update, fix install, EAPI bump

Package-Manager: Portage-3.0.16, Repoman-3.0.2
Signed-off-by: Andrew Ammerlaan <andrewammerlaan@riseup.net>

view details

push time in 10 minutes

pull request comment gentoo/gentoo

media-libs/lib3mf 2.1.0: version bump

Yes, it might need some rework. The point with this package is that upstream doesn't follow the usual way Go packages are developed, so the Go eclasses don't seem useful. On the other hand, the build is rather trivial, so I settled on the current approach. I agree it's better to leave it out for now and wait for a dedicated review of the package.

waebbl

comment created time in 11 minutes

pull request comment openshift/installer

Adjust master-update.fcc to the new ceo render secret structure

@eranco74: The following tests failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
ci/prow/e2e-aws-upgrade 11a6be954aa258a8c79d66b6fe6624202352ef0a link /test e2e-aws-upgrade
ci/prow/e2e-ovirt 11a6be954aa258a8c79d66b6fe6624202352ef0a link /test e2e-ovirt
ci/prow/e2e-aws-workers-rhel7 11a6be954aa258a8c79d66b6fe6624202352ef0a link /test e2e-aws-workers-rhel7

Full PR test history. Your PR dashboard.

<details>

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here. </details> <!-- test report -->

eranco74

comment created time in 12 minutes

issue closed kubernetes/minikube

minikube multi-node: pods on different nodes cannot reach each other over the network

<!-- Please use this template when reporting an issue, and provide as much detail as possible; otherwise responses may be delayed. Thank you! -->

The command(s) required to reproduce the problem

helm install hue gethue/hue # deploy any service

Full output of the failed command: <details> No command actually failed, but hue-postgres-kdfvf and hue-j6vbx run on two different nodes and cannot reach each other's TCP ports. The error from hue:

OperationalError: could not connect to server: Connection refused
	Is the server running on host "hue-postgres" (10.98.13.3) and accepting
	TCP/IP connections on port 5432?

Pod information:

[vagrant@control-plane ~]$ kubectl get po -o wide
NAME                 READY   STATUS    RESTARTS   AGE    IP           NODE       NOMINATED NODE   READINESS GATES
hue-j6vbx            1/1     Running   0          5m2s   172.17.0.5   test       <none>           <none>
hue-postgres-qvj5z   1/1     Running   0          5m2s   172.17.0.2   test-m02   <none>           <none>

</details>

Output of the minikube logs command: <details>

* ==> Docker <==
* -- Logs begin at Thu 2020-12-10 08:46:43 UTC, end at Thu 2020-12-10 11:59:39 UTC. --
* Dec 10 08:51:44 test dockerd[165]: time="2020-12-10T08:51:44.108467189Z" level=info msg="Container 1f387d7f8b093e017d1e6ff26a3162911813e623d4f4f5f35d17ef2ea0fa3fd8 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:51:57 test dockerd[165]: time="2020-12-10T08:51:57.229247377Z" level=info msg="Container 1f387d7f8b09 failed to exit within 10 seconds of kill - trying direct SIGKILL"
* Dec 10 08:51:59 test dockerd[165]: time="2020-12-10T08:51:59.587921340Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:52:00 test dockerd[165]: time="2020-12-10T08:52:00.810301662Z" level=info msg="Container f30cb9ceb7c14d42f8ffdc81f6c334cd9702dc73900b32d15e6cdd9146294f82 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:52:01 test dockerd[165]: time="2020-12-10T08:52:01.789988389Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:53:43 test dockerd[165]: time="2020-12-10T08:53:43.235347083Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:54:28 test dockerd[165]: time="2020-12-10T08:54:28.567694745Z" level=info msg="Container eb8786bd072822c83ce8fcf01a83a0afaa9809215c4181cbc9e1d566d7150a95 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:54:28 test dockerd[165]: time="2020-12-10T08:54:28.598900348Z" level=info msg="Container e4595d58efbc6a7d69d3128c64ea5bba940e0c50adcc79bfa39c9d0e3a5c60b5 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:54:34 test dockerd[165]: time="2020-12-10T08:54:34.868739722Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:54:34 test dockerd[165]: time="2020-12-10T08:54:34.872249759Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:55:43 test dockerd[165]: time="2020-12-10T08:55:43.526785013Z" level=info msg="Container cfcbb76d493265ae9f01e67dedda591a2253184233e52cbda0ee99de3945954f failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:55:44 test dockerd[165]: time="2020-12-10T08:55:44.345504186Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:58:31 test dockerd[165]: time="2020-12-10T08:58:31.569501808Z" level=info msg="Container 83c13b6bb4dab4bfcd6b30c512b86bd3670eeeb01d33e39b8bfce980d5e7150a failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:58:36 test dockerd[165]: time="2020-12-10T08:58:36.936258295Z" level=error msg="stream copy error: reading from a closed fifo"
* Dec 10 08:58:36 test dockerd[165]: time="2020-12-10T08:58:36.936375102Z" level=error msg="stream copy error: reading from a closed fifo"
* Dec 10 08:58:37 test dockerd[165]: time="2020-12-10T08:58:37.099326343Z" level=error msg="b452324593a9dc51c49e6fc9195f41ed46240cb43d7ee9b04507db3098731e47 cleanup: failed to delete container from containerd: no such container"
* Dec 10 08:58:37 test dockerd[165]: time="2020-12-10T08:58:37.099367115Z" level=error msg="Handler for POST /v1.40/containers/b452324593a9dc51c49e6fc9195f41ed46240cb43d7ee9b04507db3098731e47/start returned error: OCI runtime create failed: container_linux.go:349: starting container process caused \"process_linux.go:449: container init caused \\\"rootfs_linux.go:58: mounting \\\\\\\"/var/lib/kubelet/pods/482fd02a-8fef-45c5-bdd6-78502256e60d/volumes/kubernetes.io~empty-dir/dfs\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/overlay2/4f90ecf644fb7b0f5891f6accc69384a3fc890f1e8b828932bd14c600df461b8/merged\\\\\\\" at \\\\\\\"/dfs\\\\\\\" caused \\\\\\\"stat /var/lib/kubelet/pods/482fd02a-8fef-45c5-bdd6-78502256e60d/volumes/kubernetes.io~empty-dir/dfs: no such file or directory\\\\\\\"\\\"\": unknown"
* Dec 10 08:58:38 test dockerd[165]: time="2020-12-10T08:58:38.141329509Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:58:38 test dockerd[165]: time="2020-12-10T08:58:38.157001671Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:58:38 test dockerd[165]: time="2020-12-10T08:58:38.227006579Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:58:38 test dockerd[165]: time="2020-12-10T08:58:38.829719753Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:58:39 test dockerd[165]: time="2020-12-10T08:58:39.326882930Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:58:45 test dockerd[165]: time="2020-12-10T08:58:45.045050561Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:58:45 test dockerd[165]: time="2020-12-10T08:58:45.148109841Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:59:06 test dockerd[165]: time="2020-12-10T08:59:06.492483069Z" level=info msg="Container 75b23ffaa68f4e6b6ae2cb44638f240762f486c8404a461d1ccc4d818cde3c50 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:59:06 test dockerd[165]: time="2020-12-10T08:59:06.499012653Z" level=info msg="Container 98ffbbe44120117377bd17c704ebd2f697a101c4a4cd2e39f655aaa1f91a87bd failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:59:06 test dockerd[165]: time="2020-12-10T08:59:06.518419629Z" level=info msg="Container c58436d0440da6940af8a802817b59d82e0d11ad0b0403402da428c7fba718b9 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:59:06 test dockerd[165]: time="2020-12-10T08:59:06.608732860Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:59:06 test dockerd[165]: time="2020-12-10T08:59:06.613139713Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:59:06 test dockerd[165]: time="2020-12-10T08:59:06.634497769Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:59:07 test dockerd[165]: time="2020-12-10T08:59:07.266542850Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:59:07 test dockerd[165]: time="2020-12-10T08:59:07.295927636Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:59:07 test dockerd[165]: time="2020-12-10T08:59:07.345220550Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:59:07 test dockerd[165]: time="2020-12-10T08:59:07.510018113Z" level=info msg="Container e42a4d295ad2396ae303ec82e7f6ce9a55a55ecfd44308cafc321efab13b0533 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:59:07 test dockerd[165]: time="2020-12-10T08:59:07.638278831Z" level=info msg="Container 1317e3999b83b00dbdfac2c31a0ba137e634e3f1c3e4e4655308efd09bb400eb failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:59:07 test dockerd[165]: time="2020-12-10T08:59:07.944736544Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:59:07 test dockerd[165]: time="2020-12-10T08:59:07.947834312Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:59:08 test dockerd[165]: time="2020-12-10T08:59:08.591941154Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:23:24 test dockerd[165]: time="2020-12-10T09:23:24.751524766Z" level=info msg="Container 83a30c7e9e0700d28fe4c0617b8ce5f029fea535cdf9185461194c102ccb6d51 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 09:23:26 test dockerd[165]: time="2020-12-10T09:23:26.315887562Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:23:36 test dockerd[165]: time="2020-12-10T09:23:36.316217145Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:24:34 test dockerd[165]: time="2020-12-10T09:24:34.732616056Z" level=info msg="Container 6923aaa794bc0383b4b5450a6d63b481f95cc769fe161d4913466fd0537e06a7 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 09:24:35 test dockerd[165]: time="2020-12-10T09:24:35.104108563Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:24:54 test dockerd[165]: time="2020-12-10T09:24:54.328544327Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:25:44 test dockerd[165]: time="2020-12-10T09:25:44.921543368Z" level=info msg="Container 684414a16ceba8e2802ad55d5c04e4851317fbc584f3b4a75926bfd8c7048af6 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 09:25:45 test dockerd[165]: time="2020-12-10T09:25:45.300098103Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:26:20 test dockerd[165]: time="2020-12-10T09:26:20.219958765Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:26:54 test dockerd[165]: time="2020-12-10T09:26:54.702303947Z" level=info msg="Container c6dd4817f57d20f4919c25e40dd5a6adde748c82a9a079e23a926781a461db48 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 09:26:54 test dockerd[165]: time="2020-12-10T09:26:54.877890402Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:28:01 test dockerd[165]: time="2020-12-10T09:28:01.388206278Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:28:04 test dockerd[165]: time="2020-12-10T09:28:04.950233188Z" level=info msg="Container 1ba8eb50a63ba971f25263961d162ca605e4748f5ee3ab51874fd6a85599a93b failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 09:28:05 test dockerd[165]: time="2020-12-10T09:28:05.527262777Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:29:15 test dockerd[165]: time="2020-12-10T09:29:15.107722623Z" level=info msg="Container 5d4c45f6e04020db3755d5dab08d4b367f2f13cc955b02c16f501321d27c60bf failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 09:29:15 test dockerd[165]: time="2020-12-10T09:29:15.700774626Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:30:05 test dockerd[165]: time="2020-12-10T09:30:05.942677904Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:30:21 test dockerd[165]: time="2020-12-10T09:30:21.769523703Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:30:21 test dockerd[165]: time="2020-12-10T09:30:21.769553030Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:30:49 test dockerd[165]: time="2020-12-10T09:30:49.996451006Z" level=info msg="Container a822a6539a4dc8bd866efbaf165b9a0a861827515acf45cd5e5fc02f9007cbd6 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 09:30:50 test dockerd[165]: time="2020-12-10T09:30:50.580285053Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:30:50 test dockerd[165]: time="2020-12-10T09:30:50.674003483Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* 
* ==> container status <==
* CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID
* d11c49fc234ec       4024184338620                                                                              2 hours ago         Running             hue                         0                   a23a0e9a5d818
* d6a179f3b8e53       bad58561c4be7                                                                              3 hours ago         Running             storage-provisioner         25                  b3bd1560f103d
* d438a591e9f6f       bad58561c4be7                                                                              3 hours ago         Exited              storage-provisioner         24                  b3bd1560f103d
* 8f8c7c3378785       503bc4b7440b9                                                                              3 hours ago         Running             kubernetes-dashboard        12                  a3169f5dbdca8
* 51400c84328ff       2186a1a396deb                                                                              3 hours ago         Running             kindnet-cni                 1                   17b32cddc4e63
* 4c6b38e30704b       bfe3a36ebd252                                                                              3 hours ago         Running             coredns                     5                   f904068d0a345
* 559ec4cdc5d7c       86262685d9abb                                                                              3 hours ago         Running             dashboard-metrics-scraper   5                   6c45b974068d5
* 1a408ee50d62c       503bc4b7440b9                                                                              3 hours ago         Exited              kubernetes-dashboard        11                  a3169f5dbdca8
* 147855e4d4e4e       635b36f4d89f0                                                                              3 hours ago         Running             kube-proxy                  1                   af88e7da2d65d
* eb97bca9e9de6       b15c6247777d7                                                                              3 hours ago         Running             kube-apiserver              4                   238b73f1f0d94
* cbed667768826       0369cf4303ffd                                                                              3 hours ago         Running             etcd                        3                   1223c475e6a0d
* ef1e373ce7683       4830ab6185860                                                                              3 hours ago         Running             kube-controller-manager     0                   10dc9502f09dd
* f1b156662850f       14cd22f7abe78                                                                              3 hours ago         Running             kube-scheduler              3                   9bed364abff80
* 2986147d45c59       kindest/kindnetd@sha256:b33085aafb18b652ce4b3b8c41dbf172dac8b62ffe016d26863f88e7f6bf1c98   4 hours ago         Exited              kindnet-cni                 0                   98a22854feb42
* fc879734428d5       bfe3a36ebd252                                                                              2 days ago          Exited              coredns                     4                   cff22597be120
* b2c42bfd2c462       86262685d9abb                                                                              2 days ago          Exited              dashboard-metrics-scraper   4                   2ee8f04be2f42
* 9618e44b528d2       b15c6247777d7                                                                              3 days ago          Exited              kube-apiserver              3                   8182bffe3ee2f
* c0580b146a273       0369cf4303ffd                                                                              3 days ago          Exited              etcd                        2                   cc51009e65321
* 476a76f1583b3       14cd22f7abe78                                                                              3 days ago          Exited              kube-scheduler              2                   9524824b6b951
* 1c2a69f238695       635b36f4d89f0                                                                              9 days ago          Exited              kube-proxy                  0                   a0d3f90edb01a
* 
* ==> coredns [4c6b38e30704] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
* CoreDNS-1.7.0
* linux/amd64, go1.14.4, f59c03d
* 
* ==> coredns [fc879734428d] <==
* I1210 08:27:53.921823       1 trace.go:116] Trace[1415033323]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-12-10 08:27:30.131294738 +0000 UTC m=+194743.927898894) (total time: 23.723342794s):
* Trace[1415033323]: [23.408751256s] [23.408751256s] Objects listed
* I1210 08:30:17.444978       1 trace.go:116] Trace[485945017]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-12-10 08:27:56.972150944 +0000 UTC m=+194770.768755076) (total time: 2m16.738043209s):
* Trace[485945017]: [2m15.749642532s] [2m15.749642532s] Objects listed
* .:53
* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
* CoreDNS-1.7.0
* linux/amd64, go1.14.4, f59c03d
* 
* ==> describe nodes <==
* Name:               test
* Roles:              master
* Labels:             beta.kubernetes.io/arch=amd64
*                     beta.kubernetes.io/os=linux
*                     kubernetes.io/arch=amd64
*                     kubernetes.io/hostname=test
*                     kubernetes.io/os=linux
*                     minikube.k8s.io/commit=3e098ff146b8502f597849dfda420a2fa4fa43f0
*                     minikube.k8s.io/name=test
*                     minikube.k8s.io/updated_at=2020_12_01T09_48_21_0700
*                     minikube.k8s.io/version=v1.15.0
*                     node-role.kubernetes.io/master=
* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
*                     node.alpha.kubernetes.io/ttl: 0
*                     volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp:  Tue, 01 Dec 2020 09:48:17 +0000
* Taints:             <none>
* Unschedulable:      false
* Lease:
*   HolderIdentity:  test
*   AcquireTime:     <unset>
*   RenewTime:       Thu, 10 Dec 2020 11:59:30 +0000
* Conditions:
*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
*   ----             ------  -----------------                 ------------------                ------                       -------
*   MemoryPressure   False   Thu, 10 Dec 2020 11:58:19 +0000   Thu, 10 Dec 2020 08:54:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
*   DiskPressure     False   Thu, 10 Dec 2020 11:58:19 +0000   Thu, 10 Dec 2020 08:54:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
*   PIDPressure      False   Thu, 10 Dec 2020 11:58:19 +0000   Thu, 10 Dec 2020 08:54:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
*   Ready            True    Thu, 10 Dec 2020 11:58:19 +0000   Thu, 10 Dec 2020 08:54:17 +0000   KubeletReady                 kubelet is posting ready status
* Addresses:
*   InternalIP:  192.168.49.2
*   Hostname:    test
* Capacity:
*   cpu:                4
*   ephemeral-storage:  52417516Ki
*   hugepages-2Mi:      0
*   memory:             4035080Ki
*   pods:               110
* Allocatable:
*   cpu:                4
*   ephemeral-storage:  52417516Ki
*   hugepages-2Mi:      0
*   memory:             4035080Ki
*   pods:               110
* System Info:
*   Machine ID:                 4118f6bc99d24394b4ba31544b6db6ce
*   System UUID:                66f20968-7100-4cb7-812c-92ad564ae316
*   Boot ID:                    f70f46b3-a8c8-47b5-a6b9-6f283fa1aeab
*   Kernel Version:             4.18.0-80.el8.x86_64
*   OS Image:                   Ubuntu 20.04.1 LTS
*   Operating System:           linux
*   Architecture:               amd64
*   Container Runtime Version:  docker://19.3.13
*   Kubelet Version:            v1.19.4
*   Kube-Proxy Version:         v1.19.4
* PodCIDR:                      10.244.0.0/24
* PodCIDRs:                     10.244.0.0/24
* Non-terminated Pods:          (11 in total)
*   Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
*   ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
*   default                     hue-nhx74                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         149m
*   kube-system                 coredns-f9fd979d6-h6tvx                      100m (2%)     0 (0%)      70Mi (1%)        170Mi (4%)     9d
*   kube-system                 etcd-test                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9d
*   kube-system                 kindnet-wzqft                                100m (2%)     100m (2%)   50Mi (1%)        50Mi (1%)      3h48m
*   kube-system                 kube-apiserver-test                          250m (6%)     0 (0%)      0 (0%)           0 (0%)         9d
*   kube-system                 kube-controller-manager-test                 200m (5%)     0 (0%)      0 (0%)           0 (0%)         3h12m
*   kube-system                 kube-proxy-wlzsk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9d
*   kube-system                 kube-scheduler-test                          100m (2%)     0 (0%)      0 (0%)           0 (0%)         9d
*   kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9d
*   kubernetes-dashboard        dashboard-metrics-scraper-c95fcf479-g4xqz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9d
*   kubernetes-dashboard        kubernetes-dashboard-584f46694c-7gdhx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9d
* Allocated resources:
*   (Total limits may be over 100 percent, i.e., overcommitted.)
*   Resource           Requests    Limits
*   --------           --------    ------
*   cpu                750m (18%)  100m (2%)
*   memory             120Mi (3%)  220Mi (5%)
*   ephemeral-storage  0 (0%)      0 (0%)
*   hugepages-2Mi      0 (0%)      0 (0%)
* Events:              <none>
* 
* 
* Name:               test-m02
* Roles:              <none>
* Labels:             beta.kubernetes.io/arch=amd64
*                     beta.kubernetes.io/os=linux
*                     kubernetes.io/arch=amd64
*                     kubernetes.io/hostname=test-m02
*                     kubernetes.io/os=linux
* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
*                     node.alpha.kubernetes.io/ttl: 0
*                     volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp:  Thu, 10 Dec 2020 09:20:27 +0000
* Taints:             <none>
* Unschedulable:      false
* Lease:
*   HolderIdentity:  test-m02
*   AcquireTime:     <unset>
*   RenewTime:       Thu, 10 Dec 2020 11:59:30 +0000
* Conditions:
*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
*   ----             ------  -----------------                 ------------------                ------                       -------
*   MemoryPressure   False   Thu, 10 Dec 2020 11:59:29 +0000   Thu, 10 Dec 2020 09:20:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
*   DiskPressure     False   Thu, 10 Dec 2020 11:59:29 +0000   Thu, 10 Dec 2020 09:20:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
*   PIDPressure      False   Thu, 10 Dec 2020 11:59:29 +0000   Thu, 10 Dec 2020 09:20:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
*   Ready            True    Thu, 10 Dec 2020 11:59:29 +0000   Thu, 10 Dec 2020 09:20:28 +0000   KubeletReady                 kubelet is posting ready status
* Addresses:
*   InternalIP:  192.168.49.3
*   Hostname:    test-m02
* Capacity:
*   cpu:                4
*   ephemeral-storage:  52417516Ki
*   hugepages-2Mi:      0
*   memory:             4035080Ki
*   pods:               110
* Allocatable:
*   cpu:                4
*   ephemeral-storage:  52417516Ki
*   hugepages-2Mi:      0
*   memory:             4035080Ki
*   pods:               110
* System Info:
*   Machine ID:                 68baf903f57f49438e4668481465f6d5
*   System UUID:                3b1e0fc2-eb9e-4c19-8ad0-baded9c8f84a
*   Boot ID:                    f70f46b3-a8c8-47b5-a6b9-6f283fa1aeab
*   Kernel Version:             4.18.0-80.el8.x86_64
*   OS Image:                   Ubuntu 20.04.1 LTS
*   Operating System:           linux
*   Architecture:               amd64
*   Container Runtime Version:  docker://19.3.13
*   Kubelet Version:            v1.19.4
*   Kube-Proxy Version:         v1.19.4
* PodCIDR:                      10.244.4.0/24
* PodCIDRs:                     10.244.4.0/24
* Non-terminated Pods:          (3 in total)
*   Namespace                   Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
*   ---------                   ----                  ------------  ----------  ---------------  -------------  ---
*   default                     hue-postgres-kdfvf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         149m
*   kube-system                 kindnet-xvbnz         100m (2%)     100m (2%)   50Mi (1%)        50Mi (1%)      159m
*   kube-system                 kube-proxy-rs74k      0 (0%)        0 (0%)      0 (0%)           0 (0%)         159m
* Allocated resources:
*   (Total limits may be over 100 percent, i.e., overcommitted.)
*   Resource           Requests   Limits
*   --------           --------   ------
*   cpu                100m (2%)  100m (2%)
*   memory             50Mi (1%)  50Mi (1%)
*   ephemeral-storage  0 (0%)     0 (0%)
*   hugepages-2Mi      0 (0%)     0 (0%)
* Events:              <none>
* 
* ==> dmesg <==
* [Dec10 08:44] NOTE: The elevator= kernel parameter is deprecated.
* [  +0.000000]  #2
* [  +0.001999]  #3
* [  +0.032939] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
* [  +0.857170] e1000: E1000 MODULE IS NOT SUPPORTED
* [  +1.316793] systemd: 18 output lines suppressed due to ratelimiting
* [  +2.704163] snd_intel8x0 0000:00:05.0: measure - unreliable DMA position..
* [  +0.554386] snd_intel8x0 0000:00:05.0: measure - unreliable DMA position..
* [  +0.357728] snd_intel8x0 0000:00:05.0: measure - unreliable DMA position..
* [Dec10 08:49] hrtimer: interrupt took 5407601 ns
* 
* ==> etcd [c0580b146a27] <==
* 2020-12-10 08:42:31.284211 I | embed: rejected connection from "127.0.0.1:57122" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284245 I | embed: rejected connection from "127.0.0.1:57124" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284254 I | embed: rejected connection from "127.0.0.1:57126" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284265 I | embed: rejected connection from "127.0.0.1:57128" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284272 I | embed: rejected connection from "127.0.0.1:56862" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284596 I | embed: rejected connection from "127.0.0.1:56864" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284610 I | embed: rejected connection from "127.0.0.1:56866" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284620 I | embed: rejected connection from "127.0.0.1:56940" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284633 I | embed: rejected connection from "127.0.0.1:56872" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284643 I | embed: rejected connection from "127.0.0.1:57214" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284657 I | embed: rejected connection from "127.0.0.1:57020" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284731 I | embed: rejected connection from "127.0.0.1:57194" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284745 I | embed: rejected connection from "127.0.0.1:57022" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284753 I | embed: rejected connection from "127.0.0.1:57196" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284760 I | embed: rejected connection from "127.0.0.1:57024" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284766 I | embed: rejected connection from "127.0.0.1:57026" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284773 I | embed: rejected connection from "127.0.0.1:57028" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284780 I | embed: rejected connection from "127.0.0.1:57198" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284787 I | embed: rejected connection from "127.0.0.1:57030" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284793 I | embed: rejected connection from "127.0.0.1:57032" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284800 I | embed: rejected connection from "127.0.0.1:56900" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284808 I | embed: rejected connection from "127.0.0.1:57202" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284815 I | embed: rejected connection from "127.0.0.1:56902" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284821 I | embed: rejected connection from "127.0.0.1:56904" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284828 I | embed: rejected connection from "127.0.0.1:56916" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284834 I | embed: rejected connection from "127.0.0.1:56982" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284842 I | embed: rejected connection from "127.0.0.1:56970" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284849 I | embed: rejected connection from "127.0.0.1:57168" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284857 I | embed: rejected connection from "127.0.0.1:56918" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284863 I | embed: rejected connection from "127.0.0.1:56984" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284870 I | embed: rejected connection from "127.0.0.1:57186" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284877 I | embed: rejected connection from "127.0.0.1:56844" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284887 I | embed: rejected connection from "127.0.0.1:56976" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284894 I | embed: rejected connection from "127.0.0.1:56986" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284901 I | embed: rejected connection from "127.0.0.1:56856" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284909 I | embed: rejected connection from "127.0.0.1:56988" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284915 I | embed: rejected connection from "127.0.0.1:57110" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284923 I | embed: rejected connection from "127.0.0.1:56990" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284932 I | embed: rejected connection from "127.0.0.1:57208" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284940 I | embed: rejected connection from "127.0.0.1:57222" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.405425 I | embed: rejected connection from "127.0.0.1:57216" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.416021 I | embed: rejected connection from "127.0.0.1:57002" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.493820 I | embed: rejected connection from "127.0.0.1:57130" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.502079 I | embed: rejected connection from "127.0.0.1:57006" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.504159 I | embed: rejected connection from "127.0.0.1:57132" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.507232 I | embed: rejected connection from "127.0.0.1:57010" (error "EOF", ServerName "")
* 2020-12-10 08:42:32.842748 I | embed: rejected connection from "127.0.0.1:57228" (error "EOF", ServerName "")
* 2020-12-10 08:42:32.993044 I | embed: rejected connection from "127.0.0.1:56922" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.035938 I | embed: rejected connection from "127.0.0.1:56924" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.201211 I | embed: rejected connection from "127.0.0.1:57084" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.455334 I | embed: rejected connection from "127.0.0.1:56896" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.455392 I | embed: rejected connection from "127.0.0.1:57200" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.455406 I | embed: rejected connection from "127.0.0.1:56860" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.492780 I | embed: rejected connection from "127.0.0.1:57204" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.495771 I | embed: rejected connection from "127.0.0.1:56848" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.515463 I | embed: rejected connection from "127.0.0.1:57206" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.549769 I | embed: rejected connection from "127.0.0.1:57218" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.586989 I | embed: rejected connection from "127.0.0.1:57224" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.587551 I | embed: rejected connection from "127.0.0.1:57004" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.929736 I | embed: rejected connection from "127.0.0.1:56932" (error "EOF", ServerName "")
* 
* ==> etcd [cbed66776882] <==
* 2020-12-10 11:50:39.931889 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:50:49.932555 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:50:59.932280 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:51:09.933438 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:51:19.933821 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:51:29.932805 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:51:39.932365 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:51:49.931971 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:51:59.932810 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:52:09.934698 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:52:10.060926 I | mvcc: store.index: compact 562998
* 2020-12-10 11:52:10.062601 I | mvcc: finished scheduled compaction at 562998 (took 1.195036ms)
* 2020-12-10 11:52:19.935104 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:52:30.298813 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:52:39.931981 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:52:49.932150 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:52:59.932139 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:53:09.934680 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:53:19.933500 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:53:29.934514 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:53:39.932742 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:53:49.932197 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:53:59.933573 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:54:09.934588 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:54:19.932112 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:54:29.934208 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:54:39.936795 I | etcdserver/api/etcdhttp: /health OK (status code 200)
E1210 11:59:40.542893  128565 out.go:286] unable to execute * 2020-12-10 11:54:47.349044 W | etcdserver: request "header:<ID:8128001495567104299 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:563365 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1015 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:18" took too long (387.524803ms) to execute
: html/template:* 2020-12-10 11:54:47.349044 W | etcdserver: request "header:<ID:8128001495567104299 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:563365 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1015 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:18" took too long (387.524803ms) to execute
: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
* 2020-12-10 11:54:47.349044 W | etcdserver: request "header:<ID:8128001495567104299 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:563365 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1015 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:18" took too long (387.524803ms) to execute
* 2020-12-10 11:54:49.932027 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:55:00.010020 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:55:09.935178 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:55:19.932826 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:55:29.932779 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:55:39.931853 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:55:49.934104 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:55:59.932525 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:56:09.933648 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:56:19.935441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:56:29.935586 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:56:39.933743 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:56:49.932109 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:56:59.933094 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:57:09.934326 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:57:10.070044 I | mvcc: store.index: compact 563239
* 2020-12-10 11:57:10.070910 I | mvcc: finished scheduled compaction at 563239 (took 366.904µs)
* 2020-12-10 11:57:19.933753 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:57:29.943536 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:57:39.934743 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:57:49.934588 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:57:59.932111 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:58:09.932432 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:58:20.196705 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:58:29.932965 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:58:39.932457 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:58:49.932715 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:58:59.933524 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:59:09.933220 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:59:19.934173 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:59:29.934052 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:59:39.933375 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 
* ==> kernel <==
*  11:59:40 up  3:15,  0 users,  load average: 0.70, 0.78, 0.65
* Linux test 4.18.0-80.el8.x86_64 #1 SMP Tue Jun 4 09:19:46 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
* PRETTY_NAME="Ubuntu 20.04.1 LTS"
* 
* ==> kube-apiserver [9618e44b528d] <==
* W1210 08:43:11.450789       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.450891       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.450934       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.450970       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.451007       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.451042       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.451074       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.451107       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.451156       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.451189       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.451228       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.451260       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.451300       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.451331       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.801968       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:12.041959       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:12.042027       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:12.689571       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:13.234201       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:13.234268       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:13.234304       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:13.236318       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:15.912800       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:16.357199       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:17.808145       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:17.808634       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:17.808711       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:17.808795       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:17.973090       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:17.973157       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:17.973241       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:18.956067       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:18.956126       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:18.956287       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:19.273213       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:19.406156       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:19.406497       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:19.407175       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:19.729161       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:19.729225       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:20.404219       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context canceled". Reconnecting...
* W1210 08:43:21.095038       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:21.095328       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:21.095597       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:21.095642       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:21.382179       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:21.382270       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:21.542977       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:21.543130       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:21.706046       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:22.024596       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:22.026047       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:22.026162       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:22.026295       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:22.026332       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:24.089113       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:24.089432       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:24.636864       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:24.636928       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:24.636961       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* 
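The wall of `authentication handshake failed: context deadline exceeded` messages above is this apiserver's etcd client repeatedly timing out during the TLS handshake and letting gRPC reconnect. The sketch below reproduces that general failure mode with a blocking dial under a short deadline; the target address, the throwaway TLS config and the timeout are assumptions for illustration, not the apiserver's actual settings.

```go
package main

import (
	"context"
	"crypto/tls"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

func main() {
	// Placeholder TLS config; a real client would load the etcd client cert/key
	// and CA bundle instead of skipping verification.
	creds := credentials.NewTLS(&tls.Config{InsecureSkipVerify: true})

	// With a blocking dial, a handshake that cannot finish before the context
	// deadline surfaces as "authentication handshake failed: context deadline
	// exceeded"; a non-blocking ClientConn keeps reconnecting in the background
	// and logs the same error, as seen above.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "127.0.0.1:2379",
		grpc.WithTransportCredentials(creds),
		grpc.WithBlock(),
	)
	if err != nil {
		log.Printf("dial failed: %v", err)
		return
	}
	defer conn.Close()
	log.Println("connected to", conn.Target())
}
```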
* ==> kube-apiserver [eb97bca9e9de] <==
* I1210 11:49:49.737587       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:50:25.647996       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:50:25.648026       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:50:25.648031       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:51:03.544584       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:51:03.544689       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:51:03.544713       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:51:12.995874       1 trace.go:205] Trace[756723386]: "List etcd3" key:/jobs,resourceVersion:,resourceVersionMatch:,limit:500,continue: (10-Dec-2020 11:51:12.494) (total time: 500ms):
* Trace[756723386]: [500.885483ms] [500.885483ms] END
* I1210 11:51:12.995963       1 trace.go:205] Trace[122590512]: "List" url:/apis/batch/v1/jobs,user-agent:kube-controller-manager/v1.19.4 (linux/amd64) kubernetes/d360454/system:serviceaccount:kube-system:cronjob-controller,client:192.168.49.2 (10-Dec-2020 11:51:12.494) (total time: 500ms):
* Trace[122590512]: ---"Listing from storage done" 500ms (11:51:00.995)
* Trace[122590512]: [500.992914ms] [500.992914ms] END
* I1210 11:51:38.596450       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:51:38.596481       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:51:38.596488       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:52:19.802640       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:52:19.802842       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:52:19.802868       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:53:00.154578       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:53:00.154608       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:53:00.154614       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:53:43.793049       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:53:43.793394       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:53:43.793449       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:54:14.328104       1 trace.go:205] Trace[1970391022]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2 (10-Dec-2020 11:54:13.827) (total time: 500ms):
* Trace[1970391022]: ---"About to write a response" 500ms (11:54:00.328)
* Trace[1970391022]: [500.936997ms] [500.936997ms] END
* I1210 11:54:16.438348       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:54:16.438932       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:54:16.438967       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:54:51.530963       1 trace.go:205] Trace[86500591]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Dec-2020 11:54:51.027) (total time: 503ms):
* Trace[86500591]: ---"Transaction prepared" 501ms (11:54:00.529)
* Trace[86500591]: [503.653583ms] [503.653583ms] END
* I1210 11:54:55.473177       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:54:55.473206       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:54:55.473214       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:55:28.961420       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:55:28.961443       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:55:28.961449       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:56:01.960202       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:56:01.960454       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:56:01.960482       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:56:39.860902       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:56:39.860929       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:56:39.860936       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:57:20.086398       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:57:20.086475       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:57:20.086494       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:57:51.939546       1 trace.go:205] Trace[1202313898]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Dec-2020 11:57:51.434) (total time: 505ms):
* Trace[1202313898]: ---"Transaction prepared" 503ms (11:57:00.938)
* Trace[1202313898]: [505.372815ms] [505.372815ms] END
* I1210 11:58:02.740440       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:58:02.740487       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:58:02.740518       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:58:41.077505       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:58:41.077539       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:58:41.077546       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:59:14.827406       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:59:14.827569       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:59:14.827597       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* 
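The recurring `parsed scheme: "passthrough"` / `ClientConn switching balancer to "pick_first"` pairs above are gRPC's resolver and balancer plumbing announcing that the etcd target is handed straight through to the connection and load-balanced with the default pick_first policy. A hedged sketch of requesting that combination explicitly is below; the address and the insecure transport are placeholders for illustration only.

```go
package main

import (
	"log"

	"google.golang.org/grpc"
)

func main() {
	// "passthrough:///<addr>" hands the address to gRPC without resolution,
	// and the service config pins the (already default) pick_first policy.
	// Address and insecure transport are placeholders.
	conn, err := grpc.Dial("passthrough:///127.0.0.1:2379",
		grpc.WithInsecure(),
		grpc.WithDefaultServiceConfig(`{"loadBalancingPolicy":"pick_first"}`),
	)
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	defer conn.Close()
	log.Println("target:", conn.Target())
}
```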
* ==> kube-controller-manager [ef1e373ce768] <==
* I1210 08:54:50.286393       1 event.go:291] "Event occurred" object="default/hive-hdfs-namenode-0" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hive-hdfs-namenode-0"
* I1210 08:54:50.286446       1 event.go:291] "Event occurred" object="default/hive-metastore-0" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hive-metastore-0"
* I1210 08:54:50.286457       1 event.go:291] "Event occurred" object="default/hive-postgresql-0" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hive-postgresql-0"
* I1210 08:54:50.286468       1 event.go:291] "Event occurred" object="default/hive-server-0" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hive-server-0"
* I1210 08:54:50.286477       1 event.go:291] "Event occurred" object="default/hue-postgres-4q92t" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hue-postgres-4q92t"
* I1210 08:54:50.286484       1 event.go:291] "Event occurred" object="default/hive-hdfs-datanode-0" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hive-hdfs-datanode-0"
* I1210 08:54:50.286506       1 event.go:291] "Event occurred" object="default/hive-hdfs-httpfs-6cd6bc65d9-q75qj" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hive-hdfs-httpfs-6cd6bc65d9-q75qj"
* I1210 08:54:50.286515       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6-h6tvx" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-f9fd979d6-h6tvx"
* I1210 08:58:35.562656       1 stateful_set.go:419] StatefulSet has been deleted default/hive-postgresql
* I1210 08:58:35.562700       1 stateful_set.go:419] StatefulSet has been deleted default/hive-server
* I1210 08:58:35.562879       1 stateful_set.go:419] StatefulSet has been deleted default/hive-hdfs-namenode
* I1210 08:58:35.562957       1 stateful_set.go:419] StatefulSet has been deleted default/hive-metastore
* I1210 08:58:35.563112       1 stateful_set.go:419] StatefulSet has been deleted default/hive-hdfs-datanode
* I1210 08:59:06.632977       1 event.go:291] "Event occurred" object="test-m04" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node test-m04 event: Removing Node test-m04 from Controller"
* I1210 08:59:41.971556       1 event.go:291] "Event occurred" object="test-m03" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node test-m03 event: Removing Node test-m03 from Controller"
* I1210 08:59:49.827238       1 gc_controller.go:78] PodGC is force deleting Pod: kube-system/kindnet-7qqth
* I1210 08:59:49.839658       1 gc_controller.go:189] Forced deletion of orphaned Pod kube-system/kindnet-7qqth succeeded
* I1210 08:59:49.839679       1 gc_controller.go:78] PodGC is force deleting Pod: kube-system/kube-proxy-tklxj
* E1210 08:59:49.849720       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"bcd72590-ad1e-4230-8818-32fa84750a3b", ResourceVersion:"553798", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63743184688, loc:(*time.Location)(0x6a61c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:0.5.4\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc002a5faa0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002a5fac0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc002a5fae0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002a5fb00)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc002a5fb20), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", 
VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002a5fb40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002a5fb60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002a5fb80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:0.5.4", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc002a5fba0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc002a5fbe0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001ce50e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0021ba678), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002a7e230), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), 
EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0006b48a8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0021ba6c0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:4, NumberMisscheduled:0, DesiredNumberScheduled:4, NumberReady:4, ObservedGeneration:1, UpdatedNumberScheduled:4, NumberAvailable:4, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
* I1210 08:59:49.850239       1 gc_controller.go:189] Forced deletion of orphaned Pod kube-system/kube-proxy-tklxj succeeded
* I1210 09:00:29.861181       1 gc_controller.go:78] PodGC is force deleting Pod: kube-system/kube-proxy-mp89t
* I1210 09:00:29.868925       1 gc_controller.go:189] Forced deletion of orphaned Pod kube-system/kube-proxy-mp89t succeeded
* I1210 09:00:29.868938       1 gc_controller.go:78] PodGC is force deleting Pod: kube-system/kindnet-wz6qv
* I1210 09:00:29.874833       1 gc_controller.go:189] Forced deletion of orphaned Pod kube-system/kindnet-wz6qv succeeded
* I1210 09:17:17.859158       1 event.go:291] "Event occurred" object="test-m02" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node test-m02 event: Removing Node test-m02 from Controller"
* I1210 09:18:10.676370       1 gc_controller.go:78] PodGC is force deleting Pod: kube-system/kube-proxy-7wzw9
* I1210 09:18:10.688617       1 gc_controller.go:189] Forced deletion of orphaned Pod kube-system/kube-proxy-7wzw9 succeeded
* I1210 09:18:10.688632       1 gc_controller.go:78] PodGC is force deleting Pod: kube-system/kindnet-d4z4f
* I1210 09:18:10.695340       1 gc_controller.go:189] Forced deletion of orphaned Pod kube-system/kindnet-d4z4f succeeded
* W1210 09:20:28.133411       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="test-m02" does not exist
* I1210 09:20:28.147884       1 range_allocator.go:373] Set node test-m02 PodCIDR to [10.244.4.0/24]
* I1210 09:20:28.158549       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rs74k"
* I1210 09:20:28.163593       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-xvbnz"
* E1210 09:20:28.200957       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"0da472a6-f8ed-45fe-970b-31291eae9763", ResourceVersion:"555116", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63742412901, loc:(*time.Location)(0x6a61c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc002437d20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002437d40)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc002437d60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002437d80)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc002437da0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0017a1380), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002437dc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), 
Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002437de0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.4", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc002437e20)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), 
StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0027b7260), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000cb9068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0001d51f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00041fcb8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000cb90c8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
* I1210 09:20:33.067352       1 event.go:291] "Event occurred" object="test-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-m02 event: Registered Node test-m02 in Controller"
* I1210 09:22:16.100919       1 event.go:291] "Event occurred" object="default/hue-postgres" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-postgres-f9kcs"
* I1210 09:22:16.137415       1 event.go:291] "Event occurred" object="default/hue" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-24vdr"
* I1210 09:22:16.137440       1 event.go:291] "Event occurred" object="default/hive-hdfs-httpfs" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hive-hdfs-httpfs-6cd6bc65d9 to 1"
* I1210 09:22:16.159305       1 event.go:291] "Event occurred" object="default/hive-hdfs-httpfs-6cd6bc65d9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hive-hdfs-httpfs-6cd6bc65d9-lrqb2"
* I1210 09:22:16.179523       1 event.go:291] "Event occurred" object="default/hive-metastore" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod hive-metastore-0 in StatefulSet hive-metastore successful"
* I1210 09:22:16.179544       1 event.go:291] "Event occurred" object="default/hive-server" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod hive-server-0 in StatefulSet hive-server successful"
* I1210 09:22:16.179555       1 event.go:291] "Event occurred" object="default/hive-postgresql" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim data-hive-postgresql-0 Pod hive-postgresql-0 in StatefulSet hive-postgresql success"
* I1210 09:22:16.184394       1 event.go:291] "Event occurred" object="default/hive-hdfs-datanode" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod hive-hdfs-datanode-0 in StatefulSet hive-hdfs-datanode successful"
* I1210 09:22:16.186110       1 event.go:291] "Event occurred" object="default/hive-hdfs-namenode" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod hive-hdfs-namenode-0 in StatefulSet hive-hdfs-namenode successful"
* I1210 09:22:16.189485       1 event.go:291] "Event occurred" object="default/data-hive-postgresql-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
* I1210 09:22:16.189887       1 event.go:291] "Event occurred" object="default/data-hive-postgresql-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
* I1210 09:22:16.198942       1 event.go:291] "Event occurred" object="default/hive-postgresql" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod hive-postgresql-0 in StatefulSet hive-postgresql successful"
* I1210 09:29:31.083591       1 event.go:291] "Event occurred" object="default/hue-postgres" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-postgres-g6l2f"
* I1210 09:30:18.823878       1 stateful_set.go:419] StatefulSet has been deleted default/hive-metastore
* I1210 09:30:18.824626       1 stateful_set.go:419] StatefulSet has been deleted default/hive-hdfs-datanode
* I1210 09:30:18.825037       1 stateful_set.go:419] StatefulSet has been deleted default/hive-hdfs-namenode
* I1210 09:30:18.826020       1 stateful_set.go:419] StatefulSet has been deleted default/hive-postgresql
* I1210 09:30:18.826865       1 stateful_set.go:419] StatefulSet has been deleted default/hive-server
* I1210 09:30:28.537836       1 event.go:291] "Event occurred" object="default/hue-postgres" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-postgres-kdfvf"
* I1210 09:30:28.537852       1 event.go:291] "Event occurred" object="default/hue" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-nhx74"
* I1210 09:47:15.422577       1 cleaner.go:181] Cleaning CSR "csr-tstcs" as it is more than 1h0m0s old and approved.
* I1210 10:47:15.426723       1 cleaner.go:181] Cleaning CSR "csr-6fh65" as it is more than 1h0m0s old and approved.
* I1210 10:47:15.434504       1 cleaner.go:181] Cleaning CSR "csr-wjjfc" as it is more than 1h0m0s old and approved.
* I1210 10:47:15.441703       1 cleaner.go:181] Cleaning CSR "csr-2h6fg" as it is more than 1h0m0s old and approved.
* I1210 10:47:15.444107       1 cleaner.go:181] Cleaning CSR "csr-bfd8r" as it is more than 1h0m0s old and approved.
* 
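The two long `Operation cannot be fulfilled on daemonsets.apps ... the object has been modified` errors above are ordinary optimistic-concurrency conflicts: the controller wrote status against a stale resourceVersion and will simply re-sync on its next pass. For client code that hits the same class of error, a hedged sketch of the usual re-read-and-retry pattern with client-go follows; the kubeconfig path, namespace, object name and the label mutation are placeholders, not anything taken from this cluster.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// RetryOnConflict re-runs the closure whenever the Update fails with a
	// 409 conflict, so each attempt starts from a freshly read object and a
	// current resourceVersion.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := cs.AppsV1().DaemonSets("kube-system").Get(context.TODO(), "kindnet", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if ds.Labels == nil {
			ds.Labels = map[string]string{}
		}
		ds.Labels["example/touched"] = "true" // placeholder mutation
		_, err = cs.AppsV1().DaemonSets("kube-system").Update(context.TODO(), ds, metav1.UpdateOptions{})
		return err
	})
	fmt.Println("update finished with:", err)
}
```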
* ==> kube-proxy [147855e4d4e4] <==
* I1210 08:47:14.015675       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
* I1210 08:47:14.015716       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
* W1210 08:47:16.734463       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
* I1210 08:47:16.745420       1 server_others.go:186] Using iptables Proxier.
* W1210 08:47:16.745436       1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
* I1210 08:47:16.745439       1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
* I1210 08:47:16.745617       1 server.go:650] Version: v1.19.4
* I1210 08:47:16.745878       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I1210 08:47:16.745890       1 conntrack.go:52] Setting nf_conntrack_max to 131072
* E1210 08:47:16.746653       1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
* I1210 08:47:16.746733       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I1210 08:47:16.746758       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I1210 08:47:16.747051       1 config.go:315] Starting service config controller
* I1210 08:47:16.747056       1 shared_informer.go:240] Waiting for caches to sync for service config
* I1210 08:47:16.747065       1 config.go:224] Starting endpoint slice config controller
* I1210 08:47:16.747068       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
* I1210 08:47:16.849582       1 shared_informer.go:247] Caches are synced for endpoint slice config 
* I1210 08:47:16.849586       1 shared_informer.go:247] Caches are synced for service config 
* I1210 08:49:50.708763       1 trace.go:205] Trace[1176923502]: "iptables Monitor CANARY check" (10-Dec-2020 08:49:47.428) (total time: 2793ms):
* Trace[1176923502]: [2.793602539s] [2.793602539s] END
* I1210 08:50:20.780453       1 trace.go:205] Trace[1901112382]: "iptables Monitor CANARY check" (10-Dec-2020 08:50:16.773) (total time: 4006ms):
* Trace[1901112382]: [4.006873012s] [4.006873012s] END
* I1210 08:51:57.090526       1 trace.go:205] Trace[385707588]: "iptables Monitor CANARY check" (10-Dec-2020 08:51:46.765) (total time: 8726ms):
* Trace[385707588]: [8.726548904s] [8.726548904s] END
* I1210 08:53:29.809980       1 trace.go:205] Trace[764479901]: "iptables Monitor CANARY check" (10-Dec-2020 08:53:16.764) (total time: 9581ms):
* Trace[764479901]: [9.581968281s] [9.581968281s] END
* I1210 08:53:58.071872       1 trace.go:205] Trace[112474632]: "iptables Monitor CANARY check" (10-Dec-2020 08:53:49.896) (total time: 8039ms):
* Trace[112474632]: [8.039081406s] [8.039081406s] END
* I1210 08:54:25.962695       1 trace.go:205] Trace[39398182]: "iptables Monitor CANARY check" (10-Dec-2020 08:54:18.568) (total time: 6549ms):
* Trace[39398182]: [6.549417389s] [6.549417389s] END
* I1210 08:54:55.389600       1 trace.go:205] Trace[1338509158]: "iptables Monitor CANARY check" (10-Dec-2020 08:54:49.801) (total time: 5455ms):
* Trace[1338509158]: [5.45541041s] [5.45541041s] END
* I1210 08:58:24.665049       1 trace.go:205] Trace[431073268]: "iptables Monitor CANARY check" (10-Dec-2020 08:58:19.270) (total time: 5394ms):
* Trace[431073268]: [5.39404937s] [5.39404937s] END
* 
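The conntrack lines in the proxy startup above (`Set sysctl 'net/netfilter/nf_conntrack_max' to 131072`, plus the warning that sysfs is mounted read-only) amount to writing values under `/proc/sys`. A minimal sketch of that write is below; the sysctl name and value mirror the log, but the helper itself is illustrative and needs root plus a writable proc filesystem.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"strconv"
	"strings"
)

// setSysctl writes an integer value to /proc/sys/<name>, with the sysctl
// name given in the slash-separated form used in the kube-proxy log,
// e.g. "net/netfilter/nf_conntrack_max".
func setSysctl(name string, value int) error {
	path := "/proc/sys/" + strings.TrimPrefix(name, "/")
	return ioutil.WriteFile(path, []byte(strconv.Itoa(value)), 0o644)
}

func main() {
	if err := setSysctl("net/netfilter/nf_conntrack_max", 131072); err != nil {
		fmt.Fprintln(os.Stderr, "sysctl write failed (needs root):", err)
		os.Exit(1)
	}
	fmt.Println("nf_conntrack_max set")
}
```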
* ==> kube-proxy [1c2a69f23869] <==
* I1210 08:19:29.254495       1 trace.go:205] Trace[1606941258]: "iptables Monitor CANARY check" (10-Dec-2020 08:19:11.769) (total time: 16133ms):
* Trace[1606941258]: [16.133109154s] [16.133109154s] END
* I1210 08:21:12.253997       1 trace.go:205] Trace[2032763386]: "iptables save" (10-Dec-2020 08:20:08.295) (total time: 59013ms):
* Trace[2032763386]: [59.013708868s] [59.013708868s] END
* I1210 08:21:41.396933       1 trace.go:205] Trace[442713710]: "iptables Monitor CANARY check" (10-Dec-2020 08:21:12.647) (total time: 6548ms):
* Trace[442713710]: [6.548046652s] [6.548046652s] END
* I1210 08:22:20.625150       1 trace.go:205] Trace[1705933123]: "iptables Monitor CANARY check" (10-Dec-2020 08:22:02.412) (total time: 17389ms):
* Trace[1705933123]: [17.389959117s] [17.389959117s] END
* I1210 08:24:32.283396       1 trace.go:205] Trace[1257929763]: "iptables restore" (10-Dec-2020 08:22:21.206) (total time: 77394ms):
* Trace[1257929763]: [1m17.394371742s] [1m17.394371742s] END
* I1210 08:25:14.934948       1 trace.go:205] Trace[82457319]: "iptables Monitor CANARY check" (10-Dec-2020 08:24:34.887) (total time: 39598ms):
* Trace[82457319]: [39.59828664s] [39.59828664s] END
* I1210 08:25:52.826194       1 trace.go:205] Trace[1529003266]: "iptables Monitor CANARY check" (10-Dec-2020 08:25:29.557) (total time: 14986ms):
* Trace[1529003266]: [14.986878467s] [14.986878467s] END
* I1210 08:26:41.833556       1 trace.go:205] Trace[824315921]: "iptables Monitor CANARY check" (10-Dec-2020 08:25:58.312) (total time: 35395ms):
* Trace[824315921]: [35.395955838s] [35.395955838s] END
* I1210 08:27:13.010202       1 trace.go:205] Trace[356046513]: "iptables Monitor CANARY check" (10-Dec-2020 08:27:01.263) (total time: 10073ms):
* Trace[356046513]: [10.073628207s] [10.073628207s] END
* I1210 08:27:36.603564       1 trace.go:205] Trace[1754103961]: "iptables Monitor CANARY check" (10-Dec-2020 08:27:28.323) (total time: 6429ms):
* Trace[1754103961]: [6.429998521s] [6.429998521s] END
* I1210 08:28:02.899016       1 trace.go:205] Trace[2019656647]: "iptables Monitor CANARY check" (10-Dec-2020 08:27:58.313) (total time: 4294ms):
* Trace[2019656647]: [4.294967975s] [4.294967975s] END
* I1210 08:28:37.144254       1 trace.go:205] Trace[1498401531]: "iptables Monitor CANARY check" (10-Dec-2020 08:28:28.312) (total time: 8617ms):
* Trace[1498401531]: [8.617230153s] [8.617230153s] END
* I1210 08:29:32.119745       1 trace.go:205] Trace[1511550882]: "iptables Monitor CANARY check" (10-Dec-2020 08:29:16.995) (total time: 11390ms):
* Trace[1511550882]: [11.390891356s] [11.390891356s] END
* I1210 08:30:08.132644       1 trace.go:205] Trace[1444828810]: "iptables Monitor CANARY check" (10-Dec-2020 08:30:00.140) (total time: 3704ms):
* Trace[1444828810]: [3.704739702s] [3.704739702s] END
* I1210 08:30:54.286260       1 trace.go:205] Trace[143958087]: "iptables Monitor CANARY check" (10-Dec-2020 08:30:29.624) (total time: 24105ms):
* Trace[143958087]: [24.105896041s] [24.105896041s] END
* I1210 08:31:10.426799       1 trace.go:205] Trace[962323657]: "iptables Monitor CANARY check" (10-Dec-2020 08:30:58.318) (total time: 11130ms):
* Trace[962323657]: [11.130432866s] [11.130432866s] END
* I1210 08:31:13.141112       1 trace.go:205] Trace[813852681]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:28:18.935) (total time: 173942ms):
* Trace[813852681]: ---"Objects listed" 171222ms (08:31:00.158)
* Trace[813852681]: ---"Objects extracted" 2028ms (08:31:00.187)
* Trace[813852681]: [2m53.942254293s] [2m53.942254293s] END
* I1210 08:31:35.241036       1 trace.go:205] Trace[638151091]: "iptables Monitor CANARY check" (10-Dec-2020 08:31:28.322) (total time: 6503ms):
* Trace[638151091]: [6.503279497s] [6.503279497s] END
* I1210 08:32:55.337379       1 trace.go:205] Trace[1536844848]: "iptables Monitor CANARY check" (10-Dec-2020 08:32:00.940) (total time: 38653ms):
* Trace[1536844848]: [38.653773609s] [38.653773609s] END
* I1210 08:33:20.885200       1 trace.go:205] Trace[1414107287]: "iptables Monitor CANARY check" (10-Dec-2020 08:33:04.564) (total time: 5442ms):
* Trace[1414107287]: [5.44213901s] [5.44213901s] END
* I1210 08:34:13.548138       1 trace.go:205] Trace[946104448]: "iptables Monitor CANARY check" (10-Dec-2020 08:33:58.329) (total time: 15037ms):
* Trace[946104448]: [15.037169668s] [15.037169668s] END
* I1210 08:34:31.345398       1 trace.go:205] Trace[175663141]: "iptables Monitor CANARY check" (10-Dec-2020 08:34:28.314) (total time: 3030ms):
* Trace[175663141]: [3.030810392s] [3.030810392s] END
* I1210 08:34:47.908016       1 trace.go:205] Trace[56971626]: "iptables restore" (10-Dec-2020 08:34:40.449) (total time: 2107ms):
* Trace[56971626]: [2.107572612s] [2.107572612s] END
* I1210 08:35:05.374031       1 trace.go:205] Trace[1258522673]: "iptables Monitor CANARY check" (10-Dec-2020 08:34:58.313) (total time: 5702ms):
* Trace[1258522673]: [5.702046014s] [5.702046014s] END
* I1210 08:37:42.825944       1 trace.go:205] Trace[1689351721]: "iptables Monitor CANARY check" (10-Dec-2020 08:37:29.818) (total time: 11357ms):
* Trace[1689351721]: [11.357098617s] [11.357098617s] END
* I1210 08:38:06.121769       1 trace.go:205] Trace[1552961587]: "iptables Monitor CANARY check" (10-Dec-2020 08:38:00.265) (total time: 5619ms):
* Trace[1552961587]: [5.619850908s] [5.619850908s] END
* I1210 08:39:24.509743       1 trace.go:205] Trace[934536800]: "iptables Monitor CANARY check" (10-Dec-2020 08:39:14.191) (total time: 7834ms):
* Trace[934536800]: [7.834903692s] [7.834903692s] END
* I1210 08:41:36.851519       1 trace.go:205] Trace[1452415980]: "iptables Monitor CANARY check" (10-Dec-2020 08:40:58.312) (total time: 13366ms):
* Trace[1452415980]: [13.366711746s] [13.366711746s] END
* I1210 08:42:34.270977       1 trace.go:205] Trace[1092740501]: "iptables Monitor CANARY check" (10-Dec-2020 08:41:58.312) (total time: 31405ms):
* Trace[1092740501]: [31.405010423s] [31.405010423s] END
* 
* ==> kube-scheduler [476a76f1583b] <==
* E1207 09:38:35.522011       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
* I1207 09:38:37.822079       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
* I1207 11:56:44.908379       1 trace.go:205] Trace[666183408]: "Scheduling" namespace:default,name:hue-kfjpj (07-Dec-2020 11:56:44.251) (total time: 654ms):
* Trace[666183408]: ---"Snapshotting scheduler cache and node infos done" 78ms (11:56:00.329)
* Trace[666183408]: ---"Computing predicates done" 314ms (11:56:00.643)
* Trace[666183408]: [654.649558ms] [654.649558ms] END
* I1207 12:16:15.374183       1 trace.go:205] Trace[768065693]: "Scheduling" namespace:default,name:hue-r4ph2 (07-Dec-2020 12:16:15.184) (total time: 158ms):
* Trace[768065693]: [158.498803ms] [158.498803ms] END
* I1207 12:27:14.040714       1 trace.go:205] Trace[1221006435]: "Scheduling" namespace:default,name:hive-server-0 (07-Dec-2020 12:27:13.745) (total time: 294ms):
* Trace[1221006435]: ---"Computing predicates done" 33ms (12:27:00.779)
* Trace[1221006435]: [294.883224ms] [294.883224ms] END
* I1208 12:20:32.444919       1 trace.go:205] Trace[1038095338]: "Scheduling" namespace:default,name:hive-hdfs-httpfs-6cd6bc65d9-wv4r2 (08-Dec-2020 12:20:31.962) (total time: 409ms):
* Trace[1038095338]: ---"Basic checks done" 26ms (12:20:00.988)
* Trace[1038095338]: ---"Computing predicates done" 78ms (12:20:00.067)
* Trace[1038095338]: [409.973001ms] [409.973001ms] END
* I1208 12:39:31.821886       1 trace.go:205] Trace[629732245]: "Scheduling" namespace:default,name:hive-hdfs-httpfs-6cd6bc65d9-fmmxf (08-Dec-2020 12:39:31.526) (total time: 295ms):
* Trace[629732245]: ---"Computing predicates done" 77ms (12:39:00.603)
* Trace[629732245]: [295.777206ms] [295.777206ms] END
* I1210 07:47:12.395846       1 trace.go:205] Trace[529640691]: "Scheduling" namespace:default,name:hive-hdfs-namenode-0 (10-Dec-2020 07:47:12.233) (total time: 162ms):
* Trace[529640691]: ---"Computing predicates done" 102ms (07:47:00.335)
* Trace[529640691]: [162.499393ms] [162.499393ms] END
* I1210 08:11:04.509198       1 trace.go:205] Trace[296532859]: "Scheduling" namespace:kube-system,name:kube-proxy-7wzw9 (10-Dec-2020 08:11:04.236) (total time: 189ms):
* Trace[296532859]: ---"Computing predicates done" 189ms (08:11:00.425)
* Trace[296532859]: [189.486956ms] [189.486956ms] END
* I1210 08:11:33.086818       1 trace.go:205] Trace[348250462]: "Scheduling" namespace:kube-system,name:kindnet-wzqft (10-Dec-2020 08:11:32.627) (total time: 201ms):
* Trace[348250462]: ---"Basic checks done" 185ms (08:11:00.812)
* Trace[348250462]: [201.770106ms] [201.770106ms] END
* I1210 08:29:40.681410       1 trace.go:205] Trace[1834003984]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:28:00.603) (total time: 99651ms):
* Trace[1834003984]: ---"Objects listed" 99489ms (08:29:00.093)
* Trace[1834003984]: [1m39.651842586s] [1m39.651842586s] END
* I1210 08:30:11.289748       1 trace.go:205] Trace[1831143731]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:29:38.676) (total time: 31768ms):
* Trace[1831143731]: ---"Objects listed" 31592ms (08:30:00.268)
* Trace[1831143731]: [31.768949549s] [31.768949549s] END
* I1210 08:31:06.036895       1 trace.go:205] Trace[789880583]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:30:11.815) (total time: 53539ms):
* Trace[789880583]: ---"Objects listed" 53539ms (08:31:00.355)
* Trace[789880583]: [53.539347253s] [53.539347253s] END
* I1210 08:34:12.686107       1 trace.go:205] Trace[561017287]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:31:30.011) (total time: 162674ms):
* Trace[561017287]: [2m42.674637052s] [2m42.674637052s] END
* E1210 08:34:12.686137       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: the server was unable to return a response in the time allotted, but may still be processing the request (get statefulsets.apps)
* I1210 08:34:12.686149       1 trace.go:205] Trace[1404922494]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:31:30.488) (total time: 161994ms):
* Trace[1404922494]: [2m41.994649502s] [2m41.994649502s] END
* E1210 08:34:12.686155       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: the server was unable to return a response in the time allotted, but may still be processing the request (get persistentvolumes)
* I1210 08:34:12.769079       1 trace.go:205] Trace[107198569]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:33:39.696) (total time: 33073ms):
* Trace[107198569]: ---"Objects listed" 33072ms (08:34:00.768)
* Trace[107198569]: [33.073033457s] [33.073033457s] END
* I1210 08:34:29.503249       1 trace.go:205] Trace[852968660]: "Reflector ListAndWatch" name:k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 (10-Dec-2020 08:34:14.673) (total time: 14431ms):
* Trace[852968660]: ---"Objects listed" 14423ms (08:34:00.096)
* Trace[852968660]: [14.431816186s] [14.431816186s] END
* I1210 08:34:29.512056       1 trace.go:205] Trace[530687809]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:34:14.672) (total time: 14839ms):
* Trace[530687809]: ---"Objects listed" 14426ms (08:34:00.099)
* Trace[530687809]: [14.839135769s] [14.839135769s] END
* I1210 08:34:29.545762       1 trace.go:205] Trace[1947675615]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:34:13.840) (total time: 15704ms):
* Trace[1947675615]: ---"Objects listed" 15704ms (08:34:00.545)
* Trace[1947675615]: [15.704762479s] [15.704762479s] END
* I1210 08:34:29.550600       1 trace.go:205] Trace[1237261392]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:34:13.813) (total time: 15736ms):
* Trace[1237261392]: ---"Objects listed" 15736ms (08:34:00.550)
* Trace[1237261392]: [15.736651893s] [15.736651893s] END
* I1210 08:34:29.551005       1 trace.go:205] Trace[1669297276]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:34:13.607) (total time: 15943ms):
* Trace[1669297276]: ---"Objects listed" 15943ms (08:34:00.550)
* Trace[1669297276]: [15.943070102s] [15.943070102s] END
* 
* ==> kube-scheduler [f1b156662850] <==
* I1210 08:47:03.439155       1 registry.go:173] Registering SelectorSpread plugin
* I1210 08:47:03.439192       1 registry.go:173] Registering SelectorSpread plugin
* I1210 08:47:04.170813       1 serving.go:331] Generated self-signed cert in-memory
* W1210 08:47:11.200901       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W1210 08:47:11.200949       1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W1210 08:47:11.200957       1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
* W1210 08:47:11.200961       1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I1210 08:47:11.221895       1 registry.go:173] Registering SelectorSpread plugin
* I1210 08:47:11.221915       1 registry.go:173] Registering SelectorSpread plugin
* I1210 08:47:11.224944       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
* I1210 08:47:11.225412       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I1210 08:47:11.225420       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I1210 08:47:11.225438       1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E1210 08:47:11.228063       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E1210 08:47:11.228255       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E1210 08:47:11.228327       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E1210 08:47:11.228402       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E1210 08:47:11.228474       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E1210 08:47:11.228540       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E1210 08:47:11.228638       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
* E1210 08:47:11.228734       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E1210 08:47:11.228804       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E1210 08:47:11.229784       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
* E1210 08:47:11.229864       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E1210 08:47:11.230247       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
* E1210 08:47:11.231607       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* I1210 08:47:12.550817       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
* I1210 08:51:21.046666       1 trace.go:205] Trace[759072766]: "Scheduling" namespace:kube-system,name:kube-proxy-mp89t (10-Dec-2020 08:51:20.645) (total time: 389ms):
* Trace[759072766]: ---"Basic checks done" 173ms (08:51:00.819)
* Trace[759072766]: ---"Computing predicates done" 212ms (08:51:00.031)
* Trace[759072766]: [389.49898ms] [389.49898ms] END
* I1210 08:54:16.826915       1 trace.go:205] Trace[1442715761]: "Scheduling" namespace:kube-system,name:kube-proxy-tklxj (10-Dec-2020 08:54:15.035) (total time: 979ms):
* Trace[1442715761]: ---"Computing predicates done" 896ms (08:54:00.931)
* Trace[1442715761]: [979.173237ms] [979.173237ms] END
* I1210 08:54:29.412297       1 request.go:645] Throttling request took 1.709821614s, request: POST:https://192.168.49.2:8443/api/v1/namespaces/kube-system/events
* 
* ==> kubelet <==
* -- Logs begin at Thu 2020-12-10 08:46:43 UTC, end at Thu 2020-12-10 11:59:41 UTC. --
* Dec 10 09:29:59 test kubelet[1151]: I1210 09:29:59.831855    1151 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5d4c45f6e04020db3755d5dab08d4b367f2f13cc955b02c16f501321d27c60bf
* Dec 10 09:29:59 test kubelet[1151]: E1210 09:29:59.832127    1151 pod_workers.go:191] Error syncing pod 0138dca9-01f3-4f18-8ec1-beccdf0458fd ("hive-server-0_default(0138dca9-01f3-4f18-8ec1-beccdf0458fd)"), skipping: failed to "StartContainer" for "server" with CrashLoopBackOff: "back-off 1m20s restarting failed container=server pod=hive-server-0_default(0138dca9-01f3-4f18-8ec1-beccdf0458fd)"
* Dec 10 09:30:06 test kubelet[1151]: W1210 09:30:06.579759    1151 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/hive-metastore-0 through plugin: invalid network status for
* Dec 10 09:30:06 test kubelet[1151]: I1210 09:30:06.585659    1151 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 03f4f446bb3a6bb90c13780fccb2e10bbf491fde32ee31567720df13440b45fa
* Dec 10 09:30:06 test kubelet[1151]: I1210 09:30:06.585880    1151 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 541c51b9afb9d8f90efc3b3495b565125211ef27b7e63e82c767410acac2f17c
* Dec 10 09:30:06 test kubelet[1151]: E1210 09:30:06.586040    1151 pod_workers.go:191] Error syncing pod ea1fe809-303d-4119-8baf-780f818ab6ed ("hive-metastore-0_default(ea1fe809-303d-4119-8baf-780f818ab6ed)"), skipping: failed to "StartContainer" for "metastore" with CrashLoopBackOff: "back-off 1m20s restarting failed container=metastore pod=hive-metastore-0_default(ea1fe809-303d-4119-8baf-780f818ab6ed)"
* Dec 10 09:30:07 test kubelet[1151]: W1210 09:30:07.594434    1151 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/hive-metastore-0 through plugin: invalid network status for
* Dec 10 09:30:13 test kubelet[1151]: I1210 09:30:13.833176    1151 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5d4c45f6e04020db3755d5dab08d4b367f2f13cc955b02c16f501321d27c60bf
* Dec 10 09:30:13 test kubelet[1151]: E1210 09:30:13.834649    1151 pod_workers.go:191] Error syncing pod 0138dca9-01f3-4f18-8ec1-beccdf0458fd ("hive-server-0_default(0138dca9-01f3-4f18-8ec1-beccdf0458fd)"), skipping: failed to "StartContainer" for "server" with CrashLoopBackOff: "back-off 1m20s restarting failed container=server pod=hive-server-0_default(0138dca9-01f3-4f18-8ec1-beccdf0458fd)"
* Dec 10 09:30:19 test kubelet[1151]: I1210 09:30:19.230882    1151 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-ldrct" (UniqueName: "kubernetes.io/secret/ea1fe809-303d-4119-8baf-780f818ab6ed-default-token-ldrct") pod "ea1fe809-303d-4119-8baf-780f818ab6ed" (UID: "ea1fe809-303d-4119-8baf-780f818ab6ed")
* Dec 10 09:30:19 test kubelet[1151]: I1210 09:30:19.230922    1151 reconciler.go:196] operationExecutor.UnmountVolume started for volume "hive-config" (UniqueName: "kubernetes.io/configmap/ea1fe809-303d-4119-8baf-780f818ab6ed-hive-config") pod "ea1fe809-303d-4119-8baf-780f818ab6ed" (UID: "ea1fe809-303d-4119-8baf-780f818ab6ed")
* Dec 10 09:30:19 test kubelet[1151]: W1210 09:30:19.231040    1151 empty_dir.go:453] Warning: Failed to clear quota on /var/lib/kubelet/pods/ea1fe809-303d-4119-8baf-780f818ab6ed/volumes/kubernetes.io~configmap/hive-config: ClearQuota called, but quotas disabled
* Dec 10 09:30:19 test kubelet[1151]: I1210 09:30:19.231211    1151 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea1fe809-303d-4119-8baf-780f818ab6ed-hive-config" (OuterVolumeSpecName: "hive-config") pod "ea1fe809-303d-4119-8baf-780f818ab6ed" (UID: "ea1fe809-303d-4119-8baf-780f818ab6ed"). InnerVolumeSpecName "hive-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
* Dec 10 09:30:19 test kubelet[1151]: I1210 09:30:19.241174    1151 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea1fe809-303d-4119-8baf-780f818ab6ed-default-token-ldrct" (OuterVolumeSpecName: "default-token-ldrct") pod "ea1fe809-303d-4119-8baf-780f818ab6ed" (UID: "ea1fe809-303d-4119-8baf-780f818ab6ed"). InnerVolumeSpecName "default-token-ldrct". PluginName "kubernetes.io/secret", VolumeGidValue ""
* Dec 10 09:30:19 test kubelet[1151]: I1210 09:30:19.333209    1151 reconciler.go:319] Volume detached for volume "hive-config" (UniqueName: "kubernetes.io/configmap/ea1fe809-303d-4119-8baf-780f818ab6ed-hive-config") on node "test" DevicePath ""
* Dec 10 09:30:19 test kubelet[1151]: I1210 09:30:19.333235    1151 reconciler.go:319] Volume detached for volume "default-token-ldrct" (UniqueName: "kubernetes.io/secret/ea1fe809-303d-4119-8baf-780f818ab6ed-default-token-ldrct") on node "test" DevicePath ""
* Dec 10 09:30:21 test kubelet[1151]: I1210 09:30:21.808374    1151 reconciler.go:196] operationExecutor.UnmountVolume started for volume "hadoop-config" (UniqueName: "kubernetes.io/configmap/0138dca9-01f3-4f18-8ec1-beccdf0458fd-hadoop-config") pod "0138dca9-01f3-4f18-8ec1-beccdf0458fd" (UID: "0138dca9-01f3-4f18-8ec1-beccdf0458fd")
* Dec 10 09:30:21 test kubelet[1151]: I1210 09:30:21.808411    1151 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-ldrct" (UniqueName: "kubernetes.io/secret/0138dca9-01f3-4f18-8ec1-beccdf0458fd-default-token-ldrct") pod "0138dca9-01f3-4f18-8ec1-beccdf0458fd" (UID: "0138dca9-01f3-4f18-8ec1-beccdf0458fd")
* Dec 10 09:30:21 test kubelet[1151]: I1210 09:30:21.808432    1151 reconciler.go:196] operationExecutor.UnmountVolume started for volume "hive-config" (UniqueName: "kubernetes.io/configmap/0138dca9-01f3-4f18-8ec1-beccdf0458fd-hive-config") pod "0138dca9-01f3-4f18-8ec1-beccdf0458fd" (UID: "0138dca9-01f3-4f18-8ec1-beccdf0458fd")
* Dec 10 09:30:21 test kubelet[1151]: W1210 09:30:21.808524    1151 empty_dir.go:453] Warning: Failed to clear quota on /var/lib/kubelet/pods/0138dca9-01f3-4f18-8ec1-beccdf0458fd/volumes/kubernetes.io~configmap/hive-config: ClearQuota called, but quotas disabled
* Dec 10 09:30:21 test kubelet[1151]: I1210 09:30:21.808695    1151 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0138dca9-01f3-4f18-8ec1-beccdf0458fd-hive-config" (OuterVolumeSpecName: "hive-config") pod "0138dca9-01f3-4f18-8ec1-beccdf0458fd" (UID: "0138dca9-01f3-4f18-8ec1-beccdf0458fd"). InnerVolumeSpecName "hive-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
* Dec 10 09:30:21 test kubelet[1151]: W1210 09:30:21.808736    1151 empty_dir.go:453] Warning: Failed to clear quota on /var/lib/kubelet/pods/0138dca9-01f3-4f18-8ec1-beccdf0458fd/volumes/kubernetes.io~configmap/hadoop-config: ClearQuota called, but quotas disabled
* Dec 10 09:30:21 test kubelet[1151]: I1210 09:30:21.809021    1151 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0138dca9-01f3-4f18-8ec1-beccdf0458fd-hadoop-config" (OuterVolumeSpecName: "hadoop-config") pod "0138dca9-01f3-4f18-8ec1-beccdf0458fd" (UID: "0138dca9-01f3-4f18-8ec1-beccdf0458fd"). InnerVolumeSpecName "hadoop-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
* Dec 10 09:30:21 test kubelet[1151]: I1210 09:30:21.820449    1151 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0138dca9-01f3-4f18-8ec1-beccdf0458fd-default-token-ldrct" (OuterVolumeSpecName: "default-token-ldrct") pod "0138dca9-01f3-4f18-8ec1-beccdf0458fd" (UID: "0138dca9-01f3-4f18-8ec1-beccdf0458fd"). InnerVolumeSpecName "default-token-ldrct". PluginName "kubernetes.io/secret", VolumeGidValue ""
* Dec 10 09:30:21 test kubelet[1151]: I1210 09:30:21.911565    1151 reconciler.go:319] Volume detached for volume "hadoop-config" (UniqueName: "kubernetes.io/configmap/0138dca9-01f3-4f18-8ec1-beccdf0458fd-hadoop-config") on node "test" DevicePath ""
* Dec 10 09:30:21 test kubelet[1151]: I1210 09:30:21.911590    1151 reconciler.go:319] Volume detached for volume "default-token-ldrct" (UniqueName: "kubernetes.io/secret/0138dca9-01f3-4f18-8ec1-beccdf0458fd-default-token-ldrct") on node "test" DevicePath ""
* Dec 10 09:30:21 test kubelet[1151]: I1210 09:30:21.911597    1151 reconciler.go:319] Volume detached for volume "hive-config" (UniqueName: "kubernetes.io/configmap/0138dca9-01f3-4f18-8ec1-beccdf0458fd-hive-config") on node "test" DevicePath ""
* Dec 10 09:30:23 test kubelet[1151]: I1210 09:30:23.210874    1151 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 541c51b9afb9d8f90efc3b3495b565125211ef27b7e63e82c767410acac2f17c
* Dec 10 09:30:23 test kubelet[1151]: I1210 09:30:23.226423    1151 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5d4c45f6e04020db3755d5dab08d4b367f2f13cc955b02c16f501321d27c60bf
* Dec 10 09:30:28 test kubelet[1151]: I1210 09:30:28.547688    1151 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Dec 10 09:30:28 test kubelet[1151]: I1210 09:30:28.678012    1151 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ldrct" (UniqueName: "kubernetes.io/secret/b6f0f873-f00c-4f1c-84d3-ffd2eb9e2170-default-token-ldrct") pod "hue-nhx74" (UID: "b6f0f873-f00c-4f1c-84d3-ffd2eb9e2170")
* Dec 10 09:30:28 test kubelet[1151]: I1210 09:30:28.678060    1151 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b6f0f873-f00c-4f1c-84d3-ffd2eb9e2170-config-volume") pod "hue-nhx74" (UID: "b6f0f873-f00c-4f1c-84d3-ffd2eb9e2170")
* Dec 10 09:30:28 test kubelet[1151]: I1210 09:30:28.678080    1151 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume-extra" (UniqueName: "kubernetes.io/configmap/b6f0f873-f00c-4f1c-84d3-ffd2eb9e2170-config-volume-extra") pod "hue-nhx74" (UID: "b6f0f873-f00c-4f1c-84d3-ffd2eb9e2170")
* Dec 10 09:30:29 test kubelet[1151]: W1210 09:30:29.662575    1151 pod_container_deletor.go:79] Container "a23a0e9a5d81822a5f227882cbbb0aa6668f8a00effec9fc14a7c9d1029899a6" not found in pod's containers
* Dec 10 09:30:29 test kubelet[1151]: W1210 09:30:29.664055    1151 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/hue-nhx74 through plugin: invalid network status for
* Dec 10 09:30:30 test kubelet[1151]: W1210 09:30:30.685128    1151 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/hue-nhx74 through plugin: invalid network status for
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.404362    1151 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: a822a6539a4dc8bd866efbaf165b9a0a861827515acf45cd5e5fc02f9007cbd6
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.425857    1151 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: a822a6539a4dc8bd866efbaf165b9a0a861827515acf45cd5e5fc02f9007cbd6
* Dec 10 09:30:51 test kubelet[1151]: E1210 09:30:51.426486    1151 remote_runtime.go:329] ContainerStatus "a822a6539a4dc8bd866efbaf165b9a0a861827515acf45cd5e5fc02f9007cbd6" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: a822a6539a4dc8bd866efbaf165b9a0a861827515acf45cd5e5fc02f9007cbd6
* Dec 10 09:30:51 test kubelet[1151]: W1210 09:30:51.426523    1151 pod_container_deletor.go:52] [pod_container_deletor] DeleteContainer returned error for (id={docker a822a6539a4dc8bd866efbaf165b9a0a861827515acf45cd5e5fc02f9007cbd6}): failed to get container status "a822a6539a4dc8bd866efbaf165b9a0a861827515acf45cd5e5fc02f9007cbd6": rpc error: code = Unknown desc = Error: No such container: a822a6539a4dc8bd866efbaf165b9a0a861827515acf45cd5e5fc02f9007cbd6
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.530868    1151 reconciler.go:196] operationExecutor.UnmountVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5-config-volume") pod "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5" (UID: "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5")
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.530903    1151 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-ldrct" (UniqueName: "kubernetes.io/secret/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5-default-token-ldrct") pod "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5" (UID: "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5")
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.530924    1151 reconciler.go:196] operationExecutor.UnmountVolume started for volume "config-volume-extra" (UniqueName: "kubernetes.io/configmap/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5-config-volume-extra") pod "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5" (UID: "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5")
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.536458    1151 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5-default-token-ldrct" (OuterVolumeSpecName: "default-token-ldrct") pod "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5" (UID: "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5"). InnerVolumeSpecName "default-token-ldrct". PluginName "kubernetes.io/secret", VolumeGidValue ""
* Dec 10 09:30:51 test kubelet[1151]: W1210 09:30:51.537338    1151 empty_dir.go:453] Warning: Failed to clear quota on /var/lib/kubelet/pods/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5/volumes/kubernetes.io~configmap/config-volume-extra: ClearQuota called, but quotas disabled
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.537448    1151 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5-config-volume-extra" (OuterVolumeSpecName: "config-volume-extra") pod "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5" (UID: "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5"). InnerVolumeSpecName "config-volume-extra". PluginName "kubernetes.io/configmap", VolumeGidValue ""
* Dec 10 09:30:51 test kubelet[1151]: W1210 09:30:51.540247    1151 empty_dir.go:453] Warning: Failed to clear quota on /var/lib/kubelet/pods/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5/volumes/kubernetes.io~configmap/config-volume: ClearQuota called, but quotas disabled
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.540408    1151 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5-config-volume" (OuterVolumeSpecName: "config-volume") pod "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5" (UID: "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.631030    1151 reconciler.go:319] Volume detached for volume "config-volume-extra" (UniqueName: "kubernetes.io/configmap/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5-config-volume-extra") on node "test" DevicePath ""
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.631050    1151 reconciler.go:319] Volume detached for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5-config-volume") on node "test" DevicePath ""
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.631057    1151 reconciler.go:319] Volume detached for volume "default-token-ldrct" (UniqueName: "kubernetes.io/secret/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5-default-token-ldrct") on node "test" DevicePath ""
* Dec 10 09:30:54 test kubelet[1151]: E1210 09:30:54.279509    1151 httpstream.go:251] error forwarding port 8888 to pod 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5, uid : container not running (81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5)
* Dec 10 09:31:06 test kubelet[1151]: E1210 09:31:06.463036    1151 httpstream.go:251] error forwarding port 8888 to pod 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5, uid : Error: No such container: 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5
* Dec 10 09:31:08 test kubelet[1151]: E1210 09:31:08.824356    1151 httpstream.go:251] error forwarding port 8888 to pod 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5, uid : Error: No such container: 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5
* Dec 10 09:31:08 test kubelet[1151]: E1210 09:31:08.824356    1151 httpstream.go:251] error forwarding port 8888 to pod 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5, uid : Error: No such container: 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5
* Dec 10 09:31:09 test kubelet[1151]: E1210 09:31:09.346393    1151 httpstream.go:251] error forwarding port 8888 to pod 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5, uid : Error: No such container: 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5
* Dec 10 09:31:09 test kubelet[1151]: E1210 09:31:09.346668    1151 httpstream.go:251] error forwarding port 8888 to pod 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5, uid : Error: No such container: 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5
* Dec 10 09:31:09 test kubelet[1151]: E1210 09:31:09.530802    1151 httpstream.go:251] error forwarding port 8888 to pod 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5, uid : Error: No such container: 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5
* Dec 10 09:31:40 test kubelet[1151]: E1210 09:31:40.017924    1151 httpstream.go:143] (conn=&{0xc0009eda20 [0xc001668c80] {0 0} 0x1d6e540}, request=16) timed out waiting for streams
* Dec 10 09:31:40 test kubelet[1151]: E1210 09:31:40.412144    1151 httpstream.go:143] (conn=&{0xc0009eda20 [0xc001668c80] {0 0} 0x1d6e540}, request=17) timed out waiting for streams
* 
* ==> kubernetes-dashboard [1a408ee50d62] <==
* 2020/12/10 08:47:15 Using namespace: kubernetes-dashboard
* 2020/12/10 08:47:15 Using in-cluster config to connect to apiserver
* 2020/12/10 08:47:15 Using secret token for csrf signing
* 2020/12/10 08:47:15 Initializing csrf token from kubernetes-dashboard-csrf secret
* 2020/12/10 08:47:15 Starting overwatch
* panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: i/o timeout
* 
* goroutine 1 [running]:
* github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0005243e0)
* 	/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x446
* github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
* 	/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
* github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc00019d500)
* 	/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:501 +0xc6
* github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc00019d500)
* 	/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:469 +0x47
* github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
* 	/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:550
* main.main()
* 	/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:105 +0x20d
* 
* ==> kubernetes-dashboard [8f8c7c337878] <==
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 [2020-12-10T11:54:05Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/10 11:57:42 [2020-12-10T11:57:42Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 192.168.33.1: 
* 2020/12/10 11:57:42 Getting list of namespaces
* 2020/12/10 11:57:42 [2020-12-10T11:57:42Z] Incoming HTTP/1.1 GET /api/v1/pod/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.33.1: 
* 2020/12/10 11:57:42 Getting list of all pods in the cluster
* 2020/12/10 11:57:42 [2020-12-10T11:57:42Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/10 11:57:42 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:42 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:42 Getting pod metrics
* 2020/12/10 11:57:42 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:42 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:42 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:42 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:42 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:42 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:42 [2020-12-10T11:57:42Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/10 11:57:47 [2020-12-10T11:57:47Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 192.168.33.1: 
* 2020/12/10 11:57:47 Getting list of namespaces
* 2020/12/10 11:57:47 [2020-12-10T11:57:47Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/10 11:57:47 [2020-12-10T11:57:47Z] Incoming HTTP/1.1 GET /api/v1/pod/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.33.1: 
* 2020/12/10 11:57:47 Getting list of all pods in the cluster
* 2020/12/10 11:57:47 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:47 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:47 Getting pod metrics
* 2020/12/10 11:57:47 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:47 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:47 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:47 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:47 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:47 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:47 [2020-12-10T11:57:47Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/10 11:57:48 [2020-12-10T11:57:48Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 192.168.33.1: 
* 2020/12/10 11:57:48 Getting list of namespaces
* 2020/12/10 11:57:48 [2020-12-10T11:57:48Z] Incoming HTTP/1.1 GET /api/v1/pod/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.33.1: 
* 2020/12/10 11:57:48 Getting list of all pods in the cluster
* 2020/12/10 11:57:48 [2020-12-10T11:57:48Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/10 11:57:48 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:48 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:48 Getting pod metrics
* 2020/12/10 11:57:48 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:48 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:48 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:48 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:48 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:48 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:48 [2020-12-10T11:57:48Z] Outcoming response to 192.168.33.1 with 200 status code
* 
* ==> storage-provisioner [d438a591e9f6] <==
* I1210 08:50:27.634802       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
* I1210 08:50:46.345596       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
* I1210 08:50:46.377278       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_test_10558a67-fa69-4df2-a92f-8a742b67e889!
* I1210 08:50:46.399083       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e31e7ce0-0694-419e-a1e9-fbb2b8121025", APIVersion:"v1", ResourceVersion:"553457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' test_10558a67-fa69-4df2-a92f-8a742b67e889 became leader
* I1210 08:50:46.478337       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_test_10558a67-fa69-4df2-a92f-8a742b67e889!
* I1210 08:53:25.144639       1 leaderelection.go:288] failed to renew lease kube-system/k8s.io-minikube-hostpath: failed to tryAcquireOrRenew context deadline exceeded
* I1210 08:53:25.127909       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e31e7ce0-0694-419e-a1e9-fbb2b8121025", APIVersion:"v1", ResourceVersion:"553631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' test_10558a67-fa69-4df2-a92f-8a742b67e889 stopped leading
* F1210 08:53:25.785482       1 controller.go:877] leaderelection lost
* 
* ==> storage-provisioner [d6a179f3b8e5] <==
* I1210 08:54:35.741710       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
* I1210 08:54:56.990381       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
* I1210 08:54:56.990732       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_test_78af1074-193b-4c52-92ee-2a823a574973!
* I1210 08:54:56.990676       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e31e7ce0-0694-419e-a1e9-fbb2b8121025", APIVersion:"v1", ResourceVersion:"553771", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' test_78af1074-193b-4c52-92ee-2a823a574973 became leader
* I1210 08:54:57.891081       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_test_78af1074-193b-4c52-92ee-2a823a574973!
* I1210 09:22:16.191174       1 controller.go:1284] provision "default/data-hive-postgresql-0" class "standard": started
* I1210 09:22:16.220351       1 controller.go:1392] provision "default/data-hive-postgresql-0" class "standard": volume "pvc-4f2491ec-d503-43f3-9694-89dee83f488e" provisioned
* I1210 09:22:16.220370       1 controller.go:1409] provision "default/data-hive-postgresql-0" class "standard": succeeded
* I1210 09:22:16.220374       1 volume_store.go:212] Trying to save persistentvolume "pvc-4f2491ec-d503-43f3-9694-89dee83f488e"
* I1210 09:22:16.228338       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-hive-postgresql-0", UID:"4f2491ec-d503-43f3-9694-89dee83f488e", APIVersion:"v1", ResourceVersion:"555466", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/data-hive-postgresql-0"
* I1210 09:22:16.229065       1 volume_store.go:219] persistentvolume "pvc-4f2491ec-d503-43f3-9694-89dee83f488e" saved
* I1210 09:22:16.230433       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-hive-postgresql-0", UID:"4f2491ec-d503-43f3-9694-89dee83f488e", APIVersion:"v1", ResourceVersion:"555466", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-4f2491ec-d503-43f3-9694-89dee83f488e

</details>

Operating system version used: Windows 10

Other

minikube node add  # added one node
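A quick way to re-check whether the cross-node problem is still present would be to compare pod placement and then ping between pods on different nodes; this is only a sketch, and the pod names in angle brackets are illustrative placeholders:

kubectl get nodes -o wide                                        # confirm the added node is Ready
kubectl get pods -o wide                                         # note each test pod's node and IP
kubectl exec <pod-on-node-1> -- ping -c 3 <pod-ip-on-node-2>     # cross-node reachability check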

closed time in 12 minutes

LY1806620741

issue comment kubernetes/minikube

minikube multi-node: pod-to-pod network is not reachable between nodes

Yes, it is a single node. The problem still existed at the time of my last reply, but this issue was raised a long time ago, which means it may no longer be valid.

LY1806620741

comment created time in 13 minutes

pull request comment cri-o/cri-o

pinns: Allow pinning mount namespaces

@lack: The following test failed, say /retest to rerun all failed tests:

Test name        Commit                                    Details  Rerun command
ci/kata-jenkins  54fa04613585b328d92159ef186ed7aabd4a30cb  link     /test kata-containers

<details>

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here. </details> <!-- test report -->

lack

comment created time in 15 minutes

pull request comment cri-o/cri-o

pinns: Allow pinning mount namespaces

@lack: The following tests failed, say /retest to rerun all failed tests:

Test name             Commit                                    Details  Rerun command
ci/prow/e2e-agnostic  54fa04613585b328d92159ef186ed7aabd4a30cb  link     /test e2e-agnostic
ci/prow/e2e-gcp       54fa04613585b328d92159ef186ed7aabd4a30cb  link     /test e2e-gcp
ci/kata-jenkins       54fa04613585b328d92159ef186ed7aabd4a30cb  link     /test kata-containers

Full PR test history. Your PR dashboard.

<details>

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here. </details> <!-- test report -->

lack

comment created time in 15 minutes

pull request comment openshift/origin

Bug 1926834: UPSTREAM: 77165: Increase maxMsgSize for dockershim

@rphillips: The following test failed, say /retest to rerun all failed tests:

Test name             Commit                                    Details  Rerun command
ci/prow/e2e-gcp-crio  6a81b246f96d3ce90c542e56e184f4e4aaeb7ae3  link     /test e2e-gcp-crio

Full PR test history. Your PR dashboard.

<details>

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here. </details> <!-- test report -->

rphillips

comment created time in 18 minutes

pull request comment openshift/origin-aggregated-logging

Update hack/deploy-logging.sh to use CI env vars

@jcantrill: The following test failed, say /retest to rerun all failed tests:

Test name      Commit                                    Details  Rerun command
ci/prow/smoke  fbcb6e382be7066cf02250e24492283adbbc11f7  link     /test smoke

Full PR test history. Your PR dashboard.

<details>

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here. </details> <!-- test report -->

jcantrill

comment created time in 22 minutes

pull request comment openshift/origin-aggregated-logging

Update hack/deploy-logging.sh to use CI env vars

/retest

Please review the full test history for this PR and help us cut down flakes.

jcantrill

comment created time in 22 minutes

pull request comment openshift/cluster-kube-apiserver-operator

Updating ose-cluster-kube-apiserver-operator builder & base images to be consistent with ART

/retest

Please review the full test history for this PR and help us cut down flakes.

openshift-bot

comment created time in 22 minutes

pull request comment openshift/machine-api-operator

[release-4.7] BUG 1929721: Add SecurityProfile.EncryptionAtHost parameter to enable host-based VM encryption

/retest

Please review the full test history for this PR and help us cut down flakes.

mjudeikis

comment created time in 22 minutes

pull request comment openshift/cluster-monitoring-operator

Bug 1923984: Refactor jsonnet to include latest kube-prometheus

/retest

Please review the full test history for this PR and help us cut down flakes.

paulfantom

comment created time in 22 minutes

Pull request review comment gentoo/gentoo

libvirt: Version update to 7.1.0

+# Copyright 1999-2021 Gentoo Authors+# Distributed under the terms of the GNU General Public License v2++EAPI=7++PYTHON_COMPAT=( python3_{7,8,9} )++inherit meson bash-completion-r1 eutils linux-info python-any-r1 readme.gentoo-r1 systemd++if [[ ${PV} = *9999* ]]; then+	inherit git-r3+	EGIT_REPO_URI="https://gitlab.com/libvirt/libvirt.git"+	SRC_URI=""+	SLOT="0"+else+	SRC_URI="https://libvirt.org/sources/${P}.tar.xz"+	KEYWORDS="~amd64 ~arm64 ~ppc64 ~x86"+	SLOT="0/${PV}"+fi++DESCRIPTION="C toolkit to manipulate virtual machines"+HOMEPAGE="https://www.libvirt.org/"+LICENSE="LGPL-2.1"+IUSE="+	apparmor audit +caps dtrace firewalld fuse glusterfs iscsi+	iscsi-direct +libvirtd lvm libssh lxc nfs nls numa openvz+	parted pcap policykit +qemu rbd sasl selinux +udev+	virtualbox +virt-network wireshark-plugins xen zfs+"++REQUIRED_USE="+	firewalld? ( virt-network )+	libvirtd? ( || ( lxc openvz qemu virtualbox xen ) )+	lxc? ( caps libvirtd )+	openvz? ( libvirtd )+	qemu? ( libvirtd )+	virt-network? ( libvirtd )+	virtualbox? ( libvirtd )+	xen? ( libvirtd )"++BDEPEND="+	app-text/xhtml1+	dev-lang/perl+	dev-libs/libxslt+	dev-perl/XML-XPath+	dev-python/docutils+	virtual/pkgconfig"++# gettext.sh command is used by the libvirt command wrappers, and it's+# non-optional, so put it into RDEPEND.+# We can use both libnl:1.1 and libnl:3, but if you have both installed, the+# package will use 3 by default. Since we don't have slot pinning in an API,+# we must go with the most recent+RDEPEND="+	acct-user/qemu+	app-misc/scrub+	>=dev-libs/glib-2.48.0+	dev-libs/libgcrypt:0+	dev-libs/libnl:3+	>=dev-libs/libxml2-2.7.6+	>=net-analyzer/openbsd-netcat-1.105-r1+	>=net-libs/gnutls-1.0.25:0=+	net-libs/libssh2+	net-libs/libtirpc+	net-libs/rpcsvc-proto+	>=net-misc/curl-7.18.0+	sys-apps/dbus+	sys-apps/dmidecode+	sys-devel/gettext+	sys-libs/ncurses:0=+	sys-libs/readline:=+	virtual/acl+	apparmor? ( sys-libs/libapparmor )+	audit? ( sys-process/audit )+	caps? ( sys-libs/libcap-ng )+	dtrace? ( dev-util/systemtap )+	firewalld? ( >=net-firewall/firewalld-0.6.3 )+	fuse? ( sys-fs/fuse:0= )+	glusterfs? ( >=sys-cluster/glusterfs-3.4.1 )+	iscsi? ( sys-block/open-iscsi )+	iscsi-direct? ( >=net-libs/libiscsi-1.18.0 )+	libssh? ( net-libs/libssh )+	lvm? ( >=sys-fs/lvm2-2.02.48-r2[-device-mapper-only(-)] )+	lxc? ( !sys-apps/systemd[cgroup-hybrid(-)] )+	nfs? ( net-fs/nfs-utils )+	numa? (+		>sys-process/numactl-2.0.2+		sys-process/numad+	)+	parted? (+		>=sys-block/parted-1.8[device-mapper]+		sys-fs/lvm2[-device-mapper-only(-)]+	)+	pcap? ( >=net-libs/libpcap-1.0.0 )+	policykit? (+		acct-group/libvirt+		>=sys-auth/polkit-0.9+	)+	qemu? (+		>=app-emulation/qemu-1.5.0+		dev-libs/yajl+	)+	rbd? ( sys-cluster/ceph )+	sasl? ( dev-libs/cyrus-sasl )+	selinux? ( >=sys-libs/libselinux-2.0.85 )+	virt-network? (+		net-dns/dnsmasq[dhcp,ipv6,script]+		net-firewall/ebtables+		>=net-firewall/iptables-1.4.10[ipv6]+		net-misc/radvd+		sys-apps/iproute2[-minimal]+	)+	wireshark-plugins? ( net-analyzer/wireshark:= )+	xen? (+		>=app-emulation/xen-4.6.0+		app-emulation/xen-tools:=+	)+	udev? (+		virtual/libudev+		>=x11-libs/libpciaccess-0.10.9+	)+	zfs? 
( sys-fs/zfs )"++DEPEND="${BDEPEND}+	${RDEPEND}+	${PYTHON_DEPS}"++PATCHES=(+	"${FILESDIR}"/${PN}-6.0.0-fix_paths_in_libvirt-guests_sh.patch+	"${FILESDIR}"/${PN}-6.7.0-do-not-use-sysconfig.patch+	"${FILESDIR}"/${PN}-6.7.0-doc-path.patch+	"${FILESDIR}"/${PN}-6.7.0-fix-paths-for-apparmor.patch+)++pkg_setup() {+	# Check kernel configuration:+	CONFIG_CHECK=""+	use fuse && CONFIG_CHECK+="+		~FUSE_FS"++	use lvm && CONFIG_CHECK+="+		~BLK_DEV_DM+		~DM_MULTIPATH+		~DM_SNAPSHOT"++	use lxc && CONFIG_CHECK+="+		~BLK_CGROUP+		~CGROUP_CPUACCT+		~CGROUP_DEVICE+		~CGROUP_FREEZER+		~CGROUP_NET_PRIO+		~CGROUP_PERF+		~CGROUPS+		~CGROUP_SCHED+		~CPUSETS+		~IPC_NS+		~MACVLAN+		~NAMESPACES+		~NET_CLS_CGROUP+		~NET_NS+		~PID_NS+		~POSIX_MQUEUE+		~SECURITYFS+		~USER_NS+		~UTS_NS+		~VETH+		~!GRKERNSEC_CHROOT_MOUNT+		~!GRKERNSEC_CHROOT_DOUBLE+		~!GRKERNSEC_CHROOT_PIVOT+		~!GRKERNSEC_CHROOT_CHMOD+		~!GRKERNSEC_CHROOT_CAPS"++	kernel_is lt 4 7 && use lxc && CONFIG_CHECK+="+		~DEVPTS_MULTIPLE_INSTANCES"++	use virt-network && CONFIG_CHECK+="+		~BRIDGE_EBT_MARK_T+		~BRIDGE_NF_EBTABLES+		~NETFILTER_ADVANCED+		~NETFILTER_XT_CONNMARK+		~NETFILTER_XT_MARK+		~NETFILTER_XT_TARGET_CHECKSUM+		~IP_NF_FILTER+		~IP_NF_MANGLE+		~IP_NF_NAT+		~IP_NF_TARGET_MASQUERADE+		~IP6_NF_FILTER+		~IP6_NF_MANGLE+		~IP6_NF_NAT"+	# Bandwidth Limiting Support+	use virt-network && CONFIG_CHECK+="+		~BRIDGE_EBT_T_NAT+		~IP_NF_TARGET_REJECT+		~NET_ACT_POLICE+		~NET_CLS_FW+		~NET_CLS_U32+		~NET_SCH_HTB+		~NET_SCH_INGRESS+		~NET_SCH_SFQ"++	# Handle specific kernel versions for different features+	kernel_is lt 3 6 && CONFIG_CHECK+=" ~CGROUP_MEM_RES_CTLR"+	if kernel_is ge 3 6; then+		CONFIG_CHECK+=" ~MEMCG ~MEMCG_SWAP "+		kernel_is lt 4 5 && CONFIG_CHECK+=" ~MEMCG_KMEM "+	fi++	ERROR_USER_NS="Optional depending on LXC configuration."++	if [[ -n ${CONFIG_CHECK} ]]; then+		linux-info_pkg_setup+	fi++	python-any-r1_pkg_setup+}++src_prepare() {+	touch "${S}/.mailmap" || die++	default+	python_fix_shebang .++	# Tweak the init script:+	cp "${FILESDIR}/libvirtd.init-r19" "${S}/libvirtd.init" || die+	sed -e "s/USE_FLAG_FIREWALLD/$(usex firewalld 'need firewalld' '')/" \+		-i "${S}/libvirtd.init" || die "sed failed"+}++src_configure() {+	local emesonargs=(+		$(meson_feature apparmor)+		$(meson_use apparmor apparmor_profiles)+		$(meson_feature audit)+		$(meson_feature caps capng)+		$(meson_feature dtrace)+		$(meson_feature firewalld)+		$(meson_feature fuse)+		$(meson_feature glusterfs)+		$(meson_feature glusterfs storage_gluster)+		$(meson_feature iscsi storage_iscsi)+		$(meson_feature iscsi-direct storage_iscsi_direct)+		$(meson_feature libvirtd driver_libvirtd)+		$(meson_feature libssh)+		$(meson_feature lvm storage_lvm)+		$(meson_feature lvm storage_mpath)+		$(meson_feature lxc driver_lxc)+		$(meson_feature nls)+		$(meson_feature numa numactl)+		$(meson_feature numa numad)+		$(meson_feature openvz driver_openvz)+		$(meson_feature parted storage_disk)+		$(meson_feature pcap libpcap)+		$(meson_feature policykit polkit)+		$(meson_feature qemu driver_qemu)+		$(meson_feature qemu yajl)+		$(meson_feature rbd storage_rbd)+		$(meson_feature sasl)+		$(meson_feature selinux)+		$(meson_feature udev)+		$(meson_feature virt-network driver_network)+		$(meson_feature virtualbox driver_vbox)+		$(meson_feature wireshark-plugins wireshark_dissector)+		$(meson_feature xen driver_libxl)+		$(meson_feature zfs storage_zfs)++		-Dnetcf=disabled+		-Dsanlock=disabled++		-Ddriver_esx=enabled+		-Dinit_script=systemd+		-Dqemu_user=$(usex caps qemu root)+		-Dqemu_group=$(usex caps qemu root)+		
-Ddriver_remote=enabled
+		-Dstorage_fs=enabled
+		-Ddriver_vmware=enabled
+
+		--localstatedir="${EPREFIX}/var"
+		-Drunstatedir="${EPREFIX}/run"
+	)
+
+	meson_src_configure
+}
+
+src_test() {
+	export VIR_TEST_DEBUG=1
+	meson_src_test
+}
+
+src_install() {
+	meson_src_install
+
+	# Remove bogus, empty directories. They are either not used, or
+	# libvirtd is able to create them on demand
+	rm -rf "${D}"/etc/sysconfig || die
+	rm -rf "${D}"/var || die
+	rm -rf "${D}"/run || die
+
+	# Fix up doc paths for revisions
+	if [ $PV != $PVR ]; then

[[ ]]
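Presumably this asks for bash's [[ ]] keyword instead of the POSIX [ ] builtin, which is common ebuild style and avoids word splitting of unquoted variables; a minimal sketch of the suggested form, where only the test syntax changes and the body is elided:

	# Fix up doc paths for revisions
	if [[ ${PV} != "${PVR}" ]]; then
		# [[ ]] performs no word splitting on ${PV}; quoting ${PVR} makes the
		# comparison a literal string match rather than a glob pattern match.
		: # body unchanged
	fi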

jpds

comment created time in 26 minutes