Unable to use a local filesystem state store

1. What kops version are you running? The command kops version will display this information.

1.11.0

2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.

N/A

3. What cloud provider are you using?

AWS

4. What commands did you run? What is the simplest way to reproduce this issue?

kops create cluster --state 'file:///tmp/kops-state' --zones us-east-1a foo.cluster.k8s.local

5. What happened after the commands executed?

State Store: Invalid value: "file:///tmp/kops-state": Unable to read state store.
Please use a valid state store when setting --state or KOPS_STATE_STORE env var.
For example, a valid value follows the format s3://<bucket>.
Trailing slash will be trimmed.

The issue is caused by vfs.IsClusterReadable returning false for all filesystem state stores when called from factory.Clientset.
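The effect of that check can be sketched as follows. This is an illustrative reconstruction, not the actual kops source: the function name and the exact scheme whitelist are assumptions, but it shows why any file:// state store is rejected before its contents are ever read.

```go
package main

import (
	"fmt"
	"strings"
)

// isClusterReadable is a hypothetical stand-in for the gate that
// factory.Clientset applies via vfs.IsClusterReadable: only state-store
// paths that every node in the cluster could read (e.g. s3://) pass.
// Local-only schemes such as file:// are rejected unconditionally,
// regardless of whether the path exists or is readable locally.
func isClusterReadable(statePath string) bool {
	return strings.HasPrefix(statePath, "s3://")
}

func main() {
	for _, p := range []string{"s3://my-bucket", "file:///tmp/kops-state"} {
		fmt.Printf("%s cluster-readable: %v\n", p, isClusterReadable(p))
	}
}
```

Under this logic the error above is produced for every file:// value, which matches the observed behavior: the check never inspects the directory itself.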

6. What did you expect to happen?

Not to get an error about being unable to read the state store.

7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.

Irrelevant

8. Please run the commands with most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here.

I0213 14:25:54.779621   34407 factory.go:68] state store file:///tmp/kops-state

State Store: Invalid value: "file:///tmp/kops-state": Unable to read state store.
Please use a valid state store when setting --state or KOPS_STATE_STORE env var.
For example, a valid value follows the format s3://<bucket>.
Trailing slash will be trimmed.

8. Anything else we need to know?

The documentation specifically claims that local filesystem state storage is supported with the file:// protocol.

The bug was triggered when attempting to build a CI workflow around:

  1. sync remote S3 state bucket to a local directory
  2. change .spec.configBase to refer to the local directory
  3. run kops replace -f <candidate.yaml> --state file:///path/to/local/state/dir
  4. run kops update cluster <cluster_name> --state file:///path/to/local/state/dir

In a CI workflow, this would let us preview (for example, during a pull request) what kops update cluster would do, without kops replace ever touching the actual remote S3 state bucket.


Answer from qrilka:

The docs at https://github.com/kubernetes/kops/blob/master/docs/state.md list many supported state stores, but in the source code it looks like SSH, local FS, and MemFS are explicitly disallowed.
