Anusha Ragunathan (anusha-ragunathan), Docker, San Francisco

Pull request review comment docker/docker.github.io

Add note on CSI drivers

 on the cluster.
 6. Perform static or dynamic provisioning of PVs using the CSI plugin as the provisioner. For details on how
 to provision volumes for your specific storage provider, refer to the storage provider’s user manual.
+### Certified CSI drivers
+
+The following table lists the UCP certified CSI drivers.
+
+| Partner name | Kubernetes on Docker Enterprise 3.0 |
+|--------------|-------------------------------------|
+| NetApp       | Certified (Trident - CSI)           |
+| EMC/Dell     | Certified (VxFlexOS CSI driver)     |
+| VMware       | Certified (CSI)                     |
+| Portworx     | Certified (CSI)                     |
+| Nexenta      | Certified (CSI)                     |
+| Blockbridge  | Certified (CSI)                     |
+| Storidge     | Certified (CSI)                     |

Storidge? Is that a real partner? Haven't heard that one before.

traci-morrison

comment created time in 12 days
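
For context on the provisioning step in the hunk above: dynamic provisioning means pointing a StorageClass at the CSI driver's provisioner name and letting PVCs trigger volume creation. A minimal sketch, assuming a hypothetical provisioner name `csi.example.com` (the real name comes from the storage vendor's documentation):

```yaml
# StorageClass that delegates dynamic provisioning to a CSI driver.
# "csi.example.com" is a placeholder; use the provisioner name your
# storage vendor documents for their CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-fast
provisioner: csi.example.com
---
# A PVC against that class; the CSI driver provisions a PV on demand.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-claim
spec:
  storageClassName: csi-fast
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```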

Pull request review comment docker/docker.github.io

[WIP] Add note on CSI drivers

 Docker Enterprise 3.0 supports version 1.0+ of the CSI specification. Therefore,

 ![Kubernetes and CSI components](/ee/ucp/images/csi-plugins.png){: .with-border}

-**Note**: Docker Enterprise does not provide CSI drivers. CSI drivers are provided by enterprise storage vendors.
-Kubernetes does not enforce a specific procedure for how Storage Providers (SP) should bundle and distribute CSI drivers.
+> Note
+>
+> Docker Enterprise does not provide CSI drivers. CSI drivers are provided by enterprise storage vendors. Kubernetes does not enforce a specific procedure for how Storage Providers (SP) should bundle and distribute CSI drivers.

 Review the [Kubernetes CSI Developer Documentation](https://kubernetes-csi.github.io/docs/) for CSI architecture,
 security, and deployment details.

 ## Prerequisites

-1. Select a storage provider from the list of [available CSI drivers](https://kubernetes-csi.github.io/docs/drivers.html)
-or as documented by your storage vendor.
+1. Select a CSI driver to use with Kubernetes. Contact Docker for a list of certified drivers.

Great. Let's do that, please.

traci-morrison

comment created time in 19 days

Pull request review comment docker/docker.github.io

[WIP] Add vSphere Volumes section

+---
+title: Configuring vSphere Volumes for Kubernetes
+description: Learn how to add persistent storage to your Docker Enterprise clusters using vSphere Volumes.
+keywords: Universal Control Plane, UCP, Docker Enterprise, Kubernetes, storage, volume
+---
+
+The [vSphere Storage for Kubernetes driver](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/index.html) enables customers to address persistent storage requirements for Kubernetes pods in vSphere environments. The driver allows you to create a persistent volume on a Virtual Machine File System (VMFS) and use it to manage persistent storage requirements independent of pod and VM lifecycle.
+
+> Note
+>
+> Of the three main storage backends offered by vSphere on Kubernetes (VMFS, vSAN, and NFS), we support VMFS.
+
+You can use vSphere Cloud Provider to manage storage with Kubernetes in UCP 3.1 and later. This includes support for:
+
+* Volumes
+* Persistent volumes
+* Storage classes and provisioning volumes
+
+## Prerequisites
+
+* Ensure that `vsphere.conf` is populated per the [vSphere Cloud Provider Configuration Deployment Guide](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/existing.html#create-the-vsphere-cloud-config-file-vsphereconf).
+* The `disk.EnableUUID` value on the worker VMs must be set to `True`.
+
+## Configure for Kubernetes
+
+Kubernetes cloud providers provide a method of provisioning cloud resources through Kubernetes via the `--cloud-provider` option. This option ensures that the kubelet is aware that it must be initialized by `ucp-kube-controller-manager` before it is scheduled any work.
+
+```bash
+docker container run --rm -it --name ucp -e REGISTRY_USERNAME=$REGISTRY_USERNAME -e REGISTRY_PASSWORD=$REGISTRY_PASSWORD \
+  -v /var/run/docker.sock:/var/run/docker.sock \
+  "dockereng/ucp:3.1.0-tp2" \
+  install \
+  --host-address <HOST_ADDR> \
+  --admin-username admin \
+  --admin-password XXXXXXXX \
+  --cloud-provider=vsphere \
+  --image-version latest:
+```
+
+## Create a StorageClass
+
+1. Create a StorageClass with a user-specified disk format.
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: fast
+provisioner: kubernetes.io/vsphere-volume
+parameters:
+  diskformat: zeroedthick
+```
+
+`diskformat` can be `thin`, `zeroedthick`, or `eagerzeroedthick`. The default format is `thin`.
+
+2. Create a StorageClass with a disk format on a user-specified datastore.
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: fast
+provisioner: kubernetes.io/vsphere-volume
+parameters:
+  diskformat: zeroedthick
+  datastore: VSANDatastore
+```
+
+You can also specify the `datastore` in the StorageClass; this field is optional. The volume is created on the datastore named in the StorageClass, in this case `VSANDatastore`. If no datastore is specified, the volume is created on the datastore named in the vSphere config file used to initialize the vSphere Cloud Provider.
+
+For more information on Kubernetes storage classes, see [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/).
+
+## Deploy vSphere Volumes
+
+After you create a StorageClass, you can create [PersistentVolumes (PV)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#introduction) that deploy volumes attached to hosts and mounted inside pods. A PersistentVolumeClaim (PVC) is a claim for storage resources that is bound to a PV when storage resources are granted.
+
+We recommend that you use the StorageClass and PVC resources, as these abstraction layers provide more portability as well as control over the storage layer across environments.
+
+To deploy vSphere volumes:
+
+1. [Create a PVC from the plugin](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/policy-based-mgmt.html). When you define a PVC to use the storage class, a PV is created and bound.
+2. [Refer to the PVC from the Pod](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/policy-based-mgmt.html).

Should be okay. I'd leave out "in the configuration file". In short, you can change it to "Create a reference to the PVC from the pod".

traci-morrison

comment created time in 19 days
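
To make the two deployment steps in the hunk above concrete, here is a minimal sketch of a PVC bound to the `fast` StorageClass from the hunk, and a pod referencing that claim. The claim name, pod name, image, and mount path are illustrative, not from the PR:

```yaml
# PVC against the "fast" vSphere StorageClass defined above;
# a vSphere volume is provisioned and bound when the claim is created.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vsphere-claim        # illustrative name
spec:
  storageClassName: fast
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
# Pod that creates a reference to the PVC, per the review suggestion.
apiVersion: v1
kind: Pod
metadata:
  name: pvpod                # illustrative name
spec:
  containers:
  - name: app
    image: nginx             # illustrative image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: vsphere-claim
```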

Pull request review comment docker/docker.github.io

[WIP] Add note on CSI drivers

 Docker Enterprise 3.0 supports version 1.0+ of the CSI specification. Therefore,

 ![Kubernetes and CSI components](/ee/ucp/images/csi-plugins.png){: .with-border}

-**Note**: Docker Enterprise does not provide CSI drivers. CSI drivers are provided by enterprise storage vendors.
-Kubernetes does not enforce a specific procedure for how Storage Providers (SP) should bundle and distribute CSI drivers.
+> Note
+>
+> Docker Enterprise does not provide CSI drivers. CSI drivers are provided by enterprise storage vendors. Kubernetes does not enforce a specific procedure for how Storage Providers (SP) should bundle and distribute CSI drivers.

 Review the [Kubernetes CSI Developer Documentation](https://kubernetes-csi.github.io/docs/) for CSI architecture,
 security, and deployment details.

 ## Prerequisites

-1. Select a storage provider from the list of [available CSI drivers](https://kubernetes-csi.github.io/docs/drivers.html)
-or as documented by your storage vendor.
+1. Select a CSI driver to use with Kubernetes. Contact Docker for a list of certified drivers.

Contacting Docker every time a customer wants to know about the list of certified drivers is very cumbersome. Why don't we list the drivers per UCP release in this doc, or create another link and reference it here?

traci-morrison

comment created time in 19 days

pull request comment docker/docker.github.io

Add vSphere Volumes section

@traci-morrison: can you confirm that this doc will be at the same level as https://docs.docker.com/ee/ucp/kubernetes/storage/use-iscsi/ in docs.docker.com?

traci-morrison

comment created time in 22 days

push event anusha-ragunathan/external-storage

Anusha Ragunathan

commit sha c6780fe82d9d2895735f0ed0ef829001656c2dc5

Update sample PVC

Signed-off-by: Anusha Ragunathan <anusha.ragunathan@docker.com>

view details

push time in a month

Pull request review comment kubernetes-incubator/external-storage

Add iSCSI FlexVol support.

+kind: PersistentVolumeClaim
+apiVersion: v1
+metadata:
+  name: myclaim
+  annotations:
+    volume.beta.kubernetes.io/storage-class: "iscsi-targetd-vg-targetd"

Thanks for your review, @jingxu97. The pattern is from https://github.com/kubernetes-incubator/external-storage/blob/master/iscsi/targetd/kubernetes/iscsi-provisioner-pvc.yaml. It could be from a sample for an earlier version of k8s. Let me fix that.

anusha-ragunathan

comment created time in a month
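
For reference, the beta annotation shown in the hunk above dates from before Kubernetes 1.6; the current form uses the `storageClassName` field instead. A minimal sketch of the updated claim (the access mode and size are illustrative additions needed for a complete PVC):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  # storageClassName replaces the deprecated
  # volume.beta.kubernetes.io/storage-class annotation
  storageClassName: iscsi-targetd-vg-targetd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```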

Pull request review comment kubernetes-incubator/external-storage

Add iSCSI FlexVol support.

 func (p *iscsiProvisioner) getAccessModes() []v1.PersistentVolumeAccessMode {
 	}
 }

+func createPVSource(options controller.VolumeOptions, portals []string, lun int32) v1.PersistentVolumeSource {
+	if options.PVC.Annotations["os"] == "windows" {
+		return v1.PersistentVolumeSource{
+			FlexVolume: &v1.FlexPersistentVolumeSource{
+				Driver: "microsoft.com/iscsi.cmd",

Instead of forking, how about making the flexvol plugin's Driver field configurable from the storage class? Simpler.

anusha-ragunathan

comment created time in a month
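
A sketch of what that suggestion might look like from the user's side, assuming the provisioner reads a hypothetical `flexVolumeDriver` parameter from the StorageClass instead of hardcoding the driver name. The parameter name and provisioner value below are assumptions for illustration, not the PR's actual API:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: iscsi-targetd-flexvol
provisioner: iscsi-targetd        # assumed provisioner name
parameters:
  # Hypothetical parameter the provisioner would read to populate
  # FlexPersistentVolumeSource.Driver, instead of hardcoding it.
  flexVolumeDriver: "microsoft.com/iscsi.cmd"
  fsType: ntfs
```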

pull request comment kubernetes-incubator/external-storage

Add iSCSI FlexVol support.

Thanks for the review, @pjh. iSCSI FlexVolume support does not exist for Linux. It's not needed, since Kubernetes provides in-tree support for what we are trying to achieve with FlexVols here.

anusha-ragunathan

comment created time in a month
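
For comparison, this is the in-tree iSCSI volume source the comment refers to, which the Linux kubelet handles without any FlexVolume plugin. The portal, IQN, and sizing values are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  # In-tree iSCSI support: the Linux kubelet performs discovery,
  # attach, and mount of this LUN itself.
  iscsi:
    targetPortal: 10.0.0.10:3260          # illustrative portal
    iqn: iqn.2003-01.com.example:target1  # illustrative IQN
    lun: 0
    fsType: ext4
    readOnly: false
```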

pull request comment kubernetes-incubator/external-storage

Add iSCSI FlexVol support.

Thanks for the review @ddebroy

Agree that the annotation could go on either the PVC or the storage class. I found embedding the OS specifics in the PVC to be more logical: the user wants a claim specific to Windows workloads, so she/he adds the "os: windows" annotation, and the storage class remains OS-agnostic. The claim is used in the workload, so it seems like the more natural object to hold this information.

anusha-ragunathan

comment created time in 2 months
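
A minimal sketch of the split being argued for: an OS-agnostic storage class plus a claim that carries the OS hint. The os: "windows" annotation is the one from the PR; the claim name, class name, and sizing are illustrative:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: win-claim             # illustrative name
  annotations:
    # The claim, not the storage class, declares the Windows intent,
    # so the same class can serve Linux and Windows workloads.
    os: "windows"
spec:
  storageClassName: iscsi-targetd-flexvol   # assumed class name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```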

pull request comment kubernetes-incubator/external-storage

Add iSCSI FlexVol support.

/assign @jsafrane

anusha-ragunathan

comment created time in 2 months

pull request comment kubernetes-incubator/external-storage

Add iSCSI FlexVol support.

cc @ddebroy

anusha-ragunathan

comment created time in 2 months

PR opened kubernetes-incubator/external-storage

Add iSCSI FlexVol support.

Background: The external targetd provisioner allocates targetd-based LUNs. These LUNs are eventually mounted into pods by the kubelet. The in-tree iSCSI support in the kubelet allows mounting the iSCSI LUNs into pods.

Note that the kubelet's iSCSI in-tree support (discovery, attach, mount) is available on Linux workers only. Although the Windows kubelet does not provide in-tree iSCSI support, mounting of iSCSI volumes can be accomplished using Windows Flexvolume plugins: https://github.com/microsoft/K8s-Storage-Plugins

Proposed change: LUNs provisioned by the targetd provisioner can be mounted by Windows iSCSI FlexVolume plugins by creating a FlexPersistentVolumeSource for the PV source. This allows Windows pods to use:

- the targetd external provisioner for provisioning.
- Windows flexvol plugins for mounting.

Sample yamls:

1. iscsi/targetd/kubernetes/iscsi-provisioner-d-win.yaml. Note that deploying this on a mixed cluster of Windows and Linux workers requires qualifying that the provisioner runs on Linux only, so set the nodeSelector to "beta.kubernetes.io/os: linux". TODO: the provisioner image is set to aragunathan/iscsi-controller:latest. Once this PR merges and official images are built, I'll send a follow-on PR to update the yaml with the official image.
2. iscsi/targetd/kubernetes/iscsi-provisioner-flexvol-class.yaml. Note "fsType: ntfs" for Windows mounts/workloads.
3. iscsi/targetd/kubernetes/iscsi-provisioner-pvc-win.yaml. Note that the os: "windows" annotation should be added for PVCs.

Testing:
Tested on a kubernetes 1.15.0 cluster.
- Set up the targetd server
- Set up flexvol scripts on Windows workers per https://github.com/microsoft/K8s-Storage-Plugins
- Using PowerShell, get the Windows worker's iSCSI IQN:
  (Get-InitiatorPort).NodeAddress
  This IQN is added to the storage class yaml.
- Set up the targetd account and secret
- Deployed the storage class: iscsi-provisioner-flexvol-class.yaml
- Deployed the PVC: iscsi-provisioner-pvc-win.yaml
- Created a pod to use the claim and observed LUNs mounted successfully.

Signed-off-by: Anusha Ragunathan anusha.ragunathan@docker.com

+150 -15

0 comment

4 changed files

pr created time in 2 months
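
To round out the testing notes above, a sketch of the final step: a Windows pod consuming the claim. The pod name, image, and mount path are illustrative, and the claim name assumes the one from the sample yamls:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: win-iscsi-test        # illustrative name
spec:
  nodeSelector:
    beta.kubernetes.io/os: windows   # schedule onto a Windows worker
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2019   # illustrative image
    volumeMounts:
    - name: data
      mountPath: "C:\\data"
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: myclaim      # assumed claim name from the sample yaml
```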

push event anusha-ragunathan/external-storage

Anusha Ragunathan

commit sha 162a568fdebc4cd540336da93fb9d2876ff500b9

Add iSCSI FlexVol support.

Background: The external targetd provisioner allocates targetd-based LUNs. These LUNs are eventually mounted into pods by the kubelet. The in-tree iSCSI support in the kubelet allows mounting the iSCSI LUNs into pods. It is important to note that the kubelet's iSCSI in-tree support (discovery, attach, mount) is available on Linux workers only. Although the Windows kubelet does not provide in-tree iSCSI support, mounting of iSCSI volumes can be accomplished using Windows Flexvolume plugins: https://github.com/microsoft/K8s-Storage-Plugins

Proposed change: LUNs provisioned by the targetd provisioner can be mounted by Windows iSCSI FlexVolume plugins by creating a FlexPersistentVolumeSource for the PV source. This allows Windows pods to use:
- the targetd external provisioner for provisioning.
- Windows flexvol plugins for mounting.

UX changes: If a user wishes to provision for Windows workers, the PVC should have the os: "windows" annotation.

Sample yamls:
1. iscsi/targetd/kubernetes/iscsi-provisioner-d-win.yaml. Note that deploying this on a mixed cluster of Windows and Linux workers requires qualifying that the provisioner runs on Linux only, so set the nodeSelector to "beta.kubernetes.io/os: linux". TODO: the provisioner image is set to aragunathan/iscsi-controller:latest. Once this PR merges and official images are built, I'll send a follow-on PR to update the yaml with the official image.
2. iscsi/targetd/kubernetes/iscsi-provisioner-flexvol-class.yaml. Note "fsType: ntfs" for Windows mounts/workloads.
3. iscsi/targetd/kubernetes/iscsi-provisioner-pvc-win.yaml. Note that the os: "windows" annotation should be added for PVCs.

Testing: Tested on a Kubernetes 1.15.0 cluster.
- Set up the targetd server
- Set up flexvol scripts on Windows workers per https://github.com/microsoft/K8s-Storage-Plugins
- Using PowerShell, get the Windows worker's iSCSI IQN: (Get-InitiatorPort).NodeAddress. This IQN is added to the storage class yaml.
- Set up the targetd account and secret
- Deployed the storage class: iscsi-provisioner-flexvol-class.yaml
- Deployed the PVC: iscsi-provisioner-pvc-win.yaml
- Created a pod to use the claim and observed LUNs mounted successfully.

Signed-off-by: Anusha Ragunathan <anusha.ragunathan@docker.com>

view details

push time in 2 months

push event anusha-ragunathan/external-storage

Anusha Ragunathan

commit sha b3ba38715547a2a92fec26c792dbd9e9c4d939b3

Add iSCSI FlexVol support.

Background: The external targetd provisioner allocates targetd-based LUNs. These LUNs are eventually mounted into pods by the kubelet. The in-tree iSCSI support in the kubelet allows mounting the iSCSI LUNs into pods. It is important to note that the kubelet's iSCSI in-tree support (discovery, attach, mount) is available on Linux workers only. Although the Windows kubelet does not provide in-tree iSCSI support, mounting of iSCSI volumes can be accomplished using Windows Flexvolume plugins: https://github.com/microsoft/K8s-Storage-Plugins

Proposed change: LUNs provisioned by the targetd provisioner can be mounted by Windows iSCSI FlexVolume plugins by creating a FlexPersistentVolumeSource for the PV source. This allows Windows pods to use:
- the targetd external provisioner for provisioning.
- Windows flexvol plugins for mounting.

UX changes: If a user wishes to provision for Windows workers, the PVC should have the os: "windows" annotation.

Sample yamls:
1. iscsi/targetd/kubernetes/iscsi-provisioner-d-win.yaml. Note that deploying this on a mixed cluster of Windows and Linux workers requires qualifying that the provisioner runs on Linux only, so set the nodeSelector to "beta.kubernetes.io/os: linux". TODO: the provisioner image is set to aragunathan/iscsi-controller:latest. Once this PR merges and official images are built, I'll send a follow-on PR to update the yaml with the official image.
2. iscsi/targetd/kubernetes/iscsi-provisioner-flexvol-class.yaml. Note "fsType: ntfs" for Windows mounts/workloads.
3. iscsi/targetd/kubernetes/iscsi-provisioner-pvc-win.yaml. Note that the os: "windows" annotation should be added for PVCs.

Testing: Tested on a Kubernetes 1.15.0 cluster.
- Set up the targetd server
- Set up flexvol scripts on Windows workers per https://github.com/microsoft/K8s-Storage-Plugins
- Using PowerShell, get the Windows worker's iSCSI IQN: (Get-InitiatorPort).NodeAddress. This IQN is added to the storage class yaml.
- Set up the targetd account and secret
- Deployed the storage class: iscsi-provisioner-flexvol-class.yaml
- Deployed the PVC: iscsi-provisioner-pvc-win.yaml
- Created a pod to use the claim and observed LUNs mounted successfully.

Signed-off-by: Anusha Ragunathan <anusha.ragunathan@docker.com>

view details

push time in 2 months

PR closed moby/moby

[WIP] Plugin save and load. Labels: status/1-design-review, status/failing-ci

Implement plugin save and load to handle the offline distribution use case. Save writes a tar of the plugin in the OCI image format; load reads such a tar stream into the plugin inventory.

Signed-off-by: Anusha Ragunathan <anusha.ragunathan@docker.com>
Signed-off-by: Brian Goff <cpuguy83@gmail.com>

Resurrected from #33032

+615 -12

10 comments

13 changed files

cpuguy83

pr closed time in 2 months

pull request comment moby/moby

[WIP] Plugin save and load.

This PR is not getting much traction. We can close it for now.

cpuguy83

comment created time in 2 months

create branch anusha-ragunathan/external-storage

branch: win_support

created branch time in 2 months

fork anusha-ragunathan/external-storage

External storage plugins, provisioners, and helper libraries

fork in 2 months
