Deepak Kinni (deepakkinni), VMware, Palo Alto

deepakkinni/astrolabe 0

Data protection framework for complex applications

deepakkinni/enhancements 0

Enhancements tracking repo for Kubernetes

deepakkinni/external-health-monitor 0

This repo contains sidecar controller and agent for volume health monitoring.

deepakkinni/external-provisioner 0

Sidecar container that watches Kubernetes PersistentVolumeClaim objects and triggers CreateVolume/DeleteVolume against a CSI endpoint

deepakkinni/external-snapshotter 0

Sidecar container that watches Kubernetes Snapshot CRD objects and triggers CreateSnapshot/DeleteSnapshot against a CSI endpoint.

deepakkinni/govmomi 0

Go library for the VMware vSphere API

deepakkinni/kubernetes 0

Production-Grade Container Scheduling and Management

deepakkinni/linux 0

Linux kernel source tree

issue closed vmware-tanzu/velero-plugin-for-vsphere

Restore stuck in phase "InProgress" for Vanilla cluster

Describe the bug

Restore stuck in phase "InProgress" for Vanilla cluster.

To Reproduce

  1. Creating a backup with snapshots succeeds with the command velero -n wangc backup create bk --snapshot-volumes=true --include-namespaces nginx-example --include-cluster-resources=false --storage-location=bsl-aws --volume-snapshot-locations=vsl-vsphere
  2. Restoring the backup with snapshots fails with the command velero -n wangc restore create --from-backup bk
❯ velero -n wangc restore get
NAME                BACKUP   STATUS       STARTED                         COMPLETED                       ERRORS   WARNINGS   CREATED                         SELECTOR
bk-20210615202001   bk       InProgress   2021-06-15 20:20:04 +0800 CST   <nil>                           0        0          2021-06-15 20:20:05 +0800 CST   <none>

Expected behavior: Restore should succeed if the backup was created successfully.

Troubleshooting Information

[Please refer to the Troubleshooting page and collect the required information]

Velero version: 1.5.3

Velero features (use velero client config get features): features: <NOT SET>

velero-plugin-for-vsphere version: v1.1.1

velero/velero-plugin-for-aws: v1.1.0

Kubernetes cluster flavor: Vanilla

vSphere CSI driver version: harbor-repo.vmware.com/velero/backup-driver:1.1.0-rc6
All images in vsphere-csi-controller deployment and vsphere-csi-node daemonset: harbor-repo.vmware.com/velero/data-manager-for-plugin:1.1.0-rc6

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-12T14:18:45Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:13:49Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

vCenter version: 7.0.1.10000
ESXi version: 7.0.1

kubectl -n wangc logs deploy/velero
❯ velero -n wangc backup describe bk
Name:         bk
Namespace:    wangc
Labels:       velero.io/storage-location=bsl-aws
Annotations:  velero.io/source-cluster-k8s-gitversion=v1.16.3
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=16

Phase:  Completed

Errors:    0
Warnings:  0

Namespaces:
  Included:  nginx-example
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  excluded

Label selector:  <none>

Storage Location:  bsl-aws

Velero-Native Snapshot PVs:  true

TTL:  720h0m0s

Hooks:  <none>

Backup Format Version:  1.1.0

Started:    2021-06-04 18:39:16 +0800 CST
Completed:  2021-06-04 18:39:30 +0800 CST

Expiration:  2021-07-04 18:39:16 +0800 CST

Total items to be backed up:  9
Items backed up:              9

Velero-Native Snapshots: <none included>
❯ velero -n wangc restore describe bk-20210615202001
Name:         bk-20210615202001
Namespace:    wangc
Labels:       <none>
Annotations:  <none>

Phase:  InProgress

Started:    2021-06-15 20:20:04 +0800 CST
Completed:  <n/a>

Backup:  bk

Namespaces:
  Included:  all namespaces found in the backup
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io
  Cluster-scoped:  auto

Namespace mappings:  <none>

Label selector:  <none>

Restore PVs:  auto
❯ kubectl get crd -A
NAME                                                   CREATED AT
backuprepositories.backupdriver.cnsdp.vmware.com       2021-06-04T07:23:24Z
backuprepositoryclaims.backupdriver.cnsdp.vmware.com   2021-06-04T07:23:24Z
backups.velero.io                                      2021-06-07T01:21:22Z
backupstoragelocations.velero.io                       2021-06-07T01:21:22Z
clonefromsnapshots.backupdriver.cnsdp.vmware.com       2021-06-04T07:23:24Z
deletebackuprequests.velero.io                         2021-06-07T01:21:22Z
deletesnapshots.backupdriver.cnsdp.vmware.com          2021-06-04T07:23:24Z
downloadrequests.velero.io                             2021-06-07T01:21:23Z
downloads.datamover.cnsdp.vmware.com                   2021-06-04T07:23:24Z
podvolumebackups.velero.io                             2021-06-07T01:21:23Z
podvolumerestores.velero.io                            2021-06-07T01:21:23Z
resticrepositories.velero.io                           2021-06-07T01:21:23Z
restores.velero.io                                     2021-06-07T01:21:24Z
schedules.velero.io                                    2021-06-07T01:21:24Z
serverstatusrequests.velero.io                         2021-06-07T01:21:24Z
snapshots.backupdriver.cnsdp.vmware.com                2021-06-04T07:23:24Z
uploads.datamover.cnsdp.vmware.com                     2021-06-04T07:23:25Z
volumesnapshotlocations.velero.io                      2021-06-07T01:21:24Z

Screenshots

[If applicable, add screenshots to help explain your problem.]

Anything else you would like to add: backup-driver.log.zip data-mgt.log.zip velero.log.zip

[Miscellaneous information that will assist in solving the issue]

closed time in 12 hours

csgtree

issue comment vmware-tanzu/velero-plugin-for-vsphere

Restore stuck in phase "InProgress" for Vanilla cluster

QE is testing 1.2 RC1 now. Should be released soon.

csgtree

comment created time in 14 hours

issue comment vmware-tanzu/velero-plugin-for-vsphere

Restore stuck in phase "InProgress" for Vanilla cluster

Thanks a lot! It works if both the application and its PV are removed.

One quick question: when will #341 be in the official release?

csgtree

comment created time in 14 hours

PR merged vmware-tanzu/velero-plugin-for-vsphere

Update plugin to allow user to configure vddk log level

What this PR does / why we need it: Currently we hardcode the default log level by explicitly calling VixDiskLib_Init on the VDDK side.

VDDK provides a function, VixDiskLib_InitEx, which allows the user to pass a config file containing a customized log-level configuration. We have already added the gvddk wrapper function for VixDiskLib_InitEx; now we need to update our vSphere plugin to let the user configure the log level at run time.
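For illustration, here is a minimal sketch of that initialization flow, assuming the four-argument disklib.InitEx shape that appears in the review diffs further down; the import path, version constants, and library directory below are placeholder assumptions, not the plugin's exact code:

// vddk_init_sketch.go: hedged sketch of switching from Init to InitEx.
package main

import (
	"log"

	// Assumed import path for the gvddk wrapper.
	"github.com/vmware/virtual-disks/pkg/disklib"
)

func main() {
	const (
		vSphereMajor = 7 // assumed vSphere major version
		vSphereMinor = 0 // assumed vSphere minor version
		// Assumed VDDK library directory inside the container.
		libDir = "/usr/lib/vmware-vix-disklib/lib64"
	)
	// An empty config path keeps VDDK's default log levels; a non-empty
	// path points at a file with lines such as "vixDiskLib.nfc.LogLevel=4".
	configPath := "/tmp/config"
	if err := disklib.InitEx(vSphereMajor, vSphereMinor, libDir, configPath); err != nil {
		log.Fatalf("VDDK InitEx failed: %v", err)
	}
}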

Test steps:

  1. Backup/restore without the ConfigMap; the default VDDK log level is 2.
  2. Create a ConfigMap to set the VDDK log level:
apiVersion: v1
kind: ConfigMap
metadata:
  # any name can be used; Velero uses the labels (below)
  # to identify it rather than the name
  name: vddk-config
  # must be in the velero namespace
  namespace: velero
  labels:
    # this value-less label identifies the ConfigMap as
    # config for vddk
    velero.io/vddk-config: vix-disk-lib
data:
  # NFC LogLevel (0 = Quiet, 1 = Error, 2 = Warning, 3 = Info, 4 = Debug)
  vixDiskLib.nfc.LogLevel: "4"
  vixDiskLib.transport.LogLevel: "5"
  3. Manually restart the data manager pod:
kubectl -n velero delete deployment.apps/backup-driver
kubectl delete crds backuprepositories.backupdriver.cnsdp.vmware.com \
                    backuprepositoryclaims.backupdriver.cnsdp.vmware.com \
                    clonefromsnapshots.backupdriver.cnsdp.vmware.com \
                    deletesnapshots.backupdriver.cnsdp.vmware.com \
                    snapshots.backupdriver.cnsdp.vmware.com
kubectl -n velero delete daemonset.apps/datamgr-for-vsphere-plugin
kubectl delete crds uploads.datamover.cnsdp.vmware.com downloads.datamover.cnsdp.vmware.com
kubectl -n velero scale deploy/velero --replicas=0
kubectl -n velero scale deploy/velero --replicas=1
  4. Backup/restore again and check the logs:
kubectl exec -n velero -it datamgr-for-vsphere-plugin-XXXXX -- /bin/bash
cd tmp/vmware-root/
cat vixDiskLib-1.log 

...
2021-06-22T10:07:12.914Z| host-33| I005: Setting NFC log level to 1
2021-06-22T10:07:12.914Z| host-33| I005: Setting NFC log level to 2
2021-06-22T10:07:12.919Z| host-33| I005: NFC Async IO session is established for '[vsanDatastore] 693cd160-ba9d-0371-f65b-020074a6838e/2ff18e95ad4f45ac83eade2d3e90bc4b.vmdk' with log level 2.
...

Precheck: https://container-dp.svc.eng.vmware.com/job/Container_Precheck_Velero/1102/
Supervisor/Guest cluster:
https://container-dp.svc.eng.vmware.com/view/CNS-DP/job/Velero-Pipeline-WCP/637/
https://container-dp.svc.eng.vmware.com/view/CNS-DP/job/Velero-Pipeline-WCP/639/

Which issue(s) this PR fixes: Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

Make vddk log level configurable
+59 -5

5 comments

5 changed files

xinyanw409

pr closed time in 14 hours

push event vmware-tanzu/velero-plugin-for-vsphere

Xinyan Wu

commit sha 44c80945c471d398f83cf45a3f2c45918d33a20f

Update plugin to allow user to configure vddk log level (#354)

Signed-off-by: xinyanw409 <wxinyan@vmware.com>

view details

push time in 14 hours

pull request comment vmware-tanzu/velero-plugin-for-vsphere

Update plugin to allow user to configure vddk log level

Sorry, some of the pre-check tests are failing for this PR!

Please check the Jenkins job for more details: https://container-dp.svc.eng.vmware.com/job/CNSDP-CI/555/

xinyanw409

comment created time in 17 hours

Pull request review comment vmware-tanzu/velero-plugin-for-vsphere

Update plugin to allow user to configure vddk log level

 func (v *vcConfigController) processVcConfigSecretItem(key string) error {
 		return err
 	}
 	v.logger.Debug("Successfully retrieved latest vSphere VC credentials.")
+	err = utils.RetrieveVddkLogLevel(ivdParams, v.logger)
+	if err != nil {
+		v.logger.WithError(err).Error("Failed to retrieve vddk log level")
+		return err
+	}
+	v.logger.Debug("Successfully retrieved vddk log level")

Remove this piece of code. Only the first VDDK initialization call works as expected, and that one is made when starting the server.

xinyanw409

comment created time in 17 hours

pull request comment vmware-tanzu/velero-plugin-for-vsphere

Update plugin to allow user to configure vddk log level

Sorry, some of the pre-check tests are failing for this PR!

Please check the Jenkins job for more details: https://container-dp.svc.eng.vmware.com/job/CNSDP-CI/554/

xinyanw409

comment created time in 18 hours

Pull request review comment vmware-tanzu/velero-plugin-for-vsphere

Update plugin to allow user to configure vddk log level

 func (v *vcConfigController) processVcConfigSecretItem(key string) error {
 		return err
 	}
 	v.logger.Debug("Successfully retrieved latest vSphere VC credentials.")
+	err = utils.RetrieveVddkLogLevel(ivdParams, v.logger)
+	if err != nil {
+		v.logger.WithError(err).Error("Failed to retrieve vddk log level")
+		return err
+	}
+	v.logger.Debug("Successfully retrieved vddk log level")

We don't plan to support reloading the VDDK log level at run time; as far as we have observed, it simply doesn't work. Do we really need this piece of code?

xinyanw409

comment created time in 18 hours

issue comment vmware-tanzu/velero-plugin-for-vsphere

Restore stuck in phase "InProgress" for Vanilla cluster

FYI, we made a fix in #341 to improve the user experience: the plugin will skip PVC restoration if the PVC already exists. It will be available in the upcoming release.

csgtree

comment created time in 18 hours

push event vmware-tanzu/astrolabe

Xinyan Wu

commit sha 91eeed4dcf77edd1387a25e984174f159d66fedb

make vddk log level configurable; pickup virtual-disks change (#81)

view details

push time in 19 hours

PR merged vmware-tanzu/astrolabe

Make vddk log level configurable

Currently we hardcode the default log level by explicitly calling VixDiskLib_Init on the VDDK side.

VDDK provides a function, VixDiskLib_InitEx, which allows the user to pass a config file containing a customized log-level configuration. We have already added the gvddk wrapper function for VixDiskLib_InitEx; now we need to update our vSphere plugin to let the user configure the log level at run time.

This change includes:

  1. Pick up the virtual-disks change.
  2. Switch to disklib.InitEx and pass a config file path when initializing the VDDK library.

Manually tested. For detailed test steps, refer to https://github.com/vmware-tanzu/velero-plugin-for-vsphere/pull/354.

+97 -33

0 comments

4 changed files

xinyanw409

pr closed time in 19 hours

Pull request review comment vmware-tanzu/velero-plugin-for-vsphere

Update plugin to allow user to configure vddk log level

 func (ctrl *backupDriverController) syncSecretByKey(key string) error {
 		return err
 	}
 	ctrl.logger.Debugf("Successfully retrieved latest vSphere VC credentials.")
+	err = utils.RetrieveVddkLogLevel(params, ctrl.logger)

VDDK is not used in the backup-driver, so it doesn't matter whether we retrieve the configured VDDK log level there.

xinyanw409

comment created time in 20 hours

pull request comment vmware-tanzu/velero-plugin-for-vsphere

Update plugin to allow user to configure vddk log level

Congratulations!

All the pre-check tests passed: https://container-dp.svc.eng.vmware.com/job/CNSDP-CI/553/

xinyanw409

comment created time in 20 hours

Pull request review comment vmware-tanzu/astrolabe

Make vddk log level configurable

 func (this *IVDProtectedEntityTypeManager) ReloadConfig(ctx context.Context, par
 		this.cnsManager.ResetManager(reloadedVc, cnsClient)
 	}
-	err = disklib.Init(vsphereMajor, vSphereMinor, disklibLib64)
+	// check whether customized vddk config is provided.
+	// currently only support user to modify vixdisklib nfc log level and transport log level
+	var path string
+	if _, ok := params[vddkconfig]; !ok {
+		this.logger.Info("No customized vddk log level provided, set vddk log level as default")
+		path = ""
+	} else {
+		this.logger.Info("Customized vddk config provided: %v", params[vddkconfig])

Info -> Infof (this call passes a format verb, so it needs Infof).

xinyanw409

comment created time in 20 hours

Pull request review comment vmware-tanzu/astrolabe

Make vddk log level configurable

 const (
 	VSphere                  = "vSphere Kubernetes Cluster"
 )
 
+const VddkConfigPath = "/tmp/config"

Would you please move this constant definition to the top of this file? Eventually, we might want to move them to a separate file.

xinyanw409

comment created time in 20 hours

Pull request review comment vmware-tanzu/astrolabe

Make vddk log level configurable

 func (this *IVDProtectedEntityTypeManager) ReloadConfig(ctx context.Context, par
 		this.cnsManager.ResetManager(reloadedVc, cnsClient)
 	}
-	err = disklib.Init(vsphereMajor, vSphereMinor, disklibLib64)
+	// check whether customized vddk config is provided.
+	// currently only support user to modify vixdisklib nfc log level and transport log level
+	var path string
+	if _, ok := params[vddkconfig]; !ok {
+		this.logger.Info("No customized vddk log level provided, set vddk log level as default")
+		path = ""

This line is unnecessary; path is already the zero value "".

xinyanw409

comment created time in 20 hours

Pull request review comment vmware-tanzu/velero-plugin-for-vsphere

Update plugin to allow user to configure vddk log level

 func CompareVersion(currentVersion string, minVersion string) int {
 	}
 	return current.Compare(minimum)
 }
+
+func RetrieveVddkLogLevel(params map[string]interface{}, logger logrus.FieldLogger) error {
+	var err error // Declare here to avoid shadowing on using in cluster config only
+	kubeClient, err := CreateKubeClientSet()
+	if err != nil {
+		logger.WithError(err).Error("Failed to create kube clientset")
+		return err
+	}
+	veleroNs, exist := os.LookupEnv("VELERO_NAMESPACE")
+	if !exist {
+		errMsg := "Failed to lookup the ENV variable for velero namespace"
+		logger.Error(errMsg)
+		return errors.New(errMsg)
+	}
+	opts := metav1.ListOptions{
+		// velero.io/vddk-config: vix-disk-lib
+		LabelSelector: fmt.Sprintf("%s=%s", constants.VddkConfigLabelKey, constants.VixDiskLib),
+	}

Labels are used here just to allow the user to give the VDDK ConfigMap any name. The VDDK ConfigMap is identified by the specific label, and in the code I restrict it so that only one VDDK ConfigMap can exist.
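As a concrete illustration of that lookup, here is a small self-contained client-go sketch; the in-cluster client construction and the literal label value stand in for the plugin's CreateKubeClientSet helper and constants.* values, so treat it as an approximation rather than the plugin's code:

// vddk_configmap_lookup_sketch.go: find the single VDDK ConfigMap by label.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func findVddkConfigMap(veleroNs string) (map[string]string, error) {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	// Select by the label "velero.io/vddk-config=vix-disk-lib" so the
	// ConfigMap can have any name.
	opts := metav1.ListOptions{LabelSelector: "velero.io/vddk-config=vix-disk-lib"}
	cms, err := client.CoreV1().ConfigMaps(veleroNs).List(context.TODO(), opts)
	if err != nil {
		return nil, err
	}
	// Enforce that exactly one labeled ConfigMap exists.
	if len(cms.Items) != 1 {
		return nil, fmt.Errorf("expected exactly one vddk ConfigMap, found %d", len(cms.Items))
	}
	return cms.Items[0].Data, nil
}

func main() {
	data, err := findVddkConfigMap("velero")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("vddk config:", data)
}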

xinyanw409

comment created time in a day

Pull request review comment vmware-tanzu/velero-plugin-for-vsphere

Update plugin to allow user to configure vddk log level

 func NewSnapshotManagerFromConfig(configInfo server.ConfigInfo, s3RepoParams map
 			}
 			logger.Infof("SnapshotManager: vSphere VC credential is retrieved")
 		}
+		err = utils.RetrieveVddkLogLevel(ivdParams, logger)

It is better not to retrieve the VDDK log level here in the snapshot manager, since the snapshot manager is shared by both the backup-driver server and the data-manager server, and the backup-driver server cares nothing about the VDDK log level. Let's do it in the data-manager server instead.

xinyanw409

comment created time in a day

Pull request review comment vmware-tanzu/velero-plugin-for-vsphere

Update plugin to allow user to configure vddk log level

 func (v *vcConfigController) processVcConfigSecretItem(key string) error {
 		return err
 	}
 	v.logger.Debug("Successfully retrieved latest vSphere VC credentials.")
+	err = utils.RetrieveVddkLogLevel(ivdParams, v.logger)
+	v.logger.Debug("Start retrieving vddk log level.")

Do we really need this log line?

xinyanw409

comment created time in a day

Pull request review comment vmware-tanzu/velero-plugin-for-vsphere

Update plugin to allow user to configure vddk log level

 func CompareVersion(currentVersion string, minVersion string) int {
 	}
 	return current.Compare(minimum)
 }
+
+func RetrieveVddkLogLevel(params map[string]interface{}, logger logrus.FieldLogger) error {
+	var err error // Declare here to avoid shadowing on using in cluster config only
+	kubeClient, err := CreateKubeClientSet()
+	if err != nil {
+		logger.WithError(err).Error("Failed to create kube clientset")
+		return err
+	}
+	veleroNs, exist := os.LookupEnv("VELERO_NAMESPACE")
+	if !exist {
+		errMsg := "Failed to lookup the ENV variable for velero namespace"
+		logger.Error(errMsg)
+		return errors.New(errMsg)
+	}
+	opts := metav1.ListOptions{
+		// velero.io/vddk-config: vix-disk-lib
+		LabelSelector: fmt.Sprintf("%s=%s", constants.VddkConfigLabelKey, constants.VixDiskLib),
+	}
+	vddkConfigMaps, err := kubeClient.CoreV1().ConfigMaps(veleroNs).List(context.TODO(), opts)
+	if err != nil {
+		logger.WithError(err).Errorf("Failed to retrieve config map lists for vddk config")

Errorf -> Error

xinyanw409

comment created time in a day

Pull request review comment vmware-tanzu/velero-plugin-for-vsphere

Update plugin to allow user to configure vddk log level

 func (ctrl *backupDriverController) syncSecretByKey(key string) error {
 		return err
 	}
 	ctrl.logger.Debugf("Successfully retrieved latest vSphere VC credentials.")
+	err = utils.RetrieveVddkLogLevel(params, ctrl.logger)
+	if err != nil {
+		ctrl.logger.WithError(err).Error("Failed to retrieve vddk log level")
+		return err
+	}
+	ctrl.logger.Info("Successfully retrieved vddk log level.")

Info -> Debug

xinyanw409

comment created time in a day

Pull request review comment vmware-tanzu/velero-plugin-for-vsphere

Update plugin to allow user to configure vddk log level

 func NewDataMoverFromCluster(params map[string]interface{}, logger logrus.FieldL
 		logger.Infof("DataMover: vSphere VC credential is retrieved")
 	}
 
+	err := utils.RetrieveVddkLogLevel(params, logger)
+	if err != nil {
+		logger.WithError(err).Error("Failed to retrieve vddk log level")
+		return nil, err
+	}
+	logger.Info("DataMover: VDDK log level is retrieved")
+

The same as the comment in snapshot manager.

xinyanw409

comment created time in a day

Pull request review comment vmware-tanzu/velero-plugin-for-vsphere

Update plugin to allow user to configure vddk log level

 func CompareVersion(currentVersion string, minVersion string) int {
 	}
 	return current.Compare(minimum)
 }
+
+func RetrieveVddkLogLevel(params map[string]interface{}, logger logrus.FieldLogger) error {
+	var err error // Declare here to avoid shadowing on using in cluster config only
+	kubeClient, err := CreateKubeClientSet()
+	if err != nil {
+		logger.WithError(err).Error("Failed to create kube clientset")
+		return err
+	}
+	veleroNs, exist := os.LookupEnv("VELERO_NAMESPACE")
+	if !exist {
+		errMsg := "Failed to lookup the ENV variable for velero namespace"
+		logger.Error(errMsg)
+		return errors.New(errMsg)
+	}
+	opts := metav1.ListOptions{
+		// velero.io/vddk-config: vix-disk-lib
+		LabelSelector: fmt.Sprintf("%s=%s", constants.VddkConfigLabelKey, constants.VixDiskLib),
+	}

What's the benefit of using label selector here? Do you plan to support retrieving VDDK config information from any ConfigMap with the specific label, rather than the specific ConfigMap name?

xinyanw409

comment created time in a day

Pull request review comment vmware-tanzu/astrolabe

Make vddk log level configurable

 func (this *IVDProtectedEntityTypeManager) ReloadConfig(ctx context.Context, par
 		this.cnsManager.ResetManager(reloadedVc, cnsClient)
 	}
-	err = disklib.Init(vsphereMajor, vSphereMinor, disklibLib64)
-	if err != nil {
-		return errors.Wrap(err, "Could not initialize VDDK during config reload.")
+	// check whether customized vddk config is provided.
+	// currently only support user to modify vixdisklib nfc log level and transport log level
+	if _, ok := params[vddkconfig]; !ok {
+		this.logger.Info("No customized vddk log level provided, set vddk log level as default.")
+		err = disklib.InitEx(vsphereMajor, vSphereMinor, disklibLib64, "")
+		if err != nil {
+			return errors.Wrap(err, "Could not initialize VDDK during config reload.")
+		}

The code snippet at L387-L390 is used twice. Would you please reorganize your code to get rid of the repeated snippet? One possible shape is sketched below.
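One possible shape for that reorganization, sketched with stand-in function parameters for the config-file writer and disklib.InitEx (the parameter signatures here are illustrative assumptions, not the real ones):

// initVddkOnce decides the config path up front, so the InitEx call and
// its error handling appear only once instead of in both branches.
package main

import "fmt"

func initVddkOnce(
	vddkConfig map[string]string, // nil means "no customized config provided"
	writeConfigFile func(map[string]string) (string, error), // stand-in for CreateConfigFile
	initEx func(configPath string) error, // stand-in for disklib.InitEx
) error {
	path := "" // empty path keeps VDDK's default log levels
	if vddkConfig != nil {
		p, err := writeConfigFile(vddkConfig)
		if err != nil {
			return fmt.Errorf("could not write vddk config file: %w", err)
		}
		path = p
	}
	if err := initEx(path); err != nil {
		return fmt.Errorf("could not initialize VDDK during config reload: %w", err)
	}
	return nil
}

func main() {
	// Stubbed usage: a no-op writer and initializer exercise both branches.
	write := func(cfg map[string]string) (string, error) { return "/tmp/config", nil }
	initFn := func(path string) error { fmt.Println("InitEx with config path:", path); return nil }
	_ = initVddkOnce(nil, write, initFn)                                              // default levels
	_ = initVddkOnce(map[string]string{"vixDiskLib.nfc.LogLevel": "4"}, write, initFn) // customized
}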

xinyanw409

comment created time in a day

Pull request review comment vmware-tanzu/astrolabe

Make vddk log level configurable

 func GetClusterFlavor(config *rest.Config) (ClusterFlavor, error) {
 	// Did not match any search criteria. Unknown cluster flavor.
 	return Unknown, errors.New("GetClusterFlavor: Failed to identify cluster flavor")
 }
+
+func CreateConfigFile(vddkConfig map[string]string, logger logrus.FieldLogger) (string, error) {
+	path := "/tmp/config"
+	logger.Infof("Customized vddk config: %v", vddkConfig)
+	err := createFile(path, vddkConfig, logger)
+	if err != nil {
+		logger.WithError(err).Error("Failed to create config file")
+		return "", err
+	}
+	return path, nil
+}
+
+func DeleteConfigFile(path string, logger logrus.FieldLogger) error {
+	return deleteFile(path, logger)
+}
+
+func createFile(path string, config map[string]string, logger logrus.FieldLogger) error {
+	// check if file exists
+	var _, err = os.Stat(path)
+
+	// create file if not exists
+	if os.IsNotExist(err) {
+		var file, err = os.Create(path)
+		if err != nil {
+			return err
+		}
+		defer file.Close()
+	}
+
+	file, err := os.OpenFile(path, os.O_RDWR, 0644)
+	if err != nil {
+		return err
+	}
+	defer file.Close()
+
+	for k, v := range config {
+		logger.Infof("Writing config %s=%s:", k, v)
+		// Write some text line-by-line to file.
+		_, err = file.WriteString(k + "=" + v + "\n")
+		if err != nil {
+			return err
+		}
+	}
+
+	// Save file changes.
+	err = file.Sync()
+	if err != nil {
+		return err
+	}
+
+	logger.Infof("Config File %v Created Successfully", path)

Infof -> Debugf?

xinyanw409

comment created time in a day

Pull request review comment vmware-tanzu/astrolabe

Make vddk log level configurable

 func GetClusterFlavor(config *rest.Config) (ClusterFlavor, error) {
 	// Did not match any search criteria. Unknown cluster flavor.
 	return Unknown, errors.New("GetClusterFlavor: Failed to identify cluster flavor")
 }
+
+func CreateConfigFile(vddkConfig map[string]string, logger logrus.FieldLogger) (string, error) {
+	path := "/tmp/config"

Please do not hardcode anything inside the function. Make a constant and use it here.

xinyanw409

comment created time in a day

Pull request review comment vmware-tanzu/astrolabe

Make vddk log level configurable

 func (this *IVDProtectedEntityTypeManager) ReloadConfig(ctx context.Context, par
 		this.cnsManager.ResetManager(reloadedVc, cnsClient)
 	}
-	err = disklib.Init(vsphereMajor, vSphereMinor, disklibLib64)
-	if err != nil {
-		return errors.Wrap(err, "Could not initialize VDDK during config reload.")
+	// check whether customized vddk config is provided.
+	// currently only support user to modify vixdisklib nfc log level and transport log level
+	if _, ok := params[vddkconfig]; !ok {
+		this.logger.Info("No customized vddk log level provided, set vddk log level as default.")

Please double-check every logging statement in your PR and remove the trailing period from each of them, e.g.:

this.logger.Info("No customized vddk log level provided, set vddk log level as default")

xinyanw409

comment created time in a day

issue comment vmware-tanzu/velero-plugin-for-vsphere

Restore stuck in phase "InProgress" for Vanilla cluster

From the VDDK log:

2021-06-23T06:31:31.704Z| host-30| I125: Opening file [datastore122b] fcd/b210430c7f774b0280dadee2141beded-000022.vmdk (ha-nfc://[datastore122b] fcd/b210430c7f774b0280dadee2141beded-000022.vmdk@10.110.125.122:902)
2021-06-23T06:31:33.214Z| host-87| W115: [NFC ERROR]NfcAioRcvErrorMsg: Error from server: NfcAioProcessOpenFileMsg: Failed to open '[datastore122b] fcd/b210430c7f774b0280dadee2141beded-000022.vmdk': DiskLib error 16392: Failed to lock the file

The root cause is "Failed to lock the file" on that specific VMDK. I guess you didn't remove the application before the restore. Basically, you are supposed to remove the backed-up application from the cluster before restoring; velero-plugin-for-vsphere doesn't support in-place restore, and I don't think velero does, either.

From the backup-driver log:

time="2021-06-23T06:10:44Z" level=info msg="PVC nginx-example/nginx-logs already exists, reusing" logSource="/go/pkg/mod/github.com/vmware-tanzu/astrolabe@v0.3.0/pkg/pvc/pvc_protected_entity_type_manager.go:175"

This confirms that the application, or at least its PVC, is still present in the Kubernetes cluster. When a restore runs in that state, the observed error "Failed to download snapshot, ..., Open virtual disk file failed. The error code is 1." is expected.
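For reference, a hypothetical cleanup sequence implied by this thread, using the object names from this issue (the PV name must be looked up, not assumed):

kubectl delete namespace nginx-example          # remove the backed-up application and its PVC
kubectl get pv                                  # find the released PV that backed nginx-logs
kubectl delete pv <pv-name>                     # remove it before retrying
velero -n wangc restore create --from-backup bk # retry the restore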

csgtree

comment created time in a day