If you are wondering where the data of this site comes from, please visit https://api.github.com/users/boredabdel/events. GitMemory does not store any data, but only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Abdel SGHIOUAR (boredabdel), Google, Stockholm, Sweden

ahmetb/kubernetes-network-policy-recipes 2978

Example recipes for Kubernetes Network Policies that you can just copy paste

boredabdel/devoxx-morocco 1

Code used in the Devoxx Morocco 2018 Talk about Kubernetes

boredabdel/OpenERP 1

Modules OpenERP

pull request comment ahmetb/kubernetes-network-policy-recipes

Update 02a-allow-all-traffic-to-an-application.md

@boredabdel I edited your comment to add ``` syntax, otherwise it doesn't indent properly.

sridharlreddy

comment created time in 20 days

pull request comment ahmetb/kubernetes-network-policy-recipes

Hotfox/allow external fix

I just saw the context at #71. networkpolicy.spec.ingress.from docs say:

DESCRIPTION: List of sources which should be able to access the pods selected for this rule. Items in this list are combined using a logical OR operation. If this field is empty or missing, this rule matches all sources (traffic not restricted by source). If this field is present and contains at least one item, this rule allows traffic only if the traffic matches at least one item in the from list.

According to this description, I don't think `from: []` blocks everything (see the bolded section above).

Is a `[]` value not considered "empty"? Have we tested this, @boredabdel?

ericyz

comment created time in 22 days

pull request comment ahmetb/kubernetes-network-policy-recipes

Hotfox/allow external fix

I'm trying to understand what this fixes; was there an API validation error?

The networkpolicy.spec.ingress field seems to be an []Object type, so

  ingress:
  - from: []

seems to conform to the API as well.

Furthermore, `{}` and `from: []` as the array's 0th element both seem to have the same effect according to the ingress field doc; we just need at least one element here.

List of ingress rules to be applied to the selected pods. Traffic is allowed to a pod if there are no NetworkPolicies selecting the pod (and cluster policy otherwise allows the traffic), OR if the traffic source is the pod's local node, OR if the traffic matches at least one ingress rule across all of the NetworkPolicy objects whose podSelector matches the pod. If this field is empty then this NetworkPolicy does not allow any traffic (and serves solely to ensure that the pods it selects are isolated by default)
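To make the variants under discussion concrete, here is a minimal sketch (not taken from the recipes repo) of the three `spec.ingress` shapes being compared, read against the API documentation quoted above:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example        # hypothetical name, for illustration only
spec:
  podSelector: {}      # selects all pods in the namespace

  # Variant 1: no ingress rules at all -> the policy denies all ingress.
  # ingress: []

  # Variant 2: a single empty rule -> matches everything, allows all ingress.
  # ingress:
  # - {}

  # Variant 3: a single rule with an empty `from` list; per the docs quoted
  # above, an empty/missing `from` matches all sources, so this should behave
  # like variant 2 rather than deny traffic.
  ingress:
  - from: []
```

Whether a given network plugin actually treats variant 3 the same as variant 2 is exactly what the reviewers are asking to verify.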

ericyz

comment created time in 22 days

pull request comment ahmetb/kubernetes-network-policy-recipes

Update 02a-allow-all-traffic-to-an-application.md

@boredabdel please use code block syntax :) (it's in the GitHub comment editor)

sridharlreddy

comment created time in 22 days

pull request comment ahmetb/kubernetes-network-policy-recipes

cleaning up example

@boredabdel I updated your comment to use code fence syntax. "describe" output is not always going to be an accurate description of what the API does. "describe" just prints a cosmetic representation, which might have different defaulting/omitting behavior than the API.
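As a side note, one way to see the difference being described (a generic sketch; `<policy-name>` is a placeholder) is to compare the cosmetic view with the object as the API server actually stored and defaulted it:

```sh
# Human-readable summary (cosmetic, may omit or re-render fields)
kubectl describe networkpolicy <policy-name>

# The spec as stored by the API server (authoritative)
kubectl get networkpolicy <policy-name> -o yaml
```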

erkules

comment created time in 22 days

Pull request review comment ahmetb/kubernetes-network-policy-recipes

Use namespace default to match the GIF

 networkpolicy "deny-from-other-namespaces" created"  Note a few things about this manifest: -- `namespace: secondary` deploys it to the `secondary` namespace.-- it applies the policy to ALL pods in `secondary` namespace as the+- `namespace: default` deploys it to the `default` namespace.+- it applies the policy to ALL pods in `default` namespace as the   `spec.podSelector.matchLabels` is empty and therefore selects all pods.-- it allows traffic from ALL pods in the `secondary` namespace, as+- it allows traffic from ALL pods in the `default` namespace, as    `spec.ingress.from.podSelector` is empty and therefore selects all pods.  ## Try it out -Query this web service from the `default` namespace:+Query this web service from the `foo` namespace:  ```sh-$ kubectl run test-$RANDOM --namespace=default --rm -i -t --image=alpine -- sh-/ # wget -qO- --timeout=2 http://web.secondary+$ kubectl run test-$RANDOM --namespace=foo --rm -i -t --image=alpine -- sh

@boredabdel I think you mean `foo`, as `default` always exists.
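A minimal sketch of that point: the `default` namespace ships with every cluster, while `foo` has to be created before the test command from the diff can run.

```sh
# `foo` does not exist by default, so create it first
kubectl create namespace foo
kubectl run test-$RANDOM --namespace=foo --rm -i -t --image=alpine -- sh
```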

cscetbon

comment created time in 22 days

pull request comment ahmetb/kubernetes-network-policy-recipes

fix the allow-external-traffic

My bad. New commit to fix it.

ericyz

comment created time in 22 days

PR opened ahmetb/kubernetes-network-policy-recipes

fix the allow-external-traffic

I would think the original configuration `from: []` is meant to block all the ingress traffic.

+2 -2

0 comment

1 changed file

pr created time in 24 days

PR opened ahmetb/kubernetes-network-policy-recipes

Update 04-deny-traffic-from-other-namespaces.md

Hopefully the edit helps clarify things a bit.

+1 -1

0 comment

1 changed file

pr created time in a month

MemberEvent

started boredabdel/gke-networking-recipes

started time in a month

PR opened GoogleCloudPlatform/professional-services

Update gce-quota-sync.py with type annotations.

Type annotations should help make the code more readable.
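As a rough illustration of the kind of change described (the function name and signature below are hypothetical, not the actual gce-quota-sync.py diff):

```python
from typing import Dict, List

# Before: def _get_quotas(project, region="global"):
# After: parameter and return annotations make the expected shapes explicit.
def _get_quotas(project: str, region: str = "global") -> List[Dict[str, float]]:
    """Return quota records for a project/region (illustrative stub)."""
    return []
```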


+18 -4

0 comment

1 changed file

pr created time in 2 months

Pull request review comment GoogleCloudPlatform/professional-services

Update gce-quota-sync.py

 _METRIC_KIND = monitoring_v3.enums.MetricDescriptor.MetricKind.GAUGE
 _METRIC_TYPE = 'custom.googleapis.com/quota/gce'
+# set project_id to avoid auth errors. This should be moved into main.
+os.environ["GOOGLE_CLOUD_PROJECT"] = project_id

The project_id is not defined here, since we can expect the value to change based on who's running this. It should be the same value as the project_id provided in the _add_series function. In theory, this variable can be set in the main function, but it probably makes sense to move it to the _add_series function since it's already defined there.
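A sketch of what the reviewer is suggesting (only `_add_series` and the environment variable come from the diff and comments; the signature and body here are illustrative):

```python
import os

def _add_series(project_id, series, client=None):
    # project_id is already available as a parameter here, so the environment
    # can be set right before the Monitoring API call that needs it, instead
    # of at module import time.
    os.environ["GOOGLE_CLOUD_PROJECT"] = project_id
    # ... build and write the time series via the monitoring client ...
```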

aos-aos

comment created time in 2 months

Pull request review comment GoogleCloudPlatform/professional-services

Update gce-quota-sync.py

 _METRIC_KIND = monitoring_v3.enums.MetricDescriptor.MetricKind.GAUGE
 _METRIC_TYPE = 'custom.googleapis.com/quota/gce'
+# set project_id to avoid auth errors. This should be moved into main.
+os.environ["GOOGLE_CLOUD_PROJECT"] = project_id

Where is the variable project_id defined? I cannot find it in this file.

aos-aos

comment created time in 2 months

PR opened GoogleCloudPlatform/professional-services

Update gce-quota-sync.py

Set project_id in the environment this script runs in, to avoid errors similar to https://stackoverflow.com/questions/47423772/how-to-set-project-id-to-avoid-warnings/65582521#65582521.
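A minimal sketch of the workaround this PR describes (the project id is a placeholder; client libraries that resolve the project via google.auth.default() can generally pick it up from this variable):

```python
import os
import google.auth

os.environ["GOOGLE_CLOUD_PROJECT"] = "my-project-id"  # hypothetical project id

# With the variable set, default credential resolution can determine the
# project instead of emitting the warning referenced above.
credentials, project = google.auth.default()
print(project)
```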


+4 -0

0 comment

1 changed file

pr created time in 2 months

PR closed GoogleCloudPlatform/professional-services

Update gce-quota-sync.py (labels: cla: yes, size/XS)

Set project_id in the environment this script runs in, to avoid errors similar to https://stackoverflow.com/questions/47423772/how-to-set-project-id-to-avoid-warnings/65582521#65582521.

+3 -0

1 comment

1 changed file

aos-aos

pr closed time in 2 months

pull request comment GoogleCloudPlatform/professional-services

Update gce-quota-sync.py

We are not using master as the default branch of this repo anymore. Could you please resubmit this PR against the main branch?
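One way to do that (a sketch; the branch name `patch-1` and the remotes `origin`/`upstream` are assumptions, not taken from this PR):

```sh
git fetch upstream
git checkout patch-1
git rebase upstream/main
git push --force-with-lease origin patch-1
# then open a new pull request with `main` selected as the base branch
```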

aos-aos

comment created time in 2 months

pull request comment GoogleCloudPlatform/professional-services

VM Migrator to move VMs between GCP zones, regions & projects

Removed the unnecessary parentheses and added custom exceptions wherever possible. Since the source Python library does not specify which exceptions it raises, I am using Exception to catch errors there and re-raise them, to make sure the ThreadPoolExecutor framework picks them up and marks them as failed.
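A small sketch of the pattern being described (the names MigrationError, do_api_call, and shutdown_one are hypothetical, not from the PR):

```python
class MigrationError(Exception):
    """Raised when an underlying API call fails during migration."""

def do_api_call(name):
    # Stand-in for a library call whose exceptions are not documented.
    raise RuntimeError(f"simulated API failure for {name}")

def shutdown_one(name):
    try:
        return do_api_call(name)
    except Exception as exc:  # the library does not say what it raises
        # Re-raise as a domain-specific error; the ThreadPoolExecutor that
        # submitted this function will surface it from future.result().
        raise MigrationError(f"shutdown of {name} failed") from exc
```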

suchitpuri

comment created time in 2 months

Pull request review comment GoogleCloudPlatform/professional-services

VM Migrator to move VMs between GCP zones, regions & projects

+#!/usr/bin/env python
+# Copyright 2021 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License..
+"""
+This file is used to create instance from a machine image.
+"""
+
+import time
+import re
+import googleapiclient.discovery
+import logging
+from . import node_group_mapping
+from . import machine_type_mapping
+from . import machine_image
+from ratemate import RateLimit
+
+rate_limit = RateLimit(max_count=2000, per=100)
+
+
+def get_compute():
+    compute = googleapiclient.discovery.build('compute',
+                                              'beta',
+                                              cache_discovery=False)
+    logging.getLogger(
+        'googleapiclient.discovery_cache').setLevel(logging.ERROR)
+    return compute
+
+
+def is_hosted_on_sole_tenant(instance):
+    try:
+        if (instance.get('scheduling')):
+            if (instance.get('scheduling').get('nodeAffinities')):
+                if (isinstance(instance['scheduling']['nodeAffinities'], list)
+                        and len(instance['scheduling']['nodeAffinities']) > 0):
+                    if (instance['scheduling']['nodeAffinities'][0].get('key')
+                            == 'compute.googleapis.com/node-group-name'):
+                        return True
+        return False
+    except KeyError:
+        return False
+
+
+def get_node_group(instance):
+    if (is_hosted_on_sole_tenant(instance)):
+        return instance['scheduling']['nodeAffinities'][0]['values'][0]
+    return None
+
+
+def get_updated_node_group(node_group):
+    try:
+        if (node_group_mapping.FIND.get(node_group)):
+            config = {
+                "scheduling": {
+                    "nodeAffinities": [{
+                        "key": 'compute.googleapis.com/node-group-name',
+                        "operator": "IN",
+                        "values": [node_group_mapping.FIND[node_group]]
+                    }]
+                }
+            }
+            logging.info("Found a matching node group %s for %s" %
+                         (node_group, node_group_mapping.FIND.get(node_group)))
+            return config
+        else:
+            return None
+    except KeyError:
+        return None
+
+
+def parse_self_link(self_link):
+    if (self_link.startswith('projects')):
+        self_link = "/" + self_link
+    response = re.search(r"\/projects\/(.*?)\/zones\/(.*?)\/instances\/(.*?)$",
+                         self_link)
+    if (len(response.groups()) != 3):
+        raise Exception('Invalid SelfLink Format')
+    return {
+        'instance_id': response.group(3),
+        'zone': response.group(2),
+        'project': response.group(1)
+    }
+
+
+def shutdown_instance(compute, project, zone, instance_name):
+    result = compute.instances().stop(project=project,
+                                      zone=zone,
+                                      instance=instance_name).execute()
+    return result
+
+
+def shutdown(project, zone, instance_name):
+    try:
+        waited_time = rate_limit.wait()  # wait before starting the task
+        logging.info(f"  task: waited for {waited_time} secs")
+        compute = get_compute()
+        logging.info("Shutting Down Instance %s ", (instance_name))
+        result = shutdown_instance(compute, project, zone, instance_name)
+        wait_for_zonal_operation(compute, project, zone, result['name'])
+        return instance_name
+    except Exception as ex:
+        logging.error(ex)
+        print(ex)
+        raise ex
+
+
+def start(project, zone, instance_name):
+    try:
+        waited_time = rate_limit.wait()  # wait before starting the task
+        logging.info(f"  task: waited for {waited_time} secs")
+        compute = get_compute()
+        logging.info("Starting Instance %s ", (instance_name))
+        result = compute.instances().start(project=project,
+                                           zone=zone,
+                                           instance=instance_name).execute()
+        wait_for_zonal_operation(compute, project, zone, result['name'])
+        return instance_name
+    except Exception as ex:
+        logging.error(ex)
+        print(ex)
+        raise ex

Removed the print statement, but if I remove the handling of Exception I am not sure which custom exception to raise, since the documentation for compute.instances().start does not mention any custom exception it raises; hence I am handling everything and re-raising it to mark the executor as failed.
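For what it's worth, the discovery-based client does raise googleapiclient.errors.HttpError for failed API calls, so one option (a sketch, not the PR's code) is to catch that specifically and keep a broad fallback for anything unexpected:

```python
import logging
from googleapiclient.errors import HttpError

def start_instance(compute, project, zone, instance_name):
    try:
        return compute.instances().start(project=project,
                                         zone=zone,
                                         instance=instance_name).execute()
    except HttpError as err:
        # The Compute API rejected the request (quota, permissions, 404, ...).
        logging.error("start of %s failed: %s", instance_name, err)
        raise
    except Exception as ex:
        # Anything else is unexpected; log and re-raise so the future fails.
        logging.error("unexpected error starting %s: %s", instance_name, ex)
        raise
```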

suchitpuri

comment created time in 2 months

Pull request review comment GoogleCloudPlatform/professional-services

VM Migrator to move VMs between GCP zones, regions & projects

+#!/usr/bin/env python
+# Copyright 2021 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+This file provides functionality related to migrating disks.
+"""
+
+import re
+import logging
+from . import instance
+from . import machine_image
+from ratemate import RateLimit
+
+DISK_RATE_LIMIT = RateLimit(max_count=2000, per=100)
+
+
+def parse_self_link(self_link):
+    if (self_link.startswith('projects')):
+        self_link = "/" + self_link
+    response = re.search(r"\/projects\/(.*?)\/zones\/(.*?)\/disks\/(.*?)$",
+                         self_link)
+    if (len(response.groups()) != 3):
+        raise Exception('Invalid SelfLink Format')
+    return {
+        'name': response.group(3),
+        'zone': response.group(2),
+        'project': response.group(1)
+    }
+
+
+def delete_disk(disk, project, zone, name):
+    logging.info("Deleting Disk %s ", (name))
+    return disk.delete(project=project, zone=zone, disk=name).execute()
+
+
+def delete(project, zone, instance_name, disk_name):
+    try:
+        waited_time = DISK_RATE_LIMIT.wait()  # wait before starting the task
+        logging.info(f"  task: waited for {waited_time} secs")
+        compute = instance.get_compute()
+        image = machine_image.get(project, instance_name)
+        if (image):
+            logging.info("Found machine image can safely delete the disk %s"
+                % disk_name)
+            disks = compute.disks()
+            try:
+                disk = disks.get(project=project, zone=zone, disk=disk_name).execute()
+            except:
+                disk = None
+            if disk:
+                delete_operation = delete_disk(disks, project, zone, disk_name)
+                instance.wait_for_zonal_operation(compute, project, zone,
+                    delete_operation['name'])
+            return disk_name
+        else:
+            raise Exception(
+                "Can't delete the disk as machine image not found")
+    except Exception as ex:
+        logging.error(ex)
+        print(ex)
+        raise ex

Using a custom exception here for better clarity.

suchitpuri

comment created time in 2 months

Pull request review comment GoogleCloudPlatform/professional-services

VM Migrator to move VMs between GCP zones, regions & projects

+#!/usr/bin/env python
+# Copyright 2021 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+This file provides functionality related to migrating disks.
+"""
+
+import re
+import logging
+from . import instance
+from . import machine_image
+from ratemate import RateLimit
+
+DISK_RATE_LIMIT = RateLimit(max_count=2000, per=100)
+
+
+def parse_self_link(self_link):
+    if (self_link.startswith('projects')):
+        self_link = "/" + self_link
+    response = re.search(r"\/projects\/(.*?)\/zones\/(.*?)\/disks\/(.*?)$",
+                         self_link)
+    if (len(response.groups()) != 3):
+        raise Exception('Invalid SelfLink Format')
+    return {
+        'name': response.group(3),
+        'zone': response.group(2),
+        'project': response.group(1)
+    }
+
+
+def delete_disk(disk, project, zone, name):
+    logging.info("Deleting Disk %s ", (name))
+    return disk.delete(project=project, zone=zone, disk=name).execute()
+
+
+def delete(project, zone, instance_name, disk_name):
+    try:
+        waited_time = DISK_RATE_LIMIT.wait()  # wait before starting the task
+        logging.info(f"  task: waited for {waited_time} secs")
+        compute = instance.get_compute()
+        image = machine_image.get(project, instance_name)
+        if (image):
+            logging.info("Found machine image can safely delete the disk %s"
+                % disk_name)
+            disks = compute.disks()
+            try:
+                disk = disks.get(project=project, zone=zone, disk=disk_name).execute()
+            except:
+                disk = None
+            if disk:
+                delete_operation = delete_disk(disks, project, zone, disk_name)
+                instance.wait_for_zonal_operation(compute, project, zone,
+                    delete_operation['name'])
+            return disk_name
+        else:
+            raise Exception(
+                "Can't delete the disk as machine image not found")
+    except Exception as ex:
+        logging.error(ex)
+        print(ex)
+        raise ex

Correct, but because this is called from within a concurrent.futures.ThreadPoolExecutor, if one of the instances raises an error it will kill the entire program. Ideally we would want to log it, mark the executor as failed, and move ahead with the other executors.
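A generic sketch of the log-and-continue pattern described here (not the migrator's code; the task function and inputs are made up): each failure is caught around future.result(), so the remaining tasks keep running and are counted separately.

```python
import concurrent.futures
import logging

def risky_task(n):
    if n % 3 == 0:
        raise ValueError(f"simulated failure for item {n}")
    return n * n

def run_all(items):
    done, failed = 0, 0
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
        futures = [executor.submit(risky_task, i) for i in items]
        for future in concurrent.futures.as_completed(futures):
            try:
                future.result()        # re-raises whatever the task raised
                done += 1
            except Exception as exc:   # log, count as failed, move on
                logging.error("task failed: %s", exc)
                failed += 1
    return done, failed

print(run_all(range(10)))
```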

suchitpuri

comment created time in 2 months

Pull request review comment GoogleCloudPlatform/professional-services

VM Migrator to move VMs between GCP zones, regions & projects

+#!/usr/bin/env python+# Copyright 2021 Google Inc.+#+# Licensed under the Apache License, Version 2.0 (the "License");+# you may not use this file except in compliance with the License.+# You may obtain a copy of the License at+#+#     http://www.apache.org/licenses/LICENSE-2.0+#+# Unless required by applicable law or agreed to in writing, software+# distributed under the License is distributed on an "AS IS" BASIS,+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.+# See the License for the specific language governing permissions and+# limitations under the License.+"""+This file creates a machine image and them creates an instance+from the machine image in another subnet+"""++import argparse+import logging+import sys+import concurrent.futures+import time+from . import machine_image+from . import instance+from . import disk+from . import subnet+from . import zone_mapping+from . import fields+from csv import DictReader+from csv import DictWriter++def bulk_image_create(project,+                      source_zone,+                      source_subnet,+                      machine_image_region,+                      file_name='export.csv'):++    with open(file_name, 'r') as read_obj:+        csv_dict_reader = DictReader(read_obj)+        count = 0+        tracker = 0+        machine_image_future = []+        machine_image_name = ''+        # We can use a with statement to ensure threads are cleaned up promptly+        with concurrent.futures.ThreadPoolExecutor(max_workers=100) as executor:+            # Start the load operations and mark each future with its URL+            for row in csv_dict_reader:+                machine_image_future.append(+                    executor.submit(machine_image.create, project,+                                    machine_image_region, row['self_link'],+                                    row['name']))+                count = count + 1++            for future in concurrent.futures.as_completed(machine_image_future):+                try:+                    machine_image_name = future.result()+                    tracker = tracker + 1+                    logging.info('%r machine image created sucessfully' %+                                 (machine_image_name))+                    logging.info("Machine image %i out of %i completed" %+                                 (tracker, count))+                except Exception as exc:+                    logging.error(+                        'machine image creation generated an exception: %s' %+                        (exc))+++def bulk_delete_instances(file_name):+    with open(file_name, 'r') as read_obj:+        csv_dict_reader = DictReader(read_obj)+        count = 0+        tracker = 0+        instance_future = []+        instance_name = ''++        # We can use a with statement to ensure threads are cleaned up promptly+        with concurrent.futures.ThreadPoolExecutor(max_workers=100) as executor:+            # Start the load operations and mark each future with its URL+            for row in csv_dict_reader:+                parsed_link = instance.parse_self_link(row['self_link'])+                instance_future.append(+                    executor.submit(instance.delete, parsed_link['project'],+                                    parsed_link['zone'], row['name']))+                count = count + 1++            for future in concurrent.futures.as_completed(instance_future):+                try:+                    instance_name = future.result()+                    tracker = tracker + 1+      
              logging.info('%r machine deleted sucessfully' %+                                 (instance_name))+                    logging.info("%i out of %i deleted" % (tracker, count))+                except Exception as exc:+                    logging.error(+                        'machine deletion generated an exception: %s' % (exc))+++def bulk_delete_disks(file_name):+    with open(file_name, 'r') as read_obj:+        csv_dict_reader = DictReader(read_obj)+        count = 0+        tracker = 0+        disk_future = []+        disk_name = ''+        # We can use a with statement to ensure threads are cleaned up promptly+        with concurrent.futures.ThreadPoolExecutor(max_workers=100) as executor:+            # Start the load operations and mark each future with its URL+            for row in csv_dict_reader:+                parsed_link = instance.parse_self_link(row['self_link'])+                for i in range(9):+                    if (row['disk_name_' + str(i + 1)] != ''):+                        disk_future.append(+                            executor.submit(disk.delete,+                                            parsed_link['project'],+                                            parsed_link['zone'], row['name'],+                                            row['disk_name_' + str(i + 1)]))+                count = count + 1++            for future in concurrent.futures.as_completed(disk_future):+                try:+                    disk_name = future.result()+                    tracker = tracker + 1+                    logging.info('%r disk deleted sucessfully' %+                                 (disk_name))+                    logging.info("%i out of %i deleted" % (tracker, count))+                except Exception as exc:+                    logging.error(+                        'disk deletion generated an exception: %s' % (exc))+++def bulk_instance_shutdown(file_name):+    with open(file_name, 'r') as read_obj:+        csv_dict_reader = DictReader(read_obj)+        count = 0+        tracker = 0+        instance_future = []+        machine_name = ''+        # We can use a with statement to ensure threads are cleaned up promptly+        with concurrent.futures.ThreadPoolExecutor(max_workers=100) as executor:+            # Start the load operations and mark each future with its URL+            for row in csv_dict_reader:+                parsed_link = instance.parse_self_link(row['self_link'])+                instance_future.append(+                    executor.submit(instance.shutdown, parsed_link['project'],+                                    parsed_link['zone'],+                                    parsed_link['instance_id']))+                count = count + 1+            for future in concurrent.futures.as_completed(instance_future):+                try:+                    machine_name = future.result()+                    tracker = tracker + 1+                    logging.info('%r machine shutdown sucessfully' %+                                 (machine_name))+                    logging.info("%i out of %i shutdown" % (tracker, count))+                except Exception as exc:+                    logging.error(+                        'machine shutdown generated an exception: %s' % (exc))+++def bulk_instance_start(file_name):+    with open(file_name, 'r') as read_obj:+        csv_dict_reader = DictReader(read_obj)+        count = 0+        tracker = 0+        instance_future = []+        machine_name = ''+        # We can use a with statement to ensure threads are 
cleaned up promptly+        with concurrent.futures.ThreadPoolExecutor(max_workers=100) as executor:+            # Start the load operations and mark each future with its URL+            for row in csv_dict_reader:+                parsed_link = instance.parse_self_link(row['self_link'])+                instance_future.append(+                    executor.submit(instance.start, parsed_link['project'],+                                    parsed_link['zone'],+                                    parsed_link['instance_id']))+                count = count + 1++            for future in concurrent.futures.as_completed(instance_future):+                try:+                    machine_name = future.result()+                    tracker = tracker + 1+                    logging.info('%r machine started sucessfully' %+                                 (machine_name))+                    logging.info("%i out of %i started up" % (tracker, count))+                except Exception as exc:+                    logging.error(+                        'machine strating generated an exception: %s' % (exc))+++def bulk_create_instances(file_name, target_subnet, retain_ip):+    with open(file_name, 'r') as read_obj:+        csv_dict_reader = DictReader(read_obj)+        count = 0+        tracker = 0+        instance_future = []+        machine_name = ''+        # We can use a with statement to ensure threads are cleaned up promptly+        with concurrent.futures.ThreadPoolExecutor(max_workers=100) as executor:+            # Start the load operations and mark each future with its URL+            for row in csv_dict_reader:+                ip = None+                if (retain_ip):+                    ip = row['internal_ip']++                parsed_link = instance.parse_self_link(row['self_link'])+                alias_ip_ranges = []+                # Re create the alias ip object from CSV if any+                # This support upto 4 ip ranges but they can be easily extended+                for i in range(4):+                    alias_range = {}+                    if (row['range_name_' + str(i + 1)] != ''):+                        alias_range['subnetworkRangeName'] = row['range_name_' ++                                                                 str(i + 1)]+                    if (row['alias_ip_name_' + str(i + 1)]):+                        alias_range['aliasIpName'] = row['alias_ip_name_' ++                                                         str(i + 1)]+                    if (row['alias_ip_' + str(i + 1)]):+                        alias_range['ipCidrRange'] = row['alias_ip_' ++                                                         str(i + 1)]+                        alias_ip_ranges.append(alias_range)+                # This supports up to 4 disks+                disk_names = {}+                for i in range(9):+                    if (row['device_name_' + str(i + 1)] != ''):+                        disk_names[row['device_name_' + str(i + 1)]] = row['disk_name_' ++                                                                      str(i + 1)]++                node_group = None+                if (row['node_group'] and row['node_group'] != ''):+                    node_group = row['node_group']++                target_zone = zone_mapping.FIND[parsed_link['zone']]+                instance_future.append(+                    executor.submit(instance.create, parsed_link['project'],+                                    target_zone, row['network'], target_subnet,+                                    
row['name'], alias_ip_ranges, node_group,+                                    disk_names, ip, row['machine_type']))+                count = count + 1++            for future in concurrent.futures.as_completed(instance_future):+                try:+                    machine_name = future.result()+                    tracker = tracker + 1+                    logging.info('%r machine created sucessfully' %+                                 (machine_name))+                    logging.info("%i out of %i created " % (tracker, count))+                except Exception as exc:+                    logging.error(+                        'machine creation generated an exception: %s' % (exc))+++def query_yes_no(question, default="yes"):+    """Ask a yes/no question via raw_input() and return their answer.++    "question" is a string that is presented to the user.+    "default" is the presumed answer if the user just hits <Enter>.+        It must be "yes" (the default), "no" or None (meaning+        an answer is required of the user).++    The "answer" return value is True for "yes" or False for "no".+    """+    valid = {"yes": True, "y": True, "ye": True, "no": False, "n": False}+    if default is None:+        prompt = " [y/n] "+    elif default == "yes":+        prompt = " [Y/n] "+    elif default == "no":+        prompt = " [y/N] "+    else:+        raise ValueError("invalid default answer: '%s'" % default)++    while True:+        sys.stdout.write(question + prompt)+        choice = input().lower()+        if default is not None and choice == '':+            return valid[default]+        if choice in valid:+            return valid[choice]+        else:+            sys.stdout.write("Please respond with 'yes' or 'no' "+                             "(or 'y' or 'n').\n")+++def filter_records(source_file, filter_file, destination_file):+    machine_names_to_filter = []+    with open(filter_file, 'r') as read_obj:+        csv_dict_reader = DictReader(read_obj)+        for row in csv_dict_reader:+            machine_names_to_filter.append(row['name'])++    filtered = []+    headers = fields.HEADERS++    with open(source_file, 'r') as csvfile:+        csv_dict_reader = DictReader(csvfile)+        for row in csv_dict_reader:+            if (row['name'] in machine_names_to_filter):+                filtered.append(row)++    overrtie_file = query_yes_no(+        "About to overrite %s with %i records, please confirm to continue" %+        (destination_file, len(filtered)),+        default='no')++    if (overrtie_file):+        with open(destination_file, 'w') as csvfile:+            writer = DictWriter(csvfile, fieldnames=headers)+            writer.writeheader()+            writer.writerows(filtered)+    return overrtie_file+++def release_ips_from_file(file_name):+    with open(file_name, 'r') as read_obj:+        csv_dict_reader = DictReader(read_obj)+        for row in csv_dict_reader:+            parsed_link = instance.parse_self_link(row['self_link'])+            region = instance.get_region_from_zone(parsed_link['zone'])+            project = parsed_link['project']+            ips = []+            # The first ip for a machine is reserved with the same name as the VM+            ips.append(row['name'])+            # Find the reserved alias ips+            for i in range(4):+                ip_name = row.get('alias_ip_name_' + str(i + 1))+                if (ip_name != '' and row['range_name_' + str(i + 1)] == ""):+                    ips.append(ip_name)+            subnet.release_specific_ips(project, 
region, ips)+            time.sleep(2) # Prevent making too many requests in loop+++# main function+def main(project,+         source_zone,+         source_zone_2,+         source_zone_3,+         source_subnet,+         machine_image_region,+         target_subnet,+         target_region,+         source_region,+         subnet_name,+         step,+         log,+         file_name='export.csv',+         filter_file_name='filter.csv'):+    """+    The main method to trigger the VM migration+    """+    numeric_level = getattr(logging, log.upper(), None)+    if not isinstance(numeric_level, int):+        raise ValueError('Invalid log level: %s' % log)+    logging.basicConfig(filename='migrator.log',+                        format='%(asctime)s  %(levelname)s %(message)s',+                        level=numeric_level)++    logging.info("executing step %s" % (step))+    if (step == 'prepare_inventory'):+        logging.info("Preparing the inventory to be exported")+        subnet.export_instances(project, source_zone, source_zone_2,+                                source_zone_3, source_subnet, "source.csv")+    if (step == 'filter_inventory'):+        logging.info("Preparing the inventory to be exported")+        subnet.export_instances(project, source_zone, source_zone_2,+                                source_zone_3, source_subnet, "source.csv")+        logging.info("filtering out the inventory")+        overrite_file = filter_records("source.csv", filter_file_name,

fixed

suchitpuri

comment created time in 2 months

Pull request review comment GoogleCloudPlatform/professional-services

VM Migrator to move VMs between GCP zones, regions & projects

+#!/usr/bin/env python
+# Copyright 2021 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License..
+"""
+This file is used to create instance from a machine image.
+"""
+
+import time
+import re
+import googleapiclient.discovery
+import logging
+from . import node_group_mapping
+from . import machine_type_mapping
+from . import machine_image
+from ratemate import RateLimit
+
+rate_limit = RateLimit(max_count=2000, per=100)

done

suchitpuri

comment created time in 2 months

Pull request review comment GoogleCloudPlatform/professional-services

VM Migrator to move VMs between GCP zones, regions & projects

+2021-01-06 20:47:27,629  INFO executing step prepare_inventory

Removing the log file; it will be automatically generated for the users.

suchitpuri

comment created time in 2 months

Pull request review comment GoogleCloudPlatform/professional-services

VM Migrator to move VMs between GCP zones, regions & projects

+# The default ``config.py``

moved requirements.txt and pylintrc to the same directory as setup.py

suchitpuri

comment created time in 2 months

Pull request review comment GoogleCloudPlatform/professional-services

VM Migrator to move VMs between GCP zones, regions & projects

+# The default ``config.py``

Makes sense, I checked it in by mistake; removing it.

suchitpuri

comment created time in 2 months