Feilong Wang (openstacker) · Catalyst Cloud, New Zealand · https://openstacker.github.io · OpenStack and Kubernetes ecosystem contributor

catalyst-cloud/magnum 2

Container Infrastructure Management Service for OpenStack

catalyst-cloud/requirements 0

Global requirements for OpenStack

openstacker/AdminLTE 0

AdminLTE - a free premium admin control panel theme based on Bootstrap 3.x

openstacker/airnotifier 0

Easy-to-use push notifications for iOS, Android and Windows

openstacker/ansible 0

Ansible is a radically simple IT automation platform that makes your applications and systems easier to deploy. Avoid writing scripts or custom code to deploy and update your applications — automate in a language that approaches plain English, using SSH, with no agents to install on remote systems. https://docs.ansible.com/ansible/
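As a concrete illustration of the "approaching plain English" claim in the description above, a minimal playbook sketch (the host group `web` is a hypothetical inventory entry, not part of the original text):

```yaml
# Minimal illustrative playbook: ensure nginx is installed and running
# on every host in the (assumed) "web" inventory group, over SSH, agentless.
- name: Ensure nginx is installed and running
  hosts: web
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Start and enable nginx
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```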

openstacker/atomic-system-containers 0

Collection of system container images

openstacker/autoscaler 0

Autoscaling components for Kubernetes

openstacker/autoscaling-testing 0

Testing templates for autoscaling and Ceilometer

pull request comment kubernetes-sigs/external-dns

Fix Designate doc

/assign @hjacobs

openstacker

comment created time in 5 days

push event openstacker/external-dns

Feilong Wang

commit sha cf26b450a0bf38c9015ffa96c5e5257935e550f6

Fix Designate doc


push time in 5 days

PR opened kubernetes-sigs/external-dns

Fix Designate doc

Checklist

  • [ ] Update changelog in CHANGELOG.md, use section "Unreleased".
+1 -0

0 comments

1 changed file

pr created time in 5 days

create branch openstacker/external-dns

branch : add-serviceAccountName

created branch time in 5 days

fork openstacker/external-dns

Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services

fork in 6 days
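The forked project described above reconciles Kubernetes Services and Ingresses into DNS records. A toy sketch of the core idea (the `external-dns.alpha.kubernetes.io/hostname` annotation is external-dns's documented convention; the record-building logic here is a simplified illustration, not the project's actual controller code):

```python
# Toy reconciliation: derive desired DNS A records from annotated Services.
HOSTNAME_ANNOTATION = "external-dns.alpha.kubernetes.io/hostname"

def desired_records(services):
    """Map each annotated Service's hostname to its external IP.

    Services without the hostname annotation (or without an external IP)
    are skipped, mirroring how only annotated resources get DNS records.
    """
    records = {}
    for svc in services:
        hostname = svc.get("annotations", {}).get(HOSTNAME_ANNOTATION)
        ip = svc.get("external_ip")
        if hostname and ip:
            records[hostname.rstrip(".")] = ip
    return records

services = [
    {"name": "web", "external_ip": "203.0.113.10",
     "annotations": {HOSTNAME_ANNOTATION: "app.example.com."}},
    {"name": "internal", "external_ip": "203.0.113.11", "annotations": {}},
]
print(desired_records(services))  # {'app.example.com': '203.0.113.10'}
```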

pull request comment kubernetes/autoscaler

Support Magnum node groups

@tghartland Could this work with k8s v1.16.x?

tghartland

comment created time in 13 days

started shakthi-divyaa/books-1

started time in 21 days

PR opened cncf/k8s-conformance

Add Catalyst Cloud Kubernetes Service v1.17 conformance results

Pre-submission checklist:

Please check each of these after submitting your pull request:

  • [x] If this is a new entry, have you submitted a signed participation form?
  • [x] Did you include the product/project logo in SVG, EPS or AI format?
  • [x] Does your logo clearly state the name of the product/project and follow the other logo guidelines?
  • [x] Did you copy and paste the installation and configuration instructions into the README.md file in addition to linking to them?
+26713 -0

0 comments

4 changed files

pr created time in 25 days

create branch openstacker/k8s-conformance

branch : catalyst-cloud-v1.17

created branch time in 25 days

PR opened cncf/k8s-conformance

Add Catalyst Cloud Kubernetes Service v1.18 conformance results

Pre-submission checklist:

Please check each of these after submitting your pull request:

  • [x] If this is a new entry, have you submitted a signed participation form?
  • [x] Did you include the product/project logo in SVG, EPS or AI format?
  • [x] Does your logo clearly state the name of the product/project and follow the other logo guidelines?
  • [x] If your product/project is open source, did you include the repo_url?
  • [x] Did you copy and paste the installation and configuration instructions into the README.md file in addition to linking to them?
+26186 -0

0 comments

4 changed files

pr created time in 25 days

create branch openstacker/k8s-conformance

branch : catalyst-cloud-v1.18

created branch time in 25 days

pull request comment catalyst-cloud/catalystcloud-docs

WIP: added release note for release_20200610

/lgtm Thanks.

chelios

comment created time in a month

Pull request review comment catalyst-cloud/catalystcloud-docs

changes to add list of permissions to roles

(Quoted diff: an RST "More information" collapsible section followed by a `code-block:: console` grid table with columns "Role" and "Permissions". The Project Member row lists the full permission sets for the alarm, compute, image, network, load balancer, volume, and orchestration services; shorter rows follow for Authentication Only, Project Administrator, Project Moderator, and Compute Start/Stop; and the Heat Stack Owner row reads `openstack.Orchestration.*`.)

`openstack.Orchestration.*` is a bit confusing to me. Do you mean the user can perform any orchestration action? It's not consistent with the permission format above.

danielobyrne

comment created time in a month
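The `openstack.Orchestration.*` entry questioned in the review above presumably carries glob-style wildcard semantics. A minimal sketch of how such a wildcard rule could match concrete actions (the rule and action names come from the quoted table; the matching logic is an assumption for illustration, not Keystone's actual policy engine):

```python
from fnmatch import fnmatch

def permission_allows(granted_rules, action):
    """Return True if any granted rule (possibly a glob like
    'openstack.Orchestration.*') matches the requested action string."""
    return any(fnmatch(action, rule) for rule in granted_rules)

# A role granted the wildcard rule matches every orchestration action...
heat_stack_owner = ["openstack.Orchestration.*"]
print(permission_allows(heat_stack_owner, "openstack.Orchestration.stacks.create"))  # True
# ...but nothing outside that namespace.
print(permission_allows(heat_stack_owner, "openstack.compute.start"))  # False
```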

Pull request review comment catalyst-cloud/catalystcloud-docs

changes to add list of permissions to roles

(Quoted diff: the opening of the same RST permissions table, ending at the first row: "| Project Member | ALARM SERVICE |".)

"Project Member" reads like a display name or description of the role. We should use the actual role name from Keystone (as you did for the k8s roles).

danielobyrne

comment created time in a month

Pull request review comment catalyst-cloud/catalystcloud-docs

changes to add list of permissions to roles

(Quoted diff: a repeat of the RST permissions table excerpt quoted in the earlier review comments.)
         |+   |                       | openstack.port.create_port:allowed_address_pairs                       |+   |                       | openstack.port.get_port                                                |+   |                       | openstack.port.update_port                                             |+   |                       | openstack.port.update_port:device_owner                                |+   |                       | openstack.port.update_port:fixed_ips                                   |+   |                       | openstack.port.update_port:port_security_enabled                       |+   |                       | openstack.port.update_port:mac_learning_enabled                        |+   |                       | openstack.port.update_port:allowed_address_pairs                       |+   |                       | openstack.port.delete_port                                             |+   |                       | openstack.router.create_router                                         |+   |                       | openstack.router.get_router                                            |+   |                       | openstack.router.delete_router                                         |+   |                       | openstack.router.add_router_interface                                  |+   |                       | openstack.router.remove_router_interface                               |+   |                       | firewall.create_firewall                                               |+   |                       | firewall.get_firewall                                                  |+   |                       | firewall.update_firewall                                               |+   |                       | firewall.delete_firewall                                               |+   |                       | firewall.create_firewall_policy                                        |+   |                       | 
firewall.get_firewall_policy                                           |+   |                       | firewall.create_firewall_policy:shared                                 |+   |                       | firewall.update_firewall_policy                                        |+   |                       | firewall.delete_firewall_policy                                        |+   |                       | firewall.create_firewall_rule                                          |+   |                       | firewall.get_firewall_rule                                             |+   |                       | firewall.update_firewall_rule                                          |+   |                       | firewall.delete_firewall_rule                                          |+   |                       | openstack.floatingip.create_floating_ip                                |+   |                       | openstack.floatingip.update_floating_ip                                |+   |                       | openstack.floatingip.delete_floating_ip                                |+   |                       | openstack.floatingip.get_floating_ip                                   |+   |                       |                                                                        |+   |                       | LOAD BALANCER SERVICE                                                  |+   |                       | openstack.loadbalancer.read                                            |+   |                       | openstack.loadbalancer.write                                           |+   |                       | openstack.loadbalancer.read-quota                                      |+   |                       | openstack.loadbalancer.healthmonitor.get_all                           |+   |                       | openstack.loadbalancer.healthmonitor.post                              |+   |                       | openstack.loadbalancer.healthmonitor.get_one                  
         |+   |                       | openstack.loadbalancer.healthmonitor.put                               |+   |                       | openstack.loadbalancer.healthmonitor.delete                            |+   |                       | openstack.loadbalancer.policy.*                                        |+   |                       | openstack.loadbalancer.rule.*                                          |+   |                       | openstack.loadbalancer.loadbalancer.*                                  |+   |                       | openstack.loadbalancer.pool.*                                          |+   |                       |                                                                        |+   |                       | VOLUME SERVICE                                                         |+   |                       | openstack.volume.create                                                |+   |                       | openstack.volume.delete                                                |+   |                       | openstack.volume.get                                                   |+   |                       | openstack.volume.get_all                                               |+   |                       | openstack.volume.get_volume_metadata                                   |+   |                       | openstack.volume.get_snapshot                                          |+   |                       | openstack.volume.get_all_snapshots                                     |+   |                       | openstack.volume.create_snapshot                                       |+   |                       | openstack.volume.delete_snapshot                                       |+   |                       | openstack.volume.update_snapshot                                       |+   |                       | openstack.volume.extend                                                |+   |                       | 
openstack.volume.update                                                |+   |                       | openstack.volume_extension.volume_type_access                          |+   |                       | openstack.volume_extension.encryption_metadata                         |+   |                       | openstack.volume_extension.snapshot_attributes                         |+   |                       | openstack.volume_extension.volume_image_metadata                       |+   |                       | openstack.volume_extension.quota.show                                  |+   |                       | openstack.volume_extension.volume_tenant_attribute                     |+   |                       | openstack.volume.create_transfer                                       |+   |                       | openstack.volume.accept_transfer                                       |+   |                       | openstack.volume.delete_transfer                                       |+   |                       | openstack.volume.get_all_transfers                                     |+   |                       | openstack.backup.create                                                |+   |                       | openstack.backup.delete                                                |+   |                       | openstack.backup.get                                                   |+   |                       | openstack.backup.get_all                                               |+   |                       | openstack.backup.restore                                               |+   |                       | openstack.snapshot_extension.snapshot_actions.update_snapshot_status   |+   |                       |                                                                        |+   |                       | ORCHESTRATION SERVICE                                                  |+   |                       | openstack.stacks.lookup                                       
         |+   +-----------------------+------------------------------------------------------------------------++   | Authentication Only   | openstack.keypair.create                                               |+   |                       | openstack.quota.show                                                   |+   +-----------------------+------------------------------------------------------------------------++   | Project Administrator | openstack.volume.get                                                   |+   |                       | openstack.volume.initialize_connection                                 |+   |                       | keystone.identity.project_users_access                                 |+   +-----------------------+------------------------------------------------------------------------++   | Project Moderator     | keystone.identity.project_users_access                                 |+   +-----------------------+------------------------------------------------------------------------++   | Compute Start/Stop    | openstack.compute.start                                                |+   |                       | openstack.compute.stop                                                 |+   |                       | openstack.compute.shelve                                               |+   |                       | openstack.compute.unshelve                                             |+   +-----------------------+------------------------------------------------------------------------++   | Heat Stack Owner      | openstack.Orchestration.*                                              |+   +-----------------------+------------------------------------------------------------------------++   | Object Storage        | Swift.*                                                                |

Ditto

danielobyrne

comment created time in a month

Pull request review comment catalyst-cloud/catalystcloud-docs

changes to add list of permissions to roles

(Quoted hunk, flattened in extraction — it adds a collapsible "More information" section listing the exact permissions each Kubernetes role grants:)

   +----------------------+-----------------------------------------------------+
   | Role                 | Permissions                                         |
   +======================+=====================================================+
   | k8s_admin            | container.*                                         |
   |                      | resourcemanager.projects.get                        |
   |                      | resourcemanager.projects.list                       |
   +----------------------+-----------------------------------------------------+
   | cluster_admin        | container.clusters.create                           |
   +----------------------+-----------------------------------------------------+

I'd suggest removing this cluster_admin role because it's not used at this moment.

danielobyrne

comment created time in a month

Pull request review comment catalyst-cloud/catalystcloud-docs

changes to add list of permissions to roles

(Quoted hunk, as earlier in this thread: the table row granting the k8s_admin role the container.*, resourcemanager.projects.get and resourcemanager.projects.list permissions.)

@danielobyrne May I know where you got the source used to generate this table? It doesn't match what we have; see https://github.com/openstack/magnum/blob/master/etc/magnum/keystone_auth_default_policy.sample

danielobyrne

comment created time in a month

started kubernetes-sigs/external-dns

started time in a month

started heytrav/drs-api

started time in a month

create branch openstacker/catalystcloud-docs

branch : v1.18.x

created branch time in a month


push event openstacker/catalystcloud-docs

Feilong Wang

commit sha 79e959ee4d86126fece8430abc516654066aa482

Release k8s v1.18.2

view details

push time in a month

create branch openstacker/catalystcloud-docs

branch : release-v1.18.x

created branch time in a month

Pull request review comment catalyst-cloud/catalystcloud-docs

Add k8s versions

(Quoted hunk from the patch, cleaned up:)

.. _kubernetes-versions:

########
Versions
########

*******************
Kubernetes Versions
*******************

The Kubernetes community releases a minor version about every three months. Those minor releases contain new features and bug fixes. Patch versions are released more frequently (e.g. weekly) and generally include critical fixes, e.g. security fixes.

Catalyst Cloud Kubernetes Service supports each ``minor`` version for at least ``6`` months before deprecating it, to give users enough time to upgrade their clusters.

What is a Kubernetes Version
============================

Kubernetes follows the standard `Semantic Versioning`_ terminology. Versions are expressed as x.y.z, where x is the major version, y is the minor version and z is the patch version.

+---------------+-------------------------------------------------------------------+
| Version Part  | Description                                                       |
+===============+===================================================================+
| MAJOR         | versions that may make incompatible API changes                   |
+---------------+-------------------------------------------------------------------+
| MINOR         | versions that add functionality in a backwards compatible manner  |
+---------------+-------------------------------------------------------------------+
| PATCH         | versions that make backwards compatible bug fixes                 |
+---------------+-------------------------------------------------------------------+

For example:

.. code-block:: bash

  [major].[minor].[patch]

  v1.18.2
  v1.17.5
  v1.16.9

Catalyst Cloud Kubernetes Service uses cluster templates to manage each Kubernetes version and the matrix of addons running on top of the cluster, so users can read the Kubernetes version from the cluster template name. For example: *kubernetes-v1.16.9-prod-20200602*

For more information, see `Kubernetes Release Versioning`_.

.. _`Semantic Versioning`: http://semver.org/
.. _`Kubernetes Release Versioning`: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#kubernetes-release-versioning

Kubernetes Versions Support Policy
==================================

Catalyst Cloud Kubernetes Service supports at least ``3`` minor versions. When a new minor version is released, Catalyst Cloud Kubernetes Service will try to get it certified (pass the CNCF conformance test) and released within ``30`` days, and then deprecate the oldest minor version. If the current 3 minor versions are v1.17.x, v1.16.x and v1.15.x, then once v1.18.x is released, the v1.15.x versions (both patch versions) will be removed and out of support. Here, out of support means that when users ask for support, they will be asked to upgrade their clusters to supported versions first.

Catalyst Cloud Kubernetes Service supports the latest ``2`` stable patch versions of each minor version. When a new patch version is released, the oldest patch version will be removed and out of support. For example, if the currently supported versions for v1.16.x are v1.16.9 and v1.16.8, then v1.16.8 will be removed in favour of the release of v1.16.10.

.. code-block:: bash

    v1.18.2, v1.18.1, v1.17.5, v1.17.4, v1.16.9, v1.16.8

Users should always aim to run the latest patch version of each minor version to get the latest security enhancements. For example, if the current Kubernetes cluster is running v1.16.9 and a new patch version v1.16.10 is released, it is highly recommended to upgrade to v1.16.10 as soon as possible.

When a new patch version is released, users have ``30`` days to upgrade to a supported patch version if the cluster's current version is deprecated by the new release.

.. note::

    If the cluster is running a version which has been deprecated, then the cluster is out of support.

.. note::

    Catalyst Cloud reserves the right to add or remove a new or existing cluster template without further notice if a critical issue is identified in the version.

****************************
Upgrading Kubernetes version
****************************

When upgrading the Kubernetes version, minor versions cannot be skipped. For example, if the current cluster version is v1.16.x, it is not allowed to upgrade directly to v1.18.x; you have to upgrade to v1.17.x first and then do another upgrade to v1.18.x.

We're not waiting. I just need time to get it done :)

openstacker

comment created time in a month
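The ``[major].[minor].[patch]`` scheme quoted in the review above can be sketched in plain POSIX shell; ``parse_version`` is a hypothetical helper, not part of any Catalyst Cloud or Kubernetes tooling:

```shell
# Split a Kubernetes version string such as "v1.18.2" into its
# major, minor and patch components using POSIX parameter expansion.
parse_version() {
  v="${1#v}"            # drop the leading "v"          -> 1.18.2
  major="${v%%.*}"      # text before the first dot     -> 1
  rest="${v#*.}"        # text after the first dot      -> 18.2
  minor="${rest%%.*}"   #                               -> 18
  patch="${rest#*.}"    #                               -> 2
  echo "major=$major minor=$minor patch=$patch"
}

parse_version v1.18.2   # -> major=1 minor=18 patch=2
```

The same expansions work for any of the versions listed in the quoted policy (v1.17.5, v1.16.9, and so on).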

pull request comment catalyst-cloud/catalystcloud-docs

Add k8s versions

This is pretty clear, and it makes the versioning policy and timeframes very understandable.

It does not offer very much info regarding how users should make sure they stay up to date (it really only says that they must do so), so that is maybe something that can be added. But, what is there now does a good job of explaining the approach from the cloud's side.

I think the timeframes are reasonable assuming there is some way that users are notified when their cluster versions are going stale.

Did you see the latest changes I pushed? After discussing with @teolupus, we decided that a patch version will remain supported for as long as its minor version is under support. The purpose of hiding a patch version is mainly to encourage users to upgrade to a newer patch or minor version.

openstacker

comment created time in a month
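The roll-up behaviour discussed here — keeping only the newest patch of each supported minor series visible — can be sketched with standard shell tools. ``latest_patches`` is a hypothetical helper for illustration only, not how Magnum or Catalyst Cloud actually selects cluster templates:

```shell
# From a list of supported versions, keep the newest patch of each
# minor series, e.g. v1.16.9 and v1.16.8 collapse to v1.16.9.
latest_patches() {
  printf '%s\n' "$@" |
    sort -t. -k2,2n -k3,3nr |    # minor ascending, patch descending
    awk -F. '!seen[$1"."$2]++'   # keep the first (newest) per minor
}

latest_patches v1.18.2 v1.18.1 v1.17.5 v1.17.4 v1.16.9 v1.16.8
# -> v1.16.9, v1.17.5, v1.18.2 (one per line)
```

Sorting patch numbers in reverse within each minor series means the `awk` dedup always sees the newest patch first.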

Pull request review comment catalyst-cloud/catalystcloud-docs

Add k8s versions

(Quoted hunk from the "Add k8s versions" patch, flattened in extraction — the same Kubernetes versions and support-policy section quoted earlier in this thread, up to the 30-day upgrade window for deprecated patch versions.)

Good point, I will think about how to leverage the cluster page to show some useful information.

openstacker

comment created time in a month

push event openstacker/catalystcloud-docs

Feilong Wang

commit sha c246d02362d343b3ae024730b40f59177e17fb4c

Add k8s versions

view details

push time in a month

push event openstacker/catalystcloud-docs

Feilong Wang

commit sha b1ecc5d1cb0a785444335a7a872e1fe77239ab61

Add k8s versions

view details

push time in a month

Pull request review comment catalyst-cloud/catalystcloud-docs

Add k8s versions

(Quoted hunk from the "Add k8s versions" patch, flattened in extraction — the opening section, up to the statement that each minor version is supported for at least 6 months before deprecation.)

I guess the only time the 30-day period might be too short a time frame is where we introduce a breaking change that does not support the rolling upgrade functionality, e.g. the recent change of OS for the cluster nodes. In a case like this we may want to consider a slightly longer window, perhaps?

With regard to including every patch version: I like your idea of going with the stable patch version, and I feel that if we rolled them up every 3 months, the customer could retain the version they are on for around 6 months before needing to upgrade in order to stay on a supported version.

The Fedora Atomic -> Fedora CoreOS migration is a special case; it won't happen again after GA.

openstacker

comment created time in a month

Pull request review comment catalyst-cloud/catalystcloud-docs

Add k8s versions

(Quoted hunk from the "Add k8s versions" patch, flattened in extraction — the same section quoted earlier in this thread, ending with the "Upgrading Kubernetes version" heading and the rule that minor versions cannot be skipped, e.g. v1.16.x must go through v1.17.x before v1.18.x.)

@evhan We don't enforce it now; the upstream team is working on a feature to support this soon. We would like users to follow this rule until we have a mechanism to enforce it, though technically a skip-version upgrade should work from the Magnum perspective. The main concern is that Kubernetes API compatibility may break between minor versions.

openstacker

comment created time in a month
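The no-skipping rule discussed in this comment can be illustrated with a small shell check; ``can_upgrade`` is a hypothetical helper and not part of Magnum's actual upgrade validation:

```shell
# Allow an upgrade only within the same minor series or to the next
# one: v1.16.x -> v1.17.x is fine, v1.16.x -> v1.18.x is rejected.
can_upgrade() {
  cur="${1#v1.}"; cur="${cur%%.*}"   # current minor, e.g. 16
  new="${2#v1.}"; new="${new%%.*}"   # target minor,  e.g. 18
  delta=$((new - cur))
  [ "$delta" -ge 0 ] && [ "$delta" -le 1 ]
}

can_upgrade v1.16.9 v1.17.5 && echo allowed || echo denied   # -> allowed
can_upgrade v1.16.9 v1.18.2 && echo allowed || echo denied   # -> denied
```

A real implementation would also have to compare major versions and patch levels; this sketch only encodes the minor-version constraint described above.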

pull request comment gophercloud/gophercloud

Support `merge_labels` when creating Magnum cluster

@openstacker The OpenLab failure is ignorable - it was a temporary failure. This looks good to me - is this ready for review/merge?

Yep, it's ready for review. I tested it locally. Thanks.

openstacker

comment created time in a month

started ovh/cds

started time in a month

Pull request review comment catalyst-cloud/catalystcloud-docs

Add k8s versions

(Quoted hunk from the "Add k8s versions" patch, flattened in extraction — the same Kubernetes versions and support-policy section quoted earlier in this thread.)

Good questions, @evhan

Before having notification support on Horizon, we may have to use the release notes and this versions page to indicate the currently supported versions.

Whenever a new version template is released or deprecated, it will be published on the release notes page. Meanwhile, a table on this versions page will show all the currently supported versions, along with a warning note when a version is about to be deprecated. Does that make sense to you?
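To make the "3 minor versions, 2 patch versions each" policy concrete, here is a small sketch (Python; the function names and the published-version list are illustrative, not Catalyst Cloud code):

```python
def parse_version(v):
    """Parse a version string like 'v1.16.9' into (major, minor, patch)."""
    major, minor, patch = v.lstrip("v").split(".")
    return (int(major), int(minor), int(patch))

def supported_versions(published, minors=3, patches=2):
    """Keep the newest `patches` patch releases of the newest `minors`
    minor streams, mirroring the support policy described above."""
    parsed = sorted((parse_version(v) for v in published), reverse=True)
    kept, streams = [], {}
    for ver in parsed:
        stream = ver[:2]  # the (major, minor) pair identifies a minor stream
        if stream not in streams and len(streams) == minors:
            continue  # an older minor stream: out of support
        streams.setdefault(stream, 0)
        if streams[stream] < patches:
            streams[stream] += 1
            kept.append("v%d.%d.%d" % ver)
    return kept

print(supported_versions(
    ["v1.18.2", "v1.18.1", "v1.17.5", "v1.17.4", "v1.17.3",
     "v1.16.9", "v1.16.8", "v1.15.12"]))
```

With the sample list above, v1.17.3 drops out (a third patch of a supported stream) and v1.15.12 drops out (a fourth minor stream), matching the version list in the proposed doc.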

openstacker

comment created time in a month

PR opened gophercloud/gophercloud

Support `merge_labels` when creating Magnum cluster

For #1984

Links to the line numbers/files in the OpenStack source code that support the code in this PR:

The feature was introduced in OpenStack Magnum by https://review.opendev.org/#/c/720221/

+2 -0

0 comment

2 changed files

pr created time in a month

create branch openstacker/gophercloud

branch : support-merge-labels

created branch time in a month

issue opened gophercloud/gophercloud

ContainerInfra v1: Support merge-labels when creating cluster

Magnum now supports a new parameter named merge-labels when creating a new cluster. The default value is False; when it's set to True, the labels passed in by the user are automatically merged with the labels defined in the cluster template. It's quite a useful feature that improves the UX a lot.
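The intended semantics can be sketched as follows (Python; `effective_labels` and its arguments are illustrative names, not Magnum's implementation):

```python
def effective_labels(template_labels, user_labels, merge_labels=False):
    """Sketch of the merge-labels behaviour described above."""
    if not merge_labels:
        # Old behaviour: user labels replace the template labels entirely.
        return dict(user_labels)
    # New behaviour: start from the template and let user labels override.
    merged = dict(template_labels)
    merged.update(user_labels)
    return merged

template = {"kube_tag": "v1.16.9", "auto_healing_enabled": "true"}
user = {"auto_scaling_enabled": "true"}

print(effective_labels(template, user))                     # replace semantics
print(effective_labels(template, user, merge_labels=True))  # merge semantics
```

With the flag off, the user would silently lose `kube_tag` and `auto_healing_enabled`; with it on, they only override what they pass in, which is why it improves the UX.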

created time in a month

Pull request review comment catalyst-cloud/catalystcloud-docs

Add k8s versions

+.. _kubernetes-versions:
+
+########
+Versions
+########
+
+*******************
+Kubernetes Versions
+*******************
+
+The Kubernetes community releases a minor version about every three months. Those
+minor version releases contain new features and bug fixes. Patch versions
+are released more frequently (e.g. weekly) and generally include critical
+fixes, e.g. security fixes.
+
+Catalyst Cloud Kubernetes Service supports each ``minor`` version for at least
+``6`` months before deprecating it, to give users enough time to upgrade their

There are some questions we need to discuss since I don't have a good answer yet:

  1. Are we going to provide every patch version? Or could we start from a patch we believe is stable for that minor version?
  2. Personally, I'd like to support each minor version for 1 year, and at any time support two patch versions, as proposed below.

However, assuming we release a new patch version per month, e.g. v1.16.8, then after about 2 months there could be a new patch version v1.16.10, and we're going to deprecate/remove the v1.16.8 patch version, giving users 30 days to upgrade. In that case, users would have to upgrade their clusters to a new patch version roughly every 3 months. Do you think that's acceptable?
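The 30-day window in the scenario above amounts to a simple date calculation (a sketch; the release date used is assumed purely for illustration):

```python
from datetime import date, timedelta

UPGRADE_WINDOW = timedelta(days=30)

def upgrade_deadline(deprecating_release_date):
    """Users must upgrade within 30 days of the release that
    deprecates their current patch version."""
    return deprecating_release_date + UPGRADE_WINDOW

# If v1.16.10 (which deprecates v1.16.8) shipped on 2020-06-01, clusters
# still on v1.16.8 would fall out of support after:
print(upgrade_deadline(date(2020, 6, 1)))  # 2020-07-01
```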

openstacker

comment created time in 2 months

Pull request review comment catalyst-cloud/catalystcloud-docs

Add k8s versions

+.. _kubernetes-versions:
+
+########
+Versions
+########
+
+*******************
+Kubernetes Versions
+*******************
+
+The Kubernetes community releases a minor version about every three months. Those
+minor version releases contain new features and bug fixes. Patch versions
+are released more frequently (e.g. weekly) and generally include critical
+fixes, e.g. security fixes.
+
+Catalyst Cloud Kubernetes Service supports each ``minor`` version for at least
+``6`` months before deprecating it, to give users enough time to upgrade their

I'd like to get your comments about the version support policies, because I'm finding this is not an easy task. Generally there are about 10 (more or less) patch versions for each minor version, published every 3-4 weeks, so the whole lifetime of one minor version may be about a year. And I have seen some cloud providers support that long, e.g. IBM and EKS.

The challenging part is really the patch versions. Currently we generally maintain only one patch version for each minor version; that is, the old patch version is removed as soon as a new patch version is available. For example, if the current minor version is v1.16.8 and v1.16.9 has just been released, we simply hide v1.16.8 in favour of v1.16.9. This is actually OK: we can still say we support each minor version for at least 6 months, which is easy to achieve. However, the tricky question is whether we should say a cluster still on v1.16.8 is out of support if it's not upgraded within 30 days.

openstacker

comment created time in 2 months

push event openstacker/catalystcloud-docs

Feilong Wang

commit sha 2a938f4e7b99d3b0c6858c23b91ef63f94baf20e

Add k8s versions

view details

push time in 2 months

PR opened catalyst-cloud/catalystcloud-docs

Add k8s versions

Hi Bruno,

This document about versions is quite important for us, so I really need your comments here. Feel free to give me a call if there is anything unclear.

+1 -0

0 comment

1 changed file

pr created time in 2 months

create branch openstacker/catalystcloud-docs

branch : add-versions

created branch time in 2 months

issue comment coreos/zincati

agent: delay reboot if ongoing interactive sessions

@lucab We (OpenStack Magnum) are now using Fedora CoreOS for the nodes of our k8s clusters. But because of the auto update, we have had to disable it entirely to avoid interrupting the k8s bootstrapping. I would like to see a delayed or manual reboot option that leaves the choice to the owner of the server. Looking forward to seeing your thoughts. Cheers.
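For reference, the way such auto updates can be disabled today is a Zincati configuration drop-in along these lines (the file name is just a convention; the `updates.enabled` knob is per the Fedora CoreOS Zincati docs):

```toml
# /etc/zincati/config.d/90-disable-auto-updates.toml
[updates]
enabled = false
```

The feature request above is essentially for a middle ground between this all-or-nothing switch and fully automatic reboots.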

lucab

comment created time in 2 months

PR opened catalyst-cloud/catalystcloud-docs

Remove the known issue of "15-25mins to create cluster"

Now that we're migrating to the Fedora CoreOS driver with Podman and hyperkube, on the POR and HLZ regions a cluster creation takes 6-10 mins.

+0 -8

0 comment

1 changed file

pr created time in 2 months

create branch openstacker/catalystcloud-docs

branch : remove-known-issues

created branch time in 2 months

started jen20/lambda-cert

started time in 2 months

started camelaissani/loca

started time in 2 months

started Comcast/kuberhealthy

started time in 2 months

push event openstacker/cloud-provider-openstack

Feilong Wang

commit sha 692148a419d71bfd2219390ac7844ad06107aa5e

[magnum-auto-healer] Detaching volumes before power-off nodes When there are volumes attached to nodes, the auto healing action will fail because the node has been powered off. This patch fixes that by detaching volumes before powering off the node.

view details

push time in 2 months

push event openstacker/cloud-provider-openstack

Feilong Wang

commit sha 9a2769c2ba2fdfd18258f8442a69e55faa95d504

[magnum-auto-healer] Support updating cluster health status (#985) Magnum supports private clusters, which means the Magnum control plane cannot reach the k8s cluster to get the cluster health status. As a result, Magnum cannot update the cluster health status. As a controller running inside the cluster, magnum-auto-healer can get good insight into the cluster health status. This PR calls the Magnum cluster update API to set the latest health status.

view details

Feilong Wang

commit sha 7e2866912b677ed4251e962f433377bc4d43f3a6

[magnum-auto-healer] Detaching volumes before power-off nodes When there are volumes attached to nodes, the auto healing action will fail because the node has been powered off. This patch fixes that by detaching volumes before powering off the node.

view details

push time in 2 months

push event catalyst-cloud/magnum

bismog

commit sha f553558e53827bf0c2859e04d31267dffd339a25

Add oslo_log command options to magnum-db-manage As we all know, 'command magnum-api' and 'magnum-conductor' have many options such as 'log-file' and 'use-syslog'. But as for magnum-db-magnum, these are not available. Add oslo_log command options will help. Change-Id: If61efbde56e1d7dd0ed88d76fa42dd00501cc938

view details

wanghui

commit sha e94f1a22e6eca054e7592135a859f1dad8946789

Move openstackdocstheme to extensions in api-ref Move openstackdocstheme to extensions. According to the guide below: https://docs.openstack.org/openstackdocstheme/latest/ Change-Id: I58de11278e0cd203312c910057285800f82eb5d7

view details

Jangwon Lee

commit sha bc36ef8fb6604f0285cc922d20430ed13c7bc332

Add prometheus-monitoring namespace When using 'prometheus_monitoring=true' in the label option, 'kube-enable-monitoring.service' in the master node has stuck in 'Wait for Grafana pod and then inject data source'. It caused the 'prometheus-monitoring' namespace doesn't exist, so scripts don't create pods about Prometheus and Grafana. To fix the error, I added codes in 'magnum/drivers/common/templates/ kubernetes/fragments/enable-prometheus-monitoring.sh' to make 'prometheus-monitoring' namespace. We could put codes in a new file like 'magnum/magnum/drivers/ k8s_coreos_v1/templates/fragments/create-kube-namespace.yaml', but I think it's ok. Change-Id: I23395b41919c6f39cfcc2b4480bcd4b040cae031 Task: 26347 Story: 2003697

view details

Erik Olof Gunnar Andersson

commit sha f2fd732ce22b001e7aa11d2a7ff6dda95a0a86a5

Trivial code cleanups Cleaning up comments and logging to make sure they properly adhere to Openstack standards. * Consistently use """ instead of ''' for comments. * Always lazy-load logging parameters. * Fixed bad log line in cert_manager. Change-Id: I547f5dfa61609a899aef9b1470be8d8a6d8e4b81

view details

chestack

commit sha 05f0cddc46beb55e3e05938ad18d5aa596ee5713

Make master node schedulable with taints --register-with-taints take no effect when --register-schedulable=false configured. It's better to drop --register-schedulable and leave --register-with-taints to make master schedulable add --pod-infra-container-image=CONTAINER_INFRA_PREFIX for kubelet on master nodes. Change-Id: Ia2ce59841d823ba02a65224088e5af1a8c9610b1

view details

Erik Olof Gunnar Andersson

commit sha 423d1863128efaa788bf2de355597b0796531d9c

Fixing gate failing due to bad AMQP virtual_host We are currently hitting this error with the gate. > NOT_ALLOWED - access to vhost 'None' refused for user 'stackrabbit' This patch fixes this by using the inbuilt devstack construct to create an appropriate transport_url. Change-Id: I9aae96094b7bd8bc148ae3e42c118ba160eff8ae

view details

Feilong Wang

commit sha 48e2e77406b16019d5e2273c0f563e37f204c482

Update heat-container-agent version tag Update heat-container-agent version tag to include the multi region fix. Task: 27051 Story: 2003992 Change-Id: Ided337dafa52cce771126e96ef41a62a3358fda1

view details

Tobias Urdin

commit sha 095b49e6f532f961854d8e0363e0f4aae01d189f

[swarm-mode] Remove --live-restore from Docker daemon options Ensure the --live-restore is not in the Docker daemon OPTIONS. Some images has this option by default which will cause the node not being able to perform it swarm init process. Change-Id: I287a5274143903fad5d4476e9d1640b26bdb46d4 Story: 2004095 Task: 27497

view details

Zuul

commit sha 63fffda0263bf22d4b971fe87f72314bac182728

Merge "[swarm-mode] Remove --live-restore from Docker daemon options"

view details

Julia Kreger

commit sha 547f9309a1e5b92dbbc44165d3c8a7ff6256adc6

Minor fixes to re-align with Ironic Ironic has evolved and a few items were no longer correct in the contributed scripts for use with ironic. Additionally a database workaround was fixed, and as such commented out. Change-Id: I105791985973e8348d43d41982ac7ba3e0cf970c

view details

Zuul

commit sha 1abf21da0575bb9783c45cddb723b25b2a093915

Merge "Make master node schedulable with taints"

view details

Zuul

commit sha 62a143c1c18be3d86d4b70275780f3a8c0971841

Merge "Add prometheus-monitoring namespace"

view details

Zuul

commit sha ff0dd4aeed651e6db1f38cd0453d1c1656f4fbea

Merge "Minor fixes to re-align with Ironic"

view details

Spyros Trigazis

commit sha c98e9525c7db34734afb29d1b9fb409a08d16ef7

Add heat_container_agent_tag label Add heat_container_agent_tag label to allow users select the heat-agent tag. Stein default: stein-dev story: 2003992 task: 26936 Change-Id: I6a8d8dbb2ec7bd4b7d01fa7cd790a8966ea88f73 Signed-off-by: Spyros Trigazis <spyridon.trigazis@cern.ch>

view details

Zuul

commit sha c8019ea77f33609452dd1a973e0f421b118c2079

Merge "Add heat_container_agent_tag label"

view details

Zuul

commit sha 83fa9d1b4e4ec764896e13d95587dbe3094e92ab

Merge "Trivial code cleanups"

view details

Lingxian Kong

commit sha 5d1eab9d9f896f6adf5a31a17c43995377a93f78

[K8S] Pass cluster name to controller-manager The cluster name is useful to identify resources created in different k8s clusters, especially in the cloud environment, the cluster name is always injected into the name of the cloud resources(e.g. the load balancer, volume, etc.), which is helpful for the cluster resource clean up. The magnum cluster UUID is used as the value of '--cluster-name' option. Story: 2004242 Task: 27766 Change-Id: I245a8869948a0b8bfa8d5cc32e7fb9277477026a

view details

Jim Bach

commit sha 9a6698fb4535e408b6c4a522088197af0ab4aa4d

Add Octavia python client for Magnum Adding the client enables the manipulation of Octavia resources with Magnum such as during cluster deletion, being able to clean up non-heat created resouces. Change-Id: I976ab136e24b98d447d61028ce07d0f5dd9d255a story: 2004259 task: 27795

view details

Erik Olof Gunnar Andersson

commit sha 718cb9c9b475a705783c0cd07a0c02b9be33f0c6

Add support for www_authentication_uri We do currently not support www_authentication_uri at all, which is the new standard, as auth_uri has long been deprecated. * Make sure we support both auth_uri and www_authenticate_uri. * Switched to www_authenticate_uri for devstack. * Fixed a bug where a bad exception would be thrown if auth_uri was not set. Story: 2004271 Task: 27819 Change-Id: Ibc932d35f3d6ba2ac7ffb6193aa37bd4a3d4422e

view details

Erik Olof Gunnar Andersson

commit sha daa7d0495119f02abfe53142ca237a4084db5297

Cleaned up devstack logging Switch to systemd logging to take advantage of some of the newer logging features. Story: 2004272 Task: 27820 Change-Id: I475bf26e24b3a725f68c7da355807374bf1e189b

view details

push time in 2 months

push event catalyst-cloud/magnum

push time in 2 months

delete branch catalyst-cloud/magnum

delete branch : github/master

delete time in 2 months

create branch catalyst-cloud/magnum

branch : github/master

created branch time in 2 months

create branch catalyst-cloud/magnum

branch : stable/ussuri

created branch time in 2 months

create branch catalyst-cloud/magnum

branch : stable/train

created branch time in 2 months

push event openstacker/cloud-provider-openstack

Feilong Wang

commit sha 121452d83b62c582597e97888df5b9d938e35350

[magnum-auto-healer] Detaching volumes before power-off nodes When there are volumes attached to nodes, the auto healing action will fail because the node has been powered off. This patch fixes that by detaching volumes before powering off the node.

view details

push time in 3 months

PR opened kubernetes/cloud-provider-openstack

[magnum-auto-healer] Detaching volumes before power-off nodes

When there are volumes attached to nodes, the auto healing action will fail because the node has been powered off. This patch fixes that by detaching volumes before powering off the node.

What this PR does / why we need it:

Which issue this PR fixes(if applicable): fixes #1070

Special notes for reviewers: <!-- e.g. How to test this PR -->

Release note:

[magnum-auto-healer] Detach volumes before powering off nodes when doing auto-healing.
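The ordering fix can be sketched like this (Python with a fake client to record the call order; the real controller is Go and talks to the OpenStack APIs, so all names here are illustrative):

```python
def repair_node(cloud, node):
    """Detach attached volumes first, then power the node off and rebuild.
    Powering off first would strand the attachments, which is the bug
    this change fixes."""
    for volume_id in cloud.list_attached_volumes(node):
        cloud.detach_volume(node, volume_id)
        cloud.wait_for_detach(volume_id)
    cloud.power_off(node)
    cloud.rebuild(node)

class FakeCloud:
    """Records the call order so the sequencing can be checked."""
    def __init__(self):
        self.calls = []
    def list_attached_volumes(self, node):
        return ["vol-1"]
    def detach_volume(self, node, vol):
        self.calls.append(("detach", vol))
    def wait_for_detach(self, vol):
        self.calls.append(("wait", vol))
    def power_off(self, node):
        self.calls.append(("power_off", node))
    def rebuild(self, node):
        self.calls.append(("rebuild", node))

cloud = FakeCloud()
repair_node(cloud, "worker-0")
print(cloud.calls)
```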
+41 -2

0 comment

1 changed file

pr created time in 3 months

create branch openstacker/cloud-provider-openstack

branch : Issue-1070

created branch time in 3 months

issue opened kubernetes/cloud-provider-openstack

[magnum-auto-healer] Detach volumes before power off node (instance)

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

Auto healing failed when there are volumes attached to the target node, because the instance has been powered off.

What you expected to happen:

Auto healing can be run successfully.

How to reproduce it:

Create a cluster with docker volume attached to worker nodes.

Anything else we need to know?:

Environment:

  • openstack-cloud-controller-manager(or other related binary) version: v0.18.0
  • OpenStack version: master
  • Others:

created time in 3 months

push event openstacker/cloud-provider-openstack

ji chen

commit sha 2493d936afe901a63066e4506dcfa716f1d96dc9

[All] Fix klog/v2 not found issue (#1028)

view details

Feilong Wang

commit sha c7a42ebe21aaee327bd2e6105a3a7d613b619306

[magnum-auto-healer] Support node group (#1032) Magnum supports node group for its k8s cluster, and its resize API has been updated accordingly. Therefore, magnum-auto-healer needs to pass in the correct node group name and the node count.

view details

Anusha Ramineni

commit sha 55b0f12b63d66a9155246b52d67e14e78b7232e8

Reorganize all tests to tests/ dir (#1058)

view details

Feilong Wang

commit sha ff502ade542ba8a2cd77d946b2e778291918b0d8

[magnum-auto-healer] Support updating cluster health status Magnum supports private clusters, which means the Magnum control plane cannot reach the k8s cluster to get the cluster health status. As a result, Magnum cannot update the cluster health status. As a controller running inside the cluster, magnum-auto-healer can get good insight into the cluster health status. This PR calls the Magnum cluster update API to set the latest health status.

view details

push time in 3 months

push event openstacker/cloud-provider-openstack

Lingxian Kong

commit sha 59832421c5d1fdc17e31364940df87056e777420

[occm] Remove volume related code from openstack-cloud-controller-manager (#1036)

view details

Anusha Ramineni

commit sha 2b7d64c2843109150c862f7f03b4c05b4c7e306c

Update reviewers list (#1042)

view details

kayrus

commit sha b8b45549c2602c540cf7f6919de68ec8a4b3533c

Core: print error messages, when openstack cleint returns error (#1038)

view details

Anusha Ramineni

commit sha 45620373c17643a1ee2e821cec2c72b2b9776d9f

Add pagination support (#969)

view details

kayrus

commit sha 776e19bf24e070002d2c90eccce3f00a32cedfda

Bump gophercloud dependencies (#1043) * Bump gophercloud dependencies * Bump for a path typo fix

view details

Lingxian Kong

commit sha be101073fb4e1fc5655a642ad8ad13d11f2d87bd

[cinder-csi-plugin] Support image-csi-plugin target in Makefile (#1045)

view details

ji chen

commit sha d8765551086a5087e5da8deec2d2480619cff10f

[cinder-csi-plugin] Add version into cinder CSI driver info (#1035)

view details

Anusha Ramineni

commit sha 6a18d8b9ab024cef6d4f1a5582ad77a85b8220fd

Fix sanity test failures (#946)

view details

ji chen

commit sha 001042a731838a32fba4a1d12a77af2b3f37f88e

Allow create snapshot when the volume is in in-use state (#1034)

view details

Takeaki Matsumoto

commit sha a687cccc6361c4d2d78aab8f2f6aa3647a326aec

[cinder-csi-pugin] Support reconcile volume attachements (#976) * Support reconcile volume attachements * Fix tests for new ListVolumes()

view details

Hamza Zafar

commit sha 2aa0ba82411873ac164a87adaf5b2f2d90504e43

make cascade deletion configurable (#1040)

view details

kayrus

commit sha c544421eaeaf8124786f2ef54cfaa6f794b1fe87

[occm] consider private network names when node contains IPs from other private networks (#1041) * OCCM: consider network names, when specified * Move v1 helper functions into openstack_instances.go * [occm] Address detection logic fixes Signed-off-by: Andrey Klimentyev <andrey.klimentyev@flant.com> * Typo fix * Move LB related helpers into occm package * Added comment Co-authored-by: Andrey Klimentyev <andrey.klimentyev@flant.com>

view details

Anusha Ramineni

commit sha 7c66efde556959e4babaeaa3afcd5d349e9b09b0

[cinder-csi-plugin] Fix sanity test failures 2/3 (#1051)

view details

Hamza Zafar

commit sha bff65f4f79a4df5580903d1df90531c64866dc41

fix nil pointer dereference in create loadbalancer func (#1055)

view details

Anusha Ramineni

commit sha 31ab8967703a86546e053e84c76090cb9b08f2b5

[cinder-csi-plugin] Fix sanity failures 3/3 (#1057)

view details

Feilong Wang

commit sha 66ba6bdcbf53626c7b44198ebbcf285ea795072b

[magnum-auto-healer] Support updating cluster health status Magnum supports private clusters, which means the Magnum control plane cannot reach the k8s cluster to get the cluster health status. As a result, Magnum cannot update the cluster health status. As a controller running inside the cluster, magnum-auto-healer can get good insight into the cluster health status. This PR calls the Magnum cluster update API to set the latest health status.

view details

push time in 3 months

pull request comment gophercloud/gophercloud

ContainerInfra v1: Fix cluster Get

@openstacker We've had a few other small bug fixes come in. If you can confirm this works, I can release version 0.11.1 for you - if that would help?

Thank you very much. I think I can just use the latest commit in CPO. That's OK. Thanks again.

openstacker

comment created time in 3 months

pull request comment gophercloud/gophercloud

ContainerInfra v1: Fix cluster Get

@jtopjian I'm sorry, this is a regression issue. Please review it, as it has blocked the very basic cluster Get function. BTW, when will version 0.12 be released? Thanks.

openstacker

comment created time in 3 months

push event openstacker/gophercloud

Feilong Wang

commit sha aea91bba7d8db012f033cd1dd73f2740f33e397d

ContainerInfra v1: Fix cluster Get This is a regression issue introduced by [1], which may impact cluster Get. The health_status_reason attribute should be a dict instead of a string. [1] https://github.com/gophercloud/gophercloud/pull/1910

view details

push time in 3 months

PR opened gophercloud/gophercloud

ContainerInfra v1: Fix cluster Get

This is a regression issue introduced by [1], which may impact cluster Get. The health_status_reason attribute should be a dict instead of a string.

[1] https://github.com/gophercloud/gophercloud/pull/1910

For #1909
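The shape mismatch is easy to see with a toy payload (Python sketch; the actual fix adjusts gophercloud's Go result structs, and the field names follow the Magnum API response):

```python
import json

# A trimmed cluster response: health_status_reason is a JSON object,
# not a plain string, so decoding it into a string type fails.
payload = """{
  "health_status": "HEALTHY",
  "health_status_reason": {"api": "ok", "nodes": "ok"}
}"""

cluster = json.loads(payload)
reason = cluster["health_status_reason"]
assert isinstance(reason, dict)  # a map of check-name -> result, not a string
print(reason["api"])
```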

+40 -36

0 comment

3 changed files

pr created time in 3 months

create branch openstacker/gophercloud

branch : fix-magnum-health-status

created branch time in 3 months

PR opened cncf/k8s-conformance

Conformance results for v1.16/catalyst-cloud

Pre-submission checklist:

Please check each of these after submitting your pull request:

  • [x] Did you include the product/project logo in SVG, EPS or AI format?
  • [x] Does your logo clearly state the name of the product/project and follow the other logo guidelines?
  • [x] Did you copy and paste the installation and configuration instructions into the README.md file in addition to linking to them?
+27804 -0

0 comment

4 changed files

pr created time in 3 months

create branch openstacker/k8s-conformance

branch : catalyst-cloud-v1.16

created branch time in 3 months

push event openstacker/cloud-provider-openstack

Lingxian Kong

commit sha 59832421c5d1fdc17e31364940df87056e777420

[occm] Remove volume related code from openstack-cloud-controller-manager (#1036)

view details

Anusha Ramineni

commit sha 2b7d64c2843109150c862f7f03b4c05b4c7e306c

Update reviewers list (#1042)

view details

kayrus

commit sha b8b45549c2602c540cf7f6919de68ec8a4b3533c

Core: print error messages, when openstack cleint returns error (#1038)

view details

Anusha Ramineni

commit sha 45620373c17643a1ee2e821cec2c72b2b9776d9f

Add pagination support (#969)

view details

kayrus

commit sha 776e19bf24e070002d2c90eccce3f00a32cedfda

Bump gophercloud dependencies (#1043) * Bump gophercloud dependencies * Bump for a path typo fix

view details

Lingxian Kong

commit sha be101073fb4e1fc5655a642ad8ad13d11f2d87bd

[cinder-csi-plugin] Support image-csi-plugin target in Makefile (#1045)

view details

ji chen

commit sha d8765551086a5087e5da8deec2d2480619cff10f

[cinder-csi-plugin] Add version into cinder CSI driver info (#1035)

view details

Anusha Ramineni

commit sha 6a18d8b9ab024cef6d4f1a5582ad77a85b8220fd

Fix sanity test failures (#946)

view details

ji chen

commit sha 001042a731838a32fba4a1d12a77af2b3f37f88e

Allow create snapshot when the volume is in in-use state (#1034)

view details

Takeaki Matsumoto

commit sha a687cccc6361c4d2d78aab8f2f6aa3647a326aec

[cinder-csi-pugin] Support reconcile volume attachements (#976) * Support reconcile volume attachements * Fix tests for new ListVolumes()

view details

Hamza Zafar

commit sha 2aa0ba82411873ac164a87adaf5b2f2d90504e43

make cascade deletion configurable (#1040)

view details

kayrus

commit sha c544421eaeaf8124786f2ef54cfaa6f794b1fe87

[occm] consider private network names when node contains IPs from other private networks (#1041) * OCCM: consider network names, when specified * Move v1 helper functions into openstack_instances.go * [occm] Address detection logic fixes Signed-off-by: Andrey Klimentyev <andrey.klimentyev@flant.com> * Typo fix * Move LB related helpers into occm package * Added comment Co-authored-by: Andrey Klimentyev <andrey.klimentyev@flant.com>

view details

Anusha Ramineni

commit sha 7c66efde556959e4babaeaa3afcd5d349e9b09b0

[cinder-csi-plugin] Fix sanity test failures 2/3 (#1051)

view details

Hamza Zafar

commit sha bff65f4f79a4df5580903d1df90531c64866dc41

fix nil pointer dereference in create loadbalancer func (#1055)

view details

Anusha Ramineni

commit sha 31ab8967703a86546e053e84c76090cb9b08f2b5

[cinder-csi-plugin] Fix sanity failures 3/3 (#1057)

view details

Feilong Wang

commit sha 4061fba268e6e7160e84c1d304ee68b9b3d30037

[magnum-auto-healer] Support node group Magnum supports node group for its k8s cluster, and its resize API has been updated accordingly. Therefore, magnum-auto-healer needs to pass in the correct node group name and the node count.

view details

push time in 3 months

Pull request review comment kubernetes/cloud-provider-openstack

[magnum-auto-healer] Support updating cluster health status

 func (c *Controller) Start(ctx context.Context) { 			}  			wg.Wait()+			c.provider.UpdateHealthStatus(masterUnhealthyNodes, workerUnhealthyNodes)

I can see your point. My understanding is that, since repairing is a time-consuming operation, it should be acceptable to do the status update here. I will do more testing on this. Thanks.

openstacker

comment created time in 3 months

Pull request review comment kubernetes/cloud-provider-openstack

[magnum-auto-healer] Support updating cluster health status

 const ( 	LabelNodeRoleMaster = "node-role.kubernetes.io/master" ) +var (+	masterUnhealthyNodes []healthcheck.NodeInfo+	workerUnhealthyNodes []healthcheck.NodeInfo+)+

They're global variables, used at lines 348 and 372.

openstacker

comment created time in 3 months

Pull request review comment kubernetes/cloud-provider-openstack

[magnum-auto-healer] Support updating cluster health status

 func CheckNodes(checkers []HealthCheck, nodes []NodeInfo, controller NodeControl 	for _, node := range nodes { 		for _, checker := range checkers { 			if !checker.Check(node, controller) {+				node.FailedCheck = reflect.TypeOf(checker).String()

OK, I will do that in next commit.

openstacker

comment created time in 3 months

Pull request review comment kubernetes/cloud-provider-openstack

[magnum-auto-healer] Support updating cluster health status

 func (provider OpenStackCloudProvider) Repair(nodes []healthcheck.NodeInfo) erro 	return nil } +// UpdateHealthStatus can update the cluster health status by

Will fix it in the next commit.

openstacker

comment created time in 3 months

Pull request review comment kubernetes/cloud-provider-openstack

[magnum-auto-healer] Support updating cluster health status

 func (provider OpenStackCloudProvider) Repair(nodes []healthcheck.NodeInfo) erro 	return nil } +// UpdateHealthStatus can update the cluster health status by+func (provider OpenStackCloudProvider) UpdateHealthStatus(masters []healthcheck.NodeInfo, workers []healthcheck.NodeInfo) error {+	log.Infof("Start to update cluster health status.")+	clusterName := provider.Config.ClusterName++	healthStatus := "UNHEALTHY"+	healthStatusReasonMap := make(map[string]string)+	healthStatusReasonMap["updated_at"] = time.Now().String()++	if len(masters) == 0 && len(workers) == 0 {+		// No unhealthy node passed in means the cluster is healthy+		healthStatus = "HEALTHY"+		healthStatusReasonMap["api"] = "ok"+		healthStatusReasonMap["nodes"] = "ok"+	} else {+		healthStatus = "UNHEALTHY"

Will remove it in next commit.

openstacker

comment created time in 3 months

Pull request review comment kubernetes/cloud-provider-openstack

[magnum-auto-healer] Support updating cluster health status

 func (provider OpenStackCloudProvider) Repair(nodes []healthcheck.NodeInfo) erro 	return nil } +// UpdateHealthStatus can update the cluster health status by+func (provider OpenStackCloudProvider) UpdateHealthStatus(masters []healthcheck.NodeInfo, workers []healthcheck.NodeInfo) error {+	log.Infof("Start to update cluster health status.")+	clusterName := provider.Config.ClusterName++	healthStatus := "UNHEALTHY"+	healthStatusReasonMap := make(map[string]string)+	healthStatusReasonMap["updated_at"] = time.Now().String()++	if len(masters) == 0 && len(workers) == 0 {+		// No unhealthy node passed in means the cluster is healthy+		healthStatus = "HEALTHY"+		healthStatusReasonMap["api"] = "ok"+		healthStatusReasonMap["nodes"] = "ok"+	} else {+		healthStatus = "UNHEALTHY"+		if len(workers) > 0 {+			for _, n := range workers {+				// TODO: Need to figure out a way to reflect the detailed error information+				healthStatusReasonMap[n.KubeNode.Name+"."+n.FailedCheck] = "error"+			}+		} else {+			// TODO: Need to figure out a way to reflect detailed error information+			healthStatusReasonMap["api"] = "error"+		}+	}++	jsonDumps, err := json.Marshal(healthStatusReasonMap)+	if err != nil {+		return fmt.Errorf("Failed to build health status reason for cluster %s, error: %v", clusterName, err)

No problem. Will do.

openstacker

comment created time in 3 months

Pull request review comment kubernetes/cloud-provider-openstack

[magnum-auto-healer] Support updating cluster health status

 func (provider OpenStackCloudProvider) Repair(nodes []healthcheck.NodeInfo) erro 	}  	clusterName := provider.Config.ClusterName+	masters := nodes+	workers := nodes+	isWorkerNode := nodes[0].IsWorker+	if isWorkerNode {+		masters = []healthcheck.NodeInfo{}+	} else {+		workers = []healthcheck.NodeInfo{}

Right, it's a bit confusing. My original intent was to initialize them at the same time and then, at lines 200 and 202, set either masters or workers to empty. I will see if there is a better way to express this.
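One clearer way to express that intent, sketched in Python (the controller itself is Go; names here are illustrative):

```python
def split_by_role(nodes):
    """Return (masters, workers). One of the two is always empty,
    because a single repair batch only ever contains one role."""
    if nodes and nodes[0]["is_worker"]:
        return [], list(nodes)
    return list(nodes), []

masters, workers = split_by_role([{"name": "worker-1", "is_worker": True}])
print(masters, workers)  # [] [{'name': 'worker-1', 'is_worker': True}]
```

Deriving both lists in one place makes the "either/or" invariant explicit, instead of initializing both to the full list and emptying one later.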

openstacker

comment created time in 3 months

Pull request review comment kubernetes/cloud-provider-openstack

[magnum-auto-healer] Support updating cluster health status

 import ( 	"k8s.io/apimachinery/pkg/util/sets" 	"k8s.io/apimachinery/pkg/util/wait" 	"k8s.io/client-go/kubernetes"+	"k8s.io/klog"

Will do.

openstacker

comment created time in 3 months

Pull request review comment kubernetes/cloud-provider-openstack

[magnum-auto-healer] Support updating cluster health status

 import ( 	"k8s.io/apimachinery/pkg/util/sets" 	"k8s.io/apimachinery/pkg/util/wait" 	"k8s.io/client-go/kubernetes"+	"k8s.io/klog"

Will do.

openstacker

comment created time in 3 months

Pull request review comment kubernetes/cloud-provider-openstack

[magnum-auto-healer] Support updating cluster health status

 func (provider OpenStackCloudProvider) Repair(nodes []healthcheck.NodeInfo) erro 	return nil } +// UpdateHealthStatus can update the cluster health status by

Will fix in next commit.

openstacker

comment created time in 3 months

Pull request review comment kubernetes/cloud-provider-openstack

[magnum-auto-healer] Support updating cluster health status

 func (provider OpenStackCloudProvider) Repair(nodes []healthcheck.NodeInfo) erro 	}  	clusterName := provider.Config.ClusterName+	masters := nodes+	workers := nodes+	isWorkerNode := nodes[0].IsWorker+	if isWorkerNode {+		masters = []healthcheck.NodeInfo{}+	} else {+		workers = []healthcheck.NodeInfo{}

I'm using two variables to avoid the thread-safety issue of appending to a shared list.
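One lock-free pattern for that, sketched in Python (the controller is Go, where each goroutine writing to its own pre-allocated slice index achieves the same thing as the per-index writes below):

```python
import threading

def check_nodes(nodes, is_unhealthy):
    """Each worker writes to its own slot in a pre-sized list, so no two
    threads ever mutate the same shared list concurrently."""
    results = [None] * len(nodes)

    def check(i, node):
        results[i] = node if is_unhealthy(node) else None

    threads = [threading.Thread(target=check, args=(i, n))
               for i, n in enumerate(nodes)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Collapse the fixed-size results into the unhealthy list, in order.
    return [n for n in results if n is not None]

print(check_nodes(["ok-node", "bad-node"], lambda n: n.startswith("bad")))
```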

openstacker

comment created time in 3 months

more