
ubuntu/gnome-shell-communitheme 153

GNOME Shell Ubuntu community theme "communitheme"

ubuntu/adsys 45

Active Directory bridging tool suite

ubuntu/face-detection-demo 33

Code for face detection demo snap

ubuntu/codelabs 22

Ubuntu codelabs offline website

ubuntu/font-ubuntu 15

Polymer imports for Ubuntu fonts.

ubuntu/docker-snapcraft 12

Docker image autobuild for the latest snapcraft on the latest Ubuntu LTS version

ubuntu/communitheme-snap-helpers 7

Various build and run helpers for the communitheme snap

ubuntu/communitheme-sounds 7

The Ubuntu community sound theme "Communitheme"

ubuntu/cursor-communitheme 6

Cursor Theme For The Ubuntu Community Theme

pull request comment ubuntu/adsys

Refactor policies

Codecov Report

Merging #257 (d78036a) into main (670d231) will increase coverage by 0.00%. The diff coverage is 88.88%.

:exclamation: Current head d78036a differs from pull request most recent head 43b29a1. Consider uploading reports for the commit 43b29a1 to get more accurate results.

@@           Coverage Diff           @@
##             main     #257   +/-   ##
=======================================
  Coverage   85.70%   85.70%           
=======================================
  Files          53       54    +1     
  Lines        3497     3498    +1     
=======================================
+ Hits         2997     2998    +1     
  Misses        325      325           
  Partials      175      175           
Impacted Files Coverage Δ
internal/ad/admxgen/admxgen.go 94.14% <ø> (ø)
internal/ad/admxgen/common/common.go 100.00% <ø> (ø)
internal/ad/admxgen/dconf/dconf.go 93.05% <ø> (ø)
internal/ad/admxgen/main.go 50.00% <ø> (ø)
internal/ad/ad.go 90.90% <88.88%> (ø)
internal/policies/ad/registry/registry.go
internal/policies/entry/entry.go
internal/policies/ad/download.go
internal/policies/ad/definitions.go
internal/policies/ad/adsys-gpolist
... and 9 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Powered by Codecov. Last update 670d231...43b29a1.

didrocks

comment created time in an hour

push event ubuntu/adsys

Didier Roche

commit sha 43b29a1be10c409af67eb6ab879631d960f09250

Fix Github CI with new admxgen path

view details

push time in an hour

PR opened ubuntu/adsys

Refactor policies
+3104 -1519

0 comments

638 changed files

pr created time in an hour

create branch ubuntu/adsys

branch: refactor_policies

created branch time in an hour

started ubuntu/microk8s

started time in an hour

push event ubuntu/thunderbird

Sebastien Bacher

commit sha f0f7370f351e24519e49215e161c7ae7aa5a527c

Update to 97.0b2

view details

push time in an hour

pull request comment ubuntu/microk8s

Deprecate storage addon

Should we consider this issue? I.e. RemoveSelfLink=false being deprecated.

Yes, this will be handled by a separate PR.

joedborg

comment created time in 2 hours

issue comment ubuntu/microk8s

Warning logs after updating to 1.23: etcd-client "retrying of unary invoker failed"

MicroK8s uses kine to bridge the Kubernetes etcd API to dqlite. Not all etcd endpoints are implemented in kine.

Thanks for your quick answer and clarification!

Quick question, is the cluster operational?

It's a staging cluster so I don't have any real workloads on it, but I do have some services that seem to be working (e.g. OpenEBS, calico, dns, nvidia's gpu-operator). I tried deploying a sample app right now and a sample daemonset and everything seems to be fine.

I'm pasting some more logs here, but apart from some "Trace" entries there's not much to see, I guess.

Jan 18 07:28:52 <hostname> microk8s.daemon-kubelite[94967]: {"level":"warn","ts":"2022-01-18T07:28:52.349Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e9e1c0/#initially=[unix:///var/snap/microk8s/x3/var/kubernetes/backend/kine.sock:12379]","attempt":0,"error":"rpc error: code = Unimplemented desc = unknown service etcdserverpb.Maintenance"}
Jan 18 07:29:12 <hostname> microk8s.daemon-kubelite[94967]: I0118 07:29:12.360053   94967 controller.go:611] quota admission added evaluator for: zfsvolumes.zfs.openebs.io
Jan 18 07:29:31 <hostname> microk8s.daemon-kubelite[94967]: {"level":"warn","ts":"2022-01-18T07:29:31.387Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e9e1c0/#initially=[unix:///var/snap/microk8s/x3/var/kubernetes/backend/kine.sock:12379]","attempt":0,"error":"rpc error: code = Unimplemented desc = unknown service etcdserverpb.Maintenance"}
Jan 18 07:30:14 <hostname> microk8s.daemon-kubelite[94967]: {"level":"warn","ts":"2022-01-18T07:30:14.696Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e9e1c0/#initially=[unix:///var/snap/microk8s/x3/var/kubernetes/backend/kine.sock:12379]","attempt":0,"error":"rpc error: code = Unimplemented desc = unknown service etcdserverpb.Maintenance"}
Jan 18 07:30:54 <hostname> microk8s.daemon-kubelite[94967]: {"level":"warn","ts":"2022-01-18T07:30:54.532Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e9e1c0/#initially=[unix:///var/snap/microk8s/x3/var/kubernetes/backend/kine.sock:12379]","attempt":0,"error":"rpc error: code = Unimplemented desc = unknown service etcdserverpb.Maintenance"}
Jan 18 07:31:27 <hostname> microk8s.daemon-kubelite[94967]: {"level":"warn","ts":"2022-01-18T07:31:27.496Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e9e1c0/#initially=[unix:///var/snap/microk8s/x3/var/kubernetes/backend/kine.sock:12379]","attempt":0,"error":"rpc error: code = Unimplemented desc = unknown service etcdserverpb.Maintenance"}
Jan 18 07:32:11 <hostname> microk8s.daemon-kubelite[94967]: {"level":"warn","ts":"2022-01-18T07:32:11.276Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e9e1c0/#initially=[unix:///var/snap/microk8s/x3/var/kubernetes/backend/kine.sock:12379]","attempt":0,"error":"rpc error: code = Unimplemented desc = unknown service etcdserverpb.Maintenance"}
Jan 18 07:32:22 <hostname> microk8s.daemon-kubelite[94967]: I0118 07:32:22.633156   94967 trace.go:205] Trace[1848698229]: "GuaranteedUpdate etcd3" type:*core.Endpoints (18-Jan-2022 07:32:22.131) (total time: 501ms):
Jan 18 07:32:22 <hostname> microk8s.daemon-kubelite[94967]: Trace[1848698229]: ---"Transaction committed" 501ms (07:32:22.633)
Jan 18 07:32:22 <hostname> microk8s.daemon-kubelite[94967]: Trace[1848698229]: [501.615564ms] [501.615564ms] END
Jan 18 07:32:22 <hostname> microk8s.daemon-kubelite[94967]: I0118 07:32:22.633837   94967 trace.go:205] Trace[799998063]: "Update" url:/api/v1/namespaces/openebs/endpoints/openebs.io-provisioner-iscsi,user-agent:openebs-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:a6ccdb15-c080-416a-917e-6a05a0e5e088,client:10.1.2.138,accept:application/json, */*,protocol:HTTP/2.0 (18-Jan-2022 07:32:22.131) (total time: 502ms):
Jan 18 07:32:22 <hostname> microk8s.daemon-kubelite[94967]: Trace[799998063]: ---"Object stored in database" 502ms (07:32:22.633)
Jan 18 07:32:22 <hostname> microk8s.daemon-kubelite[94967]: Trace[799998063]: [502.432428ms] [502.432428ms] END
Jan 18 07:32:42 <hostname> microk8s.daemon-kubelite[94967]: {"level":"warn","ts":"2022-01-18T07:32:42.030Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e9e1c0/#initially=[unix:///var/snap/microk8s/x3/var/kubernetes/backend/kine.sock:12379]","attempt":0,"error":"rpc error: code = Unimplemented desc = unknown service etcdserverpb.Maintenance"}
Jan 18 07:33:18 <hostname> microk8s.daemon-kubelite[94967]: {"level":"warn","ts":"2022-01-18T07:33:18.698Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e9e1c0/#initially=[unix:///var/snap/microk8s/x3/var/kubernetes/backend/kine.sock:12379]","attempt":0,"error":"rpc error: code = Unimplemented desc = unknown service etcdserverpb.Maintenance"}
Jan 18 07:33:29 <hostname> microk8s.daemon-kubelite[94967]: I0118 07:33:29.299466   94967 trace.go:205] Trace[1356061602]: "GuaranteedUpdate etcd3" type:*core.Endpoints (18-Jan-2022 07:33:28.716) (total time: 583ms):
Jan 18 07:33:29 <hostname> microk8s.daemon-kubelite[94967]: Trace[1356061602]: ---"Transaction committed" 582ms (07:33:29.299)
Jan 18 07:33:29 <hostname> microk8s.daemon-kubelite[94967]: Trace[1356061602]: [583.115005ms] [583.115005ms] END
Jan 18 07:33:29 <hostname> microk8s.daemon-kubelite[94967]: I0118 07:33:29.300162   94967 trace.go:205] Trace[47377414]: "Update" url:/api/v1/namespaces/openebs/endpoints/openebs.io-local,user-agent:provisioner-localpv/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:1fa886dc-de9e-4f4f-80f6-88a63e3ee73d,client:65.21.239.141,accept:application/json, */*,protocol:HTTP/2.0 (18-Jan-2022 07:33:28.716) (total time: 584ms):
Jan 18 07:33:29 <hostname> microk8s.daemon-kubelite[94967]: Trace[47377414]: ---"Object stored in database" 583ms (07:33:29.300)
Jan 18 07:33:29 <hostname> microk8s.daemon-kubelite[94967]: Trace[47377414]: [584.053752ms] [584.053752ms] END
luca-nardelli

comment created time in 2 hours

issue opened ubuntu/microk8s

1.22 worked, 1.23 caused "connection refused"

I'm aware that installing microk8s inside a CircleCI workflow isn't exactly standard usage, but I had it running reliably. This week, it started failing:

 RAN: /usr/bin/sh -c 'kubectl get namespaces -o json'
 STDOUT: The connection to the server 10.142.2.11:16443 was refused - did you specify the right host or port?

I went through the logs of successful runs and found this:

sudo snap install microk8s --classic
Download snap "microk8s" (2695) from channel "1.22/stable"  

The failing run had this:

sudo snap install microk8s --classic
Download snap "microk8s" (2848) from channel "1.23/stable"   

So I pinned snap to the old channel as below, and the problem went away:

sudo snap install microk8s --classic --channel=1.22/stable

I SSHed into the affected VM; here's an inspection report:

inspection-report-20220118_040235.tar.gz

Here's an edited-down version of the workflow that had the problem. I removed everything unrelated to microk8s (I haven't tested this edited-down version directly, but I hope it tells enough of the story; I'm willing to test further if that would help). A small readiness check I could add is sketched after the config.

version: 2.1
jobs:
  ci:
    machine:
      image: ubuntu-2004:202107-02
    resource_class: large
    steps:
      - run:
          name: snap install
          command: sudo snap install microk8s --classic --channel=1.22/stable
      - run:
          name: "get config"
          command: |
            mkdir -p ~/.kube/
            sudo microk8s config >> ~/.kube/config
      - run:
          name: "enable microk8s features"
          command: sudo microk8s enable dns storage
      - run:
          name: "install kubectl"
          command: |
            cd ~/bin
            curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
            chmod +x kubectl
      - run:
          name: "install helm"
          command: |
            curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
      - run:
          name: "prepare networking"
          command: |
            sudo snap install yq
            KUBE_IP=$(cat ~/.kube/config \
                | yq e '.clusters[0].cluster.server' - \
                | cut -d '/' -f3 \
                | cut -d ':' -f1)
            echo "      [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"${KUBE_IP}:30500\"]
                    endpoint = [\"http://${KUBE_IP}:30500\"]" | sudo tee -a /var/snap/microk8s/current/args/containerd-template.toml
            sudo microk8s stop
            sudo microk8s start

            sudo mkdir -p /etc/docker
            echo '{"insecure-registries":["'$KUBE_IP':30500"]}' | jq . | sudo tee -a /etc/docker/daemon.json
            sudo service docker stop
            sudo service docker start

      - run:
          name: "do stuff"
          command: |
            cd qa-tb11c-dags
            export KUBECTL_COMMAND_HEADERS=false # https://github.com/kubernetes/kubectl/issues/1098
            export KUBECONFIG=/home/circleci/.kube/config
            /usr/bin/sh -c 'kubectl get namespaces -o json'

workflows:
  ci-workflow:
    jobs:
      - ci:
          context:
            - qa
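
For completeness, here's the readiness check mentioned above (a sketch only, not a confirmed fix): block on microk8s reporting ready before the first kubectl call, in case 1.23 simply takes longer to come up on these VMs.

# Hypothetical extra step before "do stuff": wait until microk8s reports
# ready, then confirm the apiserver answers before the real kubectl commands.
sudo microk8s status --wait-ready
kubectl get nodes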

Downgrading fixed it, so this isn't high priority for me, but if you think this is a microk8s bug I'm happy to help gather whatever info would be useful for a fix.

created time in 5 hours

create branch ubuntu/yaru

branch: upstream-adwaita-symbolics-update

created branch time in 8 hours

issue comment ubuntu/microk8s

Warning logs after updating to 1.23: etcd-client "retrying of unary invoker failed"

MicroK8s uses kine to bridge the Kubernetes etcd API to dqlite. Not all etcd endpoints are implemented in kine. This must be something recently added in Kubernetes. Quick question, is the cluster operational?

luca-nardelli

comment created time in 10 hours

started ubuntu/microk8s

started time in 11 hours

issue opened ubuntu/microk8s

Warning logs after updating to 1.23: etcd-client "retrying of unary invoker failed"

microk8s-inspect.tar.gz

Hi,

Today I upgraded a staging 3-node cluster from 1.22 to 1.23. While checking the kubelite logs, I noticed this message repeating:

Jan 17 18:15:20 <hostname> microk8s.daemon-kubelite[94967]: {"level":"warn","ts":"2022-01-17T18:15:20.234Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e9e1c0/#initially=[unix:///var/snap/microk8s/x3/var/kubernetes/backend/kine.sock:12379]","attempt":0,"error":"rpc error: code = Unimplemented desc = unknown service etcdserverpb.Maintenance"}

My cluster is running on dqlite, so I'm not entirely sure why I'm getting etcd-related logs. Is there something I can do to fix this?

created time in 16 hours

issue opened ubuntu/microk8s

1.22.5 breaking changes in ingress-nginx

We use microk8s 1.22 installed from snap. Since Friday 14 January 2022 it has been using v1.22.5 instead of the previous 1.22.4.

1.22.5 installs k8s.gcr.io/ingress-nginx/controller:v1.0.5; 1.22.4 used k8s.gcr.io/ingress-nginx/controller:v1.0.0-alpha.2.

Looking at the ingress-nginx changelog, https://github.com/kubernetes/ingress-nginx/blob/main/Changelog.md, this upgrade of the ingress-nginx controller has brought in a number of breaking changes.

I am concerned that breaking changes such as these are being made to an existing release.

The principal issue of concern is https://github.com/kubernetes/ingress-nginx/issues/7837, which was implemented to address CVE-2021-25742.

The problem is that this was indiscriminately backported to 1.20, 1.21 and 1.22 in November without due regard for these breaking changes - see https://github.com/ubuntu/microk8s/pull/2732

created time in 16 hours

issue opened ubuntu/microk8s

Help with API Call

Looking for some help with my setup. I have an external application that needs to make an API call to my cluster. I'm currently running MicroK8s v1.22.5-3+66632586920c77.

My API server is https://10.10.40.11:16443.

I've created a service account:

  • Name: pan-plugin-user
  • Namespace: kube-system
  • Labels: app=pan-plugin
  • Annotations: <none>
  • Image pull secrets: <none>
  • Mountable secrets: pan-plugin-user-token-ccqrr

I've added the token to the application, and when I try to validate I get: "Failed to get Pods. Max retry exceeded. Error: SSL certificate error".

MicroK8s says that it consolidated a few services into daemon-kubelite:

Used in release 1.21 and later, the kubelite daemon runs the scheduler, controller, proxy, kubelet, and apiserver services as subprocesses. Each of these individual services can be configured using arguments in the matching file under ${SNAP_DATA}/args/ (a small example of editing one of these follows the list):

- scheduler ${SNAP_DATA}/args/kube-scheduler
- controller ${SNAP_DATA}/args/kube-controller-manager
- proxy ${SNAP_DATA}/args/kube-proxy
- kubelet ${SNAP_DATA}/args/kubelet
- apiserver ${SNAP_DATA}/args/kube-apiserver
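
As a quick illustration of the quoted docs (a sketch only; the --v=2 verbosity flag is just a harmless example, not something I'm actually setting), ${SNAP_DATA} resolves to /var/snap/microk8s/current on a default install:

# Example: append an argument to the apiserver's args file, then restart the services.
echo '--v=2' | sudo tee -a /var/snap/microk8s/current/args/kube-apiserver   # example flag only
sudo microk8s stop
sudo microk8s start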

Also, it seems like I'm using one of the approved K8s auth methods: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#authentication-strategies

The app has me generate a Service Account:

$ k describe sa -n kube-system pan-plugin-user

Name:                pan-plugin-user
Namespace:           kube-system
Labels:              app=pan-plugin
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   pan-plugin-user-token-zr7xq
Tokens:              pan-plugin-user-token-zr7xq
Events:              <none>

which has a secret:

$ k describe secret -n kube-system pan-plugin-user-token-zr7xq

Name:         pan-plugin-user-token-zr7xq
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: pan-plugin-user
              kubernetes.io/service-account.uid: removed
Type:         kubernetes.io/service-account-token

Data
====
ca.crt:     1123 bytes
namespace:  11 bytes
token:      Removed

And then it generates a cred.json file:

kubectl -n kube-system get secrets <secrets-from-above-command> -o json >> cred.json

{
  "apiVersion": "v1",
  "data": {
    "ca.crt": "REMOVED",
    "namespace": "a3ViZS1zeXN0ZW0=",
    "token": "REMOVED"
  },
  "kind": "Secret",
  "metadata": {
    "annotations": {
      "kubernetes.io/service-account.name": "pan-plugin-user",
      "kubernetes.io/service-account.uid": "REMOVED"
    },
    "creationTimestamp": "2022-01-17T16:49:33Z",
    "name": "pan-plugin-user-token-zr7xq",
    "namespace": "kube-system",
    "resourceVersion": "1447233",
    "selfLink": "/api/v1/namespaces/kube-system/secrets/pan-plugin-user-token-zr7xq",
    "uid": "REMOVED"
  },
  "type": "kubernetes.io/service-account-token"
}

So the file has the ca.crt in there; I feel like that should work for the auth.

If I look at:

$ microk8s config

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: "THIS CERT"

The certificate-authority-data here matches the one in the JSON file.
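
For reference, this is roughly how I'd expect the token and CA to be usable from outside the cluster (a sketch only; it assumes the service account has RBAC permission to list pods, and uses the secret name and API server address from above):

# Extract the CA and token from the secret, then call the API directly.
kubectl -n kube-system get secret pan-plugin-user-token-zr7xq \
  -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
TOKEN=$(kubectl -n kube-system get secret pan-plugin-user-token-zr7xq \
  -o jsonpath='{.data.token}' | base64 -d)
curl --cacert ca.crt -H "Authorization: Bearer ${TOKEN}" \
  https://10.10.40.11:16443/api/v1/namespaces/kube-system/pods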

inspection-report-20220117_121858.tar.gz

created time in 16 hours

started ubuntu/WSL

started time in 17 hours

started ubuntu/gnome-shell-extension-appindicator

started time in 17 hours

started ubuntu/microk8s

started time in 18 hours

PR merged ubuntu/WSL

Quality checks on CI with LLVM/Clang tools

Hi @didrocks!

Here we go again. As promised in #66, I did my best to capture the balance of the upstream original source files in the form of a .clang-format file. While I benefited from the help of the ClangPowerTools Format Style Detector (I highly recommend it, btw), I had to loop in trial and error for a while, making fine-grained adjustments to the config, and still couldn't reach perfection, as the sources are not formatted consistently.

So I had to establish a couple of criteria:

  • Disregard WslApiLoader.h and WslApiLoader.cpp. Those files have very long lines and diverge far more from the rest of the sources.
  • Avoid overly long lines. There are lines up to 147 characters long in the original sources (DistributionInfo.cpp line 48); that should not be our baseline. Lines in other files are broken at the 80th-90th column, and it is very common to find lines close to 100.
  • Purge tabs from our sources. OOBE.cpp will be completely rewritten, mainly due to tabs.
  • Preserve what seems consistent across the original sources. Brace breaks at function definitions and namespaces seem pretty common. That's not my preferred style, but it won't hurt, and we'll benefit from the consistency.
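
The sketch mentioned above: locally I've just been pointing clang-format at everything except the disregarded files (a local convenience only; the CI itself goes through the .clang-ignore mechanism added in this PR):

# Apply the proposed .clang-format to all our C++ sources except WslApiLoader.*
clang-format -i --style=file \
  $(git ls-files 'DistroLauncher/*.cpp' 'DistroLauncher/*.h' | grep -v 'WslApiLoader')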

A theoretical exercise:

If we were to reformat the original sources with the current clang-format definition, it would look like this:

The most consistent files (Distr*.{cpp,h} and Helpers.{cpp,h}); notice how the headers wouldn't change:

diff --git a/DistroLauncher/DistributionInfo.cpp b/DistroLauncher/DistributionInfo.cpp
index 3367094..34c7fae 100644
--- a/DistroLauncher/DistributionInfo.cpp
+++ b/DistroLauncher/DistributionInfo.cpp
@@ -45,7 +45,8 @@ ULONG DistributionInfo::QueryUid(std::wstring_view userName)
         command += userName;
         int returnValue = 0;
         HANDLE child;
-        HRESULT hr = g_wslApi.WslLaunch(command.c_str(), true, GetStdHandle(STD_INPUT_HANDLE), writePipe, GetStdHandle(STD_ERROR_HANDLE), &child);
+        HRESULT hr = g_wslApi.WslLaunch(command.c_str(), true, GetStdHandle(STD_INPUT_HANDLE),
+                                        writePipe, GetStdHandle(STD_ERROR_HANDLE), &child);
         if (SUCCEEDED(hr)) {
             // Wait for the child to exit and ensure process exited successfully.
             WaitForSingleObject(child, INFINITE);
@@ -65,7 +66,7 @@ ULONG DistributionInfo::QueryUid(std::wstring_view userName)
                     try {
                         uid = std::stoul(buffer, nullptr, 10);
 
-                    } catch( ... ) { }
+                    } catch (...) { }
                 }
             }
         }
diff --git a/DistroLauncher/DistroLauncher.cpp b/DistroLauncher/DistroLauncher.cpp
index 83828f2..7c57f84 100644
--- a/DistroLauncher/DistroLauncher.cpp
+++ b/DistroLauncher/DistroLauncher.cpp
@@ -5,7 +5,7 @@
 
 #include "stdafx.h"
 
-// Commandline arguments: 
+// Commandline arguments:
 #define ARG_CONFIG              L"config"
 #define ARG_CONFIG_DEFAULT_USER L"--default-user"
 #define ARG_INSTALL             L"install"
@@ -29,7 +29,8 @@ HRESULT InstallDistribution(bool createUser)
         return hr;
     }
 
-    // Delete /etc/resolv.conf to allow WSL to generate a version based on Windows networking information.
+    // Delete /etc/resolv.conf to allow WSL to generate a version based on Windows networking
+    // information.
     DWORD exitCode;
     hr = g_wslApi.WslLaunchInteractive(L"rm /etc/resolv.conf", true, &exitCode);
     if (FAILED(hr)) {
@@ -38,7 +39,7 @@ HRESULT InstallDistribution(bool createUser)
 
     // Create a user account.
     if (createUser) {
-        if (DistributionInfo::isOOBEAvailable()){
+        if (DistributionInfo::isOOBEAvailable()) {
             return DistributionInfo::OOBESetup();
         }
         Helpers::PrintMessage(MSG_CREATE_USER_PROMPT);
@@ -103,7 +104,8 @@ int wmain(int argc, wchar_t const *argv[])
     if (!g_wslApi.WslIsDistributionRegistered()) {
 
         // If the "--root" option is specified, do not create a user account.
-        bool useRoot = ((installOnly) && (arguments.size() > 1) && (arguments[1] == ARG_INSTALL_ROOT));
+        bool useRoot =
+          ((installOnly) && (arguments.size() > 1) && (arguments[1] == ARG_INSTALL_ROOT));
         hr = InstallDistribution(!useRoot);
         if (FAILED(hr)) {
             if (hr == HRESULT_FROM_WIN32(ERROR_ALREADY_EXISTS)) {
@@ -128,8 +130,7 @@ int wmain(int argc, wchar_t const *argv[])
                 Helpers::PromptForInput();
             }
 
-        } else if ((arguments[0] == ARG_RUN) ||
-                   (arguments[0] == ARG_RUN_C)) {
+        } else if ((arguments[0] == ARG_RUN) || (arguments[0] == ARG_RUN_C)) {
 
             std::wstring command;
             for (size_t index = 1; index < arguments.size(); index += 1) {
diff --git a/DistroLauncher/Helpers.cpp b/DistroLauncher/Helpers.cpp
index 524e8b6..e9809a5 100644
--- a/DistroLauncher/Helpers.cpp
+++ b/DistroLauncher/Helpers.cpp
@@ -5,7 +5,8 @@
 
 #include "stdafx.h"
 
-namespace {
+namespace
+{
     HRESULT FormatMessageHelperVa(DWORD messageId, va_list vaList, std::wstring* message);
     HRESULT PrintMessageVa(DWORD messageId, va_list vaList);
 }
@@ -32,7 +33,7 @@ std::wstring Helpers::GetUserInput(DWORD promptMsg, DWORD maxCharacters)
 
 void Helpers::PrintErrorMessage(HRESULT error)
 {
-    PWSTR buffer = nullptr; 
+    PWSTR buffer = nullptr;
     ::FormatMessageW(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_ALLOCATE_BUFFER,
                      nullptr,
                      error,
@@ -65,17 +66,19 @@ void Helpers::PromptForInput()
     return;
 }
 
-namespace {
+namespace
+{
     HRESULT FormatMessageHelperVa(DWORD messageId, va_list vaList, std::wstring* message)
     {
-        PWSTR buffer = nullptr; 
-        DWORD written = ::FormatMessageW(FORMAT_MESSAGE_FROM_HMODULE | FORMAT_MESSAGE_ALLOCATE_BUFFER,
-                                         nullptr,
-                                         messageId,
-                                         0,
-                                         (PWSTR)&buffer,
-                                         10,
-                                         &vaList);
+        PWSTR buffer = nullptr;
+        DWORD written =
+          ::FormatMessageW(FORMAT_MESSAGE_FROM_HMODULE | FORMAT_MESSAGE_ALLOCATE_BUFFER,
+                           nullptr,
+                           messageId,
+                           0,
+                           (PWSTR)&buffer,
+                           10,
+                           &vaList);
         *message = buffer;
         if (buffer != nullptr) {
             HeapFree(GetProcessHeap(), 0, buffer);

The most complicated ones (WslApiLoader.{cpp,h}):

diff --git a/DistroLauncher/WslApiLoader.cpp b/DistroLauncher/WslApiLoader.cpp
index 53fdf83..1032fef 100644
--- a/DistroLauncher/WslApiLoader.cpp
+++ b/DistroLauncher/WslApiLoader.cpp
@@ -11,11 +11,16 @@ WslApiLoader::WslApiLoader(const std::wstring& distributionName) :
 {
     _wslApiDll = LoadLibraryEx(L"wslapi.dll", nullptr, LOAD_LIBRARY_SEARCH_SYSTEM32);
     if (_wslApiDll != nullptr) {
-        _isDistributionRegistered = (WSL_IS_DISTRIBUTION_REGISTERED)GetProcAddress(_wslApiDll, "WslIsDistributionRegistered");
-        _registerDistribution = (WSL_REGISTER_DISTRIBUTION)GetProcAddress(_wslApiDll, "WslRegisterDistribution");
-        _configureDistribution = (WSL_CONFIGURE_DISTRIBUTION)GetProcAddress(_wslApiDll, "WslConfigureDistribution");
-        _getDistributionConfiguration = (WSL_GET_DISTRIBUTION_CONFIGURATION)GetProcAddress(_wslApiDll, "WslGetDistributionConfiguration");
-        _launchInteractive = (WSL_LAUNCH_INTERACTIVE)GetProcAddress(_wslApiDll, "WslLaunchInteractive");
+        _isDistributionRegistered =
+          (WSL_IS_DISTRIBUTION_REGISTERED)GetProcAddress(_wslApiDll, "WslIsDistributionRegistered");
+        _registerDistribution =
+          (WSL_REGISTER_DISTRIBUTION)GetProcAddress(_wslApiDll, "WslRegisterDistribution");
+        _configureDistribution =
+          (WSL_CONFIGURE_DISTRIBUTION)GetProcAddress(_wslApiDll, "WslConfigureDistribution");
+        _getDistributionConfiguration = (WSL_GET_DISTRIBUTION_CONFIGURATION)GetProcAddress(
+          _wslApiDll, "WslGetDistributionConfiguration");
+        _launchInteractive =
+          (WSL_LAUNCH_INTERACTIVE)GetProcAddress(_wslApiDll, "WslLaunchInteractive");
         _launch = (WSL_LAUNCH)GetProcAddress(_wslApiDll, "WslLaunch");
     }
 }
@@ -29,12 +34,9 @@ WslApiLoader::~WslApiLoader()
 
 BOOL WslApiLoader::WslIsOptionalComponentInstalled()
 {
-    return ((_wslApiDll != nullptr) && 
-            (_isDistributionRegistered != nullptr) &&
-            (_registerDistribution != nullptr) &&
-            (_configureDistribution != nullptr) &&
-            (_getDistributionConfiguration != nullptr) &&
-            (_launchInteractive != nullptr) &&
+    return ((_wslApiDll != nullptr) && (_isDistributionRegistered != nullptr) &&
+            (_registerDistribution != nullptr) && (_configureDistribution != nullptr) &&
+            (_getDistributionConfiguration != nullptr) && (_launchInteractive != nullptr) &&
             (_launch != nullptr));
 }
 
@@ -53,9 +55,11 @@ HRESULT WslApiLoader::WslRegisterDistribution()
     return hr;
 }
 
-HRESULT WslApiLoader::WslConfigureDistribution(ULONG defaultUID, WSL_DISTRIBUTION_FLAGS wslDistributionFlags)
+HRESULT WslApiLoader::WslConfigureDistribution(ULONG defaultUID,
+                                               WSL_DISTRIBUTION_FLAGS wslDistributionFlags)
 {
-    HRESULT hr = _configureDistribution(_distributionName.c_str(), defaultUID, wslDistributionFlags);
+    HRESULT hr =
+      _configureDistribution(_distributionName.c_str(), defaultUID, wslDistributionFlags);
     if (FAILED(hr)) {
         Helpers::PrintMessage(MSG_WSL_CONFIGURE_DISTRIBUTION_FAILED, hr);
     }
@@ -65,7 +69,7 @@ HRESULT WslApiLoader::WslConfigureDistribution(ULONG defaultUID, WSL_DISTRIBUTIO
 
 HRESULT WslApiLoader::WslGetDistributionConfiguration(ULONG* distributionVersion,
                                                       ULONG* defaultUID,
-                                                      WSL_DISTRIBUTION_FLAGS* wslDistributionFlags, 
+                                                      WSL_DISTRIBUTION_FLAGS* wslDistributionFlags,
                                                       PSTR** defaultEnvironmentVariables,
                                                       ULONG* defaultEnvironmentVariableCount)
 {
@@ -77,9 +81,12 @@ HRESULT WslApiLoader::WslGetDistributionConfiguration(ULONG* distributionVersion
                                          defaultEnvironmentVariableCount);
 }
 
-HRESULT WslApiLoader::WslLaunchInteractive(PCWSTR command, BOOL useCurrentWorkingDirectory, DWORD *exitCode)
+HRESULT WslApiLoader::WslLaunchInteractive(PCWSTR command,
+                                           BOOL useCurrentWorkingDirectory,
+                                           DWORD* exitCode)
 {
-    HRESULT hr = _launchInteractive(_distributionName.c_str(), command, useCurrentWorkingDirectory, exitCode);
+    HRESULT hr =
+      _launchInteractive(_distributionName.c_str(), command, useCurrentWorkingDirectory, exitCode);
     if (FAILED(hr)) {
         Helpers::PrintMessage(MSG_WSL_LAUNCH_INTERACTIVE_FAILED, command, hr);
     }
@@ -87,9 +94,20 @@ HRESULT WslApiLoader::WslLaunchInteractive(PCWSTR command, BOOL useCurrentWorkin
     return hr;
 }
 
-HRESULT WslApiLoader::WslLaunch(PCWSTR command, BOOL useCurrentWorkingDirectory, HANDLE stdIn, HANDLE stdOut, HANDLE stdErr, HANDLE *process)
+HRESULT WslApiLoader::WslLaunch(PCWSTR command,
+                                BOOL useCurrentWorkingDirectory,
+                                HANDLE stdIn,
+                                HANDLE stdOut,
+                                HANDLE stdErr,
+                                HANDLE* process)
 {
-    HRESULT hr = _launch(_distributionName.c_str(), command, useCurrentWorkingDirectory, stdIn, stdOut, stdErr, process);
+    HRESULT hr = _launch(_distributionName.c_str(),
+                         command,
+                         useCurrentWorkingDirectory,
+                         stdIn,
+                         stdOut,
+                         stdErr,
+                         process);
     if (FAILED(hr)) {
         Helpers::PrintMessage(MSG_WSL_LAUNCH_FAILED, command, hr);
     }
diff --git a/DistroLauncher/WslApiLoader.h b/DistroLauncher/WslApiLoader.h
index ef89fe6..b513ad2 100644
--- a/DistroLauncher/WslApiLoader.h
+++ b/DistroLauncher/WslApiLoader.h
@@ -11,12 +11,17 @@
 #define ERROR_LINUX_SUBSYSTEM_NOT_PRESENT 414L
 #endif // !ERROR_LINUX_SUBSYSTEM_NOT_PRESENT
 
-typedef BOOL    (STDAPICALLTYPE* WSL_IS_DISTRIBUTION_REGISTERED)(PCWSTR);
-typedef HRESULT (STDAPICALLTYPE* WSL_REGISTER_DISTRIBUTION)(PCWSTR, PCWSTR);
-typedef HRESULT (STDAPICALLTYPE* WSL_CONFIGURE_DISTRIBUTION)(PCWSTR, ULONG, WSL_DISTRIBUTION_FLAGS);
-typedef HRESULT (STDAPICALLTYPE* WSL_GET_DISTRIBUTION_CONFIGURATION)(PCWSTR, ULONG *, ULONG *, WSL_DISTRIBUTION_FLAGS *, PSTR **, ULONG *);
-typedef HRESULT (STDAPICALLTYPE* WSL_LAUNCH_INTERACTIVE)(PCWSTR, PCWSTR, BOOL, DWORD *);
-typedef HRESULT (STDAPICALLTYPE* WSL_LAUNCH)(PCWSTR, PCWSTR, BOOL, HANDLE, HANDLE, HANDLE, HANDLE *);
+typedef BOOL(STDAPICALLTYPE* WSL_IS_DISTRIBUTION_REGISTERED)(PCWSTR);
+typedef HRESULT(STDAPICALLTYPE* WSL_REGISTER_DISTRIBUTION)(PCWSTR, PCWSTR);
+typedef HRESULT(STDAPICALLTYPE* WSL_CONFIGURE_DISTRIBUTION)(PCWSTR, ULONG, WSL_DISTRIBUTION_FLAGS);
+typedef HRESULT(STDAPICALLTYPE* WSL_GET_DISTRIBUTION_CONFIGURATION)(PCWSTR,
+                                                                    ULONG*,
+                                                                    ULONG*,
+                                                                    WSL_DISTRIBUTION_FLAGS*,
+                                                                    PSTR**,
+                                                                    ULONG*);
+typedef HRESULT(STDAPICALLTYPE* WSL_LAUNCH_INTERACTIVE)(PCWSTR, PCWSTR, BOOL, DWORD*);
+typedef HRESULT(STDAPICALLTYPE* WSL_LAUNCH)(PCWSTR, PCWSTR, BOOL, HANDLE, HANDLE, HANDLE, HANDLE*);
 
 class WslApiLoader
 {
@@ -30,26 +35,22 @@ class WslApiLoader
 
     HRESULT WslRegisterDistribution();
 
-    HRESULT WslConfigureDistribution(ULONG defaultUID,
-                                     WSL_DISTRIBUTION_FLAGS wslDistributionFlags);
+    HRESULT WslConfigureDistribution(ULONG defaultUID, WSL_DISTRIBUTION_FLAGS wslDistributionFlags);
 
     HRESULT WslGetDistributionConfiguration(ULONG* distributionVersion,
                                             ULONG* defaultUID,
                                             WSL_DISTRIBUTION_FLAGS* wslDistributionFlags,
                                             PSTR** defaultEnvironmentVariables,
-                                            ULONG* defaultEnvironmentVariableCount
-                                            );
+                                            ULONG* defaultEnvironmentVariableCount);
 
-    HRESULT WslLaunchInteractive(PCWSTR command,
-                                 BOOL useCurrentWorkingDirectory,
-                                 DWORD *exitCode);
+    HRESULT WslLaunchInteractive(PCWSTR command, BOOL useCurrentWorkingDirectory, DWORD* exitCode);
 
     HRESULT WslLaunch(PCWSTR command,
                       BOOL useCurrentWorkingDirectory,
                       HANDLE stdIn,
                       HANDLE stdOut,
                       HANDLE stdErr,
-                      HANDLE *process);
+                      HANDLE* process);
 
   private:
     std::wstring _distributionName;

Of course, that's only an exercise. The changes I'm submitting for review touch only our own files.

If the proposed style is good enough for you, let's get this merged; as long as we let the tools do their job, we should never have to talk about formatting issues in the C++ sources again.

+1850 -10

11 comments

33 changed files

CarlosNihelton

pr closed time in 20 hours

push event ubuntu/WSL

Carlos Nihelton

commit sha e9bae6cdfa73c44f1fd53f56c00ef3db902b46a2

Removing vcpkg from the build workflow.

view details

Carlos Nihelton

commit sha 7b0c71948fa5a5b1ec2afae6274ef08c9c265cd6

Testing harness - Google Test framework chosen due to GMock - Added tests for ExitStatusParser - Upgraded SDK version in the Test vcxproj files

view details

Carlos Nihelton

commit sha 9113cf5e0fc2fd6198aa8bdc3fe41bc7fff1ae4c

Setting up clang-format and clang-tidy - Enables the MSBuild CLI to run clang-tidy - Extending default MSBuild support was required. - That's done with a combination of batch and Python scripts under the msbuild/ folder. - The custom runner supports a .clang-ignore file with syntax similar to .gitignore.

view details

Carlos Nihelton

commit sha e91068b8ed48f774fcc6e2a3830b5fc843681f9f

Created the inline_comments component - It's responsible for calling the GitHub API - Review comments must have been created beforehand.

view details

Carlos Nihelton

commit sha 6ac312cd91b58e7d5c1ce9a7af54a4452e52c375

Created diff_to_review component - Turns any unidiff format into a list of PR review comments to be posted. - The diff could be generated by any tool. - The diff hunks become suggestions in a PR review comment context.

view details

Carlos Nihelton

commit sha f2d7a1c43bcc0f8c58f8ec8ad7084427a1013261

clang_lint pipeline implemented - Obeys the .clang-ignore file described before. - Expects clang-tidy to have run and produced the fixes.yaml output file. - Runs clang-format to generate a diff of unformatted code and turn it into PR review comments.

view details

Carlos Nihelton

commit sha 62072c211e0ccdd9e598c60693a626efed35a8ff

The workflow entrypoint - Intended to be run under Windows or Linux - Some changes might be required to ensure portability. - Tested only on Windows due to our need to build and test the app.

view details

Carlos Nihelton

commit sha b281327e39fd48cdd3da84ac08d20d621670903e

Finally, the workflow is implemented. - Runs tests - Runs clang-tidy - Runs the pipeline: - Creates comments from clang-tidy's fixes.yaml - Runs clang-format. - Creates comments from any diff generated by the formatter. - Aggregates the comments. - Posts them. - When applicable, generated comments come with suggestions.

view details

Carlos Nihelton

commit sha 5919ff587f1035ecc0e4f1fa6745c0784aa57baf

Complex type cannot be vararg

view details

Carlos Nihelton

commit sha 4fc9cbe3e0f3148340c4d53920f1600dc39c9a9a

Fix Python3 shebang.

view details

Carlos Nihelton

commit sha cce62b29b6dd4d31ed668b5714bee2b4d0422b65

Fix undesired double spaces.

view details

Carlos Nihelton

commit sha 49910e538bd38e756fc73227d4bf3a0e4efee6d7

Created or enhanced existing docstrings.

view details

Carlos Nihelton

commit sha 0baec8fff8e063b991fa78703400b4fce3ac7e47

Formatting strings instead of concatenating.

view details

Carlos Nihelton

commit sha 37c46ee319ecffb71191b216cf27d5f61598faab

Try-except around GitPython client code. - As pointed out, CI could crash if not in a git repository - That is possible if actions/checkout needs to fall back, for instance.

view details

Carlos Nihelton

commit sha 2495565c0a02247ee75349bacb429aeb79be4e33

Taking the tests to their own workflow.

view details

Carlos Nihelton

commit sha 7074711977173ec4637d40f5b3f7ab507320cc05

Merge pull request #69 from ubuntu/fix-clang-format: Quality checks on CI with LLVM/Clang tools

view details

push time in 20 hours

delete branch ubuntu/WSL

delete branch: fix-clang-format

delete time in 20 hours

pull request comment ubuntu/WSL

Quality checks on CI with LLVM/Clang tools

I will let you merge :)

My honor! :smile:

CarlosNihelton

comment created time in 20 hours

issue opened ubuntu/microk8s

Calicoctl ipam configure not available

I am trying to add Windows workers to an existing 3-node Ubuntu MicroK8s cluster. The docs state that I should use calicoctl and set strict affinity to true with the following command: DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl ipam configure --strictaffinity=true

However, the command calicoctl ipam configure is not available. I can issue e.g. DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl ipam -h and I get:

calicoctl ipam <command> [<args>...]

    release      Release a Calico assigned IP address.
    show         Show details of a Calico assigned IP address,
                 or of overall IP usage.
There is no "configure" option for ipam.

Issuing: DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl ipam configure --strictaffinity=true results in:

Set the Calico datastore access information in the environment variables or
supply details in a config file.

Usage:
  calicoctl ipam <command> [<args>...]

    release      Release a Calico assigned IP address.
    show         Show details of a Calico assigned IP address,
                 or of overall IP usage.

Options:
  -h --help      Show this screen.

Description:
  IP Address Management specific commands for calicoctl.

  See 'calicoctl ipam <command> --help' to read about a specific subcommand.

calicoctl is configured correctly, because get commands work, e.g. DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl get node results in:

NAME          
k8s-linux-1   
k8s-linux-3   
k8s-linux-2   

How should I configure affinity to proceed with adding Windows worker nodes? Is this a bug, or are the docs out of date?
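
One more data point that may help (a sketch; I'm assuming the missing subcommand simply means the bundled calicoctl is older than the one the docs describe):

# Check which calicoctl version the cluster is actually using; the
# "ipam configure" subcommand only appears in newer releases than the output above suggests.
DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl version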

created time in 20 hours

pull request comment ubuntu/WSL

Quality checks on CI with LLVM/Clang tools

I will let you merge :)

CarlosNihelton

comment created time in 20 hours

pull request review event

Pull request review comment ubuntu/WSL

Quality checks on CI with LLVM/Clang tools

+name: QA - Lint C++ Sources
+on:
+  push:
+    branches:
+    - main
+  pull_request:
+concurrency: quality
+
+env:
+  pull_request_id: ${{ github.event.pull_request.number }}
+  INPUT_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+
+jobs:
+  quality:
+    name: Lint C++ sources with LLVM
+    runs-on: windows-latest
+    steps:
+      - uses: actions/checkout@v2
+      - uses: microsoft/setup-msbuild@v1.0.2
+      # Ensures compilation of message resources before clang-tidy. Also fails fast.
+      - name: Run tests
+        shell: powershell
+        run: |
+          msbuild -p:Platform=x64 -p:Configuration=Debug -p:RestorePackagesConfig=true -t:restore .\DistroLauncher-Tests\DistroLauncher-Tests.sln
+          msbuild -p:Platform=x64 -p:Configuration=Debug .\DistroLauncher-Tests\DistroLauncher-Tests.sln
+          .\DistroLauncher-Tests\x64\Debug\DistroLauncher-Tests.exe
+      # Produces the fixes.yaml to be consumed by the commenter.
+      - name: Run Clang-Tidy.

Thanks for considering, let’s merge it now!

CarlosNihelton

comment created time in 20 hours

pull request review event

PR opened ubuntu/WSL

Support for pushing notifications to the Ubuntu ISO Tracker

Sending this out for an initial review. The goal of this PR is to send notifications to the Ubuntu ISO Tracker about any new WSL builds that complete, notifying the respective per-series products. This way we'd have Ubuntu WSL images appearing on the tracker automatically. It also plays nicely with another planned feature: the ability to re-trigger builds via the tracker.

http://iso.qa.ubuntu.com/

This is the first time I've dealt with GitHub Actions, so that part is mostly written blindly, just by looking at other sections and reading documentation. There are also some open questions here.

Open questions:

  1. Version of the resulting WSL build. Right now I went with the simple route of just using build_id, but from what I can see that's quite a small number at the moment. One idea was something like DATESTAMP.build_id (a tiny sketch of that follows this list), or does build_id suffice?
  2. I'm not sure if the ISO Tracker API will work over HTTPS (which we need, as it's outside the Canonical network).
  3. I have no idea how GitHub Actions are triggered and how long the environment survives, but in the code I used /tmp/all-releases.csv in case it might 'stick around' from previous job runs. I can switch to just doing it via LP directly, but if it's persistent for long enough I thought it might save a few LP API calls.
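
On question 1, a tiny sketch of the DATESTAMP.build_id idea (BUILD_ID is only a stand-in here for whatever variable actually carries the build id):

# Hypothetical version string: datestamp plus the build id.
BUILD_ID=42                          # stand-in value for illustration
VERSION="$(date +%Y%m%d).${BUILD_ID}"
echo "${VERSION}"                    # e.g. 20220118.42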

Other than that I think it's good enough. I only tested it in dry-run mode on my local machine. I'm open to suggestions!

+160 -0

0 comments

3 changed files

pr created time in 21 hours

Pull request review comment ubuntu/WSL

Quality checks on CI with LLVM/Clang tools

+name: QA - Lint C++ Sources
+on:
+  push:
+    branches:
+    - main
+  pull_request:
+concurrency: quality
+
+env:
+  pull_request_id: ${{ github.event.pull_request.number }}
+  INPUT_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+
+jobs:
+  quality:
+    name: Lint C++ sources with LLVM
+    runs-on: windows-latest
+    steps:
+      - uses: actions/checkout@v2
+      - uses: microsoft/setup-msbuild@v1.0.2
+      # Ensures compilation of message resources before clang-tidy. Also fails fast.
+      - name: Run tests
+        shell: powershell
+        run: |
+          msbuild -p:Platform=x64 -p:Configuration=Debug -p:RestorePackagesConfig=true -t:restore .\DistroLauncher-Tests\DistroLauncher-Tests.sln
+          msbuild -p:Platform=x64 -p:Configuration=Debug .\DistroLauncher-Tests\DistroLauncher-Tests.sln
+          .\DistroLauncher-Tests\x64\Debug\DistroLauncher-Tests.exe
+      # Produces the fixes.yaml to be consumed by the commenter.
+      - name: Run Clang-Tidy.

Done. I moved the test run to its own workflow. We should be good to go.

CarlosNihelton

comment created time in 21 hours
