Ian Partridge (ianpartridge), IBM UK. Senior cloud engineer @IBM.

apple/swift-corelibs-foundation 4293

The Foundation Project, providing core utilities, internationalization, and OS independence

apple/swift-docker 1154

Docker Official Image packaging for Swift

Evolution-App/iOS 222

Unofficial app for Swift Evolution

Evolution-App/Backend 122

Backend is responsible to provide data to EVOlution App - iOS

dokun1/slackin-swift 32

Invite people to your public slack instance - but in Swift!

ianpartridge/coffeeshop-demo 4

OpenLiberty, Kafka and Reactive are ordering serverless coffee with KEDA

appsody/appsody-buildah 1

A docker image with Appsody CLI installed that can be used for running Appsody with buildah in Tekton pipelines.

pull request comment redhat-developer/service-binding-operator

Drop support for empty application binding

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: To complete the pull request process, please ask for approval from baijum after the PR has been reviewed.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.

baijum

comment created time in 35 minutes

pull request comment redhat-developer/service-binding-operator

Successful Service Binding Resource should be Immutable

@isutton SBO's deployment is service-binding-operator (https://github.com/redhat-developer/service-binding-operator/pull/913/checks?check_run_id=2804236225#step:8:72); however, your detector in tests is looking for service-binding-operator-controller-manager

make deploy uses kustomize, in which I found a namePrefix field set to service-binding-operator-, resulting in a Deployment named service-binding-operator-controller-manager, as seen below:

$ kubectl -n service-binding-operator-system get all
NAME                                                               READY   STATUS              RESTARTS   AGE
pod/service-binding-operator-controller-manager-68fbd6f645-9grwz   0/1     ContainerCreating   0          4m37s

NAME                                               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/service-binding-operator-webhook-service   ClusterIP   10.97.49.240   <none>        443/TCP   4m37s

NAME                                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/service-binding-operator-controller-manager   0/1     1            0           4m37s

NAME                                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/service-binding-operator-controller-manager-68fbd6f645   1         1         0       4m37s
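For illustration, a minimal kustomization.yaml sketch that would produce such a prefix (the repo's actual file layout may differ):

```yaml
# Illustrative sketch only; the actual kustomization in the repo may differ.
namePrefix: service-binding-operator-
resources:
- manager.yaml   # defines a Deployment named controller-manager
```

kustomize prepends namePrefix to the names of all rendered resources, so a Deployment named controller-manager comes out as service-binding-operator-controller-manager.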

To build the image, I've used make image OPERATOR_REGISTRY=$(minikube ip):5000 producing the image 192.168.49.2:5000/redhat-developer/servicebinding-operator:991bc81d, which looks correct. The image is being built using Minikube's Docker, which has been exposed by running eval $(minikube docker-env).

As seen in here WATCH_NAMESPACE is being set by default to the resource's namespace:

env:
- name: WATCH_NAMESPACE
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: metadata.namespace

@pedjak If you do not oppose, I'll remove this configuration from manager.yaml.

Avni-Sharma

comment created time in 39 minutes

push event k8s-service-bindings/spec

Scott Andrews

commit sha 9c5594c41989ed865e251de2bc24acafd4f352fe

Remove mappings from the ServiceBinding resource The existing mappings on the ServiceBinding resource were introduced as a way to make it easier for a user to enrich an existing Secret into a form appropriate for binding to an application workload. This approach had a few issues that can be better addressed by other resources that interoperate with the ServiceBinding resource. The issues include: - the ServiceBinding controller needs to be able to read and write Secrets - Go templates were used to compose new values, which worked for basic templating, but were limited in their capabilities - the capabilities of the Go templates are an implementation detail of how the controller is built and could change over time independently of the specified behavior. The behavior applied by mappings can be reintroduced as dedicated resources that can themselves expose a Secret as a ProvisionedService, which can be consumed by a ServiceBinding. This change further separates the concerns of provisioning a service from binding a service. Refs #145 Signed-off-by: Scott Andrews <andrewssc@vmware.com>

view details

Scott Andrews

commit sha bf6c0c04e45ce5c83bdad5f2f30046e93105b149

Add should/must clarification for .data.type in secrets Signed-off-by: Scott Andrews <andrewssc@vmware.com>

view details

Scott Andrews

commit sha dbd710590011c571dc1514c75bcf10c99f2b9941

Restore .spec.type and .spec.provider While bringing back these fields and the general capability, I removed references that mandated these fields be added to a Secret. Instead, the requirement is that they are part of the application projection. Implementors can figure out the best way to make that happen, either by creating a derivative Secret, or by using a projected volume. Signed-off-by: Scott Andrews <andrewssc@vmware.com>

view details

Arthur De Magalhaes

commit sha 2dceb4624e010f2412cce5745383c1a62dd76c36

Merge pull request #154 from scothis/mappings Remove mappings from the ServiceBinding resource

view details

push time in 9 hours

PR merged k8s-service-bindings/spec

Remove mappings from the ServiceBinding resource

The existing mappings on the ServiceBinding resource were introduced as a way to make it easier for a user to enrich an existing Secret into a form appropriate for binding to an application workload. This approach had a few issues that can be better addressed by other resources that interoperate with the ServiceBinding resource. The issues include:

  • the ServiceBinding controller needs to be able to read and write Secrets
  • Go templates were used to compose new values, which worked for basic templating, but were limited in their capabilities
  • the capabilities of the Go templates are an implementation detail of how the controller is built and could change over time independently of the specified behavior.

The behavior applied by mappings can be reintroduced as dedicated resources that can themselves expose a Secret as a ProvisionedService, which can be consumed by a ServiceBinding. This change further separates the concerns of provisioning a service from binding a service.

The .spec.type and .spec.provider fields are also removed, as they are syntactic sugar on top of mappings.

Refs #145 Resolves #155

Signed-off-by: Scott Andrews andrewssc@vmware.com

+7 -78

12 comments

3 changed files

scothis

pr closed time in 9 hours

issue closed k8s-service-bindings/spec

Inconsistent information about `type` entry in binding secret

The Provisioned Service section of the README says:

The Secret data SHOULD contain a type entry with a value that identifies the abstract classification of the binding. 

The Application Projection section of the README says:

The Secret data MUST contain a type entry with a value that identifies the abstract classification of the binding.

The difference is between the SHOULD and MUST conventions.

closed time in 9 hours

DhritiShikhar

issue opened w3c/smufl

Flags missing from classes.json documentation

On the classes.json documentation page, there is an example excerpt from a metadata file that includes "clefs", "noteheads", and "flags". However, in the table below, flags are not mentioned as a class defined by the spec.

created time in 18 hours

issue opened k8s-service-bindings/spec

Deny list for containers to limit which containers in the application are not bound

The current spec supports an allow-list to limit which containers in the application are bound. Sometimes a deny-list would be more appropriate: it would specify which containers in the application are not bound. The allow-list and deny-list could be mutually exclusive (only one of them may exist). I propose adding a .spec.application.skipContainers field to specify the deny-list.

This can be added post 1.0 release in a backward-compatible way.
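A hypothetical sketch of the proposed field (it does not exist in the current spec; names and apiVersion follow the ServiceBinding examples elsewhere on this page):

```yaml
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: busybox-postgresql
spec:
  application:
    group: apps
    resource: deployments
    name: busybox
    version: v1
    skipContainers:    # proposed deny-list, mutually exclusive with the allow-list
    - istio-proxy      # hypothetical sidecar container to leave unbound
  services:
    - group: postgresql.dev4devs.com
      kind: Database
      name: postgresql
      version: v1alpha1
```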

created time in 2 days

issue opened swift-server/swift-backtrace

Support Segmentation Fault

It would be awesome if we could get a stack trace for SIGSEGV signals. My Docker container exits with exit code 139 / SIGSEGV; however, no stack trace is printed.

Code to reproduce:

import Dispatch
import Foundation

var array: [Int] = []

let parallelQueue = DispatchQueue(label: "test", attributes: .concurrent)

parallelQueue.asyncAfter(deadline: .now() + 15) {
    for i in 0...100000 {
        array.append(i)
    }
}

parallelQueue.asyncAfter(deadline: .now() + 15) {
    for i in 0...100000 {
        array.append(i)
    }
}

// Keep the process alive so the two unsynchronized (racing) appends actually run.
Thread.sleep(forTimeInterval: 30)

Using Backtrace version 1.2.3.

created time in 3 days

release simpleigh/spookyhash

2.0.0

released time in 3 days

release simpleigh/spookyhash

1.2.1

released time in 3 days

issue opened k8s-service-bindings/spec

Normative text missing for ServiceBinding resource's .status.observedGeneration field

ServiceBinding resource type schema has a .status.observedGeneration field:

...
status:
  binding:              # LocalObjectReference, optional
    name:               # string
  conditions:           # []metav1.Condition containing at least one entry for `Ready`
  observedGeneration:   # int64

But there is no normative text about the .status.observedGeneration field explaining its significance. The spec should probably describe that field's usage pattern.

There was some comment about that field in the past here: https://github.com/k8s-service-bindings/spec/issues/126#issuecomment-710105155
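For context, the common Kubernetes convention (an assumption about the intended usage pattern, not normative text from this spec) is that the controller copies metadata.generation into .status.observedGeneration once it has reconciled that revision of the spec, so consumers can tell whether the status is current:

```yaml
metadata:
  generation: 3          # incremented by the API server on every spec change
status:
  observedGeneration: 3  # written by the controller after reconciling
  conditions:            # status reflects the latest spec only if the two values are equal
  - type: Ready
    status: "True"
```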

created time in 3 days

pull request comment redhat-developer/service-binding-operator

Successful Service Binding Resource should be Immutable

@isutton SBO's deployment is most likely called controller-manager (https://github.com/redhat-developer/service-binding-operator/pull/913/files#diff-acdef1a63fe932b7de3c48f1a4fce36f79787a3f5b9136539c3d214491a27cacR4); however, your detector in tests is looking for service-binding-operator-controller-manager (https://github.com/redhat-developer/service-binding-operator/pull/913/files#diff-6b43685101835ebb3b69db6de7d76eb64e396b3cdf560619630e0055d4c593b8R44). That is why it does not find it.

The Collect Logs step in GHA (the archived sbo.log) does not care about the particular name; it takes whatever is there (https://github.com/redhat-developer/service-binding-operator/blob/master/.github/workflows/pr-checks.yaml#L114), so the log is fine.

Avni-Sharma

comment created time in 3 days

push event swift-server/swift-backtrace

tomer doron

commit sha d9655c7867edd9c8e708c620349bf63c9d5d3dc3

add 5.4 CI (#46) motivation: 5.4 was released, add 5.4 CI changes: add docker-compose setup for 5.4

view details

tomer doron

commit sha 5f7686a18bc6063df5dbecdfdb09e92ca698210b

Merge branch 'main' into tomerd-patch-1

view details

push time in 3 days

push event swift-server/swift-backtrace

tomer doron

commit sha d9655c7867edd9c8e708c620349bf63c9d5d3dc3

add 5.4 CI (#46) motivation: 5.4 was released, add 5.4 CI changes: add docker-compose setup for 5.4

view details

push time in 3 days

PR merged swift-server/swift-backtrace

add 5.4 CI

motivation: 5.4 was released, add 5.4 CI

changes: add docker-compose setup for 5.4

+20 -2

1 comment

2 changed files

tomerd

pr closed time in 3 days

Pull request review comment k8s-service-bindings/spec

Move Secret Generation Strategies extension

+# Secret Generator Extension for Service Binding
  • Created an "extensions" directory and moved the Secret Generation extension inside
  • Created "extensions/README.md" with a brief introduction and a table of extensions (No., Title, and Status columns)
  • Updated the extension with a brief introduction and status section
baijum

comment created time in 3 days

pull request comment redhat-developer/service-binding-operator

Successful Service Binding Resource should be Immutable

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: To complete the pull request process, please assign isutton after the PR has been reviewed. You can assign the PR to them by writing /assign @isutton in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.

Avni-Sharma

comment created time in 3 days

Pull request review comment k8s-service-bindings/spec

Move Secret Generation Strategies extension

+# Secret Generator Extension for Service Binding

By and large, this is the issue with keeping placeholder content in the repo. It becomes murky as to what it means and why it exists.

baijum

comment created time in 3 days

Pull request review comment k8s-service-bindings/spec

Move Secret Generation Strategies extension

+# Secret Generator Extension for Service Binding

What if we moved it into a directory named "extensions" and added a README.md that contained this paragraph?

baijum

comment created time in 3 days

Pull request review comment k8s-service-bindings/spec

Move Secret Generation Strategies extension

+# Secret Generator Extension for Service Binding

One more sentence pointing to the release cycle:

Note: This spec is dependent on "Service Binding Specification for Kubernetes", but there is no direct association -- neither containment nor aggregation. Both specs are going to have their independent release cycles.

baijum

comment created time in 3 days

Pull request review comment k8s-service-bindings/spec

Move Secret Generation Strategies extension

+# Secret Generator Extension for Service Binding

We can add a note. I need some help with phrasing it. Here is my attempt:

Note: This spec is dependent on "Service Binding Specification for Kubernetes", but there is no direct association -- neither containment nor aggregation.

baijum

comment created time in 3 days

issue comment w3c/smufl

add SSMN1.0 to SMuFL

Hi,

After a long COVID-related delay, I'm submitting the SSMN 01 font and a descriptive PDF. The license chosen is Creative Commons (CC). I hope that this font meets your requirements.

Best, Emile Ellberger SSMN 01_ font and PDF.zip

ellberg

comment created time in 3 days

Pull request review comment k8s-service-bindings/spec

Move Secret Generation Strategies extension

+# Secret Generator Extension for Service Binding

It needs to be clear that this content is not a required part of the spec.

baijum

comment created time in 3 days

pull request comment redhat-developer/service-binding-operator

Successful Service Binding Resource should be Immutable

There are a couple of options to proceed with testing the admission webhook in this PR:

I would suggest that we keep the scope of this PR focused and avoid spending time on crafting a superb dev experience covering both CRC and Minikube. For local testing, and for most cases, we can have an option to disable webhooks so that we can still start a local process that connects to a cluster. That could probably also be the default behaviour.

Otherwise, for running all acceptance tests, today I would pick option 2 and improve it over time. There is no need to use quay at all: if using minikube, we can use the local cluster registry and hence run the tests even without an internet connection. This is how our CI jobs run tests even today, see:

https://github.com/redhat-developer/service-binding-operator/blob/4cf9811fe24804cc884ea0e2c2a83bd4cd065fd9/.github/workflows/pr-checks.yaml#L97

In the case of CRC, you can either use its local registry or run another registry on your machine and use the same approach. IMHO there is really no need to use CRC locally anymore; minikube works great and covers all our needs these days.

cc @pmacik - wdyt?

Avni-Sharma

comment created time in 3 days

push event redhat-developer/service-binding-operator

Predrag Knezevic

commit sha fea16795327cb38a67d62f1e0eb1175292a096e9

Fix Service.OwnedResources() to return correct result (#975) We were bitten by https://github.com/golang/go/wiki/CommonMistakes#using-reference-to-loop-iterator-variable Fixed the loop to use the new variable. Signed-off-by: Predrag Knezevic <pknezevi@redhat.com>

view details

push time in 3 days

PR merged redhat-developer/service-binding-operator

Fix Service.OwnedResources() to return correct result (labels: approved, lgtm)

We were bitten by https://github.com/golang/go/wiki/CommonMistakes#using-reference-to-loop-iterator-variable

Fixed the loop to use the new variable.
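The pitfall and its fix can be sketched like this (illustrative code, not the actual SBO implementation; service and ownedResources are made-up names):

```go
package main

import "fmt"

type service struct{ name string }

// ownedResources returns pointers to the given services. Before Go 1.22,
// `for _, svc := range` reused a single loop variable, so taking &svc
// directly made every stored pointer alias the last element.
func ownedResources(services []service) []*service {
	var owned []*service
	for _, svc := range services {
		svc := svc // fresh copy per iteration: the fix for the aliasing bug
		owned = append(owned, &svc)
	}
	return owned
}

func main() {
	for _, s := range ownedResources([]service{{"postgresql"}, {"metrics"}}) {
		fmt.Println(s.name) // prints "postgresql", then "metrics"
	}
}
```

Without the shadowing line, every collected pointer would point at the same variable, and each entry would report the last service's name.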

Fixes #966

+120 -3

13 comments

2 changed files

pedjak

pr closed time in 3 days

issue closed redhat-developer/service-binding-operator

`detectBindingResources` is binding to the wrong Service

What is the environment (Minikube, Openshift)?

minikube

What is the SBO version used?

v0.7.1

What are the steps to reproduce this issue?

1. Deployed a sample deployment, an operator, and an operand into the same namespace, called test

<details> <summary>deployment.yaml</summary>

apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - command:
        - sleep
        - infinity
        image: busybox
        name: busybox

</details>

<details> <summary>operator.yaml</summary>


apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: operatorgroup
  namespace: test
spec:
  targetNamespaces:
  - test
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-postgresql-operator-dev4devs-com
  namespace: test
spec:
  channel: alpha
  name: postgresql-operator-dev4devs-com
  source: operatorhubio-catalog
  sourceNamespace: olm

</details>

<details> <summary>operand.yaml</summary>

apiVersion: postgresql.dev4devs.com/v1alpha1
kind: Database
metadata:
  name: postgresql
  namespace: test
spec:
  databaseName: example
  databasePassword: postgres
  databaseStorageRequest: 1Gi
  databaseUser: postgres
  image: centos/postgresql-96-centos7
  size: 1

</details>

2. Created a ServiceBinding with detectBindingResources: true

<details> <summary>binding.yaml</summary>

apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: busybox-postgresql
  namespace: test
spec:
  application:
    group: apps
    name: busybox
    resource: deployments
    version: v1
  bindAsFiles: false
  detectBindingResources: true
  services:
    - group: postgresql.dev4devs.com
      kind: Database
      name: postgresql
      namespace: test
      version: v1alpha1

</details>

In my namespace, there are two Services

▶ k get svc
NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
postgresql                    ClusterIP   10.110.53.111   <none>        5432/TCP            14m
postgresql-operator-metrics   ClusterIP   10.99.217.236   <none>        8383/TCP,8686/TCP   14m

The postgresql-operator-metrics Service has the operator (Deployment postgresql-operator) as its ownerReference

▶ k get svc postgresql-operator-metrics -o jsonpath="{.metadata.ownerReferences}" | jq
[
  {
    "apiVersion": "apps/v1",
    "blockOwnerDeletion": true,
    "controller": true,
    "kind": "Deployment",
    "name": "postgresql-operator",
    "uid": "8c11676c-fb8d-4881-8f71-ddcba37581f2"
  }
]

The postgresql Service has the operand (Database.postgresql.dev4devs.com) as its ownerReference

▶ k get svc postgresql -o jsonpath="{.metadata.ownerReferences}" | jq                 
[
  {
    "apiVersion": "postgresql.dev4devs.com/v1alpha1",
    "blockOwnerDeletion": true,
    "controller": true,
    "kind": "Database",
    "name": "postgresql",
    "uid": "43149969-d701-460a-8566-4337cbe4cc50"
  }
]

What is the expected behaviour?

Deployment busybox has DATABASE_CLUSTERIP env variable containing IP address of the postgresql Service

What is the actual behaviour?

Deployment busybox has a DATABASE_CLUSTERIP env variable containing the IP address of the postgresql-operator-metrics Service, which is not tied to the service defined in the ServiceBinding

▶ k exec -it busybox-75fcb776bc-5m4hr -- env | grep CLUSTERIP                                         
DATABASE_CLUSTERIP=10.99.217.236
▶ k get svc
NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
postgresql                    ClusterIP   10.110.53.111   <none>        5432/TCP            14m
postgresql-operator-metrics   ClusterIP   10.99.217.236   <none>        8383/TCP,8686/TCP   14m

Service Binding Operator Logs

<details> <summary>logs</summary>

{"level":"info","ts":1621435172.1368315,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":1621435172.1373563,"logger":"setup","msg":"starting manager"}
{"level":"info","ts":1621435172.1376226,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
I0519 14:39:32.137605       1 leaderelection.go:243] attempting to acquire leader lease  operators/8fa65150.coreos.com...
I0519 14:39:32.143465       1 leaderelection.go:253] successfully acquired lease operators/8fa65150.coreos.com
{"level":"info","ts":1621435172.1436667,"logger":"controller","msg":"Starting EventSource","reconcilerGroup":"binding.operators.coreos.com","reconcilerKind":"ServiceBinding","controller":"servicebinding","source":"kind source: /, Kind="}
{"level":"info","ts":1621435172.2441118,"logger":"controller","msg":"Starting Controller","reconcilerGroup":"binding.operators.coreos.com","reconcilerKind":"ServiceBinding","controller":"servicebinding"}
{"level":"info","ts":1621435172.244153,"logger":"controller","msg":"Starting workers","reconcilerGroup":"binding.operators.coreos.com","reconcilerKind":"ServiceBinding","controller":"servicebinding","worker count":1}
{"level":"info","ts":1621436719.492417,"logger":"controllers.ServiceBinding","msg":"Reconciling","serviceBinding":"test/busybox-postgresql","sb":{"apiVersion":"binding.operators.coreos.com/v1alpha1","kind":"ServiceBinding","namespace":"test","name":"busybox-postgresql"}}
{"level":"info","ts":1621436720.3018339,"logger":"controllers.ServiceBinding","msg":"Done","serviceBinding":"test/busybox-postgresql","retry":false,"error":null}

</details>

closed time in 3 days

kadel

pull request comment redhat-developer/service-binding-operator

Successful Service Binding Resource should be Immutable

There are a couple of options to proceed with testing the admission webhook in this PR:

  1. Extract OpenShift's tls.key and tls.crt files to /tmp/k8s-webhook-server/serving-certs, which is the default location used by the ServiceBinding Operator (required to receive payloads from the API server).

    • Requires logic to extract tls.key and tls.crt files from vanilla Kubernetes.
    • Requires configuration to access the mutation webhook available outside the cluster.
    • Is it possible to use kubectl port-forward for this?
      • kubectl port-forward might help here, since it can target a specific Service; it remains to be seen whether it requires at least one active Pod or works without one, effectively making the client a Pod of sorts.
  2. Build the image locally, push to quay.io and execute the tests using a remote instance of the ServiceBinding Operator initialized through make deploy.

    This workflow invokes the following operations, regardless of whether the build happens on the same physical host as the destination cluster: build -> push to remote -> pull from remote.

    This solution seems to be the least effort, but it wastes a lot of resources, since even when testing on the same host a round trip to quay.io is required. If this approach is chosen, it must be considered technical debt and scheduled for further development.

  3. Build the image locally, and copy it to the destination cluster using skopeo; from what I've seen, it cannot copy or sync images from the local Docker registry cache, only from a remote registry when --src docker is given.

  4. Do not rely on a Kubernetes or OpenShift cluster, and adapt the integration tests to use envtest instead.

Avni-Sharma

comment created time in 3 days