Jonathan Rudenberg (titanous), Co-founder @flynn, https://titanous.com

started mitchellh/gon

started time in 9 days

Pull request review comment alloy-commons/alloy-open-source

Update for newest Android release

-var MINIMUM_ANDROID_VERSION = new Date(2019, 9, 5);
-var ANDROID_LAST_UPDATE = new Date(2019, 9, 9);
+var MINIMUM_ANDROID_VERSION = new Date(2019, 10, 5);
+var ANDROID_LAST_UPDATE = new Date(2019, 10, 4);

Shouldn't this be November instead of October?

alex

comment created time in 9 days

pull request comment flynn/go-tuf

encrypted: fix flake in EncryptedSuite.TestTamperedRoundtrip

Thanks!

ComputerDruid

comment created time in 9 days

push event flynn/go-tuf

Dan Johnson

commit sha 890a6cb82044de20e094222d137721d287f46b71

encrypted: fix flake in EncryptedSuite.TestTamperedRoundtrip

If we get really unlucky the encrypted bytes already start with 0x0000 so the tampering fails.

Signed-off-by: Dan Johnson <computerdruid@google.com>

view details

push time in 9 days

PR merged flynn/go-tuf

encrypted: fix flake in EncryptedSuite.TestTamperedRoundtrip

If we get really unlucky the encrypted bytes already start with 0x0000 so the tampering fails.

Signed-off-by: Dan Johnson computerdruid@google.com
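
A minimal sketch of the idea behind the fix, assuming a hypothetical tamper helper (not the actual go-tuf test code): flipping bits with XOR always changes the ciphertext, even when the targeted bytes are already zero.

    package encrypted_test

    // tamper returns a copy of ciphertext that is guaranteed to differ
    // from the original, regardless of what the first byte already is.
    func tamper(ciphertext []byte) []byte {
        out := append([]byte(nil), ciphertext...)
        out[0] ^= 0xff // XOR with a non-zero mask always changes the byte
        return out
    }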

+1 -2

0 comment

1 changed file

ComputerDruid

pr closed time in 9 days

issue closed CleverCloud/biscuit

Add license

Please add a license to this repo and biscuit-rust/java. Thanks!

closed time in 16 days

titanous

issue comment CleverCloud/biscuit

Add license

Awesome!

titanous

comment created time in 16 days

issue opened CleverCloud/biscuit

Add license

Please add a license to this repo and biscuit-rust/java. Thanks!

created time in 21 days

create branch titanous/weap

branch : test-mtu

created branch time in 24 days

push event titanous/weap

Jonathan Rudenberg

commit sha 9dac824a2a97f6627bc551b0116012e42fb9819a

Add LICENSE

view details

push time in 24 days

delete branch titanous/caddy

delete branch : prune-imports

delete time in a month

started vertigo235/Build-Prusa-LA-15

started time in a month

push event titanous/weap

Jonathan Rudenberg

commit sha ff5f748f71420af31fedefa74f00a6594a58b485

set Framed-MTU

view details

push time in a month

push event titanous/nextdhcp

Jonathan Rudenberg

commit sha 07e615f068e96eb3eb2bff69b8243b3dc6f7080f

wip lease list command

view details

push time in a month

issue opened nextdhcp/nextdhcp

Consider always sending Server Identifier

Follow-up to #11. RFC 2131 requires the server identifier option. Any reason it shouldn't be moved to the core packet handling code here?
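
For reference, RFC 2132 defines the Server Identifier as option 54 carrying the server's 4-byte address. A minimal standalone Go sketch of building that option from raw bytes (hypothetical helper, not nextdhcp's plugin API):

    package main

    import (
        "fmt"
        "net"
    )

    // withServerID appends DHCP option 54 (Server Identifier) to a raw
    // options buffer so every reply carries it.
    func withServerID(options []byte, server net.IP) []byte {
        ip4 := server.To4()
        if ip4 == nil {
            return options // not IPv4; leave the options untouched
        }
        return append(options, append([]byte{54, 4}, ip4...)...)
    }

    func main() {
        opts := withServerID(nil, net.ParseIP("192.0.2.1"))
        fmt.Printf("% x\n", opts) // 36 04 c0 00 02 01
    }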

created time in a month

PR opened caddyserver/caddy

Move certmagic import out of caddy package

What does this change do, exactly?

Moves the certmagic import out of the core caddy package. The only use of certmagic is calling a shutdown hook. The reason behind this change is reducing the import graph for projects that use Caddy's core packages but not certmagic, for example nextdhcp.

Ideally the telemetry import would be eliminated too (perhaps using an interface variable that is populated by an init func in the telemetry package), but I thought I'd start here.
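
A minimal sketch of that decoupling idea (a package-level hook populated from an init func), with hypothetical package and variable names rather than Caddy's real ones:

    // Package core does not import the optional package; it only calls a hook.
    package core

    // ShutdownHook stays nil unless an optional package registers itself.
    var ShutdownHook func()

    func Shutdown() {
        if ShutdownHook != nil {
            ShutdownHook()
        }
    }

    // The optional package wires itself up when (and only when) it is imported:
    //
    //	package telemetry
    //
    //	import "example.com/app/core"
    //
    //	func init() { core.ShutdownHook = flush }
    //
    //	func flush() { /* send any buffered data */ }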

Checklist

  • [ ] I have written tests and verified that they fail without my change
  • [x] I have squashed any insignificant commits
  • [x] This change has comments explaining package types, values, functions, and non-obvious lines of code
  • [ ] I am willing to help maintain this change if there are issues with it later
+1 -6

0 comment

3 changed files

pr created time in a month

push event titanous/caddy

Jonathan Rudenberg

commit sha 94f1bc8f0a177c8cec6f3e1ab83f9c5b25bcdbb4

Move certmagic import out of caddy package

view details

push time in a month

create branch titanous/caddy

branch : prune-imports

created branch time in a month

PR opened nextdhcp/nextdhcp

Set Server ID for all packets

Some clients, including ChromeOS, require this option for all packets.

+12 -7

0 comment

2 changed files

pr created time in a month

create branch titanous/nextdhcp

branch : staging

created branch time in a month

create branch titanous/nextdhcp

branch : serverid-always

created branch time in a month

pull request comment nextdhcp/nextdhcp

Call flag.Parse() to allow config flag

Done!

titanous

comment created time in a month

push event titanous/nextdhcp

Jonathan Rudenberg

commit sha f6d77feccbef6d09e0a8cd0755d8595db40d9ee6

Call flag.Parse() to allow config flag

view details

push time in a month

delete tag flynn/flynn

delete tag : v20191011.0

delete time in a month

delete branch flynn/flynn

delete branch : kill-timeout

delete time in a month

PR merged flynn/flynn

host: Bump job stop timeout to 30s
+1 -1

0 comment

1 changed file

titanous

pr closed time in a month

push event flynn/flynn

Jonathan Rudenberg

commit sha fa3d9c08793e7d93bd6779ac17031f3cbaf6f990

host: Bump job stop timeout to 30s Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

push time in a month

PR opened flynn/flynn

host: Bump job stop timeout to 30s
+1 -1

0 comment

1 changed file

pr created time in a month

create branch flynn/flynn

branch : kill-timeout

created branch time in a month

delete branch flynn/flynn

delete branch : router-499

delete time in a month

push event flynn/flynn

Jonathan Rudenberg

commit sha 9a45e3986fd4fe492f424bd0efccdf865eb84987

router: Use HTTP status 499 in logs for client errors

- Errors like clients disconnecting with DEBUG=1 will log a line with status=499 instead of status=503.
- The logs now consistently use fractional milliseconds for all durations.
- context is now used consistently and updated to use the APIs available as of Go 1.13.

view details

push time in a month

PR merged flynn/flynn

router: Use HTTP status 499 in logs for client errors
  • Errors like clients disconnecting with DEBUG=1 will log a line with status=499 instead of status=503 (see the sketch below).
  • The logs now consistently use fractional milliseconds for all durations.
  • context is now used consistently and updated to use the APIs available as of Go 1.13.
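
A minimal sketch of the 499 convention (nginx's non-standard code for requests aborted by the client), using hypothetical names rather than the router's actual logging code:

    package main

    import (
        "context"
        "errors"
        "fmt"
    )

    // StatusClientClosedRequest is nginx's non-standard status for requests
    // the client abandoned before a response could be written.
    const StatusClientClosedRequest = 499

    // logStatus maps an error to the status recorded in the access log:
    // 499 when the client went away, 503 for backend failures.
    func logStatus(err error) int {
        if errors.Is(err, context.Canceled) {
            return StatusClientClosedRequest
        }
        return 503
    }

    func main() {
        fmt.Println(logStatus(context.Canceled))            // 499
        fmt.Println(logStatus(errors.New("backend error"))) // 503
    }
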
+65 -60

0 comment

3 changed files

titanous

pr closed time in a month

PR opened flynn/flynn

router: Use HTTP status 499 in logs for client errors
  • Errors like clients disconnecting with DEBUG=1 will log a line with status=499 instead of status=503.
  • The logs now consistently use fractional milliseconds for all durations.
  • context is now used consistently and updated to use the APIs available as of Go 1.13.
+65 -60

0 comment

3 changed files

pr created time in a month

push event flynn/flynn

Jonathan Rudenberg

commit sha 41e2b2630bdca5272886073b8ae847ea684025f6

all: Switch to go mod Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

Jonathan Rudenberg

commit sha 948b3cf7a3ce6426f24932f8bcc6de24a1eee9a9

builder/img: Update to Go 1.13

Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

Jonathan Rudenberg

commit sha 100f8cd5833e41651a3832dbf4afe28b05528b7c

router: Use HTTP status 499 in logs for client errors

- Errors like clients disconnecting with DEBUG=1 will log a line with status=499 instead of status=503.
- The logs now consistently use fractional milliseconds for all durations.
- context is now used consistently and updated to use the APIs available as of Go 1.13.

view details

push time in a month

delete branch flynn/flynn

delete branch : go-mod

delete time in a month

push event flynn/flynn

Jonathan Rudenberg

commit sha 41e2b2630bdca5272886073b8ae847ea684025f6

all: Switch to go mod Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

Jonathan Rudenberg

commit sha 948b3cf7a3ce6426f24932f8bcc6de24a1eee9a9

builder/img: Update to Go 1.13

Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

push time in a month

PR merged flynn/flynn

all: Switch to go mod, update to Go 1.13

This changes the Go dependency management strategy from dep to go mod with vendoring enabled. Vendoring was kept for two reasons:

  1. Because of the aggressive sandboxing of the build, caching of the downloaded files in the non-vendored mode becomes a major issue that will require a fair amount of work to implement.
  2. Vendoring makes it obvious what the changes in dependencies are when changing or adding dependencies and allows trivial project-wide code searches across all dependencies.

Due to differences in the MVS algorithm that go mod uses, a variety of dependency versions have changed; unfortunately, there is no way to avoid this.

+141299 -76840

0 comment

1023 changed files

titanous

pr closed time in a month

made a repository public

started jonnrb/mdns_repeater

started time in a month

push event flynn/flynn

Jonathan Rudenberg

commit sha c627b088b217b9fef01498e566674b8a77aba31d

builder/img: Update to Go 1.13

Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

push time in a month

create branch flynn/flynn

branch : router-499

created branch time in a month

PR opened nextdhcp/nextdhcp

Call flag.Parse() to allow config flag
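
For context, the standard library pattern behind the title looks roughly like this (hypothetical flag name and default, not the exact nextdhcp change): flags registered with the flag package only take effect once flag.Parse() runs.

    package main

    import (
        "flag"
        "fmt"
    )

    func main() {
        configPath := flag.String("config", "Dhcpfile", "path to the configuration file")
        flag.Parse() // without this call, -config would be silently ignored
        fmt.Println("using config:", *configPath)
    }
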
+1 -0

0 comment

1 changed file

pr created time in a month

create branch titanous/nextdhcp

branch : fix-flags

created branch time in a month

fork titanous/nextdhcp

A DHCP server chaining middlewares. Similar to CoreDNS and Caddy

https://nextdhcp.io

fork in a month

PR opened tobmatth/rack-ssl-enforcer

Make middleware thread-safe

Rack middleware must not use instance variables for state, as the same instance can be called by multiple threads. This patch changes the middleware to pass all request state as method arguments.

As a result of the lack of thread safety in the current version, it is possible for the middleware to handle a request incorrectly, using the @request instance variable from another request. This can present itself as random redirects to other URLs for a small subset of requests, when using a threaded Rack server like Puma.

I have reproduced and verified this issue using a test application with many requests in a specific pattern that we observed in production to trigger this issue. After this patch, the issue no longer occurs.
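
The same principle, sketched in Go for illustration only (not the actual Ruby patch): a middleware value shared across concurrent requests must keep per-request state in arguments and locals, never in fields on the shared instance.

    package main

    import (
        "fmt"
        "net/http"
    )

    // enforcer is shared by every request, so it holds configuration only;
    // the current request is always passed in, never stored on the struct.
    type enforcer struct {
        next http.Handler
    }

    func (e *enforcer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
        if r.TLS == nil && r.Header.Get("X-Forwarded-Proto") != "https" {
            http.Redirect(w, r, "https://"+r.Host+r.URL.RequestURI(), http.StatusMovedPermanently)
            return
        }
        e.next.ServeHTTP(w, r)
    }

    func main() {
        h := &enforcer{next: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello over TLS")
        })}
        http.ListenAndServe(":8080", h)
    }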

+39 -39

0 comment

1 changed file

pr created time in a month

create branch titanous/rack-ssl-enforcer

branch : fix-thread-safety

created branch time in a month

fork titanous/rack-ssl-enforcer

A simple Rack middleware to enforce ssl connections

fork in a month

started Gandem/bonjour-reflector

started time in a month

issue opened signalapp/Signal-iOS

New Message text misaligned on 5.8" devices

  • [x] I have searched open and closed issues for duplicates
  • [x] I am submitting a bug report for existing functionality that does not work as intended
  • [x] This isn't a feature request or a discussion topic

Bug description

The New Message input text is misaligned on 5.8" devices.

Screenshots

image0

Device info

Device: iPhone 11 Pro (this almost certainly affects the X and XS too)

iOS version: 13.1.1

Signal version: 2.43.2.1

created time in 2 months

started rohanpadhye/FuzzFactory

started time in 2 months

started throttled/throttled

started time in 2 months

started stripe/safesql

started time in 2 months

PR opened flynn/flynn

all: Switch to go mod, update to Go 1.13

This changes the Go dependency management strategy from dep to go mod with vendoring enabled. Vendoring was kept for two reasons:

  1. Because of the aggressive sandboxing of the build, caching of the downloaded files in the non-vendored mode becomes a major issue that will require a fair amount of work to implement.
  2. Vendoring makes it obvious what the changes in dependencies are when changing or adding dependencies and allows trivial project-wide code searches across all dependencies.

Due to differences in the MVS algorithm that go mod uses, a variety of dependency versions have changed; unfortunately, there is no way to avoid this.

+141299 -76840

0 comment

1023 changed files

pr created time in 2 months

push event flynn/flynn

Jonathan Rudenberg

commit sha 731ca8626cbd738c9a7958473cd7528d002755cd

all: Switch to go mod Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

Jonathan Rudenberg

commit sha 096e07f8a300261b72c3a843b0e2c8c04dc6cfee

builder/img: Update to Go 1.13

Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

push time in 2 months

push event flynn/flynn

Jonathan Rudenberg

commit sha ed2ff4ee224b43c9d12757c7e6d43151314aadf8

pkg/exec: Allow multiple calls to Wait

This fixes a race in gitreceive where Wait is called after the connection closes for cleanup as well as during the request lifecycle. Sometimes the close notify is delivered just before the request finishes resulting in two calls to Wait.

Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>
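
A minimal sketch of one common way to make Wait idempotent (sync.Once caching the first result); hypothetical types, not the actual pkg/exec change:

    package main

    import (
        "fmt"
        "sync"
    )

    // Cmd wraps a process whose underlying wait may only run once;
    // every later caller receives the cached result instead of racing.
    type Cmd struct {
        once sync.Once
        err  error
    }

    func (c *Cmd) wait() error { return nil } // stand-in for the real wait

    func (c *Cmd) Wait() error {
        c.once.Do(func() { c.err = c.wait() })
        return c.err
    }

    func main() {
        c := &Cmd{}
        fmt.Println(c.Wait(), c.Wait()) // both calls return the same result
    }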

view details

Jonathan Rudenberg

commit sha f2d027cf47e666964d08c4a397964cb5e1cccced

router: Add connection idle and header read timeouts

Closes #4306

Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>
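
For reference, the two timeouts named in the commit correspond to standard net/http server fields; a minimal sketch with placeholder values (not the router's actual settings):

    package main

    import (
        "net/http"
        "time"
    )

    func main() {
        srv := &http.Server{
            Addr:              ":8080",
            ReadHeaderTimeout: 10 * time.Second, // drop clients that stall while sending headers
            IdleTimeout:       2 * time.Minute,  // close keep-alive connections that go quiet
            Handler:           http.NotFoundHandler(),
        }
        srv.ListenAndServe()
    }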

view details

Jonathan Rudenberg

commit sha 54ac2b8438cde1c20822ad8a9794d8e9a5d31076

gitreceive/receiver: Fix formatting for warning Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

Jesse Stuart

commit sha 98fe0fbd713eebd952031d6b2094dd657572cf6b

controller: Move database layer into controller/data (#4541) Signed-off-by: Jesse Stuart <email@jessestuart.ca>

view details

Jonathan Rudenberg

commit sha 972c8f24f5a08d970563b65d936cc0a7239b8bb8

pkg/tlsconfig: Disable non-optimized curves Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

Jonathan Rudenberg

commit sha 317491a9623fee8583553c873576f10d547b05fe

test: Add backup restore test for v20190730.0 Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

Jonathan Rudenberg

commit sha 5dadfae4d3f9ce146d922b03c3b0e34da8d4463a

builder/img: Add vim-tiny to heroku-18 image Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

Jonathan Rudenberg

commit sha 59d7beb0d532cf4a46345e8db2264b00ede3f192

gitreceive: Turn off git gc autoDetach When this flag is the default of on, repos uploads can be corrupted when they happen in parallel with the forked `git gc --auto`. Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

Jonathan Rudenberg

commit sha 2d262d4ece1c910b031e9f9957e4fb3cf5712390

cli: Ensure curl in export uses HTTP/1.1 If curl gets redirected to a server that supports HTTP/2 (for example with the GCS blobstore backend), it will return output that is not parsed correctly and the export will hang. Closes #4548 Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

Jonathan Rudenberg

commit sha fc842fa8431366e09feb5a79c1f5686c03bb276d

slugbuilder: Silence stderr output from detect hooks Heroku has added debugging output to the detect hooks that shows up on every push. Silence it by diverting stderr to /dev/null. Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

Jonathan Rudenberg

commit sha 41f24729a252fb413f18bb15e07b92e2803f76aa

script/install-flynn: Bump fd limit Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

Jonathan Rudenberg

commit sha a78daccc7ca302ce4927aef28fe810e29e2f1f93

builder/img: Update to Go 1.12.8 Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

Jonathan Rudenberg

commit sha a543312e5f9ce82c325707f61dad7ae02b5800e8

vendor: Update golang.org/x/net/http2 Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

Jonathan Rudenberg

commit sha b13bf4e0ac5f5f8ff3b2c5badbcef9e4ba80d899

all: Switch to go mod Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

Jonathan Rudenberg

commit sha 607cfa8aeba0a4ee3f2e4eb9b1b94fe6e9288c37

builder/img: Update to Go 1.13

Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

push time in 2 months

started bjwbell/gensimd

started time in 2 months

Pull request review comment flynn/flynn

gRPC controller

 $$ LANGUAGE plpgsql`, 		$$ LANGUAGE plpgsql; 		`, 	)+	migrations.Add(36,+		// Add a "type" column to deployments to destinguash between code and

destinguash -> distinguish

jvatic

comment created time in 2 months

Pull request review comment flynn/flynn

gRPC controller

 func (r *DeploymentRepo) AddExpanded(appID, releaseID string) (*ct.ExpandedDeplo
 		procCount += i
 	}
+	releaseType := (func(oldRelease, release *ct.Release) ct.ReleaseType {
+		if oldRelease != nil {
+			if reflect.DeepEqual(oldRelease.ArtifactIDs, release.ArtifactIDs) {

This should not use reflect.
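
One way to drop the reflect dependency is a plain slice comparison; a sketch with a hypothetical helper (placeholder package name):

    package data

    // stringSlicesEqual reports whether two string slices contain the same
    // elements in the same order, without using reflect.DeepEqual.
    func stringSlicesEqual(a, b []string) bool {
        if len(a) != len(b) {
            return false
        }
        for i := range a {
            if a[i] != b[i] {
                return false
            }
        }
        return true
    }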

jvatic

comment created time in 2 months

push event flynn/flynn

Jonathan Rudenberg

commit sha ee8fe5c3b9a2069e28f85248bf6a930373528fcf

wip

view details

push time in 2 months

created tag flynn/runc

tag v1.0.0-rc1001

CLI tool for spawning and running containers according to the OCI specification

created time in 2 months

push event flynn/runc

Lifubang

commit sha 472fe623a76a039c438429345c0ccf71dc7722e8

criu image path permission error in rootless checkpoint Signed-off-by: Lifubang <lifubang@acmcoder.com>

view details

Marco Vedovati

commit sha 9a599f62fbdc7cc366c09424485abea8efbbc004

Support for logging from children processes Add support for children processes logging (including nsexec). A pipe is used to send logs from children to parent in JSON. The JSON format used is the same used by logrus JSON formatted, i.e. children process can use standard logrus APIs. Signed-off-by: Marco Vedovati <mvedovati@suse.com>

view details

Marco Vedovati

commit sha feebfac358ca83fe0a1132f1b2a6da5fca69f1ce

Remove pipe close before exec. Pipe close before exec is not necessary as os.Pipe() is calling pipe2 with O_CLOEXEC option. Signed-off-by: Marco Vedovati <mvedovati@suse.com>

view details

Danail Branekov

commit sha c486e3c40633d571df38fca56da69f3ab0ab13fe

Address comments in PR 1861 Refactor configuring logging into a reusable component so that it can be nicely used in both main() and init process init() Co-authored-by: Georgi Sabev <georgethebeatle@gmail.com> Co-authored-by: Giuseppe Capizzi <gcapizzi@pivotal.io> Co-authored-by: Claudia Beresford <cberesford@pivotal.io> Signed-off-by: Danail Branekov <danailster@gmail.com>

view details

Aleksa Sarai

commit sha 8296826da5b372a4f7b344173b1dea753f8bd14b

specconv: always set "type: bind" in case of MS_BIND We discovered in umoci that setting a dummy type of "none" would result in file-based bind-mounts no longer working properly, which is caused by a restriction for when specconv will change the device type to "bind" to work around rootfs_linux.go's ... issues. However, bind-mounts don't have a type (and Linux will ignore any type specifier you give it) because the type is copied from the source of the bind-mount. So we should always overwrite it to avoid user confusion. Signed-off-by: Aleksa Sarai <asarai@suse.de>

view details

Xiao YongBiao

commit sha da5a2dd45625c3106d95b0dc7c44c3358c7a9ca2

`r.destroy` can defer exec in `runner.run` method. Signed-off-by: Xiao YongBiao <xyb4638@gmail.com>

view details

Sebastiaan van Stijn

commit sha e7831f2abb163fe39aef1067dc1a56087b68b3da

Update to Go 1.12 and drop obsolete versions Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Georgi Sabev

commit sha ba3cabf932943cc927059a6782ae51b7dd862b4e

Improve nsexec logging * Simplify logging function * Logs contain __FUNCTION__:__LINE__ * Bail uses write_log Co-authored-by: Julia Nedialkova <julianedialkova@hotmail.com> Co-authored-by: Danail Branekov <danailster@gmail.com> Signed-off-by: Georgi Sabev <georgethebeatle@gmail.com>

view details

Georgi Sabev

commit sha 475aef10f7a85c9a64dd86111a0540a4c37fe53d

Remove redundant log function Bump logrus so that we can use logrus.StandardLogger().Logf instead Co-authored-by: Julia Nedialkova <julianedialkova@hotmail.com> Signed-off-by: Georgi Sabev <georgethebeatle@gmail.com>

view details

Xiaochen Shen

commit sha 17b37ea3faa29bfae884907c0894c4d5e7588299

libcontainer: intelrdt: add missing destroy handler in defer func In the exception handling of initProcess.start(), we need to add the missing IntelRdtManager.Destroy() handler in defer func. Signed-off-by: Xiaochen Shen <xiaochen.shen@intel.com>

view details

Georgi Sabev

commit sha 68b4ff5b3725777172db52f897a84e86db7da5cd

Simplify bail logic & minor nsexec improvements Co-authored-by: Julia Nedialkova <julianedialkova@hotmail.com> Signed-off-by: Georgi Sabev <georgethebeatle@gmail.com>

view details

Georgi Sabev

commit sha a1460818288b8addfe9b70c8931da83864251f7a

Write logs to stderr by default Minor refactoring to use the filePair struct for both init sock and log pipe Co-authored-by: Julia Nedialkova <julianedialkova@hotmail.com> Signed-off-by: Georgi Sabev <georgethebeatle@gmail.com>

view details

Filipe Brandenburger

commit sha 46351eb3d14b8b42454787166811a61fe51e28b7

Move systemd.Manager initialization into a function in that module This will permit us to extend the internals of systemd.Manager to include further information about the system, such as whether cgroupv1, cgroupv2 or both are in effect. Furthermore, it allows a future refactor of moving more of UseSystemd() code into the factory initialization function. Signed-off-by: Filipe Brandenburger <filbranden@gmail.com>

view details

Michael Crosby

commit sha 70bc4cd847bcc731fb6e7ad8adeb1aa431bc4a50

Merge pull request #2034 from masters-of-cats/pr-child-logging Support for logging from children processes

view details

Mrunal Patel

commit sha a0ecf749ee4f236d3534a14c41c047fe6f488bd1

Merge pull request #2047 from filbranden/systemd7 Move systemd.Manager initialization into a function in that module

view details

Mrunal Patel

commit sha 2484581dd7d1dd9a15dd46887d1ea258283a5e58

Merge pull request #2035 from cyphar/bindmount-types specconv: always set "type: bind" in case of MS_BIND

view details

Mrunal Patel

commit sha eb4aeed24ffbf8e2d740fafea39d91faa0ee84d0

Merge pull request #2038 from imxyb/defer-destroy `r.destroy` can defer exec in `runner.run` method.

view details

Giuseppe Scrivano

commit sha 8383c724a4d76ab031159115127b32619a151099

main: not reopen /dev/stderr

commit a1460818288b8addfe9b70c8931da83864251f7a introduced a change to write to /dev/stderr by default. Do not reopen the file in this case, but use directly the fd 2.

Closes: https://github.com/opencontainers/runc/issues/2056
Closes: https://github.com/kubernetes/kubernetes/issues/77615
Closes: https://github.com/cri-o/cri-o/issues/2368

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
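
For illustration, wrapping the already-open descriptor 2 instead of reopening /dev/stderr looks roughly like this in Go (not runc's actual code):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // fd 2 is already open in every process; wrapping it avoids another
        // open of /dev/stderr, which can fail in unusual mount namespaces.
        stderr := os.NewFile(uintptr(2), "/dev/stderr")
        fmt.Fprintln(stderr, "hello from fd 2")
    }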

view details

Mrunal Patel

commit sha b9b6cc6e47fe4f2aa4f744a1fc62d248c182d28d

Merge pull request #2057 from giuseppe/no-reopen-stderr main: not reopen /dev/stderr

view details

Kenta Tada

commit sha 65032b55b152c8c8d4c630fbf4eeb63ba7159e87

libcontainer: fix TestGetContainerState to check configs.NEWCGROUP This test needs to handle the case of configs.NEWCGROUP as Namespace's type. Signed-off-by: Kenta Tada <Kenta.Tada@sony.com>

view details

push time in 2 months

create branch flynn/runc

branch : fix-nsenter-unsupported

created branch time in 2 months

push event flynn/runc

Jonathan Rudenberg

commit sha 38f5ec6ad14e8c311bfb115681009afcc63389b5

libcontainer/nsenter: Don't import C in non-cgo file Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

push time in 2 months

created tag flynn/coreos-pkg

tag v1.0.1

a collection of go utility packages

created time in 2 months

push event flynn/coreos-pkg

Jonathan Rudenberg

commit sha 6c3c6c675f9d7a9563e1aaf3a47a61a438e27b47

Add dlopen stub for other platforms Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

push time in 2 months

created tag flynn/coreos-pkg

tag v1.0.0

a collection of go utility packages

created time in 2 months

created tag flynn/coreos-pkg

tag v6

a collection of go utility packages

created time in 2 months

push event flynn/coreos-pkg

Jonathan Rudenberg

commit sha 0ea02f2978afdebb36e2a773d2fdac665f80f781

Add dlopen stub for other platforms Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>

view details

push time in 2 months

started hawx/elm-mixpanel

started time in 2 months

started jsha/minica

started time in 2 months

started tiziano88/elm-protobuf

started time in 2 months

started doug-martin/goqu

started time in 2 months

started Masterminds/squirrel

started time in 2 months

started thockin/go-build-template

started time in 2 months

push event titanous/titanous.com

Jonathan Rudenberg

commit sha 45e6421f7c9985dd5171132ccb37355909abc601

Create CNAME

view details

push time in 2 months

push event titanous/titanous.com

Jonathan Rudenberg

commit sha db2186a8116a849b6cc8faf54eea4891d369e4e5

fix

view details

push time in 2 months

push event titanous/titanous.com

Jonathan Rudenberg

commit sha c96e4d5c369eb591d9d97e3dcb8b06dcb2dde02e

Create CNAME

view details

push time in 2 months

push event titanous/titanous.com

Jonathan Rudenberg

commit sha b983a9adfae87d09c0487a16a53bc60b7902be7a

update

view details

push time in 2 months

started cedricss/elm-batteries

started time in 2 months

started Lattyware/elm-fontawesome-generator

started time in 3 months

started ohanhi/elm-shared-state

started time in 3 months

Pull request review comment flynn/flynn

gRPC controller

+syntax = 'proto3';+package controller;+option go_package = 'protobuf';++import "google/api/annotations.proto";+import "google/protobuf/timestamp.proto";+import "google/protobuf/duration.proto";+import "google/protobuf/field_mask.proto";++service Controller {+  // read API+  rpc StreamApps (StreamAppsRequest) returns (stream StreamAppsResponse) {};+  rpc StreamReleases(StreamReleasesRequest) returns (stream StreamReleasesResponse) {};+  rpc StreamScales (StreamScalesRequest) returns (stream StreamScalesResponse) {};+  rpc StreamDeployments(StreamDeploymentsRequest) returns (stream StreamDeploymentsResponse) {};++  // write API+  rpc UpdateApp (UpdateAppRequest) returns (App) {};+  rpc CreateScale (CreateScaleRequest) returns (ScaleRequest) {};+  rpc CreateRelease (CreateReleaseRequest) returns (Release) {};+  rpc CreateDeployment (CreateDeploymentRequest) returns (stream DeploymentEvent) {};+}++/*+   Controller service+   */++message LabelFilter {+  message Expression {+    enum Operator {+      // OP_IN matches if there is a label entry where the value is in the given+      // values for the given key+      OP_IN = 0;+      // OP_NOT_IN matches if there are no label entries where the value is in+      // the given values for the given key+      OP_NOT_IN = 1;+      // OP_EXISTS matches if there is a label entry with the given key (given+      // values ignored and have undefined behavior)+      OP_EXISTS = 2;+      // OP_NOT_EXISTS matches if there are no label entries for the given key+      // (given values ignored and have undefined behavior)+      OP_NOT_EXISTS = 3;+    }++    string key = 1;+    Operator op = 2;+    repeated string values = 3;+  }++  // expressions are ANDed together.+  repeated Expression expressions = 1;+}++// read API++message StreamAppsRequest {+  // The maximum number of resources to return in the initial page.+  int32 page_size = 1;++  // Used for pagination. Must be a next_page_token returned from a previous response.+  string page_token = 2;++  // Specifies an optional list of resource names that should be looked up. The+  // list length must be smaller than page_size. This can be used to request a+  // known set of one or more resources and optionally receive updates about+  // them, and can also be used to retrieve a single resource.+  repeated string name_filters = 3;++  // filters are ORed+  repeated LabelFilter label_filters = 4;++  // When true, leaves the stream open and sends any updates to each resource+  // returned in the initial page until the stream is closed.+  bool stream_updates = 5;++  // When true, leaves the stream open and sends newly created resources+  // matching the filters until the stream is closed. page_token must not be+  // set.+  bool stream_creates = 6;+}++message StreamAppsResponse {+  repeated App apps = 1;++  // Set to true on the last response for the initial page.+  bool page_complete = 2;++  string next_page_token = 3;+}++message StreamReleasesRequest {+  // The maximum number of resources to return in the initial page.+  int32 page_size = 1;++  // Used for pagination. Must be a next_page_token returned from a previous response.+  string page_token = 2;++  // Specifies an optional list of resource names that should be looked up. The+  // list length must be smaller than page_size. This can be used to request a+  // known set of one or more resources and optionally receive updates about+  // them, and can also be used to retrieve a single resource. 
Parent resource+  // names may also be used to filter resources.+  repeated string name_filters = 3;++  // filters are ORed+  repeated LabelFilter label_filters = 4;++  // When true, leaves the stream open and sends any updates (i.e. resource+  // deletions) to each resource returned in the initial page until the stream+  // is closed.+  bool stream_updates = 5;++  // When true, leaves the stream open and sends newly created resources+  // matching the filters until the stream is closed. page_token must not be+  // set.+  bool stream_creates = 6;+}++message StreamReleasesResponse {+  repeated Release releases = 1;++  // Set to true on the last response for the initial page.+  bool page_complete = 2;++  string next_page_token = 3;+}++message StreamScalesRequest {+  // The maximum number of resources to return in the initial page.+  int32 page_size = 1;++  // Used for pagination. Must be a next_page_token returned from a previous response.+  string page_token = 2;++  // Specifies an optional list of resource names that should be looked up. The+  // list length must be smaller than page_size. This can be used to request a+  // known set of one or more resources and optionally receive updates about+  // them, and can also be used to retrieve a single resource. Parent resource+  // names may also be used to filter resources.+  repeated string name_filters = 3;++  // When set, only includes resources having one of the specified states+  repeated ScaleRequestState state_filters = 4;++  // When true, leaves the stream open and sends any updates to each resource+  // returned in the initial page until the stream is closed.+  bool stream_updates = 5;++  // When true, leaves the stream open and sends newly created resources+  // matching the filters until the stream is closed. page_token must not be+  // set.+  bool stream_creates = 6;+}++message StreamScalesResponse {+  repeated ScaleRequest scale_requests = 1;++  // Set to true on the last response for the initial page.+  bool page_complete = 2;++  string next_page_token = 3;+}++message StreamDeploymentsRequest {+  // The maximum number of resources to return in the initial page.+  int32 page_size = 1;++  // Used for pagination. Must be a next_page_token returned from a previous response.+  string page_token = 2;++  // Specifies an optional list of resource names that should be looked up. The+  // list length must be smaller than page_size. This can be used to request a+  // known set of one or more resources and optionally receive updates about+  // them, and can also be used to retrieve a single resource. Parent resource+  // names may also be used to filter resources.+  repeated string name_filters = 3;++  // Specified an optional list of release types. If provided, only resources+  // with these release types will be returned.+  repeated ReleaseType type_filters = 4;++  // filters are ORed+  repeated LabelFilter label_filters = 5;++  // Specifies an optional list of statuses. If provided, only deployments+  // matching one of the given statuses will be returned.+  repeated DeploymentStatus status_filters = 6;++  // When true, leaves the stream open and sends any updates to each resource+  // returned in the initial page until the stream is closed.+  bool stream_updates = 7;++  // When true, leaves the stream open and sends newly created resources+  // matching the filters until the stream is closed. 
page_token must not be+  // set.+  bool stream_creates = 8;+}++message StreamDeploymentsResponse {+  repeated ExpandedDeployment deployments = 1;++  // Set to true on the last response for the initial page.+  bool page_complete = 2;++  string next_page_token = 3;+}++// write API++message UpdateAppRequest {+  App app = 1;+  google.protobuf.FieldMask update_mask = 2;+}++message CreateScaleRequest {+  // parent = "apps/APP_ID/releases/RELEASE_ID"+  string parent = 1;+  map<string, int32> processes = 2;+  // profobuf doesn't support maps within maps, so map[string]map[string]string+  // could not be reproduced+  map<string, DeploymentProcessTags> tags = 3;+}++message CreateReleaseRequest {+  // parent = "apps/APP_ID"+  string parent = 1;+  Release release = 2;+  string request_id = 3;+}++message CreateDeploymentRequest {+  // parent = "apps/APP_ID/releases/RELEASE_ID"+  string parent = 1;+  // optional scale request+  CreateScaleRequest scale_request = 2;+}++/*+   Controller message types+   */++message App {+  // name = "apps/APP_ID"+  string name = 1;+  string display_name = 2;+  map<string, string> labels = 3;+  int32 deploy_timeout = 4;+  string strategy = 5;+  // release = Release.name+  string release = 6;+  google.protobuf.Timestamp create_time = 7;+  google.protobuf.Timestamp update_time = 8;+  google.protobuf.Timestamp delete_time = 9;+}++// See github.com/flynn/flynn/host/types Mount+message HostHealthCheck {+  // Type is one of tcp, http, https+  string type = 1;+  // Interval is the time to wait between checks after the service has been+  // marked as up. It defaults to two seconds.+  google.protobuf.Duration interval = 3;+  // Threshold is the number of consecutive checks of the same status before+  // a service will be marked as up or down after coming up for the first+  // time. It defaults to 2.+  int32 threshold = 4;+  // If KillDown is true, the job will be killed if the service goes down (or+  // does not come up)+  bool kill_down = 5;+  // StartTimeout is the maximum duration that a service can take to come up+  // for the first time if KillDown is true. It defaults to ten seconds.+  google.protobuf.Duration start_timeout = 6;++  // Extra optional config fields for http/https checks+  string path = 7;+  string host = 8;+  string match = 9;+  int32 status = 10;+}++// See github.com/flynn/flynn/host/types Mount+message HostService {+  string display_name = 1;+  // Create the service in service discovery+  bool create = 2;+  HostHealthCheck check = 3;+}++message Port {+  int32 port = 1;+  string proto = 2;+  HostService service = 3;+}++message VolumeReq {+  string path = 1;+  bool delete_on_stop = 2;+}++// See github.com/flynn/flynn/host/resource Spec+message HostResourceSpec {+  // Request, if set, is the amount of resource a job expects to consume,+  // so the job should only be placed on a host with at least this amount+  // of resource available, and once scheduled this amount of resource+  // should then be unavailable on the given host.+  int64 request = 1;+  // Limit, if set, is an upper limit on the amount of resource a job can+  // consume, the outcome of hitting this limit being implementation+  // defined (e.g. 
a system error, throttling, catchable / uncatchable+  // signals etc.)+  int64 limit = 2;+}++// See github.com/flynn/flynn/host/types Mount+message HostMount {+  string location = 1;+  string target = 2;+  bool writeable = 3;+  string device = 4;+  string data = 5;+  int32 flags = 6;+}++// See github.com/opencontainers/runc/libcontainer/configs Device+message LibContainerDevice {+  // Device type, block, char, etc.+  int32 type = 1;+  // Path to the device.+  string path = 2;+  // Major is the device's major number.+  int64 major = 3;+  // Minor is the device's minor number.+  int64 minor = 4;+  // Cgroup permissions format, rwm.+  string permissions = 5;+  // FileMode permission bits for the device.+  uint32 file_mode = 6;+  // Uid of the device.+  uint32 uid = 7;+  // Gid of the device.+  uint32 gid = 8;+  // Write the file to the allowed list+  bool allow = 9;+}++message ProcessType {+  repeated string args = 1;+  map<string, string> env = 2;+  repeated Port ports = 3;+  repeated VolumeReq volumes = 4;+  bool omni = 5;+  bool host_network = 6;+  bool host_pid_namespace = 7;+  string service = 8;+  bool resurrect = 9;+  map<string, HostResourceSpec> resources = 10;+  repeated HostMount mounts = 11;+  repeated string linux_capabilities = 12;+  repeated LibContainerDevice allowed_devices = 13;+  bool writeable_cgroups = 14;+}++enum ReleaseType {+  ANY = 0;+  CODE = 1;+  CONFIG = 2;+}++message Release {+  // name = "apps/APP_ID/releases/RELEASE_ID"+  string name = 1;+  repeated string artifacts = 3;+  map<string, string> env = 4;+  map<string, string> labels = 5;+  map<string, ProcessType> processes = 6;+  ReleaseType type = 7;+  google.protobuf.Timestamp create_time = 8;+  google.protobuf.Timestamp delete_time = 9;+}++enum ScaleRequestState {+  SCALE_PENDING = 0;+  SCALE_CANCELLED = 1;+  SCALE_COMPLETE = 2;+}++message ScaleRequest {+  // parent = "apps/APP_ID/releases/RELEASE_ID"+  string parent = 1;+  // name = "apps/APP_ID/releases/RELEASE_ID/scales/SCALE_REQUEST_ID"+  string name = 2;+  ScaleRequestState state = 3;+  map<string, int32> old_processes = 4;+  map<string, int32> new_processes = 5;+  // profobuf doesn't support maps within maps, so map[string]map[string]string+  // could not be reproduced+  map<string, DeploymentProcessTags> old_tags = 6;+  map<string, DeploymentProcessTags> new_tags = 7;+  google.protobuf.Timestamp create_time = 8;+  google.protobuf.Timestamp update_time = 9;+}++enum DeploymentStatus {+  PENDING = 0;+  FAILED = 1;+  RUNNING = 2;+  COMPLETE = 3;+}++message DeploymentProcessTags {+  map<string, string> tags = 1;+}++message ExpandedDeployment {+  // name = "apps/APP_ID/deployments/DEPLOYMENT_ID"+  string name = 1;+  // old_release = Release.name+  Release old_release = 3;+  // new_release = Release.name+  Release new_release = 4;+  ReleaseType type = 5;+  string strategy = 6;+  DeploymentStatus status = 7;+  map<string, int32> processes = 8;+  // profobuf doesn't support maps within maps, so map[string]map[string]string+  // could not be reproduced

You can delete this comment.

jvatic

comment created time in 3 months

Pull request review comment flynn/flynn

gRPC controller

+syntax = 'proto3';+package controller;+option go_package = 'protobuf';++import "google/api/annotations.proto";+import "google/protobuf/timestamp.proto";+import "google/protobuf/duration.proto";+import "google/protobuf/field_mask.proto";++service Controller {+  // read API+  rpc StreamApps (StreamAppsRequest) returns (stream StreamAppsResponse) {};+  rpc StreamReleases(StreamReleasesRequest) returns (stream StreamReleasesResponse) {};+  rpc StreamScales (StreamScalesRequest) returns (stream StreamScalesResponse) {};+  rpc StreamDeployments(StreamDeploymentsRequest) returns (stream StreamDeploymentsResponse) {};++  // write API+  rpc UpdateApp (UpdateAppRequest) returns (App) {};+  rpc CreateScale (CreateScaleRequest) returns (ScaleRequest) {};+  rpc CreateRelease (CreateReleaseRequest) returns (Release) {};+  rpc CreateDeployment (CreateDeploymentRequest) returns (stream DeploymentEvent) {};+}++/*+   Controller service+   */++message LabelFilter {+  message Expression {+    enum Operator {+      // OP_IN matches if there is a label entry where the value is in the given+      // values for the given key+      OP_IN = 0;+      // OP_NOT_IN matches if there are no label entries where the value is in+      // the given values for the given key+      OP_NOT_IN = 1;+      // OP_EXISTS matches if there is a label entry with the given key (given+      // values ignored and have undefined behavior)+      OP_EXISTS = 2;+      // OP_NOT_EXISTS matches if there are no label entries for the given key+      // (given values ignored and have undefined behavior)+      OP_NOT_EXISTS = 3;+    }++    string key = 1;+    Operator op = 2;+    repeated string values = 3;+  }++  // expressions are ANDed together.+  repeated Expression expressions = 1;+}++// read API++message StreamAppsRequest {+  // The maximum number of resources to return in the initial page.+  int32 page_size = 1;++  // Used for pagination. Must be a next_page_token returned from a previous response.+  string page_token = 2;++  // Specifies an optional list of resource names that should be looked up. The+  // list length must be smaller than page_size. This can be used to request a+  // known set of one or more resources and optionally receive updates about+  // them, and can also be used to retrieve a single resource.+  repeated string name_filters = 3;++  // filters are ORed+  repeated LabelFilter label_filters = 4;++  // When true, leaves the stream open and sends any updates to each resource+  // returned in the initial page until the stream is closed.+  bool stream_updates = 5;++  // When true, leaves the stream open and sends newly created resources+  // matching the filters until the stream is closed. page_token must not be+  // set.+  bool stream_creates = 6;+}++message StreamAppsResponse {+  repeated App apps = 1;++  // Set to true on the last response for the initial page.+  bool page_complete = 2;++  string next_page_token = 3;+}++message StreamReleasesRequest {+  // The maximum number of resources to return in the initial page.+  int32 page_size = 1;++  // Used for pagination. Must be a next_page_token returned from a previous response.+  string page_token = 2;++  // Specifies an optional list of resource names that should be looked up. The+  // list length must be smaller than page_size. This can be used to request a+  // known set of one or more resources and optionally receive updates about+  // them, and can also be used to retrieve a single resource. 
Parent resource+  // names may also be used to filter resources.+  repeated string name_filters = 3;++  // filters are ORed+  repeated LabelFilter label_filters = 4;++  // When true, leaves the stream open and sends any updates (i.e. resource+  // deletions) to each resource returned in the initial page until the stream+  // is closed.+  bool stream_updates = 5;++  // When true, leaves the stream open and sends newly created resources+  // matching the filters until the stream is closed. page_token must not be+  // set.+  bool stream_creates = 6;+}++message StreamReleasesResponse {+  repeated Release releases = 1;++  // Set to true on the last response for the initial page.+  bool page_complete = 2;++  string next_page_token = 3;+}++message StreamScalesRequest {+  // The maximum number of resources to return in the initial page.+  int32 page_size = 1;++  // Used for pagination. Must be a next_page_token returned from a previous response.+  string page_token = 2;++  // Specifies an optional list of resource names that should be looked up. The+  // list length must be smaller than page_size. This can be used to request a+  // known set of one or more resources and optionally receive updates about+  // them, and can also be used to retrieve a single resource. Parent resource+  // names may also be used to filter resources.+  repeated string name_filters = 3;++  // When set, only includes resources having one of the specified states+  repeated ScaleRequestState state_filters = 4;++  // When true, leaves the stream open and sends any updates to each resource+  // returned in the initial page until the stream is closed.+  bool stream_updates = 5;++  // When true, leaves the stream open and sends newly created resources+  // matching the filters until the stream is closed. page_token must not be+  // set.+  bool stream_creates = 6;+}++message StreamScalesResponse {+  repeated ScaleRequest scale_requests = 1;++  // Set to true on the last response for the initial page.+  bool page_complete = 2;++  string next_page_token = 3;+}++message StreamDeploymentsRequest {+  // The maximum number of resources to return in the initial page.+  int32 page_size = 1;++  // Used for pagination. Must be a next_page_token returned from a previous response.+  string page_token = 2;++  // Specifies an optional list of resource names that should be looked up. The+  // list length must be smaller than page_size. This can be used to request a+  // known set of one or more resources and optionally receive updates about+  // them, and can also be used to retrieve a single resource. Parent resource+  // names may also be used to filter resources.+  repeated string name_filters = 3;++  // Specified an optional list of release types. If provided, only resources+  // with these release types will be returned.+  repeated ReleaseType type_filters = 4;++  // filters are ORed+  repeated LabelFilter label_filters = 5;++  // Specifies an optional list of statuses. If provided, only deployments+  // matching one of the given statuses will be returned.+  repeated DeploymentStatus status_filters = 6;++  // When true, leaves the stream open and sends any updates to each resource+  // returned in the initial page until the stream is closed.+  bool stream_updates = 7;++  // When true, leaves the stream open and sends newly created resources+  // matching the filters until the stream is closed. 
page_token must not be+  // set.+  bool stream_creates = 8;+}++message StreamDeploymentsResponse {+  repeated ExpandedDeployment deployments = 1;++  // Set to true on the last response for the initial page.+  bool page_complete = 2;++  string next_page_token = 3;+}++// write API++message UpdateAppRequest {+  App app = 1;+  google.protobuf.FieldMask update_mask = 2;+}++message CreateScaleRequest {+  // parent = "apps/APP_ID/releases/RELEASE_ID"+  string parent = 1;+  map<string, int32> processes = 2;+  // profobuf doesn't support maps within maps, so map[string]map[string]string+  // could not be reproduced+  map<string, DeploymentProcessTags> tags = 3;+}++message CreateReleaseRequest {+  // parent = "apps/APP_ID"+  string parent = 1;+  Release release = 2;+  string request_id = 3;+}++message CreateDeploymentRequest {+  // parent = "apps/APP_ID/releases/RELEASE_ID"+  string parent = 1;+  // optional scale request+  CreateScaleRequest scale_request = 2;+}++/*+   Controller message types+   */++message App {+  // name = "apps/APP_ID"+  string name = 1;+  string display_name = 2;+  map<string, string> labels = 3;+  int32 deploy_timeout = 4;+  string strategy = 5;+  // release = Release.name+  string release = 6;+  google.protobuf.Timestamp create_time = 7;+  google.protobuf.Timestamp update_time = 8;+  google.protobuf.Timestamp delete_time = 9;+}++// See github.com/flynn/flynn/host/types Mount+message HostHealthCheck {+  // Type is one of tcp, http, https+  string type = 1;+  // Interval is the time to wait between checks after the service has been+  // marked as up. It defaults to two seconds.+  google.protobuf.Duration interval = 3;+  // Threshold is the number of consecutive checks of the same status before+  // a service will be marked as up or down after coming up for the first+  // time. It defaults to 2.+  int32 threshold = 4;+  // If KillDown is true, the job will be killed if the service goes down (or+  // does not come up)+  bool kill_down = 5;+  // StartTimeout is the maximum duration that a service can take to come up+  // for the first time if KillDown is true. It defaults to ten seconds.+  google.protobuf.Duration start_timeout = 6;++  // Extra optional config fields for http/https checks+  string path = 7;+  string host = 8;+  string match = 9;+  int32 status = 10;+}++// See github.com/flynn/flynn/host/types Mount+message HostService {+  string display_name = 1;+  // Create the service in service discovery+  bool create = 2;+  HostHealthCheck check = 3;+}++message Port {+  int32 port = 1;+  string proto = 2;+  HostService service = 3;+}++message VolumeReq {+  string path = 1;+  bool delete_on_stop = 2;+}++// See github.com/flynn/flynn/host/resource Spec+message HostResourceSpec {+  // Request, if set, is the amount of resource a job expects to consume,+  // so the job should only be placed on a host with at least this amount+  // of resource available, and once scheduled this amount of resource+  // should then be unavailable on the given host.+  int64 request = 1;+  // Limit, if set, is an upper limit on the amount of resource a job can+  // consume, the outcome of hitting this limit being implementation+  // defined (e.g. 
a system error, throttling, catchable / uncatchable+  // signals etc.)+  int64 limit = 2;+}++// See github.com/flynn/flynn/host/types Mount+message HostMount {+  string location = 1;+  string target = 2;+  bool writeable = 3;+  string device = 4;+  string data = 5;+  int32 flags = 6;+}++// See github.com/opencontainers/runc/libcontainer/configs Device+message LibContainerDevice {+  // Device type, block, char, etc.+  int32 type = 1;+  // Path to the device.+  string path = 2;+  // Major is the device's major number.+  int64 major = 3;+  // Minor is the device's minor number.+  int64 minor = 4;+  // Cgroup permissions format, rwm.+  string permissions = 5;+  // FileMode permission bits for the device.+  uint32 file_mode = 6;+  // Uid of the device.+  uint32 uid = 7;+  // Gid of the device.+  uint32 gid = 8;+  // Write the file to the allowed list+  bool allow = 9;+}++message ProcessType {+  repeated string args = 1;+  map<string, string> env = 2;+  repeated Port ports = 3;+  repeated VolumeReq volumes = 4;+  bool omni = 5;+  bool host_network = 6;+  bool host_pid_namespace = 7;+  string service = 8;+  bool resurrect = 9;+  map<string, HostResourceSpec> resources = 10;+  repeated HostMount mounts = 11;+  repeated string linux_capabilities = 12;+  repeated LibContainerDevice allowed_devices = 13;+  bool writeable_cgroups = 14;+}++enum ReleaseType {+  ANY = 0;+  CODE = 1;+  CONFIG = 2;+}++message Release {+  // name = "apps/APP_ID/releases/RELEASE_ID"+  string name = 1;+  repeated string artifacts = 3;+  map<string, string> env = 4;+  map<string, string> labels = 5;+  map<string, ProcessType> processes = 6;+  ReleaseType type = 7;+  google.protobuf.Timestamp create_time = 8;+  google.protobuf.Timestamp delete_time = 9;+}++enum ScaleRequestState {+  SCALE_PENDING = 0;+  SCALE_CANCELLED = 1;+  SCALE_COMPLETE = 2;+}++message ScaleRequest {+  // parent = "apps/APP_ID/releases/RELEASE_ID"+  string parent = 1;+  // name = "apps/APP_ID/releases/RELEASE_ID/scales/SCALE_REQUEST_ID"+  string name = 2;+  ScaleRequestState state = 3;+  map<string, int32> old_processes = 4;+  map<string, int32> new_processes = 5;+  // profobuf doesn't support maps within maps, so map[string]map[string]string+  // could not be reproduced

You can delete this comment.

jvatic

comment created time in 3 months

Pull request review comment flynn/flynn

gRPC controller

+syntax = 'proto3';+package controller;+option go_package = 'protobuf';++import "google/api/annotations.proto";+import "google/protobuf/timestamp.proto";+import "google/protobuf/duration.proto";+import "google/protobuf/field_mask.proto";++service Controller {+  // read API+  rpc StreamApps (StreamAppsRequest) returns (stream StreamAppsResponse) {};+  rpc StreamReleases(StreamReleasesRequest) returns (stream StreamReleasesResponse) {};+  rpc StreamScales (StreamScalesRequest) returns (stream StreamScalesResponse) {};+  rpc StreamDeployments(StreamDeploymentsRequest) returns (stream StreamDeploymentsResponse) {};++  // write API+  rpc UpdateApp (UpdateAppRequest) returns (App) {};+  rpc CreateScale (CreateScaleRequest) returns (ScaleRequest) {};+  rpc CreateRelease (CreateReleaseRequest) returns (Release) {};+  rpc CreateDeployment (CreateDeploymentRequest) returns (stream DeploymentEvent) {};+}++/*+   Controller service+   */++message LabelFilter {+  message Expression {+    enum Operator {+      // OP_IN matches if there is a label entry where the value is in the given+      // values for the given key+      OP_IN = 0;+      // OP_NOT_IN matches if there are no label entries where the value is in+      // the given values for the given key+      OP_NOT_IN = 1;+      // OP_EXISTS matches if there is a label entry with the given key (given+      // values ignored and have undefined behavior)+      OP_EXISTS = 2;+      // OP_NOT_EXISTS matches if there are no label entries for the given key+      // (given values ignored and have undefined behavior)
      // (it is an error to provide a value)
jvatic

comment created time in 3 months

Pull request review comment flynn/flynn

gRPC controller

+syntax = 'proto3';+package controller;+option go_package = 'protobuf';++import "google/api/annotations.proto";+import "google/protobuf/timestamp.proto";+import "google/protobuf/duration.proto";+import "google/protobuf/field_mask.proto";++service Controller {+  // read API+  rpc StreamApps (StreamAppsRequest) returns (stream StreamAppsResponse) {};+  rpc StreamReleases(StreamReleasesRequest) returns (stream StreamReleasesResponse) {};+  rpc StreamScales (StreamScalesRequest) returns (stream StreamScalesResponse) {};+  rpc StreamDeployments(StreamDeploymentsRequest) returns (stream StreamDeploymentsResponse) {};++  // write API+  rpc UpdateApp (UpdateAppRequest) returns (App) {};+  rpc CreateScale (CreateScaleRequest) returns (ScaleRequest) {};+  rpc CreateRelease (CreateReleaseRequest) returns (Release) {};+  rpc CreateDeployment (CreateDeploymentRequest) returns (stream DeploymentEvent) {};+}++/*+   Controller service+   */++message LabelFilter {+  message Expression {+    enum Operator {+      // OP_IN matches if there is a label entry where the value is in the given+      // values for the given key+      OP_IN = 0;+      // OP_NOT_IN matches if there are no label entries where the value is in+      // the given values for the given key+      OP_NOT_IN = 1;+      // OP_EXISTS matches if there is a label entry with the given key (given+      // values ignored and have undefined behavior)
      // (it is an error to provide a value)
jvatic

comment created time in 3 months

Pull request review comment flynn/flynn

gRPC controller

+#!/bin/bash
+
+set -eo pipefail
+
+# install go pkgs
+echo "GOPATH: $GOPATH"
+go get -u github.com/golang/protobuf/proto
+go get -u github.com/golang/protobuf/protoc-gen-go
+go get -u google.golang.org/grpc
+
+cp /go/bin/protoc-gen-go /bin
+
+git clone https://github.com/grpc-ecosystem/grpc-gateway --branch v1.9.6 --depth 1 /go/src/github.com/grpc-ecosystem/grpc-gateway

This needs to specify a full git commit SHA to ensure integrity.

jvatic

comment created time in 3 months

Pull request review comment flynn/flynn

gRPC controller

+package main++import (+	"crypto/subtle"+	"encoding/json"+	fmt "fmt"+	"net"+	"net/http"+	"os"+	"strings"+	"sync"+	"syscall"+	"time"++	"github.com/flynn/flynn/controller-grpc/protobuf"+	"github.com/flynn/flynn/controller/data"+	"github.com/flynn/flynn/controller/schema"+	ct "github.com/flynn/flynn/controller/types"+	"github.com/flynn/flynn/pkg/cors"+	"github.com/flynn/flynn/pkg/ctxhelper"+	"github.com/flynn/flynn/pkg/httphelper"+	"github.com/flynn/flynn/pkg/postgres"+	"github.com/flynn/flynn/pkg/random"+	"github.com/flynn/flynn/pkg/shutdown"+	routerc "github.com/flynn/flynn/router/client"+	que "github.com/flynn/que-go"+	middleware "github.com/grpc-ecosystem/go-grpc-middleware"+	"github.com/improbable-eng/grpc-web/go/grpcweb"+	log "github.com/inconshreveable/log15"+	"github.com/soheilhy/cmux"+	"golang.org/x/net/context"+	"google.golang.org/grpc"+	"google.golang.org/grpc/codes"+	"google.golang.org/grpc/metadata"+	"google.golang.org/grpc/reflection"+	"google.golang.org/grpc/stats"+	"google.golang.org/grpc/status"+)++func mustEnv(key string) string {+	if val, ok := os.LookupEnv(key); ok {+		return val+	}+	shutdown.Fatalf("%s is required", key)+	return ""+}++var logger = log.New("component", "controller-grpc")++var schemaRoot = "/etc/flynn-controller/jsonschema"++func main() {+	// Increase resources limitations+	// See https://github.com/eranyanay/1m-go-websockets/blob/master/2_ws_ulimit/server.go+	var rLimit syscall.Rlimit+	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rLimit); err != nil {+		shutdown.Fatal(err)+	}+	rLimit.Cur = rLimit.Max+	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rLimit); err != nil {

Looking at this further, the default is 10k fds already, so this shouldn't be required at all.

jvatic

comment created time in 3 months
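
As a quick sanity check on the claim above, a minimal Go sketch (Linux assumed) that only reads the current RLIMIT_NOFILE in the job environment instead of raising it:

package main

import (
	"fmt"
	"log"
	"syscall"
)

func main() {
	// Print the soft/hard open-file limits; if the soft limit is already
	// around 10k in the container, the Setrlimit bump above is redundant.
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("RLIMIT_NOFILE: soft=%d hard=%d\n", rl.Cur, rl.Max)
}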

Pull request review comment flynn/flynn

gRPC controller

+// (diff abridged to the listApps pagination hunk under review)
+
+	labelFilters := req.GetLabelFilters()
+	apps := make([]*protobuf.App, 0, pageSize)
+	n := 0
+
+	for _, a := range ctApps {
+		if !protobuf.MatchLabelFilters(a.Meta, labelFilters) {
+			continue
+		}
+
+		apps = append(apps, protobuf.NewApp(a))
+		n++
+
+		if n == pageSize {
+			break
+		}
+	}
+
+	// make sure we fill the page if possible
+	if n < pageSize && nextPageToken != nil {

Yeah, that makes sense.

jvatic

comment created time in 3 months

Pull request review comment flynn/flynn

gRPC controller

+// (same diff as above, abridged to the listApps pagination hunk under review)
+
+	// make sure we fill the page if possible
+	if n < pageSize && nextPageToken != nil {

Hmm, I think this should be limited to a few pages at most instead of unlimited recursion.

jvatic

comment created time in 3 months
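
To make the suggestion concrete, a minimal sketch of a bounded fill loop. App, PageToken, and fetchPage here are stand-ins for the controller's real types and its ListPage call, so this is an assumption-laden outline rather than the actual implementation; the point is only that the fill stops after a fixed number of extra pages instead of recursing without bound.

package pagination

// App and PageToken stand in for the controller's real types; fetchPage is a
// hypothetical helper with the same "page plus next-token" semantics as the
// repo's ListPage.
type App struct{ Meta map[string]string }
type PageToken struct{ Size int }

var fetchPage func(*PageToken) ([]*App, *PageToken, error)

// maxFillPages caps how many additional pages are fetched while trying to
// fill a page after label filtering.
const maxFillPages = 3

func listAppsFiltered(pageSize int, token *PageToken, match func(*App) bool) ([]*App, *PageToken, error) {
	apps := make([]*App, 0, pageSize)
	for i := 0; i <= maxFillPages; i++ {
		page, next, err := fetchPage(token)
		if err != nil {
			return nil, nil, err
		}
		for _, a := range page {
			if !match(a) {
				continue
			}
			apps = append(apps, a)
			if len(apps) == pageSize {
				return apps, next, nil
			}
		}
		if next == nil {
			return apps, nil, nil
		}
		token = next
	}
	// Page could not be filled within the cap; return what was collected
	// along with the next unconsumed token.
	return apps, token, nil
}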

started alygin/vscode-tlaplus

started time in 3 months

started alloy-commons/alloy-open-source

started time in 3 months

started openmaptiles/openmaptiles

started time in 3 months

started developmentseed/osm-seed

started time in 3 months

started go-spatial/tegola-osm

started time in 3 months

started go-spatial/tegola

started time in 3 months

started omniscale/imposm3

started time in 3 months

pull request comment godbus/dbus

Update go.mod to v6

Would it be possible to get the v5 change/tag pushed soon? This currently breaks downstream consumers using go mod.

bminer

comment created time in 3 months
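
For background, Go's semantic import versioning requires a v5 module to declare `module github.com/godbus/dbus/v5` in its go.mod and to be tagged with a matching v5.x.y tag; consumers must import the path with the same major-version suffix. A minimal sketch of what a downstream `go mod` user would write once that tag exists, assuming the long-standing godbus connection API:

package main

import (
	"fmt"
	"log"

	dbus "github.com/godbus/dbus/v5" // major-version suffix must match the module's go.mod
)

func main() {
	// Connect to the session bus; this only resolves once the /v5 module
	// path and tag are published together.
	conn, err := dbus.SessionBus()
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	fmt.Println("connected to the session bus")
}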

started tazjin/nixini

started time in 3 months

started kern/filepizza

started time in 3 months
