Cezar Sá Espinola (cezarsa), Globo.com, Brasília - DF / Brazil

andrestc/go-kubernetes-workshop 20

Learn how to customize Kubernetes using Go.

cezarsa/crxpp 7

Chrome pixel perfect - An extension to compare a page with its original design.

cezarsa/chromed_bird 5

This project has been moved to:

cezarsa/CamanJS 4

HTML canvas image-manipulation Javascript library with a very easy to use interface and a webworkers backend.

cezarsa/bookshelf 3

Awesome bookshelf app automatically generated based on your Kindle collection.

cezarsa/crxclone 3

A service that allows cloning Google Chrome extensions

cezarsa/chaos-operator-example 2

chaos-operator-example

cezarsa/crxmake 2

making chromium extension

cezarsa/blanklinter 1

Go linter that finds blank lines inside functions

push event tsuru/go-tsuruclient

Cezar Sa Espinola

commit sha a2a21eb2bbe40c36dac691985dc0c3653cd1bfbc

Generate client from api

view details

push time in 13 minutes

push event tsuru/tsuru

Cezar Sa Espinola

commit sha 5b7338e98eb0645d9272abea449b35db0b3b9d6e

provision: Propagate app tags as labels or annotations

view details

push time in 20 hours

PR merged tsuru/tsuru

provision: Propagate app tags as labels or annotations
+113 -14

0 comment

5 changed files

cezarsa

pr closed time in 20 hours

push event tsuru/tsuru

Cezar Sa Espinola

commit sha d113a21d4bb1e37e24f861f675ebdaffd58bfe37

Read groups from oauth provider and allow roles to groups

view details

push time in 20 hours

PR merged tsuru/tsuru

Read groups from oauth provider and allow roles to groups

This change makes it possible to assign roles to groups. Groups are entities that the auth provider dynamically sets on each user and over which tsuru has no control.

Example of a possible use:

tsuru role-assign team-member group:mygroup teamX

This means that every user belonging to the group mygroup (according to information from the auth provider) will now have the role team-member with the context value teamX.
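As a rough illustration of the model described above (hypothetical types and lookup tables, not tsuru's actual code), resolving a user's effective roles becomes a merge of direct assignments with assignments attached to the user's provider-supplied groups:

package main

import "fmt"

// RoleInstance is a hypothetical pair of role name and context value.
type RoleInstance struct {
	Name         string
	ContextValue string
}

// rolesForUser merges roles assigned directly to the user with roles
// assigned to any of the user's groups. userRoles and groupRoles are
// illustrative lookup tables, not tsuru's storage layer.
func rolesForUser(user string, groups []string,
	userRoles, groupRoles map[string][]RoleInstance) []RoleInstance {
	merged := append([]RoleInstance{}, userRoles[user]...)
	for _, g := range groups {
		merged = append(merged, groupRoles[g]...)
	}
	return merged
}

func main() {
	groupRoles := map[string][]RoleInstance{
		"mygroup": {{Name: "team-member", ContextValue: "teamX"}},
	}
	// The group list comes from the oauth provider on each login.
	fmt.Println(rolesForUser("alice", []string{"mygroup"}, nil, groupRoles))
}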

+800 -19

0 comment

26 changed files

cezarsa

pr closed time in 20 hours

push event tsuru/tsuru

Cezar Sa Espinola

commit sha 83e8be3a82503f3008163008ea4f7af78af0237c

Allow overriding plan memory and cpu for a single app

view details

Cezar Sa Espinola

commit sha 1a85d9320be955df93c4da4e527ed1fa5aa7345a

Add new plan override permission for app update

view details

push time in 20 hours

PR merged tsuru/tsuru

Allow overriding plan memory and cpu for a single app
+178 -41

0 comment

11 changed files

cezarsa

pr closed time in 20 hours

push event tsuru/tsuru

Cezar Sa Espinola

commit sha 8051355dba4a7a462a0ede11e4b0db625d46ceaf

bump version to 1.8.0-rc9

view details

push time in a day

created tag tsuru/tsuru

tag 1.8.0-rc9

Open source, extensible and Docker-based Platform as a Service (PaaS).

created time in a day

create branch tsuru/custom-cloudstack-ccm

branch : udptest

created branch time in 4 days

push event tsuru/eviaas

dependabot[bot]

commit sha 8916f4927b57869af4a75babb395422275086b00

Bump flask from 0.10.1 to 1.0

Bumps [flask](https://github.com/pallets/flask) from 0.10.1 to 1.0.
- [Release notes](https://github.com/pallets/flask/releases)
- [Changelog](https://github.com/pallets/flask/blob/master/CHANGES.rst)
- [Commits](https://github.com/pallets/flask/compare/0.10.1...1.0)

Signed-off-by: dependabot[bot] <support@github.com>

view details

Cezar Sá Espinola

commit sha e1f4e8726167ee1b2beecead469eef16d74de571

Merge pull request #4 from tsuru/dependabot/pip/flask-1.0

Bump flask from 0.10.1 to 1.0

view details

push time in 4 days

PR merged tsuru/eviaas

Bump flask from 0.10.1 to 1.0 dependencies

Bumps flask from 0.10.1 to 1.0.

Release notes (sourced from flask's releases):

1.0

The Pallets team is pleased to release Flask 1.0. Read the announcement on our blog: https://www.palletsprojects.com/blog/flask-1-0-released/

There are over a year's worth of changes in this release. Many features have been improved or changed. Read the changelog (http://flask.pocoo.org/docs/1.0/changelog/) to understand how your project's code will be affected.

JSON Security Fix

Flask previously decoded incoming JSON bytes using the content type of the request. Although JSON should only be encoded as UTF-8, Flask was more lenient. However, Python includes non-text related encodings that could result in unexpected memory use by a request.

Flask will now detect the encoding of incoming JSON data as one of the supported UTF encodings, and will not allow arbitrary encodings from the request.

Install or Upgrade

Install from PyPI (https://pypi.org/project/Flask/) with pip:

pip install -U Flask

0.12.4

This is a repackage of 0.12.3 (https://github.com/pallets/flask/releases/0.12.3) to fix an issue with how the package was built.

Upgrade

Upgrade from PyPI (https://pypi.org/project/Flask/0.12.4/) with pip. Use a version identifier if you want to stay at 0.12:

pip install -U 'Flask~=0.12.4'

0.12.3

This release includes an important security fix for JSON and a minor backport for CLI support in PyCharm. It is provided for projects that cannot update to Flask 1.0 immediately. See the 1.0 announcement (https://github.com/pallets/flask/blob/flask-1-0-released) and update to it instead if possible.

JSON Security Fix

Flask previously decoded incoming JSON bytes using the content type of the request. Although JSON should only be encoded as UTF-8, Flask was more lenient. However, Python includes non-text related encodings that could result in unexpected memory use by a request.

Flask will now detect the encoding of incoming JSON data as one of the supported UTF encodings, and will not allow arbitrary encodings from the request.

Upgrade

Upgrade from PyPI (https://pypi.org/project/Flask/) with pip. Use a version identifier if you want to stay at 0.12:

pip install -U 'Flask~=0.12.3'

Changelog (sourced from flask's CHANGES.rst):

Version 1.0, released 2018-04-26

  • Python 2.6 and 3.3 are no longer supported.
  • Bump minimum dependency versions to the latest stable versions: Werkzeug >= 0.14, Jinja >= 2.10, itsdangerous >= 0.24, Click >= 5.1. (#2586)
  • Skip app.run when a Flask application is run from the command line. This avoids some behavior that was confusing to debug.
  • Change the default for JSONIFY_PRETTYPRINT_REGULAR to False. jsonify returns a compact format by default, and an indented format in debug mode. (#2193)
  • Flask.__init__ accepts the host_matching argument and sets it on url_map. (#1559)
  • Flask.__init__ accepts the static_host argument and passes it as the host argument when defining the static route. (#1559)
  • send_file supports Unicode in attachment_filename. (#2223)
  • Pass _scheme argument from url_for to handle_url_build_error. (#2017)
  • add_url_rule accepts the provide_automatic_options argument to disable adding the OPTIONS method. (#1489)
  • views.MethodView subclasses inherit method handlers from base classes. (#1936)
  • Errors caused while opening the session at the beginning of the request are handled by the app's error handlers. (#2254)
  • Blueprints gained json_encoder and json_decoder attributes to override the app's encoder and decoder. (#1898)
  • Flask.make_response raises TypeError instead of ValueError for bad response types. The error messages have been improved to describe why the type is invalid. (#2256)
  • Add routes CLI command to output routes registered on the application. (#2259)
  • Show warning when session cookie domain is a bare hostname or an IP address, as these may not behave properly in some browsers, such as Chrome. (#2282)
  • Allow IP address as exact session cookie domain. (#2282)
  • SESSION_COOKIE_DOMAIN is set if it is detected through SERVER_NAME. (#2282)
  • Auto-detect zero-argument app factory called create_app or make_app from FLASK_APP. (#2297)
  • Factory functions are not required to take a script_info parameter to work with the flask command. If they take a single parameter or a parameter named script_info, the

Commits:

  • 291f3c3 Bump version number to 1.0 (https://github.com/pallets/flask/commit/291f3c338c4d302dbde01ab9153a7817e5a780f5)
  • 36e68a4 release 1.0 (https://github.com/pallets/flask/commit/36e68a439a073e927b1801704fc7921be58262e1)
  • 216151c Merge branch '0.12-maintenance' (https://github.com/pallets/flask/commit/216151c8a3c02e805fe5d1824708253f7e01e77f)
  • 23047a7 Bump version number to 0.12.4.dev (https://github.com/pallets/flask/commit/23047a71fd7da13be7b545f30807f38f4d9ecb25)
  • 1a9e58e Bump version number to 0.12.3 (https://github.com/pallets/flask/commit/1a9e58e8c97c47c969736d46410f724f4e834f54)
  • 63deee0 release 0.12.3 (https://github.com/pallets/flask/commit/63deee0a8b0963f1657e2d327773d65632a387d3)
  • 062745b Merge pull request #2720 from pallets/setup-link (https://github.com/pallets/flask/commit/062745b23f7abaafb144e3d94b6fbdf8ccc456b9)
  • 5c8110d ensure order of project urls (https://github.com/pallets/flask/commit/5c8110de25f08bf20e9fda6611403dc5c59ec849)
  • 10a77a5 Add project_urls so that PyPI will show GitHub stats. (https://github.com/pallets/flask/commit/10a77a54309876a6aba2e3303d291498c0a9318c)
  • 22992a0 add donate link (https://github.com/pallets/flask/commit/22992a0d533f7f68e9fa1845c86dae230d8ff9ba)
  • Additional commits viewable in the compare view: https://github.com/pallets/flask/compare/0.10.1...1.0

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
  • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
  • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
  • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the Security Alerts page.


+1 -1

0 comment

1 changed file

dependabot[bot]

pr closed time in 4 days

create branch cezarsa/kontainer-engine

branch : allprs2

created branch time in 5 days

delete branch tsuru/custom-cloudstack-ccm

delete branch : single-project

delete time in 6 days

push event tsuru/custom-cloudstack-ccm

Cezar Sa Espinola

commit sha 8beed3ce56efa89c4176a6f2c57274c5636d5e5e

Allow default project-id and environment

This change makes it easier to use this provider when the entire cluster is using a single cloudstack project in a single cloudstack environment. The environment and project id labels in each node are no longer required in this situation.

view details

Cezar Sa Espinola

commit sha a83be8e4001216929a157e1d36086d5316d80afe

ci: push branch to docker hub

view details

push time in 6 days

PR merged tsuru/custom-cloudstack-ccm

WIP: Allow default project-id and environment

This change makes it easier to use this provider when the entire cluster is using a single cloudstack project in a single cloudstack environment.

The environment and project id labels in each node are no longer required in this situation.

+43 -16

0 comment

4 changed files

cezarsa

pr closed time in 6 days

push event cezarsa/tsuru

Cezar Sa Espinola

commit sha 83c57a6b8b4f63529816a35e190b27fc3d8a7f7c

Add new plan override permission for app update

view details

push time in 8 days

push event cezarsa/tsuru

Cezar Sa Espinola

commit sha 6475032a7ac23fd55f73e18c7138d2d3dd1e6581

Add new plan override permission for apps

view details

push time in 8 days

push event cezarsa/tsuru

Cezar Sa Espinola

commit sha b54a6f4e3f1faff85479b3d1b539cb47767daea9

Allow overriding plan memory and cpu for a single app

view details

push time in 8 days

PR opened tsuru/tsuru

Allow overriding plan memory and cpu for a single app
+175 -41

0 comment

9 changed files

pr created time in 8 days

create branch cezarsa/tsuru

branch : planoverride

created branch time in 8 days

push event tsuru/tsuru

Cezar Sa Espinola

commit sha 29de007ed5097e7386bfc1624d1235448af437df

provision/docker: Fix possible race in failed test

view details

push time in 11 days

push event cezarsa/tsuru

Cezar Sa Espinola

commit sha 9870028573fbef3b5b44f616803c5bb27031f03c

Read groups from oauth provider and allow roles to groups

view details

push time in 11 days

push event cezarsa/tsuru

Cezar Sa Espinola

commit sha 559b35293a9b0f3191b8e31f6b8726db5ead9809

Read groups from oauth provider and allow roles to groups

view details

push time in 11 days

push event cezarsa/tsuru

Cezar Sa Espinola

commit sha 3358bfcca21cd65c2c17a7ca4278030164eae52b

redis: Fix error message validation in tests

view details

Cezar Sa Espinola

commit sha 330b8327bc3e657fdbda58245a865e668515c30f

Read groups from oauth provider and allow roles to groups

view details

push time in 11 days

push event tsuru/tsuru

Cezar Sa Espinola

commit sha 3358bfcca21cd65c2c17a7ca4278030164eae52b

redis: Fix error message validation in tests

view details

push time in 11 days

push event cezarsa/tsuru

Cezar Sa Espinola

commit sha 292f083c185d1987ce82b6722e87312eea3fea66

Read groups from oauth provider and allow roles to groups

view details

push time in 11 days

PR opened tsuru/tsuru

WIP: Read groups from oauth provider and allow roles to groups
+523 -17

0 comment

19 changed files

pr created time in 11 days

create branch cezarsa/tsuru

branch : authgroups

created branch time in 11 days

create branch cezarsa/tsuru

branch : propagatetag

created branch time in 11 days

Pull request review comment fwmark/registry

Add calico reserved bits

 that people are likely to search for.
 | 11 | 0x800 | [Cilium][cilium] |
 | 14 | 0x4000 | [Kubernetes][k8s] |
 | 15 | 0x8000 | [Kubernetes][k8s] |
+| 16-31 | 0xFFFF0000 | [Calico][cal] |

Yeah, thinking this over I agree that having a line for each bit would make it easier to spot conflicts, since a conflict would show up as more than one service on a single line.

cezarsa

comment created time in 13 days

created tag tsuru/kubernetes-router

tag 0.9.1

Tsuru router API implementation that manages k8s loadbalancers/ingress

created time in 13 days

push event tsuru/kubernetes-router

Cezar Sa Espinola

commit sha 5367b94ee5125d520b111d7033df3593beb375fa

Improved ingress support with custom annotations

view details

push time in 13 days

started fwmark/registry

started time in 13 days

PR opened fwmark/registry

Add calico reserved bits

Calico by default reserves a set of 16 bits for packet marks that must not conflict with kubernetes' kube-proxy marks. The reserved mask can be changed with a configuration option but it's usually left unchanged. The mask should have from 8 to 16 bits depending on kube-proxy's configuration.

Reference: IptablesMarkMask config in https://docs.projectcalico.org/reference/felix/configuration
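A quick way to sanity-check the bit math (an illustrative snippet, not part of the PR): bits 16-31 yield Calico's default IptablesMarkMask of 0xFFFF0000, which does not overlap kube-proxy's marks on bits 14 and 15.

package main

import "fmt"

func main() {
	// Build the mask covering bits 16 through 31.
	var calicoMask uint32
	for bit := 16; bit <= 31; bit++ {
		calicoMask |= 1 << bit
	}
	kubeProxyMarks := uint32(1<<14 | 1<<15) // 0x4000 | 0x8000

	fmt.Printf("calico mask: %#x\n", calicoMask)                    // 0xffff0000
	fmt.Printf("conflict:    %v\n", calicoMask&kubeProxyMarks != 0) // false
}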

+2 -0

0 comment

1 changed file

pr created time in 13 days

push event cezarsa/registry

Cezar Sá Espinola

commit sha 738e62f94820c75537ba6245f12f2e8e0e67511b

Add calico reserved bits

Calico by default reserves a set of 16 bits for packet marks that must not conflict with kubernetes' kube-proxy marks. The reserved mask can be changed with a configuration option but it's usually left unchanged. The mask should have from 8 to 16 bits depending on kube-proxy's configuration.

Reference: `IptablesMarkMask` config in https://docs.projectcalico.org/reference/felix/configuration

view details

push time in 13 days

fork cezarsa/registry

An open, unofficial registry of linux packet mark bits (aka fwmark, connmark, netfilter, iptables, nftables)

fork in 13 days

create branch tsuru/autoscaler

branch : cloudstack-rebase-latest-tag

created branch time in 15 days

push event tsuru/tsuru

Cezar Sa Espinola

commit sha 20f7d0fa093e631d1c7dda4dd3ac88e57184b433

Bump version to 1.8.0-rc8

view details

push time in 15 days

created tag tsuru/tsuru

tag 1.8.0-rc8

Open source, extensible and Docker-based Platform as a Service (PaaS).

created time in 15 days

push event tsuru/tsuru

Cezar Sa Espinola

commit sha 4c7ab7b67ba085bc95db3e9f393f9edd3e7098d3

Update vendor to match go.mod

view details

push time in 15 days

started microsoft/ProcMon-for-Linux

started time in 17 days

PR opened tsuru/tsuru

router: simpler and more correct dynamic config fetching

To avoid inconsistencies between dynamic and static router configs, all configs are now delegated to the github.com/tsuru/config pkg.
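A minimal sketch of what funneling lookups through that package looks like, assuming the ReadConfigBytes and GetString helpers from tsuru/config's public API and an illustrative key:

package main

import (
	"fmt"

	"github.com/tsuru/config"
)

func main() {
	// Static file settings and dynamically stored router definitions
	// both end up answered by this one lookup path.
	err := config.ReadConfigBytes([]byte("routers:\n  r1:\n    type: api\n"))
	if err != nil {
		panic(err)
	}
	routerType, err := config.GetString("routers:r1:type")
	if err != nil {
		panic(err)
	}
	fmt.Println(routerType) // prints "api"
}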

+165 -224

0 comment

11 changed files

pr created time in 18 days

create branch cezarsa/tsuru

branch : fixdynrouter

created branch time in 18 days

push event tsuru/config

Cezar Sa Espinola

commit sha 2a9a0efe5f281dfd107ea90e08280edea498462c

fix title in readme

view details

push time in 18 days

push event tsuru/config

Cezar Sa Espinola

commit sha 90c448ec98f5ca789aec61fe989cf524a13e0d32

fix readme ci link

view details

push time in 18 days

delete branch tsuru/config

delete branch : lint

delete time in 18 days

push event tsuru/config

Cezar Sa Espinola

commit sha 50f0223043f0d5af665177b304072b67240a1a07

Expose default config instance

view details

Cezar Sa Espinola

commit sha 21dc87f2b51dc84e60f170b7b2f558dd0a6ccab5

fix lint errors and lint in CI

view details

push time in 18 days

push event tsuru/config

Cezar Sa Espinola

commit sha 6651e86cbdde9cf7e5149913353cac58f57b3183

fix lint errors and lint in CI

view details

push time in 18 days

push event tsuru/config

Cezar Sa Espinola

commit sha a0a13be03803d8261cada41e9d2f750a2cff549e

fix lint errors and lint in CI

view details

push time in 18 days

push event tsuru/config

Cezar Sa Espinola

commit sha 2016d994891c7b036bd3c4d3e2f2f196bacac0c2

fix lint errors and lint in CI

view details

push time in 18 days

push event tsuru/config

Cezar Sa Espinola

commit sha c10bee2e4decc33c506db48cd1a2f7ec892c8b3f

fix lint errors and lint in CI

view details

push time in 18 days

create branch tsuru/config

branch : lint

created branch time in 18 days

push event tsuru/config

Cezar Sa Espinola

commit sha 793ee69ccc43de6e5b2a1eaaa24359727fbcaf39

Allow using isolated config instance instead of singleton

view details

push time in 18 days

push event tsuru/tsuru

Paulo Sousa

commit sha b8e7ae413bc11dafa6b15502493f331e1ad69351

api/pool: fix pool list for multiple permission.PermAppCreate on pool constraint

view details

push time in 19 days

PR merged tsuru/tsuru

Fix pool list for users on multiple teams

This PR fixes a condition in the pool constraint check that compares multiple teams against pool constraints: previously it gave up at the first team that evaluated to false instead of trying the remaining teams in the list.
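In other words (a hedged reduction with hypothetical names, not the actual patch), the check must succeed if any of the user's teams satisfies the constraint:

package main

import "fmt"

// anyTeamAllowed evaluates the constraint for every team and succeeds
// if any matches, rather than returning after the first team that
// evaluates to false.
func anyTeamAllowed(teams []string, allowed func(string) bool) bool {
	for _, t := range teams {
		if allowed(t) {
			return true
		}
	}
	return false
}

func main() {
	allowed := func(t string) bool { return t == "team-b" }
	// A user on both team-a and team-b must still see the pool.
	fmt.Println(anyTeamAllowed([]string{"team-a", "team-b"}, allowed)) // true
}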

+10 -4

0 comment

2 changed files

morpheu

pr closed time in 19 days

push event tsuru/go-tsuruclient

Cezar Sa Espinola

commit sha 4080334ce05d0318c682e771fa871d18d4d4a9e5

update generated client from latest spec

view details

push time in 22 days

push event tsuru/tsuru

Cezar Sa Espinola

commit sha 421d22f96e8002810066a7b5505c192763512c17

docs: fix router api reference returns

view details

push time in 22 days

Pull request review comment tsuru/tsuru

Stream logs from kubernetes api-server

 var eventKindsIgnoreRebuild = []string{
 	permission.PermAppUpdateRoutable.FullName(),
 }
+type podListener struct {
+	OnEvent func(*apiv1.Pod)
+}
+type podListeners map[*podListener]struct{}

Unused

wpjunior

comment created time in 22 days

Pull request review comment tsuru/tsuru

Stream logs from kubernetes api-server

+// Copyright 2020 tsuru authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package kubernetes
+
+import (
+	"bufio"
+	"context"
+	"fmt"
+	"sort"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/tsuru/tsuru/provision"
+	appTypes "github.com/tsuru/tsuru/types/app"
+	apiv1 "k8s.io/api/core/v1"
+	"k8s.io/apimachinery/pkg/labels"
+	knet "k8s.io/apimachinery/pkg/util/net"
+)
+
+const (
+	logLineTimeSeparator = " "
+	logWatchBufferSize   = 1000
+)
+
+var (
+	watchNewPodsInterval = time.Second * 10
+	watchTimeout         = time.Hour
+)
+
+func (p *kubernetesProvisioner) ListLogs(app appTypes.App, args appTypes.ListLogArgs) ([]appTypes.Applog, error) {
+	clusterClient, err := clusterForPool(app.GetPool())
+	if err != nil {
+		return nil, err
+	}
+	if !clusterClient.LogsFromAPIServerEnabled() {
+		return nil, provision.ErrLogsUnavailable
+	}
+	clusterController, err := getClusterController(p, clusterClient)
+	if err != nil {
+		return nil, err
+	}
+
+	ns, err := clusterClient.AppNamespace(app)
+	if err != nil {
+		return nil, err
+	}
+
+	podInformer, err := clusterController.getPodInformer()
+	if err != nil {
+		return nil, err
+	}
+
+	pods, err := podInformer.Lister().Pods(ns).List(listPodsSelectorForLog(args))
+	return listLogsFromPods(clusterClient, ns, pods, args)
+}
+
+func (p *kubernetesProvisioner) WatchLogs(app appTypes.App, args appTypes.ListLogArgs) (appTypes.LogWatcher, error) {
+	clusterClient, err := clusterForPool(app.GetPool())
+	if err != nil {
+		return nil, err
+	}
+	if !clusterClient.LogsFromAPIServerEnabled() {
+		return nil, provision.ErrLogsUnavailable
+	}
+	clusterClient.SetTimeout(watchTimeout)
+
+	clusterController, err := getClusterController(p, clusterClient)
+	if err != nil {
+		return nil, err
+	}
+
+	ns, err := clusterClient.AppNamespace(app)
+	if err != nil {
+		return nil, err
+	}
+
+	podInformer, err := clusterController.getPodInformer()
+	if err != nil {
+		return nil, err
+	}
+
+	selector := listPodsSelectorForLog(args)
+	pods, err := podInformer.Lister().Pods(ns).List(selector)
+	if err != nil {
+		return nil, err
+	}
+	watchingPods := map[string]bool{}
+	ctx, done := context.WithCancel(context.Background())
+	watcher := &k8sLogsNotifier{
+		ctx:           ctx,
+		ch:            make(chan appTypes.Applog, logWatchBufferSize),
+		ns:            ns,
+		clusterClient: clusterClient,
+		done:          done,
+	}
+	for _, pod := range pods {
+		if pod.Status.Phase == apiv1.PodPending {
+			continue
+		}
+		watchingPods[pod.ObjectMeta.Name] = true
+		go watcher.watchPod(pod, false)
+	}
+
+	podListener := &podListener{
+		OnEvent: func(pod *apiv1.Pod) {
+			_, alreadyWatching := watchingPods[pod.ObjectMeta.Name]
+
+			if !alreadyWatching && pod.Status.Phase != apiv1.PodPending {
+				watchingPods[pod.ObjectMeta.Name] = true
+				go watcher.watchPod(pod, true)
+			}
+		},
+	}
+
+	clusterController.addPodListener(args.AppName, podListener)

Also, with this change removePodListener would be called directly from k8sLogsNotifier.Close, no need for this goroutine here.
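Something along these lines (a sketch only; the controller, appName, and listener fields on k8sLogsNotifier are assumed, not in the diff above):

// Hypothetical shape of the suggestion: cleanup happens inside Close
// itself, guarded by the existing sync.Once, instead of in a goroutine
// parked on ctx.Done().
func (k *k8sLogsNotifier) Close() {
	k.once.Do(func() {
		k.done() // cancel the watch context
		// assumed fields: the controller and listener kept on the notifier
		k.controller.removePodListener(k.appName, k.listener)
	})
}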

wpjunior

comment created time in 22 days

Pull request review comment tsuru/tsuru

Stream logs from kubernetes api-server

+// Copyright 2020 tsuru authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package kubernetes
+
+import (
+	"bufio"
+	"context"
+	"fmt"
+	"sort"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/tsuru/tsuru/provision"
+	appTypes "github.com/tsuru/tsuru/types/app"
+	apiv1 "k8s.io/api/core/v1"
+	"k8s.io/apimachinery/pkg/labels"
+	knet "k8s.io/apimachinery/pkg/util/net"
+)
+
+const (
+	logLineTimeSeparator = " "
+	logWatchBufferSize   = 1000
+)
+
+var (
+	watchNewPodsInterval = time.Second * 10
+	watchTimeout         = time.Hour
+)
+
+func (p *kubernetesProvisioner) ListLogs(app appTypes.App, args appTypes.ListLogArgs) ([]appTypes.Applog, error) {
+	clusterClient, err := clusterForPool(app.GetPool())
+	if err != nil {
+		return nil, err
+	}
+	if !clusterClient.LogsFromAPIServerEnabled() {
+		return nil, provision.ErrLogsUnavailable
+	}
+	clusterController, err := getClusterController(p, clusterClient)
+	if err != nil {
+		return nil, err
+	}
+
+	ns, err := clusterClient.AppNamespace(app)
+	if err != nil {
+		return nil, err
+	}
+
+	podInformer, err := clusterController.getPodInformer()
+	if err != nil {
+		return nil, err
+	}
+
+	pods, err := podInformer.Lister().Pods(ns).List(listPodsSelectorForLog(args))
+	return listLogsFromPods(clusterClient, ns, pods, args)
+}
+
+func (p *kubernetesProvisioner) WatchLogs(app appTypes.App, args appTypes.ListLogArgs) (appTypes.LogWatcher, error) {
+	clusterClient, err := clusterForPool(app.GetPool())
+	if err != nil {
+		return nil, err
+	}
+	if !clusterClient.LogsFromAPIServerEnabled() {
+		return nil, provision.ErrLogsUnavailable
+	}
+	clusterClient.SetTimeout(watchTimeout)
+
+	clusterController, err := getClusterController(p, clusterClient)
+	if err != nil {
+		return nil, err
+	}
+
+	ns, err := clusterClient.AppNamespace(app)
+	if err != nil {
+		return nil, err
+	}
+
+	podInformer, err := clusterController.getPodInformer()
+	if err != nil {
+		return nil, err
+	}
+
+	selector := listPodsSelectorForLog(args)
+	pods, err := podInformer.Lister().Pods(ns).List(selector)
+	if err != nil {
+		return nil, err
+	}
+	watchingPods := map[string]bool{}
+	ctx, done := context.WithCancel(context.Background())
+	watcher := &k8sLogsNotifier{
+		ctx:           ctx,
+		ch:            make(chan appTypes.Applog, logWatchBufferSize),
+		ns:            ns,
+		clusterClient: clusterClient,
+		done:          done,
+	}
+	for _, pod := range pods {
+		if pod.Status.Phase == apiv1.PodPending {
+			continue
+		}
+		watchingPods[pod.ObjectMeta.Name] = true
+		go watcher.watchPod(pod, false)
+	}
+
+	podListener := &podListener{
+		OnEvent: func(pod *apiv1.Pod) {
+			_, alreadyWatching := watchingPods[pod.ObjectMeta.Name]
+
+			if !alreadyWatching && pod.Status.Phase != apiv1.PodPending {
+				watchingPods[pod.ObjectMeta.Name] = true
+				go watcher.watchPod(pod, true)
+			}
+		},
+	}
+
+	clusterController.addPodListener(args.AppName, podListener)

Instead of calling addPodListener with a *podListener what do you think about having it declared as:

func (controller) addPodListener(appName string, listener interface{ OnEvent(*apiv1.Pod) })

And then implementing the OnEvent handler in the k8sLogsNotifier itself. This way we would avoid this closure here and simply call:

clusterController.addPodListener(args.appName, watcher)
wpjunior

comment created time in 22 days

Pull request review comment tsuru/tsuru

Stream logs from kubernetes api-server

+// Copyright 2020 tsuru authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package kubernetes
+
+import (
+	"bufio"
+	"context"
+	"fmt"
+	"sort"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/tsuru/tsuru/provision"
+	appTypes "github.com/tsuru/tsuru/types/app"
+	apiv1 "k8s.io/api/core/v1"
+	"k8s.io/apimachinery/pkg/labels"
+	knet "k8s.io/apimachinery/pkg/util/net"
+)
+
+const (
+	logLineTimeSeparator = " "
+	logWatchBufferSize   = 1000
+)
+
+var (
+	watchNewPodsInterval = time.Second * 10
+	watchTimeout         = time.Hour
+)
+
+func (p *kubernetesProvisioner) ListLogs(app appTypes.App, args appTypes.ListLogArgs) ([]appTypes.Applog, error) {
+	clusterClient, err := clusterForPool(app.GetPool())
+	if err != nil {
+		return nil, err
+	}
+	if !clusterClient.LogsFromAPIServerEnabled() {
+		return nil, provision.ErrLogsUnavailable
+	}
+	clusterController, err := getClusterController(p, clusterClient)
+	if err != nil {
+		return nil, err
+	}
+
+	ns, err := clusterClient.AppNamespace(app)
+	if err != nil {
+		return nil, err
+	}
+
+	podInformer, err := clusterController.getPodInformer()
+	if err != nil {
+		return nil, err
+	}
+
+	pods, err := podInformer.Lister().Pods(ns).List(listPodsSelectorForLog(args))
+	return listLogsFromPods(clusterClient, ns, pods, args)
+}
+
+func (p *kubernetesProvisioner) WatchLogs(app appTypes.App, args appTypes.ListLogArgs) (appTypes.LogWatcher, error) {
+	clusterClient, err := clusterForPool(app.GetPool())
+	if err != nil {
+		return nil, err
+	}
+	if !clusterClient.LogsFromAPIServerEnabled() {
+		return nil, provision.ErrLogsUnavailable
+	}
+	clusterClient.SetTimeout(watchTimeout)
+
+	clusterController, err := getClusterController(p, clusterClient)
+	if err != nil {
+		return nil, err
+	}
+
+	ns, err := clusterClient.AppNamespace(app)
+	if err != nil {
+		return nil, err
+	}
+
+	podInformer, err := clusterController.getPodInformer()
+	if err != nil {
+		return nil, err
+	}
+
+	selector := listPodsSelectorForLog(args)
+	pods, err := podInformer.Lister().Pods(ns).List(selector)
+	if err != nil {
+		return nil, err
+	}
+	watchingPods := map[string]bool{}
+	ctx, done := context.WithCancel(context.Background())
+	watcher := &k8sLogsNotifier{
+		ctx:           ctx,
+		ch:            make(chan appTypes.Applog, logWatchBufferSize),
+		ns:            ns,
+		clusterClient: clusterClient,
+		done:          done,
+	}
+	for _, pod := range pods {
+		if pod.Status.Phase == apiv1.PodPending {
+			continue
+		}
+		watchingPods[pod.ObjectMeta.Name] = true
+		go watcher.watchPod(pod, false)
+	}
+
+	podListener := &podListener{
+		OnEvent: func(pod *apiv1.Pod) {
+			_, alreadyWatching := watchingPods[pod.ObjectMeta.Name]
+
+			if !alreadyWatching && pod.Status.Phase != apiv1.PodPending {
+				watchingPods[pod.ObjectMeta.Name] = true
+				go watcher.watchPod(pod, true)
+			}
+		},
+	}
+
+	clusterController.addPodListener(args.AppName, podListener)
+	go func() {
+		<-ctx.Done()
+		clusterController.removePodListener(args.AppName, podListener)
+	}()
+
+	return watcher, nil
+}
+
+func listLogsFromPods(clusterClient *ClusterClient, ns string, pods []*apiv1.Pod, args appTypes.ListLogArgs) ([]appTypes.Applog, error) {
+	var wg sync.WaitGroup
+	wg.Add(len(pods))
+
+	errs := make([]error, len(pods))
+	logs := make([][]appTypes.Applog, len(pods))
+	tailLimit := tailLines(args.Limit)
+	if args.Limit == 0 {
+		tailLimit = tailLines(100)
+	}
+
+	for index, pod := range pods {
+		go func(index int, pod *apiv1.Pod) {
+			defer wg.Done()
+
+			request := clusterClient.CoreV1().Pods(ns).GetLogs(pod.ObjectMeta.Name, &apiv1.PodLogOptions{
+				TailLines:  tailLimit,
+				Timestamps: true,
+			})
+			stream, err := request.Stream()
+			if err != nil {
+				errs[index] = err
+				return
+			}
+
+			appName := pod.ObjectMeta.Labels["tsuru.io/app-name"]
+			appProcess := pod.ObjectMeta.Labels["tsuru.io/app-process"]
+
+			scanner := bufio.NewScanner(stream)
+			tsuruLogs := make([]appTypes.Applog, 0)
+			for scanner.Scan() {
+				line := scanner.Text()
+
+				if len(line) == 0 {
+					continue
+				}
+				tsuruLog := parsek8sLogLine(line)
+				tsuruLog.Unit = pod.ObjectMeta.Name
+				tsuruLog.AppName = appName
+				tsuruLog.Source = appProcess
+				tsuruLogs = append(tsuruLogs, tsuruLog)
+			}
+
+			if err := scanner.Err(); err != nil && !knet.IsProbableEOF(err) {
+				errs[index] = err
+			}
+
+			logs[index] = tsuruLogs
+		}(index, pod)
+	}
+
+	wg.Wait()
+
+	unifiedLog := []appTypes.Applog{}
+	for _, podLogs := range logs {
+		unifiedLog = append(unifiedLog, podLogs...)
+	}
+
+	sort.Slice(unifiedLog, func(i, j int) bool { return unifiedLog[i].Date.Before(unifiedLog[j].Date) })
+
+	for index, err := range errs {
+		if err == nil {
+			continue
+		}
+
+		pod := pods[index]
+		appName := pod.ObjectMeta.Labels["tsuru.io/app-name"]
+
+		unifiedLog = append(unifiedLog, errToLog(pod.ObjectMeta.Name, appName, err))
+	}
+
+	return unifiedLog, nil
+}
+
+func listPodsSelectorForLog(args appTypes.ListLogArgs) labels.Selector {
+	return labels.SelectorFromSet(labels.Set(map[string]string{
+		"tsuru.io/app-name": args.AppName,
+	}))
+}
+
+func parsek8sLogLine(line string) (appLog appTypes.Applog) {
+	parts := strings.SplitN(line, logLineTimeSeparator, 2)
+
+	if len(parts) < 2 {
+		appLog.Message = string(line)
+		return
+	}
+	appLog.Date, _ = parseRFC3339(parts[0])
+	appLog.Message = parts[1]
+
+	return
+}
+
+func parseRFC3339(s string) (time.Time, error) {
+	if t, timeErr := time.Parse(time.RFC3339Nano, s); timeErr == nil {
+		return t, nil
+	}
+	return time.Parse(time.RFC3339, s)
+}
+
+func tailLines(i int) *int64 {
+	b := int64(i)
+	return &b
+}
+
+func errToLog(podName, appName string, err error) appTypes.Applog {
+	return appTypes.Applog{
+		Date:    time.Now().UTC(),
+		Message: fmt.Sprintf("Could not get logs from unit: %s, error: %s", podName, err.Error()),
+		Unit:    "apiserver",
+		AppName: appName,
+		Source:  "kubernetes",
+	}
+}
+
+func infoToLog(appName string, message string) appTypes.Applog {
+	return appTypes.Applog{
+		Date:    time.Now().UTC(),
+		Message: message,
+		Unit:    "apiserver",
+		AppName: appName,
+		Source:  "kubernetes",
+	}
+}
+
+type k8sLogsNotifier struct {
+	ctx  context.Context
+	ch   chan appTypes.Applog
+	ns   string
+	wg   sync.WaitGroup
+	once sync.Once
+	done context.CancelFunc
+
+	clusterClient *ClusterClient
+}
+
+func (k *k8sLogsNotifier) watchPod(pod *apiv1.Pod, notify bool) {
+	k.wg.Add(1)

This is dangerous and could cause unexpected behavior. Waitgroups have one weird quirk, according to the docs:

Note that calls with a positive delta that occur when the counter is zero must happen before a Wait.

Since this is called in a goroutine there's a chance that this code will only be executed after the call to Wait(). This would cause the Wait() call to return immediately and assume everything has finished.

I think it would be safer to increment the waitgroup in the caller, before starting the goroutine.
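A minimal sketch of the safe pattern being suggested (illustrative, not the PR's code): the Add happens in the caller, so Wait can never observe a zero counter while work is still being scheduled.

package main

import "sync"

// watchAll bumps the WaitGroup before each goroutine starts, which
// guarantees the Add happens-before both the goroutine body and Wait.
func watchAll(items []int, watch func(int)) {
	var wg sync.WaitGroup
	for _, it := range items {
		wg.Add(1) // safe: never racing against Wait below
		go func(it int) {
			defer wg.Done()
			watch(it)
		}(it)
	}
	wg.Wait()
}

func main() {
	watchAll([]int{1, 2, 3}, func(int) {})
}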

wpjunior

comment created time in 22 days

Pull request review comment tsuru/tsuru

Stream logs from kubernetes api-server

+// Copyright 2020 tsuru authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package kubernetes
+
+import (
+	"bufio"
+	"context"
+	"fmt"
+	"sort"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/tsuru/tsuru/provision"
+	appTypes "github.com/tsuru/tsuru/types/app"
+	apiv1 "k8s.io/api/core/v1"
+	"k8s.io/apimachinery/pkg/labels"
+	knet "k8s.io/apimachinery/pkg/util/net"
+)
+
+const (
+	logLineTimeSeparator = " "
+	logWatchBufferSize   = 1000
+)
+
+var (
+	watchNewPodsInterval = time.Second * 10
+	watchTimeout         = time.Hour
+)
+
+func (p *kubernetesProvisioner) ListLogs(app appTypes.App, args appTypes.ListLogArgs) ([]appTypes.Applog, error) {
+	clusterClient, err := clusterForPool(app.GetPool())
+	if err != nil {
+		return nil, err
+	}
+	if !clusterClient.LogsFromAPIServerEnabled() {
+		return nil, provision.ErrLogsUnavailable
+	}
+	clusterController, err := getClusterController(p, clusterClient)
+	if err != nil {
+		return nil, err
+	}
+
+	ns, err := clusterClient.AppNamespace(app)
+	if err != nil {
+		return nil, err
+	}
+
+	podInformer, err := clusterController.getPodInformer()
+	if err != nil {
+		return nil, err
+	}
+
+	pods, err := podInformer.Lister().Pods(ns).List(listPodsSelectorForLog(args))
+	return listLogsFromPods(clusterClient, ns, pods, args)
+}
+
+func (p *kubernetesProvisioner) WatchLogs(app appTypes.App, args appTypes.ListLogArgs) (appTypes.LogWatcher, error) {
+	clusterClient, err := clusterForPool(app.GetPool())
+	if err != nil {
+		return nil, err
+	}
+	if !clusterClient.LogsFromAPIServerEnabled() {
+		return nil, provision.ErrLogsUnavailable
+	}
+	clusterClient.SetTimeout(watchTimeout)
+
+	clusterController, err := getClusterController(p, clusterClient)
+	if err != nil {
+		return nil, err
+	}
+
+	ns, err := clusterClient.AppNamespace(app)
+	if err != nil {
+		return nil, err
+	}
+
+	podInformer, err := clusterController.getPodInformer()
+	if err != nil {
+		return nil, err
+	}
+
+	selector := listPodsSelectorForLog(args)
+	pods, err := podInformer.Lister().Pods(ns).List(selector)
+	if err != nil {
+		return nil, err
+	}
+	watchingPods := map[string]bool{}
+	ctx, done := context.WithCancel(context.Background())
+	watcher := &k8sLogsNotifier{
+		ctx:           ctx,
+		ch:            make(chan appTypes.Applog, logWatchBufferSize),
+		ns:            ns,
+		clusterClient: clusterClient,
+		done:          done,
+	}
+	for _, pod := range pods {
+		if pod.Status.Phase == apiv1.PodPending {
+			continue
+		}
+		watchingPods[pod.ObjectMeta.Name] = true
+		go watcher.watchPod(pod, false)
+	}
+
+	podListener := &podListener{
+		OnEvent: func(pod *apiv1.Pod) {
+			_, alreadyWatching := watchingPods[pod.ObjectMeta.Name]
+
+			if !alreadyWatching && pod.Status.Phase != apiv1.PodPending {
+				watchingPods[pod.ObjectMeta.Name] = true
+				go watcher.watchPod(pod, true)
+			}
+		},
+	}
+
+	clusterController.addPodListener(args.AppName, podListener)
+	go func() {
+		<-ctx.Done()
+		clusterController.removePodListener(args.AppName, podListener)
+	}()
+
+	return watcher, nil
+}
+
+func listLogsFromPods(clusterClient *ClusterClient, ns string, pods []*apiv1.Pod, args appTypes.ListLogArgs) ([]appTypes.Applog, error) {
+	var wg sync.WaitGroup
+	wg.Add(len(pods))
+
+	errs := make([]error, len(pods))
+	logs := make([][]appTypes.Applog, len(pods))
+	tailLimit := tailLines(args.Limit)
+	if args.Limit == 0 {
+		tailLimit = tailLines(100)
+	}
+
+	for index, pod := range pods {
+		go func(index int, pod *apiv1.Pod) {
+			defer wg.Done()
+
+			request := clusterClient.CoreV1().Pods(ns).GetLogs(pod.ObjectMeta.Name, &apiv1.PodLogOptions{
+				TailLines:  tailLimit,
+				Timestamps: true,
+			})
+			stream, err := request.Stream()
+			if err != nil {
+				errs[index] = err
+				return
+			}
+
+			appName := pod.ObjectMeta.Labels["tsuru.io/app-name"]
+			appProcess := pod.ObjectMeta.Labels["tsuru.io/app-process"]
+
+			scanner := bufio.NewScanner(stream)
+			tsuruLogs := make([]appTypes.Applog, 0)
+			for scanner.Scan() {
+				line := scanner.Text()
+
+				if len(line) == 0 {
+					continue
+				}
+				tsuruLog := parsek8sLogLine(line)
+				tsuruLog.Unit = pod.ObjectMeta.Name
+				tsuruLog.AppName = appName
+				tsuruLog.Source = appProcess
+				tsuruLogs = append(tsuruLogs, tsuruLog)
+			}
+
+			if err := scanner.Err(); err != nil && !knet.IsProbableEOF(err) {
+				errs[index] = err
+			}
+
+			logs[index] = tsuruLogs
+		}(index, pod)
+	}
+
+	wg.Wait()
+
+	unifiedLog := []appTypes.Applog{}
+	for _, podLogs := range logs {
+		unifiedLog = append(unifiedLog, podLogs...)
+	}
+
+	sort.Slice(unifiedLog, func(i, j int) bool { return unifiedLog[i].Date.Before(unifiedLog[j].Date) })
+
+	for index, err := range errs {
+		if err == nil {
+			continue
+		}
+
+		pod := pods[index]
+		appName := pod.ObjectMeta.Labels["tsuru.io/app-name"]
+
+		unifiedLog = append(unifiedLog, errToLog(pod.ObjectMeta.Name, appName, err))
+	}
+
+	return unifiedLog, nil
+}
+
+func listPodsSelectorForLog(args appTypes.ListLogArgs) labels.Selector {
+	return labels.SelectorFromSet(labels.Set(map[string]string{
+		"tsuru.io/app-name": args.AppName,

We must use args.Source if set to filter by tsuru.io/app-process here.
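The suggested selector would look something like this (a sketch of the listPodsSelectorForLog function quoted above, assuming args.Source maps to the tsuru.io/app-process label value):

func listPodsSelectorForLog(args appTypes.ListLogArgs) labels.Selector {
	set := labels.Set{"tsuru.io/app-name": args.AppName}
	if args.Source != "" {
		// filter by process when the caller asked for a specific source
		set["tsuru.io/app-process"] = args.Source
	}
	return labels.SelectorFromSet(set)
}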

wpjunior

comment created time in 22 days

Pull request review comment tsuru/tsuru

Stream logs from kubernetes api-server

+// Copyright 2020 tsuru authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package kubernetes
+
+import (
+	"bufio"
+	"context"
+	"fmt"
+	"sort"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/tsuru/tsuru/provision"
+	appTypes "github.com/tsuru/tsuru/types/app"
+	apiv1 "k8s.io/api/core/v1"
+	"k8s.io/apimachinery/pkg/labels"
+	knet "k8s.io/apimachinery/pkg/util/net"
+)
+
+const (
+	logLineTimeSeparator = " "
+	logWatchBufferSize   = 1000
+)
+
+var (
+	watchNewPodsInterval = time.Second * 10
+	watchTimeout         = time.Hour
+)
+
+func (p *kubernetesProvisioner) ListLogs(app appTypes.App, args appTypes.ListLogArgs) ([]appTypes.Applog, error) {
+	clusterClient, err := clusterForPool(app.GetPool())
+	if err != nil {
+		return nil, err
+	}
+	if !clusterClient.LogsFromAPIServerEnabled() {
+		return nil, provision.ErrLogsUnavailable
+	}
+	clusterController, err := getClusterController(p, clusterClient)
+	if err != nil {
+		return nil, err
+	}
+
+	ns, err := clusterClient.AppNamespace(app)
+	if err != nil {
+		return nil, err
+	}
+
+	podInformer, err := clusterController.getPodInformer()
+	if err != nil {
+		return nil, err
+	}
+
+	pods, err := podInformer.Lister().Pods(ns).List(listPodsSelectorForLog(args))

Also, we must filter pods by their IDs using args.Units here.
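For example (a hypothetical helper, using only the apiv1 import already present in the diff):

package kubernetes

import apiv1 "k8s.io/api/core/v1"

// filterPodsByUnits keeps only the pods whose names appear in units;
// an empty units list means no filtering. Sketch of the suggestion,
// not the PR's code.
func filterPodsByUnits(pods []*apiv1.Pod, units []string) []*apiv1.Pod {
	if len(units) == 0 {
		return pods
	}
	wanted := make(map[string]bool, len(units))
	for _, u := range units {
		wanted[u] = true
	}
	filtered := make([]*apiv1.Pod, 0, len(pods))
	for _, pod := range pods {
		if wanted[pod.ObjectMeta.Name] {
			filtered = append(filtered, pod)
		}
	}
	return filtered
}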

wpjunior

comment created time in 22 days

Pull request review comment tsuru/tsuru

Stream logs from kubernetes api-server

+// Copyright 2020 tsuru authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package kubernetes
+
+import (
+	"bufio"
+	"context"
+	"fmt"
+	"sort"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/tsuru/tsuru/provision"
+	appTypes "github.com/tsuru/tsuru/types/app"
+	apiv1 "k8s.io/api/core/v1"
+	"k8s.io/apimachinery/pkg/labels"
+	knet "k8s.io/apimachinery/pkg/util/net"
+)
+
+const (
+	logLineTimeSeparator = " "
+	logWatchBufferSize   = 1000
+)
+
+var (
+	watchNewPodsInterval = time.Second * 10

unused

wpjunior

comment created time in 22 days

Pull request review comment tsuru/tsuru

Stream logs from kubernetes api-server

+// Copyright 2020 tsuru authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package kubernetes
+
+import (
+	"bufio"
+	"context"
+	"fmt"
+	"sort"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/tsuru/tsuru/provision"
+	appTypes "github.com/tsuru/tsuru/types/app"
+	apiv1 "k8s.io/api/core/v1"
+	"k8s.io/apimachinery/pkg/labels"
+	knet "k8s.io/apimachinery/pkg/util/net"
+)
+
+const (
+	logLineTimeSeparator = " "
+	logWatchBufferSize   = 1000
+)
+
+var (
+	watchNewPodsInterval = time.Second * 10
+	watchTimeout         = time.Hour
+)
+
+func (p *kubernetesProvisioner) ListLogs(app appTypes.App, args appTypes.ListLogArgs) ([]appTypes.Applog, error) {
+	clusterClient, err := clusterForPool(app.GetPool())
+	if err != nil {
+		return nil, err
+	}
+	if !clusterClient.LogsFromAPIServerEnabled() {
+		return nil, provision.ErrLogsUnavailable
+	}
+	clusterController, err := getClusterController(p, clusterClient)
+	if err != nil {
+		return nil, err
+	}
+
+	ns, err := clusterClient.AppNamespace(app)
+	if err != nil {
+		return nil, err
+	}
+
+	podInformer, err := clusterController.getPodInformer()
+	if err != nil {
+		return nil, err
+	}
+
+	pods, err := podInformer.Lister().Pods(ns).List(listPodsSelectorForLog(args))

Missing err handling.

wpjunior

comment created time in 22 days

push event tsuru/tsuru

Manoel Domingues Junior

commit sha a8c4752b5ff99f1d576b609c54a1c877804a1a0c

docs: add more information about planb configuration

view details

push time in 22 days

PR merged tsuru/tsuru

docs: add more information about planb configuration

This PR adds more information about how to configure planb on systemd based hosts.

This update is necessary because FPM (used at https://github.com/tsuru/planb/blob/master/misc/fpm_recipe.rb) doesn't pass an environment variable as a parameter.

Changing the FPM configuration doesn't have the expected result. When the FPM script's pleaserun.input is altered to include $PLANB_OPTS, the result is escaped in the systemd file.

pleaserun.input ["/usr/bin/planb", "$PLANB_OPTS"]

Generates at /etc/systemd/system/planb.service:

ExecStart=/usr/bin/planb "\$PLANB_OPTS"

Thus, as it is not possible to generate an /etc/systemd/system/planb.service file where ExecStart uses the $PLANB_OPTS environment variable, the documentation has been updated to state that the user must make this change manually.

pleaserun has an open issue about this behavior: https://github.com/jordansissel/pleaserun/issues/124
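For reference, the manual edit described in the docs amounts to something like this unit-file excerpt (illustrative; the environment-file path is an assumption):

# /etc/systemd/system/planb.service
[Service]
EnvironmentFile=/etc/default/planb
# unquoted, so systemd word-splits the variable into separate arguments
ExecStart=/usr/bin/planb $PLANB_OPTS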

+6 -0

0 comment

1 changed file

mdjunior

pr closed time in 22 days

push event tsuru/tsuru

Cezar Sa Espinola

commit sha 0771f1049276b44cdb0803a0e9f47fa9cc6ac81a

router: Read router definitions from database

view details

Cezar Sa Espinola

commit sha ac01d663985aa39cb5a40c1a6628d6b45662a208

Add API routes to manipulate router templates

view details

Cezar Sa Espinola

commit sha 267a127fe465f74133bf9d0c214c1f26c1602ce9

Rename RouterTemplate to DynamicRouter

view details

push time in 22 days

PR merged tsuru/tsuru

Support dynamically loading router definitions from database
+1527 -241

0 comment

44 changed files

cezarsa

pr closed time in 22 days

push event cezarsa/sousvide

Cezar Sa Espinola

commit sha fe30cbfd75e937ce8aafe341f49561e1da7ec9a0

Use mean input in temperature controller

view details

push time in 23 days

create branch cezarsa/sousvide

branch : temp

created branch time in a month

pull request comment tsuru/tsuru

fix(api/service): support for force removal of service instances

Yeah, I think the client could go with --force-unbind and --force-removal.

nettoclaudio

comment created time in a month

pull request comment tsuru/tsuru

fix(api/service): support for force removal of service instances

Maybe we're mixing two different things here. I think the existing request parameter unbindall should only mean that we'll unbind all apps from the instance first. Assuming that we'll also ignore service API errors just because unbindall is set can be misleading. What do you think about having a separate request parameter (something like ignoreerrors) and using it to control ignoring service API errors?

nettoclaudio

comment created time in a month

Pull request review comment tsuru/tsuru

fix(provision/kubernetes): fix partial rollback failure that leaves different apps versions after an unsuccessful deploy

 func RunServicePipeline(manager ServiceManager, oldVersion appTypes.AppVersion,
 }
 
 func rollbackAddedProcesses(args *pipelineArgs, processes map[string]*labelReplicas) {
+	var w io.Writer = os.Stderr
+	if args.event != nil {
+		w = args.event
+	}
+	hasOldVersion := args.oldVersion != nil
+	errors := tsuruErrors.NewMultiError()
 	for processName, oldLabels := range processes {
-		var err error
-
-		if args.oldVersion == nil || oldLabels.labels == nil {
-			err = args.manager.RemoveService(args.app, processName, args.newVersion)
-		} else {
-			err = args.manager.DeployService(context.Background(), args.app, processName, oldLabels.labels, oldLabels.realReplicas, args.oldVersion, args.preserveVersions)
+		if !hasOldVersion || oldLabels.labels == nil {
+			if err := args.manager.RemoveService(args.app, processName, args.newVersion); err != nil {
+				errors.Add(fmt.Errorf("error removing service for %s[%s] [version %d]: %+v", args.app.GetName(), processName, args.newVersion.Version(), err))
+			}
+			hasOldVersion = false
+			continue
 		}
-
+		err := args.manager.DeployService(context.Background(), args.app, processName, oldLabels.labels, oldLabels.realReplicas, args.oldVersion, args.preserveVersions)
 		if err != nil {
-			log.Errorf("error rolling back updated service for %s[%s]: %+v", args.app.GetName(), processName, err)
+			errors.Add(fmt.Errorf("error rolling back updated service for %s[%s] [version %d]: %+v", args.app.GetName(), processName, args.oldVersion.Version(), err))
 		}
 	}
+	if hasOldVersion {
+		if err := args.manager.CleanupServices(args.app, args.oldVersion, args.preserveVersions); err != nil {
+			errors.Add(fmt.Errorf("error cleaning up services after rollback: %+v", err))
+		}
+	}
+	if errors.ToError() != nil {
+		fmt.Fprintf(w, "some errors occured when rolling back services: %+v", errors)

What about returning this error and using another MultiError to include it in the error returned by the updateServices action? It would make it easier for us to find future problems caused by failed rollbacks.
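A sketch of that suggestion (hypothetical helper and package name; the MultiError usage mirrors the diff above):

package servicecommon

import (
	tsuruErrors "github.com/tsuru/tsuru/errors"
)

// combineActionError wraps the rollback failure together with the
// original action error, so callers of updateServices see both.
func combineActionError(actionErr, rollbackErr error) error {
	if rollbackErr == nil {
		return actionErr
	}
	multi := tsuruErrors.NewMultiError()
	if actionErr != nil {
		multi.Add(actionErr)
	}
	multi.Add(rollbackErr)
	return multi.ToError()
}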

nettoclaudio

comment created time in a month

Pull request review comment tsuru/tsuru

provision/kubernetes: add option to use entire cluster as a single pool

 func (p *kubernetesProvisioner) InitializeCluster(c *provTypes.Cluster) error {
 }
 
 func (p *kubernetesProvisioner) ValidateCluster(c *provTypes.Cluster) error {
+	if _, ok := c.CustomData[singlePoolKey]; ok && len(c.Pools) > 1 {

Checking for len(c.Pools) != 1 would be safer because it wouldn't allow default clusters with a single pool annotation.
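That is, the guard on the quoted line would become (sketch; the error message is an assumption):

if _, ok := c.CustomData[singlePoolKey]; ok && len(c.Pools) != 1 {
	// rejects both zero pools (default cluster) and multiple pools
	return errors.New("a cluster with the single pool option must be bound to exactly one pool")
}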

morpheu

comment created time in a month

Pull request review comment tsuru/tsuru

provision/kubernetes: add option to use entire cluster as a single pool

 func (m *nodeContainerManager) deployNodeContainerForCluster(client *ClusterClie
 			},
 		},
 	}
+	singlePool, err := client.SinglePool()
+	if err != nil {
+		return err
+	}
+	if singlePool && pool != "" {

Did you consider not creating the -all DS at all? We could do something like:

if singlePool {
	if pool == "" {
		return nil
	}
	affinity = &apiv1.Affinity{}
}

I think this would make it easier to debug the cluster because we wouldn't have an extra DaemonSet that would never match any node.

morpheu

comment created time in a month

push event cezarsa/tsuru

Cezar Sa Espinola

commit sha 3618ade451720089f151e5f15d82b41241bb82c3

Rename RouterTemplate to DynamicRouter

view details

push time in a month

push event cezarsa/tsuru

Cezar Sa Espinola

commit sha 69fd2755cf76620b4742159cc1ecd9f2c721ad93

Rename RouterTemplate to DynamicRouter

view details

push time in a month

push event cezarsa/tsuru

Cezar Sa Espinola

commit sha 28696560271cf2c2edaf42b689a5f68a8b375a9f

Rename RouterTemplate to DynamicRouter

view details

push time in a month

push event cezarsa/tsuru

Cezar Sa Espinola

commit sha 745496f7c53c94f8f48cec5f2ff1a0a427a1c0af

Rename RouterTemplate to DynamicRouter

view details

push time in a month

push event cezarsa/tsuru

Cezar Sa Espinola

commit sha eb9f9dc568bb18847075e35019241de74d9244ad

Rename RouterTemplate to DynamicRouter

view details

push time in a month

push event cezarsa/tsuru

Cezar Sa Espinola

commit sha d0590db5a4c0c6a544d19a0656deb5770bc747de

Add API routes to manipulate router templates

view details

push time in a month

push event cezarsa/tsuru

Cezar Sa Espinola

commit sha 1b7a81849d0757a1025b476be445b42b84a46e2d

Add API routes to manipulate router templates

view details

push time in a month

push event cezarsa/sousvide

Cezar Sa Espinola

commit sha 9763f71b0b45d3f1e202f0d6e6868f88c58d359d

Remove unused lib

view details

Cezar Sa Espinola

commit sha 3ffcd5a116b2d8c0a1c0f3b348acf9ab3132648f

Add external nonstandard lib as submodule

view details

push time in a month

push event cezarsa/sousvide

Cezar Sa Espinola

commit sha 89f942658ab00fb84038fd83780b753b99a05cc4

Greatly improved performance and input from rotary encoder

view details

push time in a month

push event cezarsa/sousvide

Cezar Sa Espinola

commit sha 8bdfd5775741727f527b9e47d5a0264e5cecc063

Rename controller class

view details

Cezar Sa Espinola

commit sha 968ed9a53bbbfc418ceabf003d6d5db008c2306c

enable some extra warnings

view details

push time in a month

push event cezarsa/tsuru

Cezar Sa Espinola

commit sha 87d79b43585b816a2cf54f98124f3cb007a9823c

Add API routes to manipulate router templates

view details

push time in a month

push event cezarsa/tsuru

Cezar Sa Espinola

commit sha 2d223ad7b2fa86414845ca493413a3f8eea35303

Add API routes to manipulate router templates

view details

push time in a month

push event cezarsa/tsuru

Cezar Sa Espinola

commit sha d6beff1a645f7282c203fab1cfda44b57b447813

router: Read router definitions from database

view details

push time in a month

delete branch tsuru/tsuru

delete branch : dynamicrouter

delete time in a month

create branch tsuru/tsuru

branch : dynamicrouter

created branch time in a month

started robsoncouto/arduino-songs

started time in a month

push event cezarsa/tsuru

Cezar Sa Espinola

commit sha 21bf743512636501da24e41fdf1a20aac3965d9c

router: Read router definitions from database

view details

push time in a month

PR opened tsuru/tsuru

WIP: Support dynamically loading router definitions from database
+615 -217

0 comment

29 changed files

pr created time in a month

create branch cezarsa/tsuru

branch : dynamicrouter

created branch time in a month

push event tsuru/custom-cloudstack-ccm

Cezar Sa Espinola

commit sha 666d3046fa0ccde16621f58afc49d7b1d663db0b

ci: push branch to docker hub

view details

push time in a month

PR opened tsuru/custom-cloudstack-ccm

WIP: Allow default project-id and environment

This change makes it easier to use this provider when the entire cluster is using a single cloudstack project in a single cloudstack environment.

The environment and project id labels in each node are no longer required in this situation.

+39 -16

0 comment

3 changed files

pr created time in a month

create branch tsuru/custom-cloudstack-ccm

branch : single-project

created branch time in a month

push event cezarsa/sousvide

Cezar Sa Espinola

commit sha a461bbb9e31637867ac5f84761ff5e6ade528938

adjust control timings

view details

push time in a month
