Viktor Stanchev vikstrous @docker San Francisco viktorstanchev.com

txbits/txbits 193

DISCLAIMER: TxBits is not affiliated with any active exchanges. Use them at your own risk and beware of any that violate the AGPL license terms by not releasing their source code as required.

bgirard/Gecko-Profiler-Addon 53

Addon to control the Gecko Built-in Profiler

riyazdf/dockercon-workshop 32

Dockercon 2016 Security Workshop

bgirard/cleopatra 27

UI for the gecko profiler

txbits/TxBitsDeployer 8

An Ansible project to help deploy TxBits in production.

nginx-modules/nginx_upstream_check_module 5

Health checks for nginx upstreams

apetresc/Xindle 3

An arXiv.org client for the Kindle

txbits/TxBitsDocker 1

(WIP) a config to deploy TxBits with Docker Compose

pull request comment marcelcorso/gcloud-pubsub-emulator

Configure startup timeout

I ran into the same issue 😱 Bump?

pedro-carneiro

comment created time in 12 days

issue comment googleapis/google-cloud-go

storage: endpoint port is dropped

Thanks for the fix. I couldn't verify it because, both before and after the fix, no combination of options worked for connecting to a fake GCS storage backend. I'm using the following instead:

	// assumes imports: crypto/tls, net, net/http,
	// cloud.google.com/go/storage, google.golang.org/api/option
	addr := "127.0.0.1:4443"
	opt := option.WithHTTPClient(&http.Client{
		Transport: &http.Transport{
			DialTLS: func(string, string) (net.Conn, error) {
				return tls.Dial("tcp", addr, &tls.Config{
					InsecureSkipVerify: true,
				})
			},
		},
	})
	client, err := storage.NewClient(ctx, opt)
tbpg

comment created time in 15 days

issue comment cuelang/cue

How to set up per-environment configs

This proposal makes sense to me and seems simpler. It wouldn't be necessary to have separate files for different environments any more.

I assume that the first value would be the default?

This still doesn't address the problem of creating per-pull-request environments, but it's simple enough to do a post-processing step in another tool (or shell script) to fix up any constants that include the PR number.

vikstrous

comment created time in 15 days

PR closed 99designs/gqlgen

Reduce calls to packages.Load (stale)

This is one of the optimizations suggested in #918

So far it's not giving a performance improvement by itself. WIP.

The most important parts of this PR were extracted into #944 and #945.

I have:

  • [ ] Added tests covering the bug / feature (see testing)
  • [ ] Updated any relevant documentation (see docs)
+154 -72

1 comment

18 changed files

vikstrous

pr closed time in 23 days

issue comment googleapis/google-cloud-go

storage: NewReader does not use Endpoint override

I'm running into an issue with 95bbb7d. u.Hostname() is used rather than u.Host. This drops custom ports. https://github.com/googleapis/google-cloud-go/commit/95bbb7d728ee6b3f6c1873cfee71b0cb87a0f60d#diff-efc763a7d18657aa44c013131823cfc9R131

Also, I'm very surprised by "Emulators is not an intended use case for endpoint override - it's really for pointing to specific regions or zones for a service and also for sandbox/test instances.". Does that mean that there's no official way to use an emulator without setting the env var?

As much as I hate it, I tried using

	os.Setenv("STORAGE_EMULATOR_HOST", url)

but that failed to connect to the emulator for multiple reasons. The first one was that the emulator was using https rather than http. From what I can tell, there's no reliable way to connect to an emulator in the latest version of this library. https://github.com/googleapis/google-cloud-go/issues/1680#issuecomment-559135642 at least worked. I'll have to use the old workaround and old version for now.

cgeisser

comment created time in a month

pull request comment 99designs/gqlgen

Cache all packages.Load calls in a central object

btw, we have autobind on, validation off and model generation off

vektah

comment created time in a month

pull request comment 99designs/gqlgen

Cache all packages.Load calls in a central object

Here's a redacted version of what I'm seeing with your print statements. I added "MISSING" on line 47 to differentiate the two places where loads happen.

LOAD MISSING [x/models]
LOAD MISSING [github.com/99designs/gqlgen/graphql/introspection x/service1resolver x/service2resolver x/service3resolver x/service4resolver github.com/99designs/gqlgen/graphql/introspection]
LOAD github.com/99designs/gqlgen/graphql
LOAD x/redacted1
LOAD x/redacted2
LOAD x/redacted3
LOAD x/redacted4
LOAD x/redacted5
LOAD x/redacted6
LOAD x/redacted7
LOAD x/redacted8
LOAD NAMES github.com/vektah/gqlparser
LOAD MISSING [x/resolverroot x/resolverroot]

I think the big difference between multiple load calls and one big load call is the shared dependencies of the various packages. Our codebase is more tightly coupled than I would like and there's a lot of overlap between the dependencies from each call.

I did an experiment:

multiLoadTest:

package main

import "golang.org/x/tools/go/packages"

var mode = packages.NeedName |
	packages.NeedFiles |
	packages.NeedImports |
	packages.NeedTypes |
	packages.NeedSyntax |
	packages.NeedTypesInfo

func main() {
	packages.Load(&packages.Config{Mode: mode}, "redacted")
	...
	packages.Load(&packages.Config{Mode: packages.NeedName}, "github.com/vektah/gqlparser")
	packages.Load(&packages.Config{Mode: mode}, "redacted")
}

singleLoadTest:

package main

import "golang.org/x/tools/go/packages"

var mode = packages.NeedName |
	packages.NeedFiles |
	packages.NeedImports |
	packages.NeedTypes |
	packages.NeedSyntax |
	packages.NeedTypesInfo

func main() {
	_, err := packages.Load(&packages.Config{Mode: mode}, redacted)
	if err != nil {
		panic(err)
	}
}

multiLoadTest:

29.04user 15.36system 0:17.14elapsed 259%CPU (0avgtext+0avgdata 91152maxresident)k
0inputs+0outputs (1major+377223minor)pagefaults 0swaps
39.74user 20.35system 0:22.61elapsed 265%CPU (0avgtext+0avgdata 89468maxresident)k
0inputs+0outputs (1major+348847minor)pagefaults 0swaps
32.08user 17.35system 0:19.00elapsed 260%CPU (0avgtext+0avgdata 90116maxresident)k
0inputs+0outputs (1major+359011minor)pagefaults 0swaps

singleLoadTest:

6.54user 3.19system 0:03.79elapsed 256%CPU (0avgtext+0avgdata 154904maxresident)k
0inputs+0outputs (1major+109919minor)pagefaults 0swaps
6.70user 3.35system 0:03.98elapsed 252%CPU (0avgtext+0avgdata 153816maxresident)k
0inputs+0outputs (1major+102084minor)pagefaults 0swaps

The conclusion is that this PR actually makes the situation worse, not better. Could you share the results of adding the print statements for your repo? Can you try running my experiment with your set of loads?

vektah

comment created time in a month

pull request comment 99designs/gqlgen

deduplicate package load between AutoBind and Binder

New comparison vs master:

This PR:

23.82user 9.75system 0:16.15elapsed 207%CPU (0avgtext+0avgdata 189272maxresident)k
0inputs+0outputs (3major+229930minor)pagefaults 0swaps
23.14user 8.50system 0:14.69elapsed 215%CPU (0avgtext+0avgdata 191284maxresident)k
0inputs+0outputs (1major+200711minor)pagefaults 0swaps

master:

32.79user 13.08system 0:20.07elapsed 228%CPU (0avgtext+0avgdata 191624maxresident)k
0inputs+0outputs (1major+289754minor)pagefaults 0swaps
33.82user 13.26system 0:20.65elapsed 227%CPU (0avgtext+0avgdata 194580maxresident)k
0inputs+0outputs (1major+257803minor)pagefaults 0swaps

vikstrous

comment created time in a month

push event vikstrous/gqlgen

Viktor Stanchev

commit sha fbb20b9d1d76f1c01ea8175e3f3b905c08a270b2

fix loading non-packages

view details

push time in a month

push event vikstrous/gqlgen

Viktor Stanchev

commit sha 3916fe135b29343951cabdb0a7cdd49aa16b00b5

deduplicate package load between AutoBind and Binder

view details

push time in a month

pull request comment 99designs/gqlgen

Cache all packages.Load calls in a central object

I'm seeing pretty different results from you:

package-cache:

37.77user 18.94system 0:23.36elapsed 242%CPU (0avgtext+0avgdata 197848maxresident)k
0inputs+0outputs (1major+387685minor)pagefaults 0swaps
43.19user 19.77system 0:26.45elapsed 238%CPU (0avgtext+0avgdata 201328maxresident)k
0inputs+0outputs (1major+372326minor)pagefaults 0swaps

master (ae79e75bc2d8296551e8b88b7b3f8596f038ca94):

39.62user 22.05system 0:26.89elapsed 229%CPU (0avgtext+0avgdata 195872maxresident)k
0inputs+0outputs (1major+428290minor)pagefaults 0swaps
45.03user 23.43system 0:29.11elapsed 235%CPU (0avgtext+0avgdata 201928maxresident)k
0inputs+0outputs (1major+408359minor)pagefaults 0swaps

v0.10.2:

37.58user 21.98system 0:25.15elapsed 236%CPU (0avgtext+0avgdata 196608maxresident)k
0inputs+0outputs (1major+434955minor)pagefaults 0swaps
37.08user 21.68system 0:24.70elapsed 237%CPU (0avgtext+0avgdata 194676maxresident)k
0inputs+0outputs (1major+435000minor)pagefaults 0swaps

v0.10.1 (without skip_validation and without disabling model generation):

132.90user 77.27system 1:24.02elapsed 250%CPU (0avgtext+0avgdata 605844maxresident)k
0inputs+0outputs (1major+1054116minor)pagefaults 0swaps
111.52user 62.92system 1:13.44elapsed 237%CPU (0avgtext+0avgdata 187124maxresident)k
0inputs+0outputs (1major+760962minor)pagefaults 0swaps

no-double-load (rebased on master):

26.90user 10.25system 0:16.76elapsed 221%CPU (0avgtext+0avgdata 197608maxresident)k
0inputs+0outputs (1major+200472minor)pagefaults 0swaps
25.54user 10.30system 0:16.09elapsed 222%CPU (0avgtext+0avgdata 188176maxresident)k
0inputs+0outputs (1major+195789minor)pagefaults 0swaps

vektah

comment created time in a month

push event vikstrous/gqlgen

Viktor Stanchev

commit sha bd50bbcbb3d96bc168c1b5186147be14487e0cc6

single packages.Load for NameForPackage

view details

Adam

commit sha f0bea5ffcbdfbf231a6d2848b77f0e9c20288702

Allow customizing http and websocket status codes for errors

view details

Adam

commit sha 842fcc11b1481bdb04d2bd711a1c091354b7a96e

review feedback

view details

Adam

commit sha 7f6f1667bd06e4a5f18128c592ef96e01bca97b6

bump x/tools for consistent import formatting

view details

Adam Scarr

commit sha ae79e75bc2d8296551e8b88b7b3f8596f038ca94

Merge pull request #978 from 99designs/pluggable-error-code Allow customizing http and websocket status codes for errors

view details

Adam Scarr

commit sha c6b3e2a1ef220cd122ad3c2a6e25bc74c89a7a4c

Merge pull request #983 from vikstrous/name-for-package-global single packages.Load for NameForPackage

view details

Adam

commit sha 8dbce3cf161f19c132d3cf29aa95851732c7f922

Capture the time spent reading requests from the client

view details

Adam

commit sha 4dd1008659429e99e94e5da0f3401e358a16b69e

fix test race by only stubbing now where we need to

view details

Adam Scarr

commit sha aa407b1f3553ac2aee1939fbe28c85ed5cbfcdf9

Merge pull request #979 from 99designs/capture-read-times Capture read times

view details

Adam

commit sha 76035df5e63c580004440762edbf6779fe9243db

Fix intermittent websocket ka test failure

view details

Adam Scarr

commit sha ec4f6b151d4c14d704f27ae7fe341f7ad5ad4883

Merge pull request #989 from 99designs/fix-intermittent-test-ka-failure Fix intermittent websocket ka test failure

view details

Viktor Stanchev

commit sha 52f230f6963e8865cbc0d51ef153a88485241d07

deduplicate package load between AutoBind and Binder

view details

push time in a month

issue comment cuelang/cue

How to set up per-environment configs

I think the proposal does work for static environments like prod or staging.

  1. It's less obvious how it would work for dynamically generated environments, like spinning up a new cluster per pull request. I think the dynamic use case could be handled outside cue by doing a basic recursive string replacement (e.g. with sed) as a post-processing step. Do you have other ideas for how a dynamic environment might work?

  2. The import boilerplate stuff makes sense. It makes the wiring explicit and it feels pretty easy to reason about. The effect of choosing one build tag or another is equivalent to copying the imported package into the importing package. It's also very cool how for small projects you can put your environment-specific values directly in services/*.cue files.
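The post-processing step from point 1 can be sketched in Go (sed would work just as well; the `__PR_NUMBER__` placeholder and the `substitutePR` helper are made up for illustration, not part of cue):

```go
package main

import (
	"fmt"
	"strings"
)

// substitutePR fills in a per-PR placeholder after the config has been
// rendered. The placeholder name is an assumption for this sketch.
func substitutePR(rendered, prNumber string) string {
	return strings.ReplaceAll(rendered, "__PR_NUMBER__", prNumber)
}

func main() {
	rendered := `cluster: "preview-__PR_NUMBER__"`
	fmt.Println(substitutePR(rendered, "1234")) // cluster: "preview-1234"
}
```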

vikstrous

comment created time in a month

pull request comment 99designs/gqlgen

deduplicate package load between AutoBind and Binder

Rebased. New unscientific benchmark compared to master:

before:

46.94user 27.35system 0:31.48elapsed 236%CPU (0avgtext+0avgdata 193760maxresident)k
0inputs+0outputs (1major+401610minor)pagefaults 0swaps

after:

42.29user 24.44system 0:29.39elapsed 227%CPU (0avgtext+0avgdata 199044maxresident)k
0inputs+0outputs (1major+382901minor)pagefaults 0swaps

I also tested on top of #983 and saw the same 3 second improvement. That seems to be how long the package loading is taking for me.

vikstrous

comment created time in a month

push event vikstrous/gqlgen

Viktor Stanchev

commit sha 15e59e8c51442f8c65bb97d0c606637dd466389b

deduplicate package load between AutoBind and Binder

view details

push time in a month

push event vikstrous/gqlgen

Viktor Stanchev

commit sha 0f2413b23cba8ce0ee2493d4d3a962d7c977cc5b

deduplicate package load between AutoBind and Binder

view details

push time in a month

push event vikstrous/gqlgen

Adam

commit sha 249b602d487fd189787bcd3605ff4c3a459771e9

Start drafting new handler interfaces

view details

Adam

commit sha da986181d7e6ca9da2999fb62d8fbc7c33eda21f

port over the setter request context middleware

view details

Adam

commit sha 311887d6a9336c1c5f6f9a59752a94afa6be5b52

convert APQ to middleware

view details

Adam

commit sha afe241b56cd44394a2b32447f7d817a8361f909d

port over tracing

view details

Adam

commit sha d0f683034fbf877457990060a8c2423b1ccfce0d

port json post

view details

Adam

commit sha b5089cac400ddf2ffb00d75c849656731d2cb29e

Split transports into subpackage

view details

Adam

commit sha eed1515c7abeb08a00291e9ab241f88af860d8aa

Split middlware out of handler package

view details

Adam

commit sha cb99b42ed0e4974aeb1fc2d9cee43d061c1152cf

Add websocket transport

view details

Adam

commit sha 2e0c9cab65d4c6a0cd8237a1b15f55a075afb44f

mark validation and parse errors separately to execution errors

view details

Adam

commit sha a7c5e6600729012283270ad2c653de57772eba6b

build middleware graph once at startup

view details

Adam

commit sha f00e5fa0791be8e8909923711f7ebde8d2e74c15

use plugins instead of middleware so multiple hooks can be configured

view details

Adam

commit sha c3dbcf83eaa8bc865b7e482fd17f26fa5139485b

Add apollo tracing

view details

Adam

commit sha ab5665add4f1b6effe21cb1ef77f7346cad1d59c

Add result context

view details

Adam

commit sha 4a69bcd034ade82bacf6b71b4945f4917d2fdfc1

Bring operation middleware inline with other handler interfaces

view details

Adam

commit sha 72c47c985f2727ab7d9dff7f6ebaf3614c44a507

rename result handler to response handler

view details

Adam

commit sha 9d1d77e67df3fd2c75646af7b5de361b9cbe8482

split context.go into 3 files

view details

Adam

commit sha a70e93bcae24130ef3746d89afa48b23e96f4787

consistently name transports

view details

Adam

commit sha 64cfc9add38004e8741fbe3bdd5a247c61718d80

extract shared handler test server stubs

view details

Adam

commit sha aede7d1cf15f054b1762f9801337bd3e8764b54d

Add multipart from transport

view details

Adam

commit sha 0965420a4246492bbac6922da742b157ea968c29

Add query document caching

view details

push time in a month

push event vikstrous/gqlgen

Viktor Stanchev

commit sha bd50bbcbb3d96bc168c1b5186147be14487e0cc6

single packages.Load for NameForPackage

view details

push time in a month

push event vikstrous/gqlgen

Viktor Stanchev

commit sha 31341b1d6d8fe11e480d852f999a700e4453a23e

single packages.Load for NameForPackage

view details

push time in a month

push event vikstrous/gqlgen

Viktor Stanchev

commit sha 230562185e9b9a6ea867eb7712836eeff83ef7b7

single packages.Load for NameForPackage

view details

push time in a month

PR opened 99designs/gqlgen

single packages.Load for NameForPackage

I simplified this PR to make the minimum necessary changes to get the job done. This PR supersedes #944. Unfortunately, I had to use global state because the existing code was using global state. I'm willing to refactor this in a follow-up, ending up with something similar to #944, if that makes more sense.

I didn't add any new tests because existing tests already cover this change.

Non-scientific test comparing master vs this PR on our proprietary codebase:

Before:

40.92user 23.75system 0:27.99elapsed 231%CPU (0avgtext+0avgdata 197584maxresident)k
0inputs+0outputs (1major+430432minor)pagefaults 0swaps

After:

21.46user 8.37system 0:13.77elapsed 216%CPU (0avgtext+0avgdata 201152maxresident)k
0inputs+0outputs (1major+160637minor)pagefaults 0swaps

I have:

  • n/a Added tests covering the bug / feature (see testing)
  • n/a Updated any relevant documentation (see docs)
+51 -12

0 comments

5 changed files

pr created time in a month

create branch vikstrous/gqlgen

branch : name-for-package-global

created branch time in a month

issue comment renovatebot/renovate

google-beta terraform provider doesn't get upgraded

That would be ideal. I did some quick research and didn't find any information about how google-beta works. It doesn't show up as a provider in the terraform registry. It's possible that it's hardcoded somewhere as a redirect. It's still a mystery to me. I didn't see anything special about -beta in the terraform codebase.

vikstrous

comment created time in 2 months

issue opened renovatebot/renovate

google-beta terraform provider doesn't get upgraded

What Renovate type are you using?

hosted, github

Describe the bug

Other Terraform providers get upgraded (google does), but google-beta doesn't.

Did you see anything helpful in debug logs?

DEBUG: terraform-provider.getDependencies()
{
  "lookupName": "google"
}
DEBUG: terraform-provider.getDependencies()
{
  "lookupName": "google-beta"
}
DEBUG: terraform-provider.getDependencies()
{
  "lookupName": "archive"
}

...

DEBUG: Response code 404 (Not Found)
{
  "err": {
    "name": "HTTPError",
    "host": "registry.terraform.io",
    "hostname": "registry.terraform.io",
    "method": "GET",
    "path": "/v1/providers/hashicorp/google-beta",
    "protocol": "https:",
    "url": "https://registry.terraform.io/v1/providers/hashicorp/google-beta",
    "gotOptions": {
      "path": "/v1/providers/hashicorp/google-beta",
      "protocol": "https:",
      "slashes": true,
      "auth": null,
      "host": "registry.terraform.io",
      "port": null,
      "hostname": "registry.terraform.io",
      "hash": null,
      "search": null,
      "query": null,
      "pathname": "/v1/providers/hashicorp/google-beta",
      "href": "https://registry.terraform.io/v1/providers/hashicorp/google-beta",
      "headers": {
        "user-agent": "Renovate Bot (GitHub App 2740)",
        "accept": "application/json",
        "accept-encoding": "gzip, deflate"
      },
      "hooks": {
        "beforeError": [],
        "init": [],
        "beforeRequest": [],
        "beforeRedirect": [],
        "beforeRetry": [],
        "afterResponse": []
      },
      "retry": {
        "methods": {},
        "statusCodes": {},
        "errorCodes": {}
      },
      "decompress": true,
      "throwHttpErrors": true,
      "followRedirect": true,
      "stream": false,
      "form": false,
      "json": true,
      "cache": false,
      "useElectronNet": false,
      "hostType": "terraform",
      "method": "GET"
    },
    "statusCode": 404,
    "statusMessage": "Not Found",
    "headers": {
      "server": "Cowboy",
      "content-encoding": "gzip",
      "content-type": "application/json",
      "strict-transport-security": "max-age=31536000; includeSubDomains; preload",
      "via": "1.1 vegur, 1.1 varnish, 1.1 varnish",
      "accept-ranges": "bytes, bytes, bytes, bytes",
      "age": "0",
      "content-length": "49",
      "date": "Thu, 26 Dec 2019 11:10:34 GMT",
      "connection": "close",
      "x-served-by": "cache-iad2135-IAD, cache-sea4472-SEA",
      "x-cache": "MISS, MISS",
      "x-cache-hits": "0, 0",
      "vary": "Accept-Encoding"
    },
    "body": {
      "errors": [
        "Not Found"
      ]
    },
    "message": "Response code 404 (Not Found)",
    "stack": "HTTPError: Response code 404 (Not Found)\n    at EventEmitter.emitter.on (/home/ubuntu/renovateapp/node_modules/got/source/as-promise.js:74:19)\n    at process._tickCallback (internal/process/next_tick.js:68:7)"
  }
}
INFO: Failed to look up dependency google-beta (google-beta)(packageFile="REDACTED", dependency="google-beta")
INFO: Failed to look up dependency google-beta (google-beta)(packageFile="REDACTED", dependency="google-beta")
INFO: Failed to look up dependency google-beta (google-beta)(packageFile="REDACTED", dependency="google-beta")

To Reproduce

Put this in a .tf file, filling in the project ID:

 provider "google-beta" {
   project = "<PROJECT_ID>"
   version = "2.20.0"
 }

Additional context

My understanding is that google publishes their beta and non-beta modules in an unusual way. It might be necessary to just hardcode some map from google-beta to google in the upgrade checking logic.
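For illustration, the kind of hardcoded map I have in mind looks like this (a Go sketch only; Renovate itself is written in TypeScript, and `providerAliases` and `registryLookupName` are made-up names):

```go
package main

import "fmt"

// providerAliases maps provider names that don't exist in the Terraform
// registry to the registry entry that should be used for version lookups.
var providerAliases = map[string]string{
	"google-beta": "google",
}

// registryLookupName resolves a provider name to the name to query
// against the registry, falling back to the name itself.
func registryLookupName(provider string) string {
	if alias, ok := providerAliases[provider]; ok {
		return alias
	}
	return provider
}

func main() {
	fmt.Println(registryLookupName("google-beta")) // google
	fmt.Println(registryLookupName("archive"))     // archive
}
```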

created time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha e18b68543bbb8f92eb63ac4c708206f2a35134b2

c18p1

view details

Viktor Stanchev

commit sha a54190566278ca8c5ffe9b8a46f00c66a0a9f8e2

p2

view details

push time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha c3b713e94fa77522a19f426c27f7ded9973792ca

day 17

view details

push time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha 925f4c9d0a8b3f47eba5c16c729275183146fb8a

fix p2 with help

view details

push time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha bafa47ad3843e9f3b13e9cb5d43b5b36029ef29f

give up on c16 p2

view details

push time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha 804e76c81b2fadb7a1d6661fdc0fa6c104ea0c61

c15

view details

Viktor Stanchev

commit sha e0ffb14a6bcab32ec241a9bef79ac31a749bc623

p2

view details

push time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha d6a23d5b663b5aa16f83fcb842e3a6008d3942a4

p2

view details

push time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha 08475c7cbc3f140df61771dbeb95f3786c14278c

c14 p1

view details

push time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha 7a04f0873932fa2f744a95a7606762c4a7b0b1a7

c13

view details

push time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha 4ef1674ddbc76483c1d29aec4f50f71cc1e88c35

p2

view details

push time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha f4fa516e3060f7328aa13c0e7630f64f1eaace9c

c12 p1

view details

push time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha ca1554a643dc2b1af0194489286d47c954d10fbf

c11

view details

push time in 2 months

issue comment vektah/dataloaden

Pass args to dataloader

Ah, I misunderstood the use case. I'm not sure if I fully understand yet though. Loaders are designed to be used when there's a known set of keys to query by. A query like "all events between x and y" is not a fixed data set. Trying to use a loader for that would break caching in the current implementation. I'm trying to wrap my head around the idea of using a loader-like pattern for more complex queries. Do you have some ideas for how caching and deduplication of requests might work in the general case? Or maybe in your specific case of a range query?
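To make the question concrete, here is one possible shape for a range-query loader: key the cache by the serialized range instead of a single ID. It only deduplicates identical ranges, which is exactly where it falls short of general caching (`Event`, `rangeLoader`, and `fetch` are made-up names for this sketch, not dataloaden's API):

```go
package main

import (
	"fmt"
	"sync"
)

type Event struct{ ID int }

// rangeLoader caches query results keyed by the serialized range.
// Overlapping-but-not-identical ranges still hit the backend.
type rangeLoader struct {
	mu    sync.Mutex
	cache map[string][]Event
	fetch func(from, to string) []Event
}

func (l *rangeLoader) Load(from, to string) []Event {
	key := from + ".." + to
	l.mu.Lock()
	defer l.mu.Unlock()
	if evs, ok := l.cache[key]; ok {
		return evs // identical ranges are served from cache
	}
	evs := l.fetch(from, to)
	l.cache[key] = evs
	return evs
}

func main() {
	calls := 0
	l := &rangeLoader{
		cache: map[string][]Event{},
		fetch: func(from, to string) []Event {
			calls++
			return []Event{{ID: 1}}
		},
	}
	l.Load("2019-01-01", "2019-02-01")
	l.Load("2019-01-01", "2019-02-01") // cache hit; fetch runs once
	fmt.Println(calls)                 // 1
}
```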

NickDubelman

comment created time in 2 months

issue comment vektah/dataloaden

Pass args to dataloader

I don't think anyone should want to pass in every data source a loader needs with every use of the loader. That should be one-time wiring, done at the same time and place where the context is bound to the loader: per request. It's unfortunate that the only way to pass loaders into HTTP handlers is through the context object. The context is the only per-request part of HTTP handlers. With gqlgen, resolvers are created per-request, which makes them a natural place to attach loaders.
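A minimal sketch of the pattern described above, with made-up `UserLoader` and `Resolver` types (not gqlgen's generated code): the loader is constructed together with the per-request resolver, so no context plumbing is needed.

```go
package main

import "fmt"

// UserLoader stands in for a generated dataloader; hits counts backend calls.
type UserLoader struct{ hits int }

func (l *UserLoader) Load(id int) string {
	l.hits++
	return fmt.Sprintf("user-%d", id)
}

// Resolver carries the loaders for a single request.
type Resolver struct {
	users *UserLoader
}

// newRequestResolver wires a fresh loader to a fresh resolver, once per request.
func newRequestResolver() *Resolver {
	return &Resolver{users: &UserLoader{}}
}

func main() {
	r := newRequestResolver() // done once per incoming request
	fmt.Println(r.users.Load(1))
	fmt.Println(r.users.Load(2))
}
```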

NickDubelman

comment created time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha d8c716a7b366c5573528b547faf38945bed60853

p2

view details

push time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha 11d28623fd86670e17a553c15b6f5df0ce268626

c10p1

view details

push time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha bf5b95c0f21dcab9a4d222ba0231f556a0b9e83b

p2

view details

push time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha eeb9ee9105584a2948e0d5ceaf1a6144c3d3f575

c7p1

view details

push time in 2 months

issue closed renovatebot/app-support

gpg signing key expired

What Renovate type are you using?

hosted, github

Describe the bug

The GitHub GPG signing key is expired.

(screenshot of the expired key)

Did you see anything helpful in debug logs?

n/a

To Reproduce

Let renovate make a new PR

Additional context

We require signed commits before merging, so renovate's PRs can't be merged.

closed time in 2 months

vikstrous

issue comment renovatebot/app-support

gpg signing key expired

Looks good now. Thanks.

vikstrous

comment created time in 2 months

issue opened renovatebot/app-support

gpg signing key expired

What Renovate type are you using?

hosted, github

Describe the bug

The GitHub GPG signing key is expired.

(screenshot of the expired key)

Did you see anything helpful in debug logs?

n/a

To Reproduce

Let renovate make a new PR

Additional context

We require signed commits before merging, so renovate's PRs can't be merged.

created time in 2 months

issue opened renovatebot/renovate

gpg signing key expired

What Renovate type are you using?

hosted, github

Describe the bug

The GitHub GPG signing key is expired.

(screenshot of the expired key)

Did you see anything helpful in debug logs?

n/a

To Reproduce

Let renovate make a new PR

Additional context

We require signed commits before merging, so renovate's PRs can't be merged.

created time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha 837c074c9e5e473ed54cbac324dc89a85c608883

c8

view details

push time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha df8f40f1518de559d75134ba13692eb6357b2bb1

p2

view details

push time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha fb4f017d59156c511c271c1b4f4021e556a58810

initial refactor for p7

view details

Viktor Stanchev

commit sha f2c072d82742e63acc4c96ae0d7c402970d78ad5

optimal combinator

view details

Viktor Stanchev

commit sha 17d521e5fb3032af1f51b2f28baa2ad5756d46ee

c7p1

view details

push time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha 49988a5afdbbf912c0d10f60f075c51196ade20d

c6

view details

push time in 2 months

issue opened tendermint/tendermint

tendermint data corruption: leveldb: manifest corrupted

I got the following panic from cosmos:

panic: Error initializing DB: leveldb: manifest corrupted (field 'comparer'): missing [file=MANIFEST-000057]
goroutine 1 [running]:
github.com/tendermint/tm-db.NewDB(0x11aea6a, 0x8, 0xc00018b6e0, 0x9, 0xc00762d3e0, 0x12, 0x7f634d25e420, 0x1587c40)
	/go/pkg/mod/github.com/tendermint/tm-db@v0.1.1/db.go:67 +0x274
github.com/tendermint/tendermint/node.DefaultDBProvider(0xc00b8f4900, 0xc00b8f4900, 0x13bb1d, 0x515e8, 0xc0075dfaa0)
	/go/pkg/mod/github.com/tendermint/tendermint@v0.32.2/node/node.go:66 +0xcc
github.com/tendermint/tendermint/node.createEvidenceReactor(0xc000163180, 0x13a5080, 0x1598280, 0xc01dc5a2c8, 0x15853c0, 0xc013f17240, 0x6, 0xc005bc1520, 0xa, 0x13bb1d)
	/go/pkg/mod/github.com/tendermint/tendermint@v0.32.2/node/node.go:340 +0x83
github.com/tendermint/tendermint/node.NewNode(0xc000163180, 0x157f200, 0xc01383a6e0, 0xc013f16fd0, 0x1566200, 0xc00b6db040, 0xc013f17210, 0x13a5080, 0xc013f17220, 0x15853c0, ...)
	/go/pkg/mod/github.com/tendermint/tendermint@v0.32.2/node/node.go:615 +0x613
github.com/cosmos/cosmos-sdk/server.startInProcess(0xc0001576a0, 0x13a5920, 0x1d, 0x0, 0x0)
	/go/pkg/mod/github.com/cosmos/cosmos-sdk@v0.37.0/server/start.go:129 +0x4df
github.com/cosmos/cosmos-sdk/server.StartCmd.func1(0xc000522f00, 0xc00016aea0, 0x1, 0x9, 0x0, 0x0)
	/go/pkg/mod/github.com/cosmos/cosmos-sdk@v0.37.0/server/start.go:43 +0xb5
github.com/spf13/cobra.(*Command).execute(0xc000522f00, 0xc00016ae10, 0x9, 0x9, 0xc000522f00, 0xc00016ae10)
	/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:826 +0x465
github.com/spf13/cobra.(*Command).ExecuteC(0xc000164000, 0x11d02bf, 0xc0008ffed8, 0x587c64)
	/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:914 +0x2fc
github.com/spf13/cobra.(*Command).Execute(...)
	/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:864
github.com/tendermint/tendermint/libs/cli.Executor.Execute(0xc000164000, 0x13a5d30, 0x11b5f46, 0x10)
	/go/pkg/mod/github.com/tendermint/tendermint@v0.32.2/libs/cli/setup.go:89 +0x3c
main.main()
	/go/src/github.com/cosmos/gaia/cmd/gaiad/main.go:68 +0x876

Every restart after that I got the same error.

The following logs are from right before the data corruption (screenshot taken 2019-12-05 at 8:45:32 PM):

The node was killed by a deployment. I think kubernetes tries to send a graceful shutdown signal, but I'm not sure what timeout it has or if cosmos responds to shutdown signals.

Sorry, I'm not sure exactly what version of cosmos this was, but we were using the official gaia images and this happened on Nov 13, 2019.

created time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha 463e0f7fd3fd5a77b629b90f1c1ef06d023adc54

c5

view details

push time in 2 months

issue comment ton-blockchain/ton

malicious validator can steal funds from any contract that receives an external message?

I found out why I was confused about this. It was this quote from the original paper:

“Messages from nowhere” can also define some transaction fee which is deducted from the receiver’s account on top of the gas payment for redistribution to the validators.

vikstrous

comment created time in 2 months

issue closed ton-blockchain/ton

malicious validator can steal funds from any contract that receives an external message?

I'm posting this directly to the issue tracker because TON is not live yet and so there's no threat to anyone's funds right now. I hope that I'm wrong about my conclusion, so let me know where I've made a mistake.

As a part of implementing the signing of a message to the simple contract for withdrawing funds, I noticed that import fees are not signed. That means that any malicious validator can choose to include a modified version of an external message in the blockchain and send itself more funds than intended by the author of the message.

The structure of a message that sends funds from the simple contract is the following:

message$_ {X:Type} info:CommonMsgInfo
  init:(Maybe (Either StateInit ^StateInit))
  body:(Either X ^X) = Message X;

ext_in_msg_info$10 src:MsgAddressExt dest:MsgAddressInt 
  import_fee:Grams = CommonMsgInfo;

The signed part of the message is the part inside body and consists of the following data directly in the current cells:

signature || seqno || mode

And in a child cell:

message$_ {X:Type} info:CommonMsgInfoRelaxed
  init:(Maybe (Either StateInit ^StateInit))
  body:(Either X ^X) = MessageRelaxed X;

int_msg_info$0 ihr_disabled:Bool bounce:Bool bounced:Bool
  src:MsgAddress dest:MsgAddressInt 
  value:CurrencyCollection ihr_fee:Grams fwd_fee:Grams
  created_lt:uint64 created_at:uint32 = CommonMsgInfoRelaxed;

The signature is over seqno || mode and the child cell, but it doesn't cover import_fee which is outside in the external message wrapper.

Attack example:

  • Alice has 1000 grams in her address (1 kg?)
  • Alice signs a message to send 10 grams to Bob
  • Alice wraps the signed message in an external message with import_fee of 0
  • Alice sends the external message to Mallory because Mallory is running a full node or validator and wants her to include the message in the blockchain
  • Mallory unwraps the unsigned parts of Alice's message, takes the signed portion and the signature and constructs her own external message with import_fee of 990 grams
  • Mallory includes the external message in the next block she validates or passes it on to another validator to include it in the blockchain.

Either Mallory directly benefits from rewarding herself the fees from the message or the fees get distributed to some other validator. I haven't read enough about exactly how fees get distributed. Either way, Alice loses 990 grams.

Is there a way to defend against this attack? Can the simple contract enforce an upper limit on the fees it's willing to accept? Can it require the external message to contain a signed copy of the import fees amount?

closed time in 2 months

vikstrous

issue comment ton-blockchain/ton

malicious validator can steal funds from any contract that receives an external message?

Ok, I proved myself wrong finally. It turns out that the import_fee field doesn't do anything. I thought that it's an additional fee to incentivize validators to include the transaction, but it's actually not. It's just another ignored field.

Here's the transaction that proves it:

https://test.ton.org/testnet/transaction?account=EQBqNxo9YMUiHZGHtzGDaS0ra1wDPEG32Ht-k4V2rpSAKApH&lt=688840000001&hash=05DFF97887606FE1553778941D03C9ACBE01C4EF1F3156DBAF4995E17C13E141

total_fees is only 4 mg even though import_fee is 1 gram. Also, the balance is still at 4.8 grams even though it started at 5 grams.

Closing this issue because the scenario I described is not possible.

vikstrous

comment created time in 2 months

issue comment ton-blockchain/ton

malicious validator can steal funds from any contract that receives an external message?

In your first link, InMsg seems to be a message received by a shard from another shard, not an external message. That's why it has no "import fees", but in this context they are referring to fees "imported" into the new shard.

In your second link, all values are parsed with the _skip versions of methods. This is in the validate_skip method on CommonMsgInfo, so that makes sense. In the third link, it's also in the skip method of the parent object. It doesn't mean the field is not used. It's just a function implementation.

I think this indicates that import_fees are used:

https://github.com/ton-blockchain/ton/blob/090e0c16eb86184eaa3fda0e5d1c9838cf2ff88e/validator/impl/collator.cpp#L3551

The easiest way to test whether or not import fees actually do what I think they do is to set them on an external message and observe if they decrease the balance of the receiving account. Sorry for not doing that before posting.

vikstrous

comment created time in 2 months

issue opened ton-blockchain/ton

malicious validator can steal funds from any contract that receives an external message?


created time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha f98eece7e5192c1dd084f32af77d0876611efba6

c4

view details

push time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha 667642e7960d321fb6eb40d0828cbb89a348d6f2

c3p2

view details

push time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha c5e433cdfb6c72ff3b2614e94a2704edb9ef57f6

c3p1

view details

push time in 2 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha ea9fd12caabce1604adbe852c8afdbedf7ecdac0

c2

view details

push time in 3 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha 1457d4fc5285748353db4cf3350726372f412f6f

c1/p2

view details

push time in 3 months

push event vikstrous/adventofcode2019

Viktor Stanchev

commit sha 55a614ea62b6d6cfd784bb3f6a5c40999dc8f6f2

c1p1

view details

push time in 3 months

create branch vikstrous/adventofcode2019

branch : master

created branch time in 3 months

created repository vikstrous/adventofcode2019

created time in 3 months

issue comment cuelang/cue

How to set up per-environment configs

Yeah, that would be ideal. It would be very similar in UX to the symlink structure or explicitly selecting a file for the environment.

I think it would be helpful to see a more concrete spec so I can comment on whether or not it solves my use case completely. Reminder: in addition to hardcoded environments (dev, prod), I need to have dynamically generated environments per pull request, so some type of dynamic non-file input would be necessary as well. Without that, I would still have to wrap cue with a tool to generate files.

vikstrous

comment created time in 3 months

issue comment googleapis/google-cloud-go

regression with using fake local GCS server

WithEndpoint was the right hint, but I actually had to use option.WithEndpoint("https://www.googleapis.com/storage/v1/") to connect to my local fakegcs server. Any other value was not working as expected. I didn't dig far enough to understand exactly why.

vikstrous

comment created time in 3 months

issue comment helm/helm

`helm template` output of manifests does not follow order of filenames and manifests within each file (random results)

I think it's extremely important for helm template output to be deterministic. For my use case the order doesn't matter, but I need to be able to diff the output. Without being able to diff the output, how can I make sure that when I change a chart, I'm changing what I intend to change? I don't want to deploy a kubernetes cluster to validate that a trivial config change is made correctly or a refactor hasn't changed the output.

AndiDog

comment created time in 3 months

create branch vikstrous/gqlgen

branch : combined

created branch time in 3 months

issue comment 99designs/gqlgen

generation performance

Ok, it works now. There was a bug in #945 that's fixed now.

vikstrous

comment created time in 3 months

push event vikstrous/gqlgen

Viktor Stanchev

commit sha 8c110b7af3e4685e7678a65884ec18598b2e6c42

fix autobind packages list

view details

push time in 3 months

issue comment 99designs/gqlgen

generation performance

Actually, sorry, I'm getting some diffs in the generated output on my repo. I'll have to really make sure each change doesn't affect the output because it's very easy to break things with PRs like this.

vikstrous

comment created time in 3 months

issue comment 99designs/gqlgen

generation performance

With #940 #941 #942 #944 #945 all merged in, model generation and verification disabled, I'm getting 7 seconds generation time, down from 1:15.

vikstrous

comment created time in 3 months

issue comment googleapis/google-cloud-go

regression with using fake local GCS server

Ah, actually, it looks like we had to do some hacks to set the endpoint. Our code looks something like this:

	addr := "127.0.0.1:4443"
	opt := option.WithHTTPClient(&http.Client{
		Transport: &http.Transport{
			// Dial the local fake GCS server over TLS; its certificate is
			// self-signed, so verification is skipped.
			DialTLS: func(string, string) (net.Conn, error) {
				return tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
			},
		},
	})
	client, err := storage.NewClient(ctx, opt)
	if err != nil {
		panic(err)
	}
vikstrous

comment created time in 3 months

PR opened 99designs/gqlgen

deduplicate package load between AutoBind and Binder

Before:

auto bind load 2.684139918s
binder load 19.267647024s
auto bind load 3.140190561s
binder load 19.725400875s
build time 23.085849454s

After:

combined load 17.965141019s
combined load 17.479315829s
build time 17.721352532s

I have:

  • [ ] Added tests covering the bug / feature (see testing)
  • [ ] Updated any relevant documentation (see docs)
+20 -17

0 comment

4 changed files

pr created time in 3 months

create branch vikstrous/gqlgen

branch : no-double-load

created branch time in 3 months

PR opened 99designs/gqlgen

single packages.Load for NameForPackage

This is one of the optimizations suggested in #918. The original suggestion was to load every package that needs to be loaded in a single packages.Load call. I ran into issues trying to implement that because packages.Load has flags that define how much data should be loaded. It turned out that if we load all data for all relevant packages, that actually slows things down. Instead, I batched the loads of packages for calls to NameForPackage.

I tested this change on top of all of #942 #941 #940.

This is what the timings looked like before this commit (in my private codebase):

load autobind 1.893891096s
binder load time 2.383718776s
build time 4.523106863s
load time 548.545523ms
load time 503.184846ms
load time 513.424915ms
load time 628.098503ms
load time 570.70579ms
load time 508.176378ms
load time 480.067102ms
load time 495.141265ms
load time 544.685734ms
load time 537.060982ms
load time 510.831309ms
load time 544.138726ms
load time 580.567556ms
load time 601.751553ms
load time 583.932285ms
load time 893.051704ms
load time 835.748479ms
load time 854.607076ms
load time 1.057673489s
load time 602.564898ms
load time 654.87885ms
load time 652.425622ms
load time 652.79054ms
generate time 16.909831482s
29.26user 24.03system 0:23.90elapsed 222%CPU (0avgtext+0avgdata 198584maxresident)k
0inputs+0outputs (1major+459356minor)pagefaults 0swaps

After this commit:

load autobind 2.05095349s
binder load time 2.620924994s
load time 719.630163ms
build time 5.659686914s
generate time 2.255979872s
12.99user 8.68system 0:10.07elapsed 215%CPU (0avgtext+0avgdata 196372maxresident)k
0inputs+0outputs (1major+182189minor)pagefaults 0swaps

build and generate time correspond to the two main parts of api/generate.go. load time is the time spent in packages.Load for the purposes of NameForPackage. As you can see, the combined packages.Load call is moved from the Load phase into the Build phase but even though so many more packages are being loaded, it takes about as long as a single one of the later calls.

I didn't see an improvement when this commit is applied without the others, but I didn't test it extensively by itself.

I have:

  • [x] Added tests covering the bug / feature (see testing)
  • n/a Updated any relevant documentation (see docs)
+111 -50

0 comment

13 changed files

pr created time in 3 months

create branch vikstrous/gqlgen

branch : name-for-package-noload

created branch time in 3 months

push event vikstrous/gqlgen

Viktor Stanchev

commit sha 697dd11e987cbb170e3c48641c0804dba0ab8746

reduce packages.Load calls

view details

push time in 3 months

push event vikstrous/gqlgen

Viktor Stanchev

commit sha 14d831b1c37270d2278815f000c05e6419db702c

reduce packages.Load calls

view details

push time in 3 months

push event vikstrous/gqlgen

Viktor Stanchev

commit sha 4db0e6eccc8745ed765f3863d221ea13c57f0bd1

keep function private

view details

push time in 3 months

PR opened 99designs/gqlgen

Reduce calls to packages.Load

Describe your PR and link to any relevant issues.

I have:

  • [ ] Added tests covering the bug / feature (see testing)
  • [ ] Updated any relevant documentation (see docs)
+209 -91

0 comment

20 changed files

pr created time in 3 months

create branch vikstrous/gqlgen

branch : single-load

created branch time in 3 months

push event vikstrous/gqlgen

Viktor Stanchev

commit sha c06f05b319fc9287110ac0dce2f7e4aafbd34873

add doc

view details

push time in 3 months

PR opened 99designs/gqlgen

add skip_validation flag

Addresses part of #918

This adds a flag called skip_validation. When enabled, the final validation pass after generation is not performed. This saves me 10 seconds from a 1:15 initial generation time.

I have:

  • [ ] Added tests covering the bug / feature (see testing)
  • [ ] Updated any relevant documentation (see docs)
+5 -2

0 comment

2 changed files

pr created time in 3 months

create branch vikstrous/gqlgen

branch : disable-validation

created branch time in 3 months

PR opened 99designs/gqlgen

shortcut QualifyPackagePath in go module mode

Addresses part of #918

This is a code generation performance optimization that applies only in go module mode. It speeds up generation time for me from 1:15 to 0:40.

I have:

  • n/a Added tests covering the bug / feature (see testing)
  • n/a Updated any relevant documentation (see docs)
+24 -4

0 comment

2 changed files

pr created time in 3 months

create branch vikstrous/gqlgen

branch : qualify-package-path-faster

created branch time in 3 months

push event vikstrous/gqlgen

Viktor Stanchev

commit sha 3a05d2dd985ee4f1e2d3390a65d4a24447a5ecb4

add mention in the docs

view details

push time in 3 months

PR opened 99designs/gqlgen

make model generation optional

Describe your PR and link to any relevant issues.

I have:

  • [ ] Added tests covering the bug / feature (see testing)
  • [ ] Updated any relevant documentation (see docs)
+22 -13

0 comment

2 changed files

pr created time in 3 months

create branch vikstrous/gqlgen

branch : optional-modelgen

created branch time in 3 months

issue comment cuelang/cue

How to set up per-environment configs

@mpvl

Tool files have access to the package scope, but none of the fields defined in a tool file influence the output of a package.

I can't have anything in a _tool file affect how the config will be rendered. I'd have to set up something to render all possible configs and then extract the relevant environment's version, which is what I'm already doing in the example I provided in https://github.com/cuelang/cue/issues/190#issuecomment-558033346

@xinau

Based on your suggestions it sounds like the structure you are describing is pretty different from the one in the tutorials? Your structure sounds more like the traditional inheritance based structure, but cue actually doesn't allow you to overwrite anything, so isn't that an issue? The tutorials seemed to advocate for the following structure:

You just have root, intermediate and leaf directories where each one adds more and more specialization. You can put common structures in the root or import them from other packages. Then you unify things as needed at run time in _tool.cue files by passing the cue cmd command a list of packages to act on. That way different subsets of your app can be in different directories that can be deployed independently.

Re: making environments into packages and everything else as a src package

The problem with this approach is that it removes the ability to choose a subset of leaf directories for the tools to act on. You have only one centralized environment specific directory (or cue file as described in the example in https://github.com/cuelang/cue/issues/190#issuecomment-558483153). I don't see how I can keep the ability to specify a subset of packages to act on. I also don't like having to explicitly import everything into a centralized package. That seems to go against the design of cue. I thought that cue was all about not having to explicitly instantiate everything?

Sorry if I'm totally misunderstanding things. I would love to see a more complete example of the best way to structure a cue config tree with different environments.

vikstrous

comment created time in 3 months

PR opened anchorageoss/tezosprotocol

upgrade linter
+2 -1

0 comment

2 changed files

pr created time in 3 months

create branch vikstrous/tezosprotocol

branch : upgrade-linter

created branch time in 3 months

issue opened googleapis/google-cloud-go

regression with using fake local GCS server

Client

Storage

Describe Your Environment

Local docker container running https://github.com/fsouza/fake-gcs-server, tests trying to connect to it with the GCS client in this repo.

Expected Behavior

Connects to the local server

Actual Behavior

Connects to https://storage.googleapis.com/storage/v1/ unconditionally and returns a 404 error

The regression was introduced by https://github.com/googleapis/google-cloud-go/commit/fc09f3a79b851d8012f55167dac291610ebf01a2 in https://github.com/googleapis/google-cloud-go/releases/tag/storage%2Fv1.3.0 (version 1.3.0)

created time in 3 months

issue comment cuelang/cue

... path doesn't work with symlinks?

I'm not understanding how your example would be extended for non-trivial use cases. Would you have to explicitly list every struct and every package for every environment?

I agree that symlink hacks are a last resort solution, but in this particular case, the symlink structure feels simpler than multiple packages and removes all the repetition.

Could we just restrict symlinks to not be allowed to escape the module and allow them?

Could we come up with some other way to inject an environment-specific config?

This partial solution kind of works:

cue eval ./environments/development/development.cue ./services/frontend/frontend.cue

but you can't specify a file followed by ./services/... because it's a mix of files and directories. Maybe that should be allowed somehow? or there should be some other way to inject a root config?

I was thinking of maybe using _tool.cue files, but that doesn't seem possible.

Another solution is to wrap everything in a shell script that copies the right environment config to the root, but that seems even worse than symlinks.

A related problem: some environments are dynamic (ex. environments created to test pull requests), so how do we inject those values into the config?

I actually came up with another solution that uses the hacky syntax from doc/tutorial/kubernetes/manual/services/k8s.cue. The idea was to render all environment configs at the same time and then use -e to extract only the one you need.

tree .
.
├── cue.mod
│   ├── module.cue
│   ├── pkg
│   └── usr
├── environment_config.cue
└── services
    └── frontend
        └── frontend.cue
cat environment_config.cue
package services

environmentConfig: development: {
  name: "development"
}
environmentConfig: production: {
  name: "production"
}
cat services/frontend/frontend.cue
package services

_frontend: ENV: output: {
  services: frontend: name: "frontend \(ENV.name)"
}

environment: {
  for envkey, envconfig in environmentConfig {
    "\(envkey)": (_frontend & {ENV: envconfig}).ENV.output
  }
}

I'll change the issue title to be about per-environment configs since that's the real issue here. Initially I was hoping that there's a simple solution, but maybe not.

vikstrous

comment created time in 3 months

issue comment cuelang/cue

... path doesn't work with symlinks?

This fixes the issue:

diff --git cue/load/fs.go cue/load/fs.go
index 082efbb..4af150c 100644
--- cue/load/fs.go
+++ cue/load/fs.go
@@ -243,7 +243,7 @@ var skipDir = errors.Newf(token.NoPos, "skip directory")
 type walkFunc func(path string, info os.FileInfo, err errors.Error) errors.Error

 func (fs *fileSystem) walk(root string, f walkFunc) error {
-       fi, err := fs.lstat(root)
+       fi, err := fs.stat(root)
        if err != nil {
                err = f(root, fi, err)
        } else if !fi.IsDir() {

Is there any reason not to make this change?

vikstrous

comment created time in 3 months

pull request comment cuelang/cue

fix typos in docs

@googlebot I signed it!

vikstrous

comment created time in 3 months

PR opened cuelang/cue

fix typos in docs
+3 -3

0 comment

3 changed files

pr created time in 3 months

create branch vikstrous/cue

branch : fix-typos

created branch time in 3 months

issue opened cuelang/cue

... doesn't work with symlinks?

Repro:

tree
.
├── cue.mod
│   ├── module.cue
│   ├── pkg
│   └── usr
├── environments
│   ├── development
│   │   ├── development.cue
│   │   └── services -> ../../services/
│   └── production
│       ├── production.cue
│       └── services -> ../../services/
└── services
    └── frontend
        └── frontend.cue

10 directories, 4 files
cat environments/development/development.cue
package service

Environment:: "development"
cat environments/production/production.cue
package service

Environment:: "production"
cat services/frontend/frontend.cue
package service

main: "hello \(Environment)"
cue eval ./environments/development/services/frontend/
Environment :: "development"
main:          "hello development"

Expected:

cue eval ./environments/development/services/...
Environment :: "development"
main:          "hello development"

Actual:

cue eval ./environments/development/services/...
cue: "./environments/development/services/..." matched no packages

Side note: if there's any info about how to customize configs for different environments or any best practices, please let me know. I've read all of the docs and I didn't see anything mentioning this issue.

created time in 3 months
