Tom Forbes (orf), @onfido, Lisbon, https://tomforb.es
Django person living and working in Lisbon.

orf/cargo-bloat-action 79

Track Rust binary sizes across builds using GitHub Actions

django/django-docker-box 71

Run the Django test suite across all supported databases and Python versions

orf/bare-hugo-theme 41

A Hugo theme based on Bulma.io

onfido/faiss_prebuilt 40

Prebuilt .whl files for macOS + Linux of the Facebook FAISS library

onfido/ecr-mirror 25

Mirror public repositories to internal ECR repos

orf/CTF 8

Simple capture the flag web application

orf/aio-pipes 6

Asynchronous pipes in Python

orf/alfred-quip-workflow 5

Full-text, local Quip document search

orf/alfred-pycharm 4

Quickly open PyCharm projects via Alfred

pull request comment python-poetry/poetry

[1.1] Fix archive hash generation

I believe this has caused https://github.com/python-poetry/poetry/issues/4523.

Specifically, we're hard-coding sha256 here (https://github.com/python-poetry/poetry/pull/4444/files#diff-de189aa00e987348b593bd76b6498c6b1e6dbbbb69ffeaa4824a789f1e8c837eR612), which will result in failures to install any MD5-hashed dependencies.
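
For illustration only, a minimal sketch of deriving the algorithm from the locked "algo:digest" hash string instead of hard-coding sha256 (the archive_hash helper is hypothetical, not poetry's actual code):

# Hypothetical helper, not poetry's implementation: pick the hash algorithm
# from the locked hash string rather than assuming sha256.
import hashlib

def archive_hash(archive_path: str, locked_hash: str) -> str:
    algorithm = locked_hash.split(":", 1)[0]  # e.g. "md5" or "sha256"
    digest = hashlib.new(algorithm)
    with open(archive_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return f"{algorithm}:{digest.hexdigest()}"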

sdispater

comment created time in 3 days

fork orf/aws-cloudformation-user-guide

The open source version of the AWS CloudFormation User Guide

fork in 4 days

delete branch orf/s3fs

delete branch : get-bucket-location

delete time in 6 days

pull request comment dask/s3fs

Call get_bucket_location before calling list_objects_v2

All green! Thanks for the quick response here 💪

orf

comment created time in 6 days

push event orf/s3fs

Tom Forbes

commit sha 80a5cf3809357269b1c401d51df68b4c1d2908df

formatting

view details

push time in 6 days

issue comment dask/s3fs

Disallowing ListObjectsV2 at the root of the bucket makes s3fs attempt to create a bucket

I've inverted the calls. I'd assume get-bucket-location, but it probably depends on the type of user.

orf

comment created time in 6 days

push event orf/s3fs

Tom Forbes

commit sha 6e4f44740b3b417e33a4d91ddea36699af009ad9

Call get_bucket_location after list_objects_v2

view details

push time in 6 days

issue comment dask/s3fs

Disallowing ListObjectsV2 at the root of the bucket makes s3fs attempt to create a bucket

Unfortunately get-bucket-location is a specific permission that needs to be added, so only using that would be a breaking change.

While this certainly adds an extra request, it does seem to be on the error path rather than the happy path, so it's surely not a big deal? And I think the results are also cached?

We could also invert the flow here: if list-objects fails then try get-bucket-location?
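
A rough sketch of that inverted flow, using plain boto3 rather than s3fs's internals (the bucket_exists helper is invented for illustration):

# Illustrative only: try the cheap listing first, and only fall back to
# get_bucket_location (which needs the extra s3:GetBucketLocation permission)
# when listing is denied.
import boto3
from botocore.exceptions import ClientError

def bucket_exists(bucket: str) -> bool:
    s3 = boto3.client("s3")
    try:
        s3.list_objects_v2(Bucket=bucket, MaxKeys=1)
        return True
    except ClientError as exc:
        code = exc.response["Error"]["Code"]
        if code == "NoSuchBucket":
            return False
        if code != "AccessDenied":
            raise
    # Listing was denied, but the bucket may still exist.
    try:
        s3.get_bucket_location(Bucket=bucket)
        return True
    except ClientError:
        return False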

orf

comment created time in 6 days

issue comment dask/s3fs

Disallowing ListObjectsV2 at the root of the bucket makes s3fs attempt to create a bucket

Ok, well please let me know how I should proceed. I've got a PR and I'm willing to spend some time getting it merged as this issue is blocking some of our work. I think just calling get-bucket-location before list-objects-v2 is a good compromise and should not impact anyone in any way.

orf

comment created time in 6 days

issue comment dask/s3fs

Disallowing ListObjectsV2 at the root of the bucket makes s3fs attempt to create a bucket

cc @isidentical

Would be happy to see different techniques to determine if a bucket exists, but I would note that listing a bucket's contents might be allowed while getting its details is not. The AWS permissions model is complex!

Also note that mkdirs doesn't do anything except create a bucket (because S3 has no folders).

In the PR I made we can call get-bucket-location before falling back to the existing implementation. Unfortunately head-bucket needs list_objects permission.

However... why don't we use MaxKeys=0 as well? This works:

In [14]: c.list_objects_v2(Bucket="s3fs-test-bucket-123", MaxKeys=0)
Out[14]:
{'ResponseMetadata': {},
 'IsTruncated': False,
 'Name': 's3fs-test-bucket-123',
 'Prefix': '',
 'MaxKeys': 0,
 'EncodingType': 'url',
 'KeyCount': 0}
orf

comment created time in 6 days

issue comment dask/s3fs

Disallowing ListObjectsV2 at the root of the bucket makes s3fs attempt to create a bucket

For anyone else reading this, you can get around this with a specific bucket condition like so:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::s3fs-test-bucket-123",
            "Condition": {
                "NumericNotEquals": {
                    "s3:max-keys": "1"
                },
                "StringNotEquals": {
                    "s3:delimiter": ""
                }
            }
        }
    ]
}

But this sucks, as you are basically giving complete list access (albeit one key at a time!).

orf

comment created time in 6 days

PR opened dask/s3fs

Call get_bucket_location before calling list_objects_v2

Closes #532

+7 -0

0 comment

1 changed file

pr created time in 6 days

create branch orf/s3fs

branch : get-bucket-location

created branch time in 6 days

issue opened dask/s3fs

Disallowing ListObjectsV2 at the root of the bucket makes s3fs attempt to create a bucket

What happened:

It's not uncommon to have a bucket that disallows listing the root but allows listing a specific prefix. In this case, s3fs will fail any writes and will attempt to create the bucket, which often fails with a completely different error.

What you expected to happen:

Falling back to creating a bucket is very strange behaviour. I imagine it's legacy and impossible to change, but I would expect that s3fs does not require full list objects permissions over the bucket to perform any writes.

Minimal Complete Verifiable Example:

In [1]: import s3fs

In [2]: s3 = s3fs.S3FileSystem(anon=False)

In [5]: s3.mkdirs("s3://s3fs-test-bucket-123/foo/bar")
2021-09-17 17:34:50,222 - s3fs - DEBUG - _call_s3 -- CALL: list_objects_v2 - () - {'MaxKeys': 1, 'Bucket': 's3fs-test-bucket-123'}
2021-09-17 17:34:50,516 - s3fs - DEBUG - _call_s3 -- Nonretryable error: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
2021-09-17 17:34:50,516 - s3fs - DEBUG - _call_s3 -- CALL: create_bucket - () - {'Bucket': 's3fs-test-bucket-123', 'ACL': ''}
2021-09-17 17:34:50,576 - s3fs - DEBUG - _call_s3 -- Nonretryable error: An error occurred (IllegalLocationConstraintException) when calling the CreateBucket operation: The unspecified location constraint is incompatible for the region specific endpoint this request was sent to.

The full traceback is like so:

File "/home/app/.cache/pypoetry/virtualenvs/x/lib/python3.9/site-packages/dask/dataframe/io/parquet/arrow.py", line 819, in initialize_write
    fs.mkdirs(path, exist_ok=True)
  File "/home/app/.cache/pypoetry/virtualenvs/x/lib/python3.9/site-packages/fsspec/spec.py", line 1159, in mkdirs
    return self.makedirs(path, exist_ok=exist_ok)
  File "/home/app/.cache/pypoetry/virtualenvs/x/lib/python3.9/site-packages/fsspec/asyn.py", line 88, in wrapper
    return sync(self.loop, func, *args, **kwargs)
  File "/home/app/.cache/pypoetry/virtualenvs/x/lib/python3.9/site-packages/fsspec/asyn.py", line 69, in sync
    raise result[0]
  File "/home/app/.cache/pypoetry/virtualenvs/x/lib/python3.9/site-packages/fsspec/asyn.py", line 25, in _runner
    result[0] = await coro
  File "/home/app/.cache/pypoetry/virtualenvs/x/lib/python3.9/site-packages/s3fs/core.py", line 731, in _makedirs
    await self._mkdir(path, create_parents=True)
  File "/home/app/.cache/pypoetry/virtualenvs/x/lib/python3.9/site-packages/s3fs/core.py", line 716, in _mkdir
    await self._call_s3("create_bucket", **params)

It seems like it's failing to detect that the bucket exists on this line. There are much better methods to detect whether a bucket exists, like get-bucket-location.
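
As an illustration of that kind of check, a minimal sketch using plain boto3 (not s3fs's code; the helper name is invented):

# Illustrative only: probe bucket existence via get_bucket_location, which
# needs the s3:GetBucketLocation permission but no listing rights.
import boto3
from botocore.exceptions import ClientError

def bucket_is_reachable(bucket: str) -> bool:
    try:
        boto3.client("s3").get_bucket_location(Bucket=bucket)
        return True
    except ClientError as exc:
        if exc.response["Error"]["Code"] == "NoSuchBucket":
            return False
        raise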

created time in 6 days

pull request comment dask/distributed

Add type annotations to various functions within distributed.worker

Thank you! I'll go through and add some more annotations soon; there are some heavily used user-facing functions that could benefit a lot from it.

I don't know the best place to ask this, but would you also accept contributions that added __class_getitem__ methods so we could do Future[int] or somesuch in our annotations?
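
For reference, a minimal sketch of the __class_getitem__ pattern on a stand-in class (Python 3.9+; this is not distributed's Future, just an illustration):

# Illustration only: make Future[int] usable in annotations by returning a
# types.GenericAlias from __class_getitem__, without making the class a
# runtime typing.Generic.
from types import GenericAlias

class Future:
    __class_getitem__ = classmethod(GenericAlias)

print(Future[int])           # __main__.Future[int]
print(Future[int].__args__)  # (<class 'int'>,)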

orf

comment created time in 9 days

delete branch orf/distributed

delete branch : add-type-annotations

delete time in 9 days

create branch orf/homebrew-core

branch : update-gping.rb-1631458600

created branch time in 11 days

push event orf/homebrew-core

Tom Forbes

commit sha c28f4dc3d4ba1ea46e00fdd554e661bf531f5943

gping gping-v1.2.5 Created by https://github.com/mislav/bump-homebrew-formula-action

view details

push time in 11 days

PR opened Homebrew/homebrew-core

gping gping-v1.2.5

Created by https://github.com/mislav/bump-homebrew-formula-action

+2 -2

0 comment

1 changed file

pr created time in 11 days

created tag orf/gping

tag pinger-v0.3.6

Ping, but with a graph

created time in 11 days

created tag orf/gping

tag gping-v1.2.5

Ping, but with a graph

created time in 11 days

push event orf/gping

Tom Forbes

commit sha 132a7ee03b74b8c21e1da2338dcfea6e75a814c5

(cargo-release) version 0.3.6

view details

Tom Forbes

commit sha fdca3819d263c8419d9cceceeec5b7a67e23854a

(cargo-release) version 1.2.5

view details

push time in 11 days

pull request comment orf/gping

Small changes. Default color: green. Fix bug: NaN displayed as zero

Thanks!

fox0

comment created time in 11 days

push event orf/gping

fox0

commit sha e2dba6c22b71deabefd87594e5eb07fee897b97f

small changes (#143)

view details

push time in 11 days

PR merged orf/gping

Small changes. Default color: green. Fix bug: NaN displayed as zero
+10 -10

0 comment

4 changed files

fox0

pr closed time in 11 days

created tag orf/gping

tag gping-v1.2.4

Ping, but with a graph

created time in 11 days

created tag orf/gping

tag pinger-v0.3.5

Ping, but with a graph

created time in 11 days

push event orf/gping

Tom Forbes

commit sha 58903d23635563cfcfff01ca23ff69e519b031b9

(cargo-release) version 0.3.5

view details

Tom Forbes

commit sha d16ce642e1b99a5bc6d2ae67d88a372e07967c29

(cargo-release) version 1.2.4

view details

Tom Forbes

commit sha 2c7b243877d5b6433cb7bddda46d86aefa652f71

(cargo-release) start next development iteration 0.3.6-alpha.0

view details

Tom Forbes

commit sha 4a94167e34b445a7b75c6d68d579266a5ace4b32

(cargo-release) start next development iteration 1.2.5-alpha.0

view details

push time in 11 days

issue closed orf/gping

Empty screen without host

How to reproduce: simply type gping

An empty screen is shown, but it's still possible to exit with q.

The first time this happened, I wasn't sure whether it was loading something or not.

Maybe it's good to explicitly fail and print help (as if the user typed gping --help) instead?

The version of gping installed on my machine is 1.2.1 on M1, macOS.

closed time in 11 days

ryuheechul

issue comment orf/gping

Empty screen without host

Hey, this shouldn't be an issue with the latest release.

ryuheechul

comment created time in 11 days