profile
viewpoint

agronholm/anyio 269

High level compatibility layer for multiple asynchronous event loop implementations on Python

njsmith/colorspacious 110

A powerful, accurate, and easy-to-use Python library for doing colorspace conversions

graydon/stxt 83

sketch of a secure async group communication system

HyperionGray/trio-websocket 39

WebSocket implementation for Python Trio

bots-for-humanity/trio-gitter-bot 2

Gitter bot for the Trio project

executablebooks/myst 2

Myst - Markedly Structured Text

njsmith/async_generator 2

Making it easy to write async iterators in Python 3.5

njsmith/codetrawl 1

Better software through corpus analysis

njsmith/cython 1

A Python to C compiler

issue comment python-trio/trio-click

Multiple trio-click/asyncclick repositories

Hmm, ok, so in that case possibly the simplest way to accomplish that combination of redirects would be to:

  1. delete this repo
  2. transfer smurfix/trio-click to the python-trio org, so it becomes python-trio/trio-click
  3. rename python-trio/trio-click to python-trio/asyncclick

Does that seem like a good plan? Would we be losing anything important if we deleted this repo in favor of the repo that's currently called smurfix/trio-click?

makkus

comment created time in 11 days

issue comment python-trio/trio-click

Multiple trio-click/asyncclick repositories

@graingert I don't see a trio-click org?

makkus

comment created time in 11 days

issue comment python-trio/trio-click

Multiple trio-click/asyncclick repositories

tbh I'm not sure what's supposed to be happening here either. @smurfix can you sort us out?

makkus

comment created time in 11 days

push event pypa/manylinux

pyup-bot

commit sha b9e7b5d75bd789e358de42afb9e413e8b14eca98

Update pip from 20.2.2 to 20.2.3

view details

push time in 14 days

push event pypa/manylinux

pyup-bot

commit sha c3e957d3941ea0976a659a0e101906c7d375520b

Update pip from 20.2.2 to 20.2.3

view details

push time in 14 days

create branch pypa/manylinux

branch : pyup-update-pip-20.2.2-to-20.2.3

created branch time in 14 days

pull request comment python-hyper/h11

Preserve header casing

The benchmark I cited isn't terribly clever, but it does exercise a full request/response cycle with some realistic headers.

tomchristie

comment created time in 15 days

pull request comment python-hyper/h11

Preserve header casing

What do you get from PYTHONPATH=. python bench/benchmarks/benchmarks.py before and after this change?

tomchristie

comment created time in 15 days

issue comment python-trio/pytest-trio

Handle unittest-based tests

@catern can you explain more what you mean about not wanting to switch to pytest, but wanting to use pytest-trio? What parts of pytest are you trying to avoid?

njsmith

comment created time in 16 days

pull request comment python-trio/trio

Bump up the timeout in test_pipes to hopefully reduce spurious CI failures

Yeah, I think these free CI systems are overprovisioned and have really noisy neighbors, so sometimes your whole VM will just like, freeze for a few seconds and then resume. So you need pretty generous timeouts.

njsmith

comment created time in 17 days

delete branch njsmith/trio

delete branch : deflake-test_pipes

delete time in 17 days

push event pypa/manylinux

pyup-bot

commit sha 8447027206d7b7e3e34da04249eb798ead97565f

Update setuptools from 49.2.0 to 50.3.0

view details

push time in 18 days

push event pypa/manylinux

pyup-bot

commit sha d4db787c806b81941d4eae593658f0a3cd26dde6

Update setuptools from 44.1.1 to 50.3.0

view details

push time in 18 days

create branch pypa/manylinux

branch : pyup-update-setuptools-44.1.1-to-50.3.0

created branch time in 18 days

issue comment python-trio/trio

Design: alternative scheduling models

Another likely example of strict-checkpoint-fairness causing problems: https://gitter.im/python-trio/general?at=5f526634dfaaed4ef52ef17f

Best-guess analysis: https://gitter.im/python-trio/general?at=5f536ec9a5788a3c29d5f248

njsmith

comment created time in 18 days

issue comment python-trio/trio

Tracking issue: intermittent test failures

test_signals (as reported in #1170, it seems) failed again in #1705

This failure is super weird, and different from the issue in #1170. The test essentially does:

        with move_on_after(1.0) as scope:
            async with await open_process(SLEEP(3600)) as proc:
                proc.terminate()
        assert not scope.cancelled_caught
        assert proc.returncode == -SIGTERM

...and then the test fails on the last line because proc.returncode is -9 == -SIGKILL, when we were expecting -15 == -SIGTERM.

So it seems like somehow, the child process is dying from a SIGKILL. How could that be? The __aexit__ from the async with proc: block can send a SIGKILL to the process, but only after await proc.wait() either returns normally or is cancelled, and then verifying that proc._proc.returncode is None:

        try:
            await self.wait()
        finally:
            if self._proc.returncode is None:
                self.kill()
                with trio.CancelScope(shield=True):
                    await self.wait()

I don't think wait could be cancelled here, because the test code does assert not scope.cancelled_caught to confirm that the timeout isn't firing. So that suggests that proc.wait must have returned after the SIGTERM, so proc._proc.returncode should have already been set to -SIGTERM. In fact, the code for wait even says assert self._proc.returncode is not None. So the finally: block in Process.aclose shouldn't have sent a SIGKILL.

So this seems to be one of those "that can't happen" errors... I don't know where this SIGKILL could be coming from.

test_pipes failed again on macOS 3.8: #1713 (comment)

I think this one is just a too-aggressive timeout: #1715

njsmith

comment created time in 18 days

create branch njsmith/trio

branch : deflake-test_pipes

created branch time in 18 days

push event pypa/manylinux

pyup-bot

commit sha c8ad28f9c775fd1f4aa7ff7dc30912f573d1603c

Update setuptools from 49.2.0 to 50.2.0

view details

push time in 19 days

push event pypa/manylinux

pyup-bot

commit sha 7a557d2f0f7cec3ee144a8d448898bc6f8afca57

Update setuptools from 44.1.1 to 50.2.0

view details

push time in 19 days

create branch pypa/manylinux

branch : pyup-update-setuptools-44.1.1-to-50.2.0

created branch time in 19 days

push event pypa/manylinux

pyup-bot

commit sha 03526021def753582453b6480eca655f5400168e

Update setuptools from 49.2.0 to 50.1.0

view details

push time in 20 days

push event pypa/manylinux

pyup-bot

commit sha f0275c39fe9c050b869e52bf6c20e7fbee40099a

Update setuptools from 44.1.1 to 50.1.0

view details

push time in 20 days

create branch pypa/manylinux

branch : pyup-update-setuptools-44.1.1-to-50.1.0

created branch time in 20 days

issue comment python-trio/trio

Using one process's output as another's input creates challenges with non-blocking status

I guess one hacky but workable option would be to put p1.stdout into a mode where it toggles O_NONBLOCK on-and-off-again every time the parent process tries to use it.

...and we could probably use the same code for #174, now that I think about it.
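
A minimal sketch of what that toggle could look like, assuming a plain os.read-based access path (the helper is illustrative, not Trio's actual implementation):

import os

def read_with_nonblock_toggle(fd: int, max_bytes: int = 65536) -> bytes:
    # Flip the fd into non-blocking mode just for this one access, then flip
    # it back so the child process sharing the pipe still sees a blocking fd.
    # O_NONBLOCK lives on the shared file description, so the child is briefly
    # affected too -- that's the "hacky" part. A real version would also wait
    # for readability first and handle BlockingIOError.
    os.set_blocking(fd, False)
    try:
        return os.read(fd, max_bytes)
    finally:
        os.set_blocking(fd, True)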

mic006

comment created time in 21 days

issue comment python-trio/trio

Using one process's output as another's input creates challenges with non-blocking status

@mic006

Following your discussion, os.set_blocking(p1.stdout.fileno(), True) between the 2 trio.open_process can be used as a workaround.

Right, that's a good workaround for right now, but it's (1) awkward to force users to do that, (2) if you then try to access p1.stdout from the parent, then the parent will lock up, because Trio will try to access the fd in a non-blocking manner, but the fd will be in blocking mode. Not a problem for your use case b/c you're not going to use p1.stdout in the parent, but it would be nice if we could figure out an API design that didn't leave this footgun lying around.
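
For concreteness, the workaround looks roughly like this when chaining two processes (the producer/consumer command names are placeholders, and this sketch assumes the second child simply inherits the raw fd and reads until EOF, as in a normal shell pipeline):

import os
import subprocess

import trio

async def run_pipeline():
    # p1's stdout is a pipe; Trio puts the parent's end into non-blocking mode.
    p1 = await trio.open_process(["producer"], stdout=subprocess.PIPE)
    # The workaround: flip the fd back to blocking mode before the second
    # child inherits it, so "consumer" sees an ordinary blocking pipe.
    os.set_blocking(p1.stdout.fileno(), True)
    p2 = await trio.open_process(["consumer"], stdin=p1.stdout.fileno())
    await p2.wait()
    await p1.wait()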

Using socket.socketpair() instead of a pipe may be a problem for some specific applications (splice system call may not work for example).

splice should be OK, because we'd only need to use socketpair on macOS/BSDs, and those don't have splice :-). But yeah, in general I'm wary. Using a socket instead of a pipe should just work, but it's not a common thing to do so we might well discover a year later that there's some quirky program that objects for some obscure reason and then have to start over.

I guess one hacky but workable option would be to put p1.stdout into a mode where it toggles O_NONBLOCK on-and-off-again every time the parent process tries to use it.

Some other cases worth at least thinking about:

  • Piping multiple processes into one output process: (foo & bar) | baz
  • Once we have stdio support in the parent (#174), we will want to think about what happens if you pass those explicitly to a child process (instead of relying on implicit inheritance).
mic006

comment created time in 21 days

push event pypa/manylinux

pyup-bot

commit sha 5650cbc5d6f08f5534d4d02f73571fad9fa19124

Update setuptools from 49.2.0 to 50.0.3

view details

push time in 21 days

push event pypa/manylinux

pyup-bot

commit sha 51116f38ef5003967279465030fb292455fc9358

Update setuptools from 44.1.1 to 50.0.3

view details

push time in 21 days

create branch pypa/manylinux

branch : pyup-update-setuptools-44.1.1-to-50.0.3

created branch time in 21 days

push event pypa/manylinux

pyup-bot

commit sha 33f46186175cd2c09a78091a1681eec760f659c6

Update setuptools from 49.2.0 to 50.0.2

view details

push time in 21 days

push event pypa/manylinux

pyup-bot

commit sha 413bbbf95348595e4829d4a7f687d631cd1b7010

Update setuptools from 44.1.1 to 50.0.2

view details

push time in 21 days

create branch pypa/manylinux

branch : pyup-update-setuptools-44.1.1-to-50.0.2

created branch time in 21 days

push event pypa/manylinux

pyup-bot

commit sha ca604e4ebdde6e94e938eb1e019315f1dc0dd774

Update setuptools from 49.2.0 to 50.0.1

view details

push time in 22 days

push event pypa/manylinux

pyup-bot

commit sha f5c7988e16218a04832e8da82a6a5f3f238e8e3c

Update setuptools from 44.1.1 to 50.0.1

view details

push time in 22 days

create branch pypa/manylinux

branch : pyup-update-setuptools-44.1.1-to-50.0.1

created branch time in 22 days

issue comment python-trio/trio

A nursery might inject a Cancelled even when none of the tasks received a Cancelled

I'm happy for nursery exit to be an unconditional schedule point and unconditional lack of a cancellation point, if that sounds good to you.

Let's do it.

We could drop the schedule point if the nursery has ever started a task, but I'm not sure that helps any practical use case enough to pay for the weirdness -- dropping it is maybe better for performance, but that seems better served by having a more general mechanism for dropping schedule points if we've scheduled recently enough.

Yeah, let's not bother with trying to micro-optimize this right now. (And yeah, eliding schedule points if we've scheduled recently is probably a good idea, but orthogonal to the rest of this.)
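
As a tiny illustration of the semantics being agreed on here (this assumes the change described above lands; it is not a claim about the released behavior at the time of this thread):

with trio.move_on_after(0) as scope:   # deadline has already expired
    async with trio.open_nursery():
        pass                           # no child tasks started
# Nursery exit still yields to the scheduler (unconditional schedule point),
# but raises no Cancelled (it is not a cancellation point), so:
assert not scope.cancelled_caught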

vxgmichel

comment created time in 22 days

PR closed python-trio/trio

Support alternatives to trio.run() for pytest-trio

Draft for:

  • [x] Generally concluding the other PRs are going to work
    • Since moving trio_test() to pytest-trio, the other PRs no longer depend on this one.
  • [ ] Maybe deprecate trio_test(), maybe just delete it?

Relates to:

  • https://github.com/python-trio/pytest-trio/pull/105
  • https://github.com/altendky/qtrio/pull/152
+20 -14

6 comments

1 changed file

altendky

pr closed time in 22 days

pull request comment python-trio/trio

Support alternatives to trio.run() for pytest-trio

Yeah, idk, maybe we should deprecate and remove it, but we don't need to make a decision right now, now that this is off your critical path.

altendky

comment created time in 22 days

delete branch python-trio/trio

delete branch : dependabot/pip/black-20.8b1

delete time in 22 days

push event python-trio/trio

dependabot-preview[bot]

commit sha 6ba5ead12f804d827ff2b33c3f815ac4fab0432f

Bump black from 19.10b0 to 20.8b1 Bumps [black](https://github.com/psf/black) from 19.10b0 to 20.8b1. - [Release notes](https://github.com/psf/black/releases) - [Changelog](https://github.com/psf/black/blob/master/CHANGES.md) - [Commits](https://github.com/psf/black/commits) Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

view details

Quentin Pradet

commit sha a540ff3f258ab3c500e546eaab6ee41c84a7160f

Remove useless trailing commas black 20.8b1 reformats trailing commas to include only a single element per line: removing the trailing comma allows keeping the previous behavior.

view details

Quentin Pradet

commit sha f872235a81049f8eac2dd9c229023090291f48e9

Run black 20.8b1

view details

Quentin Pradet

commit sha c747376cdc0efd9cb8de3fc17060bcc796fc7b33

Generate sources after black update

view details

Nathaniel J. Smith

commit sha 4f5dc9aee287364559d96344e90af5f16773f75f

Merge pull request #1699 from python-trio/dependabot/pip/black-20.8b1 Bump black from 19.10b0 to 20.8b1

view details

push time in 22 days

PR merged python-trio/trio

Bump black from 19.10b0 to 20.8b1 [dependencies]

Bumps black from 19.10b0 to 20.8b1.

Changelog (sourced from black's changelog):

20.8b1 (Packaging)

  • explicitly depend on Click 7.1.2 or newer, as Black no longer works with versions older than 7.0

20.8b0 (Black)

  • re-implemented support for explicit trailing commas: now it works consistently within any bracket pair, including nested structures (#1288 and duplicates)
  • Black now reindents docstrings when reindenting the code around them (#1053)
  • Black now shows colored diffs (#1266)
  • Black is now packaged using 'py3' tagged wheels (#1388)
  • Black now supports Python 3.8 code, e.g. star expressions in return statements (#1121)
  • Black no longer normalizes capital R-string prefixes, as those have a community-accepted meaning (#1244)
  • Black now uses exit code 2 when the specified configuration file doesn't exist (#1361)
  • Black now works on AWS Lambda (#1141)
  • added a --force-exclude argument (#1032)
  • removed the deprecated --py36 option (#1236)
  • fixed --diff output when EOF is encountered (#526)
  • fixed # fmt: off handling around decorators (#560)
  • fixed unstable formatting with some # type: ignore comments (#1113)
  • fixed invalid removal on organizing brackets followed by indexing (#1575)
  • introduced black-primer, a CI tool that allows us to run regression tests against existing open-source users of Black (#1402)
  • introduced property-based fuzzing to our test suite based on Hypothesis and Hypothesmith (#1566)
  • implemented experimental, disabled-by-default long string rewrapping (#1132), hidden under a --experimental-string-processing flag while it's being worked on
Commits: see the full diff in the compare view at https://github.com/psf/black/commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.

If all status checks pass Dependabot will automatically merge this pull request.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
  • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
  • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
  • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language
  • @dependabot badge me will comment on this PR with code to add a "Dependabot enabled" badge to your readme

Additionally, you can set the following in your Dependabot dashboard:

  • Update frequency (including time of day and day of week)
  • Pull request limits (per update run and/or open at any time)
  • Automerge options (never/patch/minor, and dev/runtime dependencies)
  • Out-of-range updates (receive only lockfile updates, if desired)
  • Security updates (receive only security updates, if desired)


+102 -190

2 comments

43 changed files

dependabot-preview[bot]

pr closed time in 22 days

issue comment python-trio/trio

Using one process's output as another's input creates challenges with non-blocking status

Portability is an interesting point... on Windows this problem doesn't happen at all, and on Linux like you note there's a simple workaround (modulo any exotic environments where /dev and /proc are inaccessible). Does macOS support MSG_NOWAIT? I feel like I looked into this at some point and it might not exist there?

mic006

comment created time in 22 days

issue comment python-trio/trio

Using one process's output as another's input creates challenges with non-blocking status

...That might also let us simplify the fd creation code a bit, because stdin=PIPE would become equivalent to a, b = make_pipe(); stdin=b.

mic006

comment created time in 22 days

issue comment python-trio/trio

Using one process's output as another's input creates challenges with non-blocking status

Huh, that's an interesting case! I'm really not sure what the correct behavior here is.

What makes it tricky is that:

  • The parent process holds onto a reference to the pipe that it passes in to the second child, so in principle it could continue to read/write from it as well via its own FdStream object, and that FdStream requires that the fd be in non-blocking mode

  • In principle, you might want to explicitly set an fd to non-blocking before passing it in. This would only happen in a super-exotic case, but in general we do want to make super-exotic cases at least possible to handle. So maybe it would be bad to unconditionally remove the O_NONBLOCK flag when spawning a process?

One option: have a special case where if you pass in an FdStream object as a new process's stdin/stdout/stderr, then open_process sets it to blocking + closes the FdStream in the parent process. (And if someone wants the super-exotic case of passing in the fd in raw non-blocking mode, then they can pass a raw file descriptor instead.)

This feels... odd, but also convenient.

mic006

comment created time in 22 days

push event pypa/manylinux

pyup-bot

commit sha cd5dad865c6d203afbec85865b00c8faeebc3b9a

Update setuptools from 49.2.0 to 50.0.0

view details

push time in 23 days

push event pypa/manylinux

pyup-bot

commit sha 21263a08028a935aa603ec23aef393d629761d0a

Update setuptools from 44.1.1 to 50.0.0

view details

push time in 23 days

create branch pypa/manylinux

branch : pyup-update-setuptools-44.1.1-to-50.0.0

created branch time in 23 days

delete branch python-trio/trio

delete branch : dependabot/quentin/test-fix

delete time in a month

push event python-trio/trio

Quentin Pradet

commit sha cb1b5f3ef345a88a94fcdc1179e03a7f3b53ad31

Ignore dependabot pushes correctly dependabot opens a branch in the main repository and then opens a pull request, which triggers the same build twice. To avoid that, we had branches-ignore set to ignore the push. However the syntax was incorrect: since the branch name contains slashes, we have to use two stars to ignore it. See [0] for details on that syntax. [0]: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#filter-pattern-cheat-sheet

view details

Nathaniel J. Smith

commit sha c3b4696547ceca4b6c990f98dda6ce48fbbcdcf7

Merge pull request #1701 from python-trio/dependabot/quentin/test-fix Ignore dependabot pushes correctly

view details

push time in a month

PR merged python-trio/trio

Ignore dependabot pushes correctly

dependabot opens a branch in the main repository and then opens a pull request, which triggers the same build twice. To avoid that, we had branches-ignore set to ignore the push.

However the syntax was incorrect: since the branch name contains slashes, we have to use two stars to ignore it. See https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#filter-pattern-cheat-sheet for details on that syntax.

(I opened a branch in the main repository to mimic the way dependabot works; that way we'll see if the fix works.)

+1 -1

2 comments

1 changed file

pquentin

pr closed time in a month

issue comment python-trio/trio

Tracking issue: intermittent test failures

Transient failure in test_pipes on macOS, maybe just a too-aggressive timeout: https://github.com/python-trio/trio/pull/1701#issuecomment-681589841

njsmith

comment created time in a month

issue comment python-trio/pytest-trio

How to handle MultiErrors containing pytest magic exceptions?

idk, pytest doesn't interpret this as a skip either:

import pytest

@pytest.fixture
def broken():
    raise RuntimeError

def test_foo(broken):
    pytest.skip()

I feel like for a testing tool, when weird things happen, it's probably better to err on the side of dumping a traceback?

oremanj

comment created time in a month

pull request comment python-trio/trio

Support alternatives to trio.run() for pytest-trio

What would you say to copying this code into pytest-trio instead, and fixing it there? It's sort of been on my todo list anyway, and doing it now seems like it would simplify your life by consolidating all the changes into one project...

altendky

comment created time in a month

issue comment python-trio/trio

A nursery might inject a Cancelled even when none of the tasks received a Cancelled

I think of nurseries as a compositional primitive: if the elements (tasks) obey some property of interest, then their composition should too.

Yeah, that's also the intuition for why, maybe, an always-empty nursery should not do a checkpoint :-)

I.e. in general we expect these to be mostly equivalent:

await foo()

async with trio.open_nursery() as nursery:
    nursery.start_soon(foo)

Right now they're not, b/c the nursery injects a checkpoint. Injecting a schedule point is unavoidable, but more-or-less harmless, since schedule points have minimal effect on user-visible behavior. Injecting a cancel point is avoidable, though, and as this issue shows it causes practical problems, and it's just kind of surprising b/c of how it breaks the equivalence above.

Generalizing, we expect these to be more-or-less similar:

for f in fs:
    await f()

async with trio.open_nursery() as nursery:
    for f in fs:
        nursery.start_soon(f)

Obviously the latter runs the functions concurrently rather than sequentially, and there are some semantic differences that follow directly from that – in particular, it means that if two calls raise exceptions, we have to report them both, instead of the first one breaking out of the loop. And again it forces at least one schedule point. But again again, you wouldn't expect it to inject a cancellation point.

So by this argument, you would expect these two trivial cases to be equivalent:

pass  # don't call anything

async with trio.open_nursery() as nursery:
    pass  # don't start anything

...but of course that's exactly what putting a checkpoint in always-empty-nurseries would change, so we have two sets of principles that we all agree on but that contradict each other here.

I guess one way to look at it is: is a nursery block an "operation" (so should be a checkpoint), or just scaffolding for arranging other operations?

vxgmichel

comment created time in a month

issue comment python-trio/trio

A nursery might inject a Cancelled even when none of the tasks received a Cancelled

@oremanj How do you feel about this not containing any checkpoints?

async def noop():
    pass

async with trio.open_nursery() as nursery:
    nursery.start_soon(noop)

?

Every nursery that contains a child task has to contain at least one schedule point (proof: the nursery can't exit until the child task has been scheduled), so adding a schedule point to the __aexit__ for always-empty-nurseries does strengthen that guarantee to "every nursery contains at least one schedule point". But adding a cancel point to always-empty-nurseries doesn't.

I would hesitate to add a checkpoint to nursery entry because:

async def handler(stream: Stream):
    async with stream:
        ...

stream = await open_stream(...)
async with trio.open_nursery() as nursery:
    nursery.start_soon(handler, stream)

Right now, the above code guarantees that stream will be closed; if we add a checkpoint to nursery entry then that guarantee goes away.

vxgmichel

comment created time in a month

pull request comment python-trio/trio

Nursery: don't act as a checkpoint when not running anything

However, this doesn't fully solve #1457, because we'll still inject a cancel even if none of the child tasks raise Cancelled.

Here's the code that does that, at line 923:

            def aborted(raise_cancel):
                self._add_exc(capture(raise_cancel).error)

...I was going to say we could delete this line and simply ignore cancellation here, but that's not quite right. We do want to ignore Cancelled exceptions here, but if Trio is using cancellation to deliver a KeyboardInterrupt to the main task while it's blocked in __aexit__, then we still need that to be injected so that the nursery will cancel the other tasks and then raise KeyboardInterrupt. That should be fixed at some point Soon™, but in the mean time I guess we could work around this by adding a check for whether capture(raise_cancel).error is a Cancelled, and if so then ignore it.

catern

comment created time in a month

push event python-trio/pytest-trio

Kyle Altendorf

commit sha a8ee6daa4afa6cd28ea37b7fdd001f4462844a40

Fix a few missed .assert_outcomes(errors=) Given they are failed not failures, shouldn't these have been erred or errored not errors? Oh well.

view details

Nathaniel J. Smith

commit sha 80c1638f0ce69ac2f9fd358dab2dd21bc45e3ec2

Merge pull request #102 from altendky/more_pytest_6_updates Fix a few missed .assert_outcomes(errors=)

view details

push time in a month

PR merged python-trio/pytest-trio

Fix a few missed .assert_outcomes(errors=)

Given they are failed not failures, shouldn't these have been erred or errored not errors? Oh well.

I missed these in #99 because they were xfailed or commented out.

+3 -3

2 comments

3 changed files

altendky

pr closed time in a month

pull request comment python-trio/pytest-trio

Fix a few missed .assert_outcomes(errors=)

Thanks!

altendky

comment created time in a month

issue comment python-trio/trio

A nursery might inject a Cancelled even when none of the tasks received a Cancelled

This bit @catern today: https://gitter.im/python-trio/general?at=5f447dd3ec534f584fbcf7d7

I think this issue is correct, and nursery __aexit__ should only raise Cancelled if one of the child tasks raises Cancelled.

vxgmichel

comment created time in a month

Pull request review comment python-trio/outcome

Add typing with mypy

Reviewed diff hunk:

 import abc
+from typing import (
+    Any, AsyncGenerator, Awaitable, Callable, Generator, Generic, NoReturn,
+    TypeVar, cast
+)

 import attr

 from ._util import AlreadyUsedError, remove_tb_frames

 __all__ = ['Error', 'Outcome', 'Value', 'acapture', 'capture']

-
-def capture(sync_fn, *args, **kwargs):
-    """Run ``sync_fn(*args, **kwargs)`` and capture the result.
-
-    Returns:
-      Either a :class:`Value` or :class:`Error` as appropriate.
-
-    """
-    try:
-        return Value(sync_fn(*args, **kwargs))
-    except BaseException as exc:
-        exc = remove_tb_frames(exc, 1)
-        return Error(exc)
-
-
-async def acapture(async_fn, *args, **kwargs):
-    """Run ``await async_fn(*args, **kwargs)`` and capture the result.
-
-    Returns:
-      Either a :class:`Value` or :class:`Error` as appropriate.
-
-    """
-    try:
-        return Value(await async_fn(*args, **kwargs))
-    except BaseException as exc:
-        exc = remove_tb_frames(exc, 1)
-        return Error(exc)
+V = TypeVar('V')
+E = TypeVar('E', bound=BaseException)
+Y = TypeVar('Y')
+R = TypeVar('R')


-@attr.s(repr=False, init=False, slots=True)
-class Outcome(abc.ABC):
+@attr.s(repr=False, init=False)
+class Outcome(Generic[V, E]):

Another argument for only being generic on the value type: the core idea of an Outcome is that it represents the result of calling a Python function, and in the mypy type system, Python function types are only generic on the return value, not the errors they can raise.

RazerM

comment created time in a month

Pull request review event

issue comment python-trio/trio

Real time chat and other venues for community interaction beyond github issues

It turns out that there are some third-party apps that let you do very low-friction preview and embedding of Discord channels, without logging in:

  • https://titanembeds.com/

  • https://widgetbot.io/

I'm not sure why there are two, or what the tradeoffs are between them.

ghost

comment created time in a month

pull request comment python-trio/pytest-trio

Update python_requires to >= 3.6

Pinning tools do have lousy support for multiple platforms, but we don't have a lot of platform-specific dependencies, so we've managed to get away with it so far.

On Sat, Aug 22, 2020 at 4:47 PM Kyle Altendorf notifications@github.com wrote:

Oh sure, as far as alerting I just get an email... as the owner? I don't know. I just meant from the point of not having pinning. And pinning is hard what with no tooling for handling multiple platforms. At least I'm not familiar with anything yet other than my boots/romp combo but... meh.


altendky

comment created time in a month

push event python-trio/pytest-trio

Kyle Altendorf

commit sha b519b0f5fd4eddbc6e5bb543db8c904557cc8ac3

Update python_requires to >= 3.6 Trio >= 0.15.0 is required and that in turn requires Python >= 3.6

view details

Nathaniel J. Smith

commit sha 1ca48fa8f19c60b64497c8f27b51dfc8583a8d4b

Merge branch 'master' into python_requires_3.6

view details

Nathaniel J. Smith

commit sha 8599a2999f0cc6416408d160a7b0559f90e0676b

Merge pull request #98 from altendky/python_requires_3.6 Update python_requires to >= 3.6

view details

push time in a month

PR merged python-trio/pytest-trio

Update python_requires to >= 3.6

Trio >= 0.15.0 is required and that in turn requires Python >= 3.6

https://github.com/python-trio/pytest-trio/blob/cb90c329a621e57e1243c2747fe6406866f0d80b/setup.py https://github.com/python-trio/trio/blob/v0.15.0/setup.py#L97

+1 -1

6 comments

1 changed file

altendky

pr closed time in a month

push event altendky/pytest-trio

Kyle Altendorf

commit sha bfef85c948cfe6bf19c78baa2d497caf7b56d81a

Update .assert_outcomes() calls for pytest 6.0 https://github.com/python-trio/pytest-trio/pull/98#issuecomment-678699693

view details

Nathaniel J. Smith

commit sha b05c806fbc81504a4e9a1ef161ef3281ef637d97

Merge pull request #99 from altendky/update_for_pytest_6.0 Update .assert_outcomes() calls for pytest 6.0

view details

Nathaniel J. Smith

commit sha 1ca48fa8f19c60b64497c8f27b51dfc8583a8d4b

Merge branch 'master' into python_requires_3.6

view details

push time in a month

push event python-trio/pytest-trio

Kyle Altendorf

commit sha bfef85c948cfe6bf19c78baa2d497caf7b56d81a

Update .assert_outcomes() calls for pytest 6.0 https://github.com/python-trio/pytest-trio/pull/98#issuecomment-678699693

view details

Nathaniel J. Smith

commit sha b05c806fbc81504a4e9a1ef161ef3281ef637d97

Merge pull request #99 from altendky/update_for_pytest_6.0 Update .assert_outcomes() calls for pytest 6.0

view details

push time in a month

PR merged python-trio/pytest-trio

Update .assert_outcomes() calls for pytest 6.0

https://github.com/python-trio/pytest-trio/pull/98#issuecomment-678699693

+5 -5

1 comment

2 changed files

altendky

pr closed time in a month

pull request comment python-trio/pytest-trio

Update python_requires to >= 3.6

Side note: since we don't pin the deps, wouldn't just a nightly build cut it for catching stuff like this, instead of needing dependabot?

It could, but pinning + dependabot is better, because when something goes wrong then you end up with a failed PR that you can investigate and resolve at your leisure. Without pinning, things just catch on fire and you find out after the fact.

(Also, I'm not sure how you'd set up useful alerting for nightly builds.)

altendky

comment created time in a month

pull request comment python-trio/pytest-trio

Update python_requires to >= 3.6

Too bad we don't have dependabot on this repo, would have caught it much earlier when pytest made the release that broke things...

altendky

comment created time in a month

pull request comment python-trio/pytest-trio

Update python_requires to >= 3.6

Oh, and now CI is broken because of pytest changes (I guess). Fantastic...

altendky

comment created time in a month

issue closed python-trio/trio

receive_some on a ReceiveStream splits the header from content on a HTTP response ?

Using trio 0.13.0. I spent some time debugging this, because await stream.receive_some() only returns the HTTP header data.

stream = await trio.open_tcp_stream(backend_addr.hostname, backend_addr.port)
if backend_addr.use_ssl:
    ssl_context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ssl_context.load_default_certs()
    stream = trio.SSLStream(stream, ssl_context, server_hostname=backend_addr.hostname)
req = b"GET %s HTTP/1.1\r\nHost: %s \r\n\r\n" % (target, backend_addr.hostname.encode("idna"))
await stream.send_all(req)
rep = await stream.receive_some()
rep2 = await stream.receive_some()
print(req.decode("utf8"))
print(rep.decode("utf8"))
print(rep2.decode("utf8"))

gives out :

> GET /redir/foo/bar?a=1&b=2 HTTP/1.1
Host: example.com

> HTTP/1.1 200 OK
content-type: text/html;charset=utf-8
content-length: 1071
date: Fri, 21 Aug 2020 13:48:19 GMT
server: mysrv

> <!doctype html>\n<html lang="en">\n\n<head>...</body>\n\n</html>

With rep = await stream.receive_some(4096), the result is the same. Trio behaves as if it reads the stream internally and separates the header from the content, delivering the header and the content in two chunks. Is this normal or expected? Is it related to an underlying HTTP server mechanism? I looked for an explanation, and I feel this is Trio-related behavior.

closed time in a month

bitlogik

issue comment python-trio/trio

receive_some on a ReceiveStream splits the header from content on a HTTP response ?

ReceiveStream splits the data into arbitrary sized chunks.

In this case, what's probably happening is that the HTTP server is sending the headers in one packet, and then sending the body in a second packet. So the headers arrive all at once, and then the body arrives a little bit later. Trio gives you the data as soon as it arrives, so you get the headers first in one chunk, and then the body.

But, this is just a coincidence: your server, network, OS, or Trio might at any point decide to split things up differently if it's more convenient. So you should just assume that you're getting arbitrary chunks of bytes, and be prepared for them to be split up in arbitrary ways.
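
A minimal sketch of coding to that assumption, for the case where you just want everything up until the peer closes the connection (a real HTTP client would instead parse Content-Length or chunked framing to know when the body ends):

async def receive_all(stream):
    chunks = []
    while True:
        chunk = await stream.receive_some()
        if not chunk:  # empty bytes means the peer closed its end
            break
        chunks.append(chunk)
    return b"".join(chunks)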

bitlogik

comment created time in a month

issue comment pypa/manylinux

Inconsistent page-size on arm64

@geoffreyblake great analysis!

ianw

comment created time in a month

issue comment python-trio/trio

Bad interaction between Trio and C libraries that set their own SIGINT handler

It would probably also be possible to remove all instances of is_main_thread from Trio, by instead just trying to perform the operations and then handling failures.

Also, in practice this is currently having exactly zero effect on any users, so the best answer may be to defer any changes until it causes an actual problem.
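
A hedged sketch of the "just try it and handle failure" approach (not Trio's actual code): signal.signal() already raises ValueError when called off the main thread, so the up-front is_main_thread check could become error handling.

import signal

def try_install_sigint_handler(handler):
    # Attempt the operation instead of asking is_main_thread() first;
    # signal.signal() raises ValueError when called from any thread other
    # than the main thread of the main interpreter.
    try:
        return signal.signal(signal.SIGINT, handler)  # returns the old handler
    except ValueError:
        return None  # not the main thread, so leave signal handling alone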

Nikratio

comment created time in a month

issue comment python-trio/trio

Bad interaction between Trio and C libraries that set their own SIGINT handler

@graingert Interesting idea!

On Unix, file descriptors are signed ints, and -2 is never a valid file descriptor. So this definitely works reliably there.

On Windows, can -2 ever be a valid value? This gets obscure quickly.

According to https://docs.microsoft.com/en-us/windows/win32/winsock/socket-data-type-2, in theory a real socket handle could have the value -2:

Windows Sockets handles have no restrictions, other than that the value INVALID_SOCKET is not a valid socket. Socket handles may take any value in the range 0 to INVALID_SOCKET–1.

(Socket handles are unsigned, so -1 is the same as INVALID_SOCKET, which is the same as the maximum possible socket value.)

Could this ever happen in practice? On the vast majority of current systems, socket handles are kernel handles, and kernel handles are guaranteed to be a multiple of 4, so -2 can't happen. The only time socket handles aren't kernel handles is if you have some wacky "Layered Service Provider" installed. And even then I assume it's rare that an LSP will happen to pick -2 as a socket handle. (There's a lot more discussion of LSPs in #52.)

So... in principle it's possible for set_wakeup_fd(-2) to succeed, but it will be extraordinarily uncommon in practice.

I'm not quite sure what to do with that conclusion. All of the issues we're talking about here are very edge-case-y: the original code ran into a bug with old versions of gevent, which has since been fixed. The new code has a problem with C libraries that set their own SIGINT handlers, but we don't have any users actually complaining about this as a practical issue currently. The set_wakeup_fd(-2) trick is theoretically not guaranteed to work, but actually will in almost all cases. So the question is which of these edge-cases is the least bad.

Nikratio

comment created time in a month

issue comment pypa/auditwheel

add "--page-size=65536" to patchelf invocation

You could also potentially hack the compiler on the manylinux2014 images to enforce that (maybe by modifying the linker scripts?).

mattip

comment created time in a month

pull request comment agronholm/anyio

Pass along the received item to the next receiver if the task was cancelled

It's a general fact about cancellations that every operation has a "point of no return", where if the cancellation arrives after that then it gets deferred until the next checkpoint, or dropped if there is no next checkpoint. That's fine and correct – if you try to cancel something that already completed, then the cancellation should be a no-op.

agronholm

comment created time in a month

pull request comment agronholm/anyio

Pass along the received item to the next receiver if the task was cancelled

I'm not entirely sure what you mean, but anyio's cancel scopes should work (at least within anyio code) the same way as trio's do. When a scope is cancelled, any code that hits a checkpoint within that scope gets a cancellation exception raised.

"Stateful" cancellation means that if the code keeps executing checkpoints inside a cancelled scope, then it keeps getting repeated cancellation exceptions.

with trio.CancelScope() as cscope:
    cscope.cancel()
    try:
        await checkpoint()  # raises trio.Cancelled
    finally:
        await checkpoint()  # *also* raises trio.Cancelled, because we're still in a cancelled scope

So if you discard the cancellation in receive, then it doesn't really discard the cancellation, it just defers it until the next checkpoint. Which is the same thing that both Trio and asyncio do if the receive task gets cancelled on the same tick as it succeeds, and the cancellation happens just after the send: they try to deliver a cancellation to receive, but discover that it's too late, so they let receive return normally and then deliver the cancellation later.

(I guess this whole thread also suggests you might want some better primitives to implement things like this?)

agronholm

comment created time in a month

issue comment agronholm/anyio

Allow applications to handle KeyboardInterrupt

Note that Trio will probably switch to delivering some KeyboardInterrupts out of trio.run instead of inside the main task; see https://github.com/python-trio/trio/pull/1537 and linked issues

agronholm

comment created time in a month

pull request comment agronholm/anyio

Pass along the received item to the next receiver if the task was cancelled

It's common that a cancellation can arrive "too late" to take effect on the current operation, and has to wait to be delivered on the next operation instead. That's what happens in the case where the cancel arrives after the send but before receive returns.

....I guess I am assuming that anyio's cancellation is stateful, like Trio's, and I'm not sure if you managed to implement that on asyncio/curio or not.

agronholm

comment created time in a month

pull request comment agronholm/anyio

Pass along the received item to the next receiver if the task was cancelled

At a quick skim I didn't notice any reasons why this can't work. It does seem more complicated than the "un-cancel" approach discussed in the issue thread though, and also less correct (because of the thing where this allows the buffer to overflow).

agronholm

comment created time in a month

delete branch pypa/manylinux

delete branch : pyup-update-wheel-0.34.2-to-0.35.0

delete time in a month

push event pypa/manylinux

pyup-bot

commit sha 97d49f1f4dc60588539352684691896cf5b93602

Update wheel from 0.34.2 to 0.35.1

view details

push time in a month

push event pypa/manylinux

pyup-bot

commit sha 169dcf291ccca91554d820396d7eedfa0b780178

Update wheel from 0.34.2 to 0.35.1

view details

push time in a month

push event pypa/manylinux

pyup-bot

commit sha 81fdbd350350c5a6dc19a16c6bbeac7ab3f0a6a0

Update wheel from 0.34.2 to 0.35.1

view details

push time in a month

create branch pypa/manylinux

branch : pyup-update-wheel-0.34.2-to-0.35.1

created branch time in a month

push event pypa/manylinux

pyup-bot

commit sha 19ecbe32002449641389d71c7c785526edc2eeee

Update setuptools from 49.2.0 to 49.6.0

view details

push time in a month

push event pypa/manylinux

pyup-bot

commit sha 506090a967cc2dbc29b8536148d02923d87ef58a

Update setuptools from 44.1.1 to 49.6.0

view details

push time in a month

create branch pypa/manylinux

branch : pyup-update-setuptools-44.1.1-to-49.6.0

created branch time in a month

push event pypa/manylinux

pyup-bot

commit sha 7c0e15948bb9dd12dc35b00e4831a3e2090492c9

Update setuptools from 49.2.0 to 49.5.0

view details

push time in a month

push event pypa/manylinux

pyup-bot

commit sha 738fdaf597e6047322e4a2285acc48a23ef77b62

Update setuptools from 44.1.1 to 49.5.0

view details

push time in a month

create branch pypa/manylinux

branch : pyup-update-setuptools-44.1.1-to-49.5.0

created branch time in a month

push event pypa/manylinux

pyup-bot

commit sha f79473de38966f39b02b9c043076b426afdae9cd

Update wheel from 0.34.2 to 0.35.0

view details

push time in a month

push event pypa/manylinux

pyup-bot

commit sha c3a73adac5df922fa3d0fbb4816f1d8eee31d704

Update wheel from 0.34.2 to 0.35.0

view details

push time in a month

push event pypa/manylinux

pyup-bot

commit sha 9390a17646903422f7c891dc1156ba3f8aeb4e1b

Update wheel from 0.34.2 to 0.35.0

view details

push time in a month

create branch pypa/manylinux

branch : pyup-update-wheel-0.34.2-to-0.35.0

created branch time in a month

issue comment python-trio/trio

New API for handling concurrent exceptions

It sounds pretty cool, similar to the idea in https://github.com/python-trio/exceptiongroup/issues/5 (and linked issues). The current API is definitely pretty bad.

In particular, this comment has some details on exactly what semantics we've been thinking about, including all the annoying edge cases that can happen: https://github.com/python-trio/exceptiongroup/issues/5#issuecomment-460158408

Do you think you could write a short comparison of how your semantics compare to those?

efficiosoft

comment created time in a month

issue comment agronholm/anyio

Object stream randomly drops items

Maybe you could detect when you've been simultaneously awoken by a cancellation + getting an object to return, and "undo" the cancellation so the operation completes successfully?

mjwestcott

comment created time in a month
