Carl Lerche (carllerche), @buoyantio, Portland, OR. "I do stuff, I say stuff. It's serious business."

carllerche/eventual 237

A future & stream abstraction for Rust

carllerche/astaire-old 160

The basic Sinatra DSL inside of ActionController

alexcrichton/tokio-curl 101

Asynchronous HTTP client built on libcurl

carllerche/codegen 91

A Rust library providing a builder API to generate Rust code.

carllerche/eventual_io 48

Async IO based around Futures & Streams

carllerche/futures-pool 40

Work-stealing thread pool for executing futures

carllerche/better-future 21

Extra utilities for working with futures-rs

carllerche/automaton 16

Rust library for parsing regular languages

carllerche/futures-broadcast 13

Futures aware pub/sub channel

carllerche/astaire 12

WIP - Don't look!

push event tokio-rs/tokio

Deployment Bot (from Azure Pipelines)

commit sha 344846c8c53347a6320336fe67b9126f07356f72

Deploy Tokio API documentation

view details

push time in a day

pull request comment tokio-rs/tokio

Start migrating CI to Github Actions

@RadicalZephyr thanks for getting this started and @taiki-e for following up :+1:

RadicalZephyr

comment created time in a day

push event tokio-rs/bytes

Deployment Bot (from Azure Pipelines)

commit sha 2fe0a1440275396615b9460bf7b1dd82c99d8119

Deploy Bytes API documentation

view details

push time in 2 days

issue comment tokio-rs/tokio

Dropping threaded runtime with time and IO enabled results in memory leaks

Thanks for the investigation. I will try to dig in some shortly and will report back what I find.

edigaryev

comment created time in 2 days

issue closed tokio-rs/tokio

Support something like smol::Async

https://github.com/stjepang/smol/blob/master/src/async_io.rs#L107

closed time in 3 days

yihuang

issue comment tokio-rs/tokio

Support something like smol::Async

Closing as there is not enough information to support the request (use case, motivation, ....).

Feel free to comment if you disagree. I can reopen it.

yihuang

comment created time in 3 days

push event tokio-rs/bytes

Deployment Bot (from Azure Pipelines)

commit sha 31018f54a2406d5f0284334ce40ce397173063e5

Deploy Bytes API documentation

view details

push time in 4 days

push event tokio-rs/bytes

Deployment Bot (from Azure Pipelines)

commit sha 3faf40ce7d8261e3d83800702f7f80307652cdb0

Deploy Bytes API documentation

view details

push time in 4 days

push event tokio-rs/bytes

Deployment Bot (from Azure Pipelines)

commit sha c1119610a6f8b19bd6c8cb1170ef61b90798ec60

Deploy Bytes API documentation

view details

push time in 4 days

push event tokio-rs/bytes

Deployment Bot (from Azure Pipelines)

commit sha d8f4f3caf5266c809b6c0be5718f1d2d816cc1d9

Deploy Bytes API documentation

view details

push time in 4 days

push event tokio-rs/bytes

Deployment Bot (from Azure Pipelines)

commit sha 742f5cd3612bc6be5c19a62cd5156eff9123f05c

Deploy Bytes API documentation

view details

push time in 4 days

push event tokio-rs/bytes

Deployment Bot (from Azure Pipelines)

commit sha 6f86ca2e7146d37e80addc6f650fbafef37c4f45

Deploy Bytes API documentation

view details

push time in 4 days

issue comment tokio-rs/tokio

Dropping threaded runtime with time and IO enabled results in memory leaks

It is unclear to me if this is an actual bug or an unfortunate race. One way to check would be to track all std::thread::JoinHandle values returned from here and join them all before completing shutdown.

If valgrind still complains after, then something else is going on.
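A minimal sketch of that check, assuming the handles of the spawned threads can be collected during shutdown (the shutdown helper and its signature are hypothetical, not Tokio API):

use std::thread::JoinHandle;

// Join every thread the runtime spawned before declaring shutdown complete.
// If valgrind still reports the leak after this, the problem is elsewhere.
fn shutdown(handles: Vec<JoinHandle<()>>) {
    for handle in handles {
        handle.join().expect("worker thread panicked");
    }
}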

edigaryev

comment created time in 4 days

push event tokio-rs/tokio

Deployment Bot (from Azure Pipelines)

commit sha dc3062744569ec29b5cd140b48c948f984dd975e

Deploy Tokio API documentation

view details

push time in 4 days

push event tokio-rs/tokio

Deployment Bot (from Azure Pipelines)

commit sha d32f2499bd1e34861d9a0c36ca42364afbf7bb08

Deploy Tokio API documentation

view details

push time in 5 days

issue closed tokio-rs/tokio

`block_in_place` panic on runtime block_on

version: tokio 0.2.1, features: ["time", "io-util", "tcp", "dns", "rt-threaded", "blocking"]

fn main() {
    let mut rt = tokio::runtime::Runtime::new().unwrap();
    rt.block_on(async move {
        tokio::task::block_in_place(|| println!("test"))
    })
}

output:

thread 'main' panicked at 'can only call blocking when on Tokio runtime', src/libcore/option.rs:1190:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.

The cause of this problem is that ON_BLOCK is not set for the current thread.

ON_BLOCK for the other worker threads is set here:

https://github.com/tokio-rs/tokio/blob/632ee507ba08b4e50dd080609ffd5d0fb7db1163/tokio/src/runtime/builder.rs#L425

But the runtime does not set it for the current thread in the block_on function. Is this a feature or a bug?
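A workaround sketch (not part of the original report): on the threaded runtime, run the blocking section from a spawned task, where block_in_place is supported, instead of directly on the thread driving block_on.

fn main() {
    let mut rt = tokio::runtime::Runtime::new().unwrap();
    rt.block_on(async {
        // block_in_place is allowed on a runtime worker thread.
        tokio::spawn(async {
            tokio::task::block_in_place(|| println!("test"));
        })
        .await
        .unwrap();
    });
}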

closed time in 5 days

driftluo

issue comment tokio-rs/tokio

`block_in_place` panic on runtime block_on

I believe so :+1:

driftluo

comment created time in 5 days

Pull request review comment tokio-rs/tokio

coop: Undo budget decrement on Pending

 cfg_blocking_impl! {
 cfg_coop! {
     use std::task::{Context, Poll};
+    #[must_use]
+    pub(crate) struct RestoreOnPending(Cell<Option<Budget>>);
+
+    impl RestoreOnPending {
+        pub(crate) fn made_progress(&self) {

I don't know if it matters or not, but this could take self then do mem::forget(self) instead of tracking Cell<Option<...>>
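A minimal sketch of that alternative, using stand-in types for Tokio's internal Budget and CURRENT (assumptions, not the actual tokio internals): taking self by value lets made_progress skip the Drop impl via mem::forget, so no Cell<Option<Budget>> bookkeeping is needed.

use std::cell::Cell;

// Stand-ins for the internal types referenced in the diff above.
#[derive(Copy, Clone)]
struct Budget(u8);

thread_local! {
    static CURRENT: Cell<Budget> = Cell::new(Budget(128));
}

#[must_use]
struct RestoreOnPending(Budget);

impl RestoreOnPending {
    // Consuming self means Drop never runs: the budget stays decremented.
    fn made_progress(self) {
        std::mem::forget(self);
    }
}

impl Drop for RestoreOnPending {
    fn drop(&mut self) {
        // No progress was made: put the pre-poll budget back.
        let prev = self.0;
        CURRENT.with(|cell| cell.set(prev));
    }
}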

jonhoo

comment created time in 5 days

Pull request review comment tokio-rs/tokio

coop: Undo budget decrement on Pending

 cfg_blocking_impl! {
 cfg_coop! {
     use std::task::{Context, Poll};
+    #[must_use]
+    pub(crate) struct RestoreOnPending(Cell<Option<Budget>>);
+
+    impl RestoreOnPending {
+        pub(crate) fn made_progress(&self) {
+            self.0.set(None);
+        }
+    }
+
+    impl Drop for RestoreOnPending {
+        fn drop(&mut self) {
+            if let Some(budget) = self.0.get() {
+                CURRENT.with(|cell| {
+                    cell.set(budget);
+                });
+            }
+        }
+    }
+
     /// Returns `Poll::Pending` if the current task has exceeded its budget and should yield.
+    ///
+    /// When you call this method, the current budget is decremented. However, to ensure that
+    /// progress is made every time a task is polled, the budget is automatically restored to its
+    /// former value if the returned `RestoreOnPending` is dropped. It is the caller's
+    /// responsibility to call `RestoreOnPending::made_progress` if it made progress, to ensure
+    /// that the budget empties appropriately.
+    ///
+    /// Note that `RestoreOnPending` restores the budget **as it was before `poll_proceed`**.
+    /// Therefore, if the budget is _further_ adjusted between when `poll_proceed` returns and
+    /// `RestoreOnPending` is dropped, those adjustments are erased unless the caller indicates
+    /// that progress was made.
     #[inline]
-    pub(crate) fn poll_proceed(cx: &mut Context<'_>) -> Poll<()> {
+    pub(crate) fn poll_proceed(cx: &mut Context<'_>) -> Poll<RestoreOnPending> {
         CURRENT.with(|cell| {
             let mut budget = cell.get();
 
             if budget.decrement() {
+                let restore = RestoreOnPending(Cell::new(cell.get().if_dynamic()));

Why is if_dynamic needed here? Couldn't we unconditionally store the budget?

jonhoo

comment created time in 5 days

push event tokio-rs/mio

Deployment Bot (from Azure Pipelines)

commit sha d074c789756567952c60dac7b5e01236f0bf1201

Deploy Mio API documentation

view details

push time in 5 days

push event tokio-rs/mini-redis

Taiki Endo

commit sha 4c9edec0b183353a643b705ea0703a5f9658ef90

Remove unnecessary allocations (#49)

view details

push time in 6 days

PR merged tokio-rs/mini-redis

Remove unnecessary allocations
  • Vec<Box<Frame>> -> Vec<Frame> (Frame is small enough that no additional boxing is needed: std::mem::size_of::<Frame>() == 40)
  • remove a .to_string()
+14 -19

0 comment

4 changed files

taiki-e

pr closed time in 6 days

issue comment tokio-rs/tokio

Spawn a !Send future onto any thread

Your snippet has the future implementing Send?

The reason we don't support this right now is that any arbitrary thread can become blocked indefinitely (because of block_in_place), so a future pinned to a specific thread may never get a chance to execute.
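For contrast, a hedged sketch of what is supported today: running a !Send future on the caller's own thread with tokio::task::LocalSet (tokio 0.2, with the relevant rt features enabled), rather than pinning it to an arbitrary runtime thread.

use std::rc::Rc;
use tokio::task::LocalSet;

fn main() {
    let mut rt = tokio::runtime::Runtime::new().unwrap();
    let local = LocalSet::new();

    local.block_on(&mut rt, async {
        // Rc makes this future !Send, so it cannot go through tokio::spawn.
        let value = Rc::new(42);
        tokio::task::spawn_local(async move {
            println!("value = {}", value);
        })
        .await
        .unwrap();
    });
}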

Diggsey

comment created time in 7 days

push event tokio-rs/mio

Deployment Bot (from Azure Pipelines)

commit sha 677e6c9459ac838367186490e65dd669d07be0aa

Deploy Mio API documentation

view details

push time in 7 days

issue comment tokio-rs/tokio

Budgeting with a conditional check against an always ready stream blocks the condition

One option I can think of: combinators could check that, if Ready is returned, the budget has been decremented.

Nemo157

comment created time in 8 days

issue comment tokio-rs/tokio

Budgeting with a conditional check against an always ready stream blocks the condition

For reference, the example is:

use tokio::time::{self, Duration};

async fn some_async_work() {
    // do work
}

#[tokio::main]
async fn main() {
    let mut delay = time::delay_for(Duration::from_millis(50));

    while !delay.is_elapsed() {
        tokio::select! {
            _ = &mut delay, if !delay.is_elapsed() => {
                println!("operation timed out");
            }
            _ = some_async_work() => {
                println!("operation completed");
            }
        }
    }
}
Nemo157

comment created time in 8 days

issue comment tokio-rs/tokio

Budgeting with a conditional check against an always ready stream blocks the condition

I'm not 100% sure what the best strategy is here. Leaving select! out of budgeting was an explicit decision. The reasoning is that the sub-calls will already be charged against the budget. The example here is running a no-op, which could be representative of some arbitrary CPU computation.

That doesn't help generic combinators from outside the Tokio ecosystem, though.

At this point, resources external from Tokio are expected to provide their own budgeting strategies.

Nemo157

comment created time in 8 days


issue closed tokio-rs/tokio

Budgeting with a conditional check against an always ready stream blocks the condition

Version

0.2.20

Platform

playground

Description

Taking the first correct example from the tokio::select! docs and running it in the playground results in an infinite hang (playground with println commented out).

The tested code includes an additional bit of work in some_async_work (time::delay_for(Duration::from_millis(10)).await;), which avoids this issue.

This has the same underlying problem as https://github.com/rust-lang/futures-rs/issues/2157, the budgeting is only applied to the conditional and blocks it from ever becoming true, while the actual work continues running because it is not subject to the budgeting.
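For reference, the extra bit of work mentioned above is a drop-in change to some_async_work that gives the loop a real await point (tokio 0.2 time API):

use tokio::time::{self, Duration};

async fn some_async_work() {
    // Awaiting the timer yields back to the scheduler, so the delay branch
    // of the select! loop can make progress.
    time::delay_for(Duration::from_millis(10)).await;
}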

closed time in 8 days

Nemo157

issue comment tokio-rs/tokio

Budgeting with a conditional check against an always ready stream blocks the condition

This example is interesting... some_async_work is a no-op (or

Nemo157

comment created time in 8 days

push event tokio-rs/tokio

Deployment Bot (from Azure Pipelines)

commit sha 0ae83c209dd58d86eb0e2699d40a4c8c38e0c3c3

Deploy Tokio API documentation

view details

push time in 9 days

push event tokio-rs/tokio

Deployment Bot (from Azure Pipelines)

commit sha 0302696156cee3af5a771d74c7fb62488111599e

Deploy Tokio API documentation

view details

push time in 9 days

push event tokio-rs/mini-redis

Alice Ryhl

commit sha 9af44d6336d6bab4e151c81bbc953c5fb34ce202

Update Tokio to 0.2.21 (#46)

As Tokio version 0.2.21 contains a fix in the broadcast channel that removes a memory leak in mini-redis, I don't think 0.2.20 should be considered the minimum supported version, even if the codebase compiles with that version.

Refs: #38

view details

push time in 10 days

PR merged tokio-rs/mini-redis

Update Tokio to 0.2.21

As Tokio version 0.2.21 contains a fix in the broadcast channel that removes a memory leak in mini-redis, I don't think 0.2.20 should be considered the minimum supported version, even if the codebase compiles with that version.

Refs: #38

+3 -3

0 comment

2 changed files

Darksonn

pr closed time in 10 days

issue closed tokio-rs/mini-redis

Apparent memory leak of spawned tasks

I believe there is some form of memory leak, and this was discussed some in the Discord channel #tokio-users recently. This graph should illustrate it pretty well:

Memory leak graph

The orange 6.2MB are a single allocation by tracing-subscriber, the green are RawTasks that are allocated via spawn, and the blue are something in broadcast. The workload is just running while true; do target/release/cli get k; done for a while (perhaps after setting a value for k, but I believe it doesn't actually matter).

As far as I can tell, the spawned tasks have actually completed. A different implementation (using watch instead of broadcast) fixes it. Another project using broadcast does not have a similar issue. My hunch is that there is some kind of reference cycle, maybe in part because the shutdown coordination is bidirectional, but I have not looked more closely yet.

closed time in 10 days

jebrosen

issue comment tokio-rs/mini-redis

Apparent memory leak of spawned tasks

Fixed w/ the tokio release.

jebrosen

comment created time in 10 days

push event tokio-rs/mini-redis

Carl Lerche

commit sha a3f3dd867d46357206640c3cfa31996cb3f0c6e3

rename bins (#44)

This better supports `cargo install mini-redis`.

view details

push time in 10 days

PR merged tokio-rs/mini-redis

rename bins

This better supports cargo install mini-redis.

+8 -0

0 comment

1 changed file

carllerche

pr closed time in 10 days

push event tokio-rs/mini-redis

Carl Lerche

commit sha a372a3cb8c5bbd107fb878765d7e432e064e52d2

remove accidental Tokio 0.1 dependency (#43)

view details

push time in 10 days

issue comment tokio-rs/tokio

tokio::stream::timeout::Timeout is private

Thanks for the feedback.

Could you clarify the use cases in which you need to store Timeout in a struct?
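For context, a hedged sketch of the kind of use case in question, with the concrete Timeout type kept out of the struct by type-erasing it (illustrative only; the item type and names are assumptions):

use std::pin::Pin;
use tokio::stream::{Stream, StreamExt};
use tokio::time::Duration;

struct Incoming {
    // `StreamExt::timeout` returns a type we currently cannot name, so box it.
    inner: Pin<Box<dyn Stream<Item = Result<u32, tokio::time::Elapsed>> + Send>>,
}

impl Incoming {
    fn new(stream: impl Stream<Item = u32> + Send + 'static) -> Self {
        Incoming {
            inner: Box::pin(stream.timeout(Duration::from_secs(5))),
        }
    }
}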

krojew

comment created time in 10 days

pull request comment tokio-rs/mini-redis

use structopt instead of Clap

Ah, must have missed it. We can revisit switching once a final release is out. The change is trivial.

carllerche

comment created time in 11 days

push event tokio-rs/tokio

Deployment Bot (from Azure Pipelines)

commit sha 522e9960eaee7a0a42afac814d9e48814dbc4751

Deploy Tokio API documentation

view details

push time in 11 days

push event tokio-rs/mini-redis

Carl Lerche

commit sha 24edede36bb7998123bb64cbd619ec1a90d2d340

Update Cargo.toml

Co-authored-by: Lucio Franco <luciofranco14@gmail.com>

view details

push time in 11 days

push event tokio-rs/tokio

Deployment Bot (from Azure Pipelines)

commit sha 7b9d19c4f43dcfa993e5bd3e723b346adf2f3f34

Deploy Tokio API documentation

view details

push time in 11 days

issue comment tokio-rs/loom

Arc doesn't allow storing unsized (trait objects)

But we don't get compiler coercion, so there is no clear way to do it. We could add additional fns.

seanmonstar

comment created time in 12 days

PR opened tokio-rs/mini-redis

rename bins

This better supports cargo install mini-redis.

+9 -0

0 comment

1 changed file

pr created time in 12 days

create branch tokio-rs/mini-redis

branch : rename-bins

created branch time in 12 days

PR opened tokio-rs/mini-redis

remove accidental Tokio 0.1 dependency
+5 -349

0 comment

2 changed files

pr created time in 12 days

create branch tokio-rs/mini-redis

branch : prune-deps

created branch time in 12 days

push event tokio-rs/mini-redis

Carl Lerche

commit sha 2965f9a1cf5eb87e75526a896babc442f0e2be7e

Prepare v0.1.0 release (#42)

view details

push time in 12 days

PR merged tokio-rs/mini-redis

Prepare v0.1.0 release

Get an initial version of the crate released.

+8 -0

0 comment

1 changed file

carllerche

pr closed time in 12 days

PR opened tokio-rs/mini-redis

Prepare v0.1.0 release
+8 -0

0 comment

1 changed file

pr created time in 12 days

create branch tokio-rs/mini-redis

branch : release-0.1.0

created branch time in 12 days

issue comment tokio-rs/tokio

Dropping threaded runtime with time and IO enabled results in memory leaks

Out of curiosity, if you do the following, does the leak still happen?

fn main() {
    let rt = tokio::runtime::Builder::new()
        .threaded_scheduler()
        .max_threads(1)
        .enable_time()
        .enable_io()
        .build()
        .unwrap();

    rt.spawn(my_loop());

    std::thread::sleep(std::time::Duration::from_secs(1));

    // New code here
    drop(rt);

    std::thread::sleep(std::time::Duration::from_secs(1));
}
edigaryev

comment created time in 12 days

issue comment tokio-rs/tokio

Dropping threaded runtime with time and IO enabled results in memory leaks

It's going to be a racy repro.

I'm running it in a virtualbox VM.

edigaryev

comment created time in 12 days

push event tokio-rs/tokio

Deployment Bot (from Azure Pipelines)

commit sha 6db3012399d8feb088ddbe500851923ca76cebed

Deploy Tokio API documentation

view details

push time in 12 days

issue comment tokio-rs/tokio

Dropping threaded runtime with time and IO enabled results in memory leaks

One potential problem that I do see is that, unless we collect all the join handles of the spawned threads, there is potential for valgrind to complain.

edigaryev

comment created time in 12 days

issue comment tokio-rs/tokio

Dropping threaded runtime with time and IO enabled results in memory leaks

How reliably do you reproduce this? I have attempted to run the example but have not seen that error.

edigaryev

comment created time in 12 days

release tokio-rs/tokio

tokio-0.2.21

released time in 12 days

created tag tokio-rs/tokio

tag tokio-0.2.21

A runtime for writing reliable asynchronous applications with Rust. Provides I/O, networking, scheduling, timers, ...

created time in 12 days

push event tokio-rs/tokio

Carl Lerche

commit sha 02661ba30aed5d02e142212a323103d1bf84557b

chore: prepare v0.2.21 release (#2530)

view details

push time in 12 days

PR merged tokio-rs/tokio

chore: prepare v0.2.21 release (labels: A-tokio, C-maintenance)
+20 -3

3 comments

3 changed files

carllerche

pr closed time in 12 days

pull request comment tokio-rs/tokio

Added map associated function to MutexGuard that uses new MappedMutexGuard type

@sunjay I understand this, but when would one want it guarded by the mutex vs. `let v = &mut my_guard.field;`?
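For illustration, a hedged sketch of the kind of use case in question, using parking_lot's existing MutexGuard::map / MappedMutexGuard (the PR proposes an analogous API for tokio's MutexGuard): a function can hand back a guard that exposes only one field while keeping the lock held, which a plain `&mut guard.field` borrow cannot do once the local guard goes out of scope.

use parking_lot::{MappedMutexGuard, Mutex, MutexGuard};

struct State {
    counter: u64,
    name: String,
}

// The caller gets a guard that only exposes `counter`; the mutex stays locked
// until the returned guard is dropped.
fn lock_counter(state: &Mutex<State>) -> MappedMutexGuard<'_, u64> {
    MutexGuard::map(state.lock(), |s| &mut s.counter)
}

fn main() {
    let state = Mutex::new(State { counter: 0, name: "demo".into() });

    *lock_counter(&state) += 1;

    assert_eq!(state.lock().counter, 1);
    assert_eq!(state.lock().name, "demo");
}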

sunjay

comment created time in 12 days

issue comment tokio-rs/tokio

CancellationToken cfg

This is by design. As the API is still going to change, it is important that the end application opts into it. Adding friction for libraries is an explicit choice at this point.

That said, I am wondering if we can explore the API in a separate crate in order to make it "public".

MikailBag

comment created time in 12 days

pull request comment tokio-rs/tokio

Added map associated function to MutexGuard that uses new MappedMutexGuard type

What are the use cases for mapping a mutex guard? I don't think I've ever used it.

sunjay

comment created time in 12 days

push event tokio-rs/tokio

Carl Lerche

commit sha 2c07b1bb10c96f39ce271f578dee7f2bf369dc0b

update date

view details

push time in 12 days

Pull request review comment tokio-rs/tokio

chore: prepare v0.2.21 release

+# 0.2.21 (May 12, 2020)
# 0.2.21 (May 13, 2020)
carllerche

comment created time in 12 days

pull request comment tokio-rs/tokio

chore: prepare v0.2.21 release

@MikailBag It's flagged as unstable and requires opting in with a cfg flag. I am unsure how to document those features. Thoughts @Matthias247 ?

carllerche

comment created time in 12 days

push event tokio-rs/tokio

Carl Lerche

commit sha 6a688745acd97b5e274a8ab0c83812ecd5e8de70

Update tokio/CHANGELOG.md

Co-authored-by: Eliza Weisman <eliza@buoyant.io>

view details

push time in 13 days

PR opened tokio-rs/tokio

chore: prepare v0.2.21 release
+20 -3

0 comment

3 changed files

pr created time in 13 days

create branch tokio-rs/tokio

branch : release-0.2.21

created branch time in 13 days

push event tokio-rs/mini-redis

Carl Lerche

commit sha fdba12b964954b534be9747ad6846999765c4d64

use structopt instead of Clap (#41)

mini-redis uses the CLI derive pattern. Clap does not yet have a release supporting this pattern. Using structopt allows mini-redis to avoid git dependencies.

view details

push time in 13 days

PR merged tokio-rs/mini-redis

use structopt instead of Clap

mini-redis uses the CLI derive pattern. Clap does not yet have a release supporting this pattern. Using structopt allows mini-redis to avoid git dependencies.
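A minimal sketch of the CLI derive pattern being referred to, using structopt (the flags here are illustrative, not mini-redis's actual CLI):

use structopt::StructOpt;

#[derive(StructOpt, Debug)]
#[structopt(name = "mini-redis-cli")]
struct Cli {
    #[structopt(long, default_value = "127.0.0.1")]
    host: String,

    #[structopt(long, default_value = "6379")]
    port: u16,
}

fn main() {
    // Parse the command-line arguments into the derived struct.
    let cli = Cli::from_args();
    println!("{:?}", cli);
}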

+430 -458

0 comment

4 changed files

carllerche

pr closed time in 13 days

push event tokio-rs/mini-redis

Carl Lerche

commit sha 84f70862382e9c0232c63a732ab334cfa28002e9

use a released version of Tokio (#40)

view details

Carl Lerche

commit sha ca9987e299b59eed24b8d27b4fda3d4932712ea7

Merge remote-tracking branch 'origin/master' into use-structopt

view details

push time in 13 days

push event tokio-rs/mini-redis

Carl Lerche

commit sha 84f70862382e9c0232c63a732ab334cfa28002e9

use a released version of Tokio (#40)

view details

push time in 13 days

PR merged tokio-rs/mini-redis

use a released version of Tokio

At this point, there is no reason to use a git dependency.

+169 -154

0 comment

2 changed files

carllerche

pr closed time in 13 days

push event tokio-rs/tokio

Deployment Bot (from Azure Pipelines)

commit sha b70fecf62b224b8a5587c0ba31abedec42ce0a95

Deploy Tokio API documentation

view details

push time in 13 days

PR opened tokio-rs/mini-redis

use structopt instead of Clap

mini-redis uses the CLI derive pattern. Clap does not yet have a release supporting this pattern. Using structopt allows mini-redis to avoid git dependencies.

+54 -64

0 comment

4 changed files

pr created time in 13 days

create branch tokio-rs/mini-redis

branch : use-structopt

created branch time in 13 days

PR merged tokio-rs/tokio

sync: use intrusive list strategy for broadcast (labels: A-tokio, C-enhancement, M-sync)

Previously, in the broadcast channel, receiver wakers were passed to the sender via an atomic stack with allocated nodes. When a message was sent, the stack was drained. This caused a problem when many receivers pushed a waiter node then dropped. The waiter node remained indefinitely in cases where no values were sent.

This patch switches broadcast to use the intrusive linked-list waiter strategy used by `Notify` and `Semaphore`.

+455 -134

3 comments

4 changed files

carllerche

pr closed time in 13 days

PR opened tokio-rs/mini-redis

use a released version of Tokio

At this point, there is no reason to use a git dependency.

+169 -154

0 comment

2 changed files

pr created time in 13 days

push event tokio-rs/tokio

Carl Lerche

commit sha fb7dfcf4322b5e60604815aea91266b88f0b7823

sync: use intrusive list strategy for broadcast (#2509)

Previously, in the broadcast channel, receiver wakers were passed to the sender via an atomic stack with allocated nodes. When a message was sent, the stack was drained. This caused a problem when many receivers pushed a waiter node then dropped. The waiter node remained indefinitely in cases where no values were sent.

This patch switches broadcast to use the intrusive linked-list waiter strategy used by `Notify` and `Semaphore`.

view details

push time in 13 days

create branch tokio-rs/mini-redis

branch : update-tokio

created branch time in 13 days

Pull request review comment tokio-rs/tokio

sync: use intrusive list strategy for broadcast

 where
     /// }
     /// ```
     pub fn try_recv(&mut self) -> Result<T, TryRecvError> {
-        let guard = self.recv_ref()?;
+        let guard = self.recv_ref(None)?;
         guard.clone_value().ok_or(TryRecvError::Closed)
     }
 
-    #[doc(hidden)] // TODO: document
+    #[doc(hidden)]
+    #[deprecated(since = "0.2.21", note = "use async fn recv()")]
     pub fn poll_recv(&mut self, cx: &mut Context<'_>) -> Poll<Result<T, RecvError>> {
-        if let Some(value) = ok_empty(self.try_recv())? {
-            return Poll::Ready(Ok(value));
+        use Poll::{Pending, Ready};
+
+        // The borrow checker prohibits calling `self.poll_ref` while passing in
+        // a mutable ref to a field (as it should). To work around this,
+        // `waiter` is first *removed* from `self` then `poll_recv` is called.
+        //
+        // However, for safety, we must ensure that `waiter` is **not** dropped.
+        // It could be contained in the intrusive linked list. The `Receiver`
+        // drop implementation handles cleanup.
+        //
+        // The guard pattern is used to ensure that, on return, even due to
+        // panic, the waiter node is replaced on `self`.
+
+        struct Guard<'a, T> {
+            waiter: Option<Pin<Box<UnsafeCell<Waiter>>>>,
+            receiver: &'a mut Receiver<T>,
         }
 
-        self.register_waker(cx.waker());
+        impl<'a, T> Drop for Guard<'a, T> {
+            fn drop(&mut self) {
+                self.receiver.waiter = self.waiter.take();
+            }
+        }
 
-        if let Some(value) = ok_empty(self.try_recv())? {
-            Poll::Ready(Ok(value))
-        } else {
-            Poll::Pending
+        if self.waiter.is_none() {

I made this change (though tweaked to make the node allocation lazy).

carllerche

comment created time in 13 days

Pull request review comment tokio-rs/tokio

sync: use intrusive list strategy for broadcast

 where
     /// }
     /// ```
     pub fn try_recv(&mut self) -> Result<T, TryRecvError> {
-        let guard = self.recv_ref()?;
+        let guard = self.recv_ref(None)?;
         guard.clone_value().ok_or(TryRecvError::Closed)
     }
 
-    #[doc(hidden)] // TODO: document
+    #[doc(hidden)]
+    #[deprecated(since = "0.2.21", note = "use async fn recv()")]

Also, I'm not sure how you would add a poll based method that requires the receiver to be pinned. We are specifically avoiding having to pin the receiver. The problem is we have nowhere to store the waiter w/o the future struct.

carllerche

comment created time in 13 days

Pull request review comment tokio-rs/tokio

sync: use intrusive list strategy for broadcast

 where
     /// }
     /// ```
     pub fn try_recv(&mut self) -> Result<T, TryRecvError> {
-        let guard = self.recv_ref()?;
+        let guard = self.recv_ref(None)?;
         guard.clone_value().ok_or(TryRecvError::Closed)
     }
 
-    #[doc(hidden)] // TODO: document
+    #[doc(hidden)]
+    #[deprecated(since = "0.2.21", note = "use async fn recv()")]

Ah, I would rather hold off on adding a poll based fn for now and figure out a longer term strategy that we can apply universally across Tokio.

I wonder if we could provide a utility that handles the pinning across the board.

carllerche

comment created time in 13 days

push event tokio-rs/tokio

Carl Lerche

commit sha fc20209803f80cb6883fd19e266d02292a2b9ac9

fmt

view details

push time in 13 days

pull request comment tokio-rs/tokio

Added map associated function to MutexGuard that uses new MappedMutexGuard type

@udoprog could you elaborate on why the APIs differ between the two?

sunjay

comment created time in 13 days

Pull request review comment tokio-rs/tokio

sync: use intrusive list strategy for broadcast

 impl<T> Receiver<T> {
             // the slot lock.
             drop(slot);
 
-            let tail = self.shared.tail.lock().unwrap();
+            let mut tail = self.shared.tail.lock().unwrap();
 
             // Acquire slot lock again
             slot = self.shared.buffer[idx].read().unwrap();
 
-            // `tail.pos` points to the slot that the **next** send writes to. If
-            // the channel is closed, the previous slot is the oldest value.
-            let mut adjust = 0;
-            if tail.closed {
-                adjust = 1
-            }
-            let next = tail
-                .pos
-                .wrapping_sub(self.shared.buffer.len() as u64 + adjust);
+            // Make sure the position did not change. This could happen in the
+            // unlikely event that the buffer is wrapped between dropping the
+            // read lock and acquiring the tail lock.
+            if slot.pos != self.next {
+                let next_pos = slot.pos.wrapping_add(self.shared.buffer.len() as u64);
+
+                if next_pos == self.next {
+                    // Store the waker
+                    if let Some((waiter, waker)) = waiter {
+                        // Safety: called while locked.
+                        unsafe {
+                            // Only queue if not already queued
+                            waiter.with_mut(|ptr| {
+                                if !(*ptr).queued {
+                                    (*ptr).queued = true;
+                                    (*ptr).waker = Some(waker.clone());
+                                    tail.waiters.push_front(NonNull::new_unchecked(&mut *ptr));
+                                }
+                            });
+                        }
+                    }
+
+                    return Err(TryRecvError::Empty);
+                }
 
-            let missed = next.wrapping_sub(self.next);
+                // `tail.pos` points to the slot that the **next** send writes to. If
+                // the channel is closed, the previous slot is the oldest value.
+                let mut adjust = 0;

Receivers can't just be dropped. They have to iterate each remaining slot to decrement the remaining count. If the user wishes to get the latest values on error, they can drop the receiver and create a new one; this would be equivalent. I'm happy to revisit the details (I'm not entirely satisfied w/ the broadcast channel), but in 0.2 we must maintain behavior.
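A usage sketch of the drop-and-recreate pattern described above, against the public tokio 0.2 broadcast API (channel size and values are illustrative):

use tokio::sync::broadcast;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = broadcast::channel::<u32>(16);

    // Overflow the 16-slot buffer so the receiver lags.
    for i in 0..32u32 {
        tx.send(i).unwrap();
    }

    let res = rx.recv().await;
    if let Err(broadcast::RecvError::Lagged(skipped)) = res {
        println!("lagged, skipped {} values", skipped);
        // Instead of catching up in place, drop the receiver and create a
        // fresh one; it only sees values sent after this point.
        rx = tx.subscribe();
    }

    tx.send(99).unwrap();
    assert_eq!(rx.recv().await.unwrap(), 99);
}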

carllerche

comment created time in 13 days

Pull request review comment tokio-rs/tokio

sync: use intrusive list strategy for broadcast

 pub struct Receiver<T> {
     /// Next position to read from
     next: u64,
 
-    /// Waiter state
-    wait: Arc<WaitNode>,
+    /// Used to support the deprecated `poll_recv` fn
+    waiter: Option<Pin<Box<UnsafeCell<Waiter>>>>,

@Matthias247 Why not both? Primarily because Receiver is Unpin (a requirement).

carllerche

comment created time in 13 days

push event tokio-rs/tokio

Deployment Bot (from Azure Pipelines)

commit sha 091ed3dfb31627d813ad5aa208bba352282f406a

Deploy Tokio API documentation

view details

push time in 13 days

push event tokio-rs/tokio

Deployment Bot (from Azure Pipelines)

commit sha 386a8e4f7951c2a80f19f7782b1e30439129708a

Deploy Tokio API documentation

view details

push time in 13 days

push event tokio-rs/tokio

Deployment Bot (from Azure Pipelines)

commit sha 8576f1c43e3e9a052128e7095c3f6d92c7a2aa13

Deploy Tokio API documentation

view details

push time in 14 days

push event tokio-rs/tokio

Deployment Bot (from Azure Pipelines)

commit sha fe14606f12f603feefafa8c3a91bec366b2e6490

Deploy Tokio API documentation

view details

push time in 14 days

push event tokio-rs/tokio

Deployment Bot (from Azure Pipelines)

commit sha efba7609a217456089a7505b3abadeeb7e574765

Deploy Tokio API documentation

view details

push time in 14 days

push event tokio-rs/tokio

Deployment Bot (from Azure Pipelines)

commit sha 14bd754bf611bcb56065851e63a4620e8088c431

Deploy Tokio API documentation

view details

push time in 14 days

pull request comment tokio-rs/tokio

sync: use intrusive list strategy for broadcast

I have verified that https://github.com/tokio-rs/mini-redis/issues/38 is fixed w/ this PR.

carllerche

comment created time in 14 days

issue comment tokio-rs/mini-redis

Apparent memory leak of spawned tasks

I have been running the reproduction using the PR linked above and memory is stable for me.

jebrosen

comment created time in 14 days

issue comment tokio-rs/mini-redis

Apparent memory leak of spawned tasks

Open PR here: https://github.com/tokio-rs/tokio/pull/2509

jebrosen

comment created time in 14 days
