jean-airoldie Sessions should be nested with care.

jean-airoldie/humantime-serde 2

Serde support for the humantime crate

jean-airoldie/abi_stable_crates 0

Rust-to-Rust ffi,ffi-safe equivalents of std types,and creating libraries loaded at startup.

jean-airoldie/alexandrie 0

An alternative crate registry, implemented in Rust.

jean-airoldie/async-tungstenite 0

Async binding for Tungstenite, the Lightweight stream-based WebSocket implementation

jean-airoldie/bincode 0

A binary encoder / decoder implementation in Rust.

jean-airoldie/caps-rs 0

A pure-Rust library to work with Linux capabilities

jean-airoldie/cargo 0

The Rust package manager

jean-airoldie/chrono 0

Date and time library for Rust

jean-airoldie/CppCoreGuidelines 0

The C++ Core Guidelines are a set of tried-and-true guidelines, rules, and best practices about coding in C++

started time-rs/time

started time in 19 hours

issue closed antifuchs/governor

Quota exceeded when used in tokio executor

I'm getting behavior where the quota per minute that I set is exceeded in a multi-threaded environment. Specifically, the effective quota seems to be about 50% higher than the one I set.

Here is the minimal replication code.

[package]
name = "governor_rate_limit"
version = "0.1.0"
authors = ["jean-airoldie <25088801+jean-airoldie@users.noreply.github.com>"]
edition = "2018"

[dependencies]
governor = { git = "https://github.com/antifuchs/governor" }
tokio = { version = "0.2", features = ["rt-threaded", "time", "macros"] }
nonzero_ext = "0.2.0"
use {
    governor::{
        clock::DefaultClock,
        state::{direct::NotKeyed, InMemoryState},
        Quota, RateLimiter,
    },
    nonzero_ext::nonzero,
};

use std::{sync::{Arc, atomic::{AtomicU32, Ordering}}, time::{Duration, Instant}};

const LIMIT: u32 = 1_200;

async fn fetch_add_forever(
    limiter: Arc<RateLimiter<NotKeyed, InMemoryState, DefaultClock>>,
    count: Arc<AtomicU32>,
) {
    let n = nonzero!(10u32);
    loop {
        limiter.until_n_ready(n).await.unwrap();
        count.fetch_add(n.get(), Ordering::Relaxed);
    }
}

#[tokio::main]
async fn main() {
    let start = Instant::now();

    let quota = Quota::per_minute(nonzero!(LIMIT));
    let limiter = Arc::new(RateLimiter::direct(quota));
    let count = Arc::new(AtomicU32::new(0));

    for _ in 0..4 {
        let fut = fetch_add_forever(limiter.clone(), count.clone());
        tokio::spawn(fut);
    }

    // Wait some time for the spawned tasks to use the rate limiter,
    // but not enough that the quota is reset.
    tokio::time::delay_for(Duration::from_secs(30)).await;

    let value = count.load(Ordering::Acquire);

    // Make sure that the total time of the experiment is
    // less than a minute.
    let delta = Instant::now() - start;
    assert!(delta < Duration::from_secs(60));

    dbg!(value);

    assert!(value <= LIMIT);
}

Which prints

[src/main.rs:49] value = 1800
thread 'main' panicked at 'assertion failed: value <= LIMIT', src/main.rs:51:5

when executed.

closed time in 3 days

jean-airoldie
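
For reference, the 1800 printed above is consistent with a continuously replenishing (GCRA-style) quota. A back-of-the-envelope sketch of the arithmetic, assuming the limiter starts with the full burst available and replenishes max_burst / period = 1,200 cells per 60 s = 20 cells per second:

// Rough arithmetic only; the replenishment rate is an assumption about the
// algorithm, not a call into governor's API.
fn main() {
    let burst = 1_200u32; // Quota::per_minute(nonzero!(LIMIT))
    let replenished_per_sec = burst / 60; // 20 cells per second
    let elapsed_secs = 30u32; // the delay used in the reproduction above

    // Full initial burst plus everything replenished during the 30 s window.
    assert_eq!(burst + replenished_per_sec * elapsed_secs, 1_800);
}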

issue comment antifuchs/governor

Quota exceeded when used in tokio executor

I think the improved constructor does the trick so I'll close this.

jean-airoldie

comment created time in 3 days

issue closed antifuchs/governor

Test whether a cell can be allowed

I have a use case where I want to test if the rate limiter can currently allow n cells and then make a decision based on that. It is possible that I don't actually end up allowing cells through so I don't want to actually consume the quota.

Currently check & check_n don't simply check whether the cell can be allowed through, which I think is confusing naming.

I'm thinking check & check_n could be renamed to something like try_* & try_*_n so that we avoid confusion with the naming, and we could add check & check_n methods that simply perform a check. Ideally it would be consistent with until_ready, but try_until_ready would be pretty misleading naming. Maybe try_wait & try_wait_n?

closed time in 3 days

jean-airoldie

issue comment antifuchs/governor

Test whether a cell can be allowed

I changed my design completely so this is no longer needed. Anyway, like you mentioned, this is a better fit for a semaphore-based rate limiter.

jean-airoldie

comment created time in 3 days

issue comment snapview/tungstenite-rs

Allow sending of shared data

I'll give #104 another try in a couple of weeks. I'm thinking about writing a decent benchmark to measure the overhead of using Bytes over a Vec<u8>.

  • If the overhead is significant, I'm thinking of using @application-developer-DA's approach and

    [modify the] Message structure, so that the Message::Binary accepts some Payload type (and we can provide From implementations to support Bytes and Vec<u8>) [...]

  • If the overhead is minimal, then maybe replacing Vec<u8> directly with Bytes makes sense.

Note that both cases would lead to a breaking change.

jean-airoldie

comment created time in 16 days
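
A rough sketch of what the Payload type floated above could look like; the name Payload and the as_slice helper are assumptions for illustration, not tungstenite's actual API:

use bytes::Bytes;

/// Hypothetical payload type: an owned binary body that can be built from
/// either a `Vec<u8>` or a cheaply-cloneable `Bytes`.
#[derive(Debug, Clone)]
pub enum Payload {
    Vec(Vec<u8>),
    Shared(Bytes),
}

impl From<Vec<u8>> for Payload {
    fn from(v: Vec<u8>) -> Self {
        Payload::Vec(v)
    }
}

impl From<Bytes> for Payload {
    fn from(b: Bytes) -> Self {
        Payload::Shared(b)
    }
}

impl Payload {
    /// View the payload as a byte slice regardless of the backing storage.
    pub fn as_slice(&self) -> &[u8] {
        match self {
            Payload::Vec(v) => v.as_slice(),
            Payload::Shared(b) => b.as_ref(),
        }
    }
}

A Message::Binary variant accepting impl Into<Payload> could then take both Bytes and Vec<u8> without forcing a copy.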

issue closed rust-lang/futures-rs

Inconsistent naming between future & stream select & join

Currently Future::select, Future::select_all and the select! macro all follow the general logic of executing all futures and returning when the first future finishes.

Intuitively I assumed that Stream::select and Stream::select_all would be consistent and execute all streams until one stream terminates, then return. However, that's not the case: they both instead execute until all streams are completed, which is more consistent with the behavior of Future::join and Future::join_all, which execute until all futures are completed.

I think that the current Stream::select and Stream::select_all should probably be named Stream::join and Stream::join_all, to prevent potential confusion (which occurred in my case). Then again, I guess that's what I get for not fully reading the documentation.

closed time in 19 days

jean-airoldie
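
A small sketch contrasting the two behaviors discussed in the issue, using the futures 0.3 API: future::select resolves as soon as the first future completes, while stream::select merges the streams and only ends once both are exhausted.

use futures::{executor::block_on, future, stream, StreamExt};

fn main() {
    block_on(async {
        // future::select resolves with whichever future finishes first.
        let winner = future::select(
            Box::pin(async { 1u32 }),
            Box::pin(async { 2u32 }),
        )
        .await;
        let value = match winner {
            future::Either::Left((v, _)) | future::Either::Right((v, _)) => v,
        };
        assert!(value == 1 || value == 2);

        // stream::select keeps yielding until *both* streams are finished,
        // which is the behavior the issue found surprising.
        let merged: Vec<u32> =
            stream::select(stream::iter(vec![1, 2]), stream::iter(vec![3, 4]))
                .collect()
                .await;
        assert_eq!(merged.len(), 4);
    });
}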

issue comment rust-lang/futures-rs

Inconsistent naming between future & stream select & join

Yeah, it's clearly a duplicate, I'll go ahead and close this. Seems like it's unlikely to change.

jean-airoldie

comment created time in 19 days

pull request comment Matthias247/futures-intrusive

Added shared Mutex support

Yeah I indeed need to lock before the task starts. I guess I could use a shared semaphore with a capacity of one to achieve the same result.

jean-airoldie

comment created time in 25 days

pull request comment Matthias247/futures-intrusive

Added shared Mutex support

My use case is when an acquired guard gets moved to a standalone future that is spawned by a task executor (e.g. tokio::spawn). This couldn't be achieved if the guard had a lifetime associated with it. Might be too much of an exotic use case to justify the additional maintenance though.

jean-airoldie

comment created time in 25 days
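
A minimal sketch of the pattern described in the two comments above (a lock whose guard is not tied to a borrow and can be moved into a spawned task), approximated here with tokio's semaphore used with a capacity of one rather than the futures-intrusive API under discussion:

use std::sync::Arc;
use tokio::sync::Semaphore;

#[tokio::main]
async fn main() {
    // A semaphore with a single permit behaves like a mutex.
    let lock = Arc::new(Semaphore::new(1));

    // `acquire_owned` consumes a clone of the `Arc` and returns a permit with
    // no associated lifetime, so it can be moved into a standalone task.
    let permit = lock.clone().acquire_owned().await.unwrap();

    let handle = tokio::spawn(async move {
        // The "lock" is held by this task until the permit is dropped.
        drop(permit);
    });

    handle.await.unwrap();
    // The permit was released, so it can be acquired again.
    let _again = lock.acquire_owned().await.unwrap();
}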

started mmstick/cargo-deb

started time in a month

push event jean-airoldie/futures-intrusive

jean-airoldie

commit sha 5404a24b18f84a46334d47c19911c5bc2a8d0a45

Added shared Mutex support

view details

jean-airoldie

commit sha 798150ce9a432d061ebe55ed8ba967d144a4d3d8

Merge branch 'shared_mutex' of https://github.com/jean-airoldie/futures-intrusive into dev

view details

push time in a month

push event jean-airoldie/futures-intrusive

jean-airoldie

commit sha 5404a24b18f84a46334d47c19911c5bc2a8d0a45

Added shared Mutex support

view details

push time in a month

push event jean-airoldie/futures-intrusive

jean-airoldie

commit sha ef1c56a34c6ba7c0dceff9b2d45e674fcdc58756

Added shared Mutex support

view details

jean-airoldie

commit sha 10a773e2d8a2f3c9f7f8d7e36ae141fb1b88a4ee

Merge branch 'shared_mutex' of https://github.com/jean-airoldie/futures-intrusive into dev

view details

push time in a month

push event jean-airoldie/futures-intrusive

jean-airoldie

commit sha ef1c56a34c6ba7c0dceff9b2d45e674fcdc58756

Added shared Mutex support

view details

push time in a month

push event jean-airoldie/futures-intrusive

jean-airoldie

commit sha 0717c50ab5e06a6e49e92caa2d34be4a80736eb2

Added shared Mutex support

view details

jean-airoldie

commit sha 9d3934f4a30e377a4352a728374b471330824b3b

Merge branch 'shared_mutex' of https://github.com/jean-airoldie/futures-intrusive into dev

view details

push time in a month

push event jean-airoldie/futures-intrusive

jean-airoldie

commit sha 0717c50ab5e06a6e49e92caa2d34be4a80736eb2

Added shared Mutex support

view details

push time in a month

push event jean-airoldie/futures-intrusive

jean-airoldie

commit sha c6f33e264465589894aa28f66d955ab1b567e00d

Added shared Mutex support

view details

jean-airoldie

commit sha 9fe9395ee42f9c117e0c6a3296be69aa5d5b814d

Merge branch 'shared_mutex' of https://github.com/jean-airoldie/futures-intrusive into dev

view details

push time in a month

push event jean-airoldie/futures-intrusive

jean-airoldie

commit sha c6f33e264465589894aa28f66d955ab1b567e00d

Added shared Mutex support

view details

push time in a month

PR opened Matthias247/futures-intrusive

WIP: Added shared Mutex support

This adds support for a shared Mutex whose future & guard don't need any lifetime.

+219 -0

0 comment

2 changed files

pr created time in a month

create branch jean-airoldie/futures-intrusive

branch : shared_mutex

created branch time in a month

issue comment antifuchs/governor

Quota exceeded when used in tokio executor

Alright, so now I understand better thanks to your explanation & docs. The behavior I'm looking for is a quota that gets fully refilled every interval, as opposed to a burst quota that gets refilled continuously.

Do you think such behavior would be possible, or would it require a different algorithm?

jean-airoldie

comment created time in a month
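
A minimal sketch (not governor's API) of the "fully refilled every interval" behavior asked about above, to contrast it with a burst quota that replenishes continuously:

use std::time::{Duration, Instant};

/// Fixed-window limiter: the whole budget becomes available again at the
/// start of every interval.
struct FixedWindow {
    capacity: u32,
    remaining: u32,
    interval: Duration,
    window_start: Instant,
}

impl FixedWindow {
    fn new(capacity: u32, interval: Duration) -> Self {
        Self {
            capacity,
            remaining: capacity,
            interval,
            window_start: Instant::now(),
        }
    }

    /// Try to take `n` cells from the current window.
    fn check_n(&mut self, n: u32) -> bool {
        let now = Instant::now();
        if now.duration_since(self.window_start) >= self.interval {
            // A new interval started: refill the whole budget at once.
            self.window_start = now;
            self.remaining = self.capacity;
        }
        if n <= self.remaining {
            self.remaining -= n;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut limiter = FixedWindow::new(1_200, Duration::from_secs(60));
    assert!(limiter.check_n(1_200));
    // The budget is exhausted until the next 60-second window begins.
    assert!(!limiter.check_n(1));
}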

pull request comment antifuchs/governor

Improve Quota constructor clarity

This is cool. I think this is way clearer.

antifuchs

comment created time in a month

Pull request review comment antifuchs/governor

Improve Quota constructor clarity

 use std::time::Duration;
 
 /// A rate-limiting quota.
 ///
-/// Quotas are expressed in a positive number of "cells" (the number of positive decisions /
-/// allowed items) per unit of time.
+/// Quotas are expressed in a positive number of "cells" (the maximum number of positive decisions /
+/// allowed items until the rate limiter needs to replenish) and the amount of time for the rate
+/// limiter to replenish a single cell.
 ///
-/// Neither the number of cells nor the unit of time may be zero.
+/// Neither the number of cells nor the replenishment unit of time may be zero.
 ///
 /// # Burst sizes
 /// There are multiple ways of expressing the same quota: a quota given as `Quota::per_second(1)`
 /// allows, on average, the same number of cells through as a quota given as `Quota::per_minute(60)`.
 /// However, the quota of `Quota::per_minute(60)` has a burst size of 60 cells, meaning it is possible
 /// to accommodate 60 cells in one go, followed by a minute of waiting.
+///
+/// Burst size gets really important when you construct a rate limiter that should allow multiple
+/// elements through at one time (using [`RadeLimiter.check_n`](struct.RateLimiter.html#method.check_n)

Should be RateLimiter.check_n here.

antifuchs

comment created time in a month

issue opened antifuchs/governor

Quota exceeded when used in tokio executor

I'm getting behavior where the quota per minute that I set is exceeded in a multi-threaded environment. Specifically, the effective quota seems to be about 50% higher than the one I set.

Here is the minimal replication code.

[package]
name = "governor_rate_limit"
version = "0.1.0"
authors = ["jean-airoldie <25088801+jean-airoldie@users.noreply.github.com>"]
edition = "2018"

[dependencies]
governor = { git = "https://github.com/antifuchs/governor" }
tokio = { version = "0.2", features = ["rt-threaded", "time", "macros"] }
nonzero_ext = "0.2.0"
use {
    governor::{
        clock::DefaultClock,
        state::{direct::NotKeyed, InMemoryState},
        Quota, RateLimiter,
    },
    nonzero_ext::nonzero,
};

use std::{sync::{Arc, atomic::{AtomicU32, Ordering}}, time::{Duration, Instant}};

const LIMIT: u32 = 1_200;

async fn fetch_add_forever(
    limiter: Arc<RateLimiter<NotKeyed, InMemoryState, DefaultClock>>,
    count: Arc<AtomicU32>,
) {
    let n = nonzero!(10u32);
    loop {
        limiter.until_n_ready(n).await.unwrap();
        count.fetch_add(n.get(), Ordering::Relaxed);
    }
}

#[tokio::main]
async fn main() {
    let start = Instant::now();

    let quota = Quota::per_minute(nonzero!(LIMIT));
    let limiter = Arc::new(RateLimiter::direct(quota));
    let count = Arc::new(AtomicU32::new(0));

    for _ in 0..4 {
        let fut = fetch_add_forever(limiter.clone(), count.clone());
        tokio::spawn(fut);
    }

    // Wait some time for the spawned tasks to use the rate limiter,
    // but not enough that the quota is reset.
    tokio::time::delay_for(Duration::from_secs(30)).await;

    let value = count.load(Ordering::Acquire);

    // Make sure that the total time of the experiment is
    // less than a minute.
    let delta = Instant::now() - start;
    assert!(delta < Duration::from_secs(60));

    dbg!(value);

    assert!(value <= LIMIT);
}

Which prints

[src/main.rs:49] value = 1800
thread 'main' panicked at 'assertion failed: value <= LIMIT', src/main.rs:51:5

when executed.

created time in a month

issue comment jean-airoldie/libzmq-rs

thread 'main' panicked at 'invalid option'

I know about putting { git=... } in Cargo.toml but that won't let me publish.

If you put libzmq = { git = "https://github.com/jean-airoldie/libzmq-rs" } this will use the latest master branch commit as a dependency. Then you can run cargo update, which will fetch the dependencies, etc.

L1AN0

comment created time in a month

issue comment jean-airoldie/libzmq-rs

thread 'main' panicked at 'invalid option'

I'm also running into this bug with 0.2.1 which is expected as it's older than this bug report.

My bad, seems like I didn't actually publish a release. I'll do that right now.

L1AN0

comment created time in a month

push event jean-airoldie/libzmq-rs

jean-airoldie

commit sha 6647dd9eeea1b23cb02c6a7eba2a3d6740fa6100

Prepare for release 0.2.2

view details

push time in a month

pull request comment Matthias247/futures-intrusive

Added cancel method for the shared ChanncelSendFuture version

I'm not the biggest fan of using Pin::into_inner_unchecked for the cancel tests, but the only other alternative I could think of would be to use the raw Pin::new_unchecked API so that the original future does not get shadowed by pin_mut.

jean-airoldie

comment created time in a month

Pull request review comment Matthias247/futures-intrusive

Added cancel method for the shared ChanncelSendFuture version

 mod if_std {
             assert_eq!(count, 11);
         }
     }
+

Right, just me being lazy. I'll fix that.

jean-airoldie

comment created time in a month

push event jean-airoldie/futures-intrusive

jean-airoldie

commit sha cf7e8a043f161fa2030df403ab4efe57e275606d

Added cancel method for the shared ChanncelSendFuture version

view details

push time in a month

push event jean-airoldie/futures-intrusive

jean-airoldie

commit sha 39ad28f776df3ceb03ff4b2a36d3aaae42fabcac

Update src/channel/channel_future.rs Co-Authored-By: Matthias Einwag <matthias.einwag@live.com>

view details

push time in a month

delete branch jean-airoldie/futures-intrusive

delete branch : state_try_recv

delete time in a month

pull request comment Matthias247/futures-intrusive

Added try_receive method to state broadcast channel

@Matthias247 Interesting idea about the Stream. I might work on that next.

jean-airoldie

comment created time in a month

push event jean-airoldie/futures-intrusive

jean-airoldie

commit sha c46cc52d47ae8e037fb4b8a09f37aa4ee56ff249

Add stream method to channel (#12) This method returns a type which implements Future::Stream.

view details

Steven Fackler

commit sha 9687ce93924cd74e87d387cd30509d5064900cfc

Upgrade parking_lot (#22)

view details

Matthias Einwag

commit sha 911fb12ba9fa96a597e52e15d1557c68790f850d

Fix benchmarks

view details

Matthias Einwag

commit sha c716c5d64744107fe0925a6f9c18d0c03908980b

Upgrade tokio and async-std dev dependencies

view details

Matthias Einwag

commit sha fb4f15f34f549615bd88d6f3c62c2c3d9c594944

Use double linked lists in sync primitives This increases the performance when the primitives are used from a big amount of tasks, at the cost of a minor memory increase (+1 pointer size per Future). Fixes #24

view details

Matthias Einwag

commit sha 24cc776038cbd4d4906604e0d7f671e1499cfccf

Reformat code for rust 1.40

view details

Matthias Einwag

commit sha f499d45e07e4238d8dea5774074362ce6496da33

Use double linked list everywhere

view details

Matthias Einwag

commit sha b353e934fea070aa2c6d61bf38630547c1405081

Update dev-dependency versions

view details

Matthias Einwag

commit sha aab4c384123f19ced38879450a643465547b150a

Yield in Mutex benchmarks Async mutexes are intended to guard over longer time durations, which lead to blocking a task. In order to simulate the tasks being blocked for a longer time, the benchmark gets changed to let the task which holds a Mutex to yield to the executor for a certain amount of time.

view details

Matthias Einwag

commit sha ff7eb703d978a8842d45a335ad8507c928aef147

Wake Waker in Mutex outside of synchronous Mutex Waking a Waker inside a synchronous Mutex can lead the next task to get scheduled very quickly and immediately getting blocked on the synchronous Mutex. In this case the thread executing this task might need to get yield and scheduled again later. In order to avoid the unnecessary scheduling operation we return the Waker and wake it outside of the synchronous Mutex. In this case the next task will have an increased chance to grab the synchronous Mutex.

view details

jean-airoldie

commit sha 603fa6fbf8c2a8959c5ad366d408fded9246e619

Updated mpmc Channel::close method signature (#26) The Channel::close method now returns a `CloseStatus`. This allows the user to know whether the channel was already closed or not.

view details

jean-airoldie

commit sha f26a82bdeff282736fe9afde123f3b3884b0fa8d

Updated {Oneshot,Broadcast}Channel::close signature (#27) Oneshot & broadcast channels now return `CloseStatus` when `close` is called, which allows the user to know if the channel was already closed.

view details

Matthias Einwag

commit sha 2ff9847aa0fdb81664a2296225d7d58afe2a57c6

Wake tasks waiting on the channel outside of mutex This change wakes sending and receiving tasks waiting on the MPMC channel to get ready outside of the synchronous Mutex. This increases the performance of the channel.

view details

Matthias Einwag

commit sha 181dbb11739202e1c7fc6185536505cff397444c

Add a Semaphore benchmark

view details

jean-airoldie

commit sha eb4e8dbff3ab58bb97a175021979717c84de73ad

Correctly export CloseStatus in channel root (#29)

view details

Matthias Einwag

commit sha d1a38ec21d0f53f49c5d722dabe3933926f5ee8b

Prepare for version 0.3 release

view details

Matthias Einwag

commit sha d527794c53d1d9e254c5b2f7a53eda5c85ff724c

Use a debug_assert in linked list to check consistency

view details

Matthias Einwag

commit sha 4559fa204576b5fa91abd42fde7a6340021afba9

Fix rustfmt for Rust 1.41

view details

Matthias Einwag

commit sha e96c49be2d16ba656fb88f69241bf780fa588bba

Improve safety docs in linked list

view details

jean-airoldie

commit sha 796ea8a786f3ae7b0a3dc6dcb3364096419b5d4a

Added cancel method for the shared ChanncelSendFuture version

view details

push time in 2 months

PR opened Matthias247/futures-intrusive

Added cancel method for the shared ChanncelSendFuture version

This adds the ability to call cancel on the shared version of ChannelSendFuture.

I added some tests similar to the non-shared channel.

+239 -0

0 comment

2 changed files

pr created time in 2 months

create branch jean-airoldie/futures-intrusive

branch : cancel_send_future

created branch time in 2 months

pull request comment Matthias247/futures-intrusive

Added try_receive method to state broadcast channel

Also, as a side note, I'm using a wrapped version of the state broadcast API. I found that replacing the StateId::new() method with a method on the receiver made this clearer.

/// The order of a value in a monotonically increasing sequence.
#[derive(Debug, Copy, Clone, Eq, PartialEq)]
pub struct SeqId(StateId);

/// An update to a state produced by a `StateSx`.
pub struct State<T> {
    /// The sequence ID associated with the value.
    pub id: SeqId,

    /// The actual value.
    pub value: T,
}

#[derive(Debug, Clone)]
pub struct StateRx<T>(StateReceiver<T>)
where
    T: 'static + Clone;

impl<T> StateRx<T>
where
    T: 'static + Clone,
{
    /// Returns a future that gets fulfilled as soon as a value that occurred after
    /// the given sequence is emitted.
    pub fn after(&self, id: SeqId) -> StateFuture<T> {
        StateFuture::new(self.0.receive(id.0))
    }

    /// Returns the latest value whose `StateId` occurred after the one provided,
    /// if any.
    pub fn try_after(&self, id: SeqId) -> Option<State<T>> {
        self.0.try_receive(id.0).map(|(id, value)| State { id: SeqId(id), value })
    }

    /// Returns a future that resolves with the latest emitted value.
    ///
    /// If no value has been emitted yet, this will wait for it.
    pub fn latest(&self) -> StateFuture<T> {
        StateFuture::new(self.0.receive(StateId::new()))
    }

    /// Returns the latest value contained in the channel, if any.
    pub fn try_latest(&self) -> Option<State<T>> {
        self.0.try_receive(StateId::new()).map(|(id, value)| State { id: SeqId(id), value })
    }
}
jean-airoldie

comment created time in 2 months

push event jean-airoldie/futures-intrusive

jean-airoldie

commit sha c8603da5bc36d373fda1216fc3aab1f01f4c7db3

Added try_receive method to state broadcast channel

view details

push time in 2 months

PR opened Matthias247/futures-intrusive

Added try_receive method to state broadcast channel

This is useful when you want to check synchronously if you received a new update.

Normally you would do this via receiver.receive(StateId::new()).await and then check if the StateId is greater than the previous, but this might block if there is no value in the channel.

+76 -19

0 comment

3 changed files

pr created time in 2 months

create branch jean-airoldie/futures-intrusive

branch : state_try_recv

created branch time in 2 months

delete branch jean-airoldie/libzmq-rs

delete branch : default_sockopt

delete time in 2 months

push event jean-airoldie/libzmq-rs

jean-airoldie

commit sha e816495732848211252b962de966f7a8fdad8ba5

Document default socket options in `Socket` trait

view details

jean-airoldie

commit sha 760bf91f76476c1d5b1ee5c70f740a1db53b0716

Merge pull request #130 from jean-airoldie/default_sockopt Document default socket options in `Socket` trait

view details

push time in 2 months

PR merged jean-airoldie/libzmq-rs

Document default socket options in `Socket` trait

Documents the unexpected behavior described in #129.

+6 -0

0 comment

1 changed file

jean-airoldie

pr closed time in 2 months

PR opened jean-airoldie/libzmq-rs

Document default socket options in `Socket` trait

Documents the unexpected behavior described in #129.

+6 -0

0 comment

1 changed file

pr created time in 2 months

create branch jean-airoldie/libzmq-rs

branch : default_sockopt

created branch time in 2 months

issue comment jean-airoldie/libzmq-rs

libzmq does not ensure all messages are sent before socket is dropped

I'll add some clarification on the default behavior in the libzmq::prelude::Socket doc.

L1AN0

comment created time in 2 months

issue comment jean-airoldie/libzmq-rs

libzmq does not ensure all messages are sent before socket is dropped

Currently the linger period is disabled for all sockets using the ZMQ_BLOCKY socket option. I swear I documented the default socket options that I use somewhere, but I can't find it in the doc.

https://github.com/jean-airoldie/libzmq-rs/blob/c2ea5bcaac736e0e16c38e25bb07dbbdf2ce7723/libzmq/src/ctx.rs#L365

If you have a valid use case, we could expose this API publicly.

However I don't think that your example is a valid use case because of the nature of the gather and scatter transport. ZMQ provides no synchronization mechanism for PUB & SUB nor GATHER & SCATTER because "the data stream is [considered] infinite and has no start and no end and therefore cannot be used for reliable messaging". That's why messages are dropped on a slow subscriber (albeit that behavior can be configured).

The way the ZMQ guide suggests doing the sync is to either sleep a given amount of time (which is a terrible idea) or use another reliable socket to do the sync (which I think is equally terrible). I think the ideal solution would be for the Scatter socket to be notified when a Gather socket subscribes, but that's out of the scope of ZMQ I'm afraid.

L1AN0

comment created time in 2 months

started spacejam/rio

started time in 2 months

issue comment antifuchs/governor

Test whether a cell can be allowed

[...] you seem to be talking about a concurrency limiter

Kinda, but not quite. Essentially I have a weird use case where I want to be running multiple requests in close proximity (maybe concurrently). So basically I want to be able to know ahead of time if I can run 10 requests without busting my API limit. If I can, I then proceed to send up to 10 requests. If I don't send the full 10 requests, I don't want them to count towards my API limit (since it's a very limited resource in my case).

So the kind of interface I was talking about is:

  1. Try to reserve 10 requests so that other tasks don't use them
  2. Use the reserved requests, either completely or partially
  3. Return the unused allocations so that they don't count towards the rate limit.

I think the interface you are proposing would work as long as acquiring the handle both acquires the semaphore and counts towards the quota (or at least reserves some quota) that can later be used partially or entirely.

jean-airoldie

comment created time in 2 months
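
A minimal, single-threaded sketch (not governor's API) of the three-step reserve / consume / refund interface described in the comment above:

/// A budget of cells that can be reserved ahead of time.
struct Budget {
    remaining: u32,
}

/// A reservation; whatever is not consumed is refunded on drop.
struct Permit<'a> {
    budget: &'a mut Budget,
    reserved: u32,
}

impl Budget {
    /// Step 1: reserve `n` cells so that other users cannot take them.
    fn reserve(&mut self, n: u32) -> Option<Permit<'_>> {
        if n <= self.remaining {
            self.remaining -= n;
            Some(Permit { budget: self, reserved: n })
        } else {
            None
        }
    }
}

impl Permit<'_> {
    /// Step 2: consume part of the reservation; consumed cells stay spent.
    fn consume(&mut self, n: u32) {
        assert!(n <= self.reserved);
        self.reserved -= n;
    }
}

impl Drop for Permit<'_> {
    /// Step 3: refund whatever was reserved but never consumed.
    fn drop(&mut self) {
        self.budget.remaining += self.reserved;
    }
}

fn main() {
    let mut budget = Budget { remaining: 100 };
    {
        let mut permit = budget.reserve(10).unwrap();
        permit.consume(5);
        // Dropping the permit refunds the 5 unused cells.
    }
    assert_eq!(budget.remaining, 95);
}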

push event jean-airoldie/futures-intrusive

jean-airoldie

commit sha eb4e8dbff3ab58bb97a175021979717c84de73ad

Correctly export CloseStatus in channel root (#29)

view details

Matthias Einwag

commit sha d1a38ec21d0f53f49c5d722dabe3933926f5ee8b

Prepare for version 0.3 release

view details

Matthias Einwag

commit sha d527794c53d1d9e254c5b2f7a53eda5c85ff724c

Use a debug_assert in linked list to check consistency

view details

Matthias Einwag

commit sha 4559fa204576b5fa91abd42fde7a6340021afba9

Fix rustfmt for Rust 1.41

view details

Matthias Einwag

commit sha e96c49be2d16ba656fb88f69241bf780fa588bba

Improve safety docs in linked list

view details

push time in 2 months

issue comment antifuchs/governor

Test whether a cell can be allowed

I don't know how complex it would be to implement, or if this is even possible, but this could work:

// Acquire a permit with 10 allocations. This counts as a burst of 10 and also
// reduces the quota / max capacity by 10 cells, until the permit is returned.
// This allows the user to reserve allocations ahead of time for long periods.
let permit = limiter.acquire(10).await?;
// The allocation of 10 cells are not returned, but the 10 quotas are.
drop(permit);
// You could also acquire allocations but return them.
let permit = limiter.acquire(10).await?;
// Consume 5 of the 10 allocations. This raises the rate limiter's
// max capacity by 5.
permit.consume(5);
// We try to return unused allocations to the rate limiter, consuming the permit.
// This raises the limiter's max capacity by the 5 left. If the permit is not expired,
// this also returns the unused allocations.
assert!(!permit.refund().is_expired());

Although this might be too complex of an interface.

jean-airoldie

comment created time in 2 months

issue comment antifuchs/governor

Test whether a cell can be allowed

That is, have limit checks also update the limit (and don't let "peek" operations interfere with the thread-safe nature of the limiter).

Right that makes sense, I didn't think about the multi-threaded scenario.

You could imagine that the snapshot also comes with readers/a field for the number of cells that could be allowed, the next theoretical arrival time, etc.

What if you could reserve a permit with a given capacity ahead of time and if that permit is dropped, the capacity left is returned to the rate limiter. The permit would also keep track of the next theoretical arrival time.

// Acquire a permit for a burst of 10 ahead of time.
let permit = limiter.acquire(10).await?;
// Consume 5 cells of the permit.
permit.until_n_ready(5).await?;
// The 5 cells that are left are returned to the limiter (assuming the interval hasn't expired).
drop(permit);
jean-airoldie

comment created time in 2 months

issue opened antifuchs/governor

Allow a way to check whether a cell can be allowed

I have a use case where I want to check if the rate limiter can currently allow n cells and then make a decision based on that. It is possible that I don't actually end up allowing cells through, so I don't want to actually consume the quota.

Currently check & check_n don't simply check whether the cell can be allowed through, which I think is confusing naming.

I'm thinking check & check_n could be renamed to something like try_wait & try_wait_n so that we avoid confusion with the naming, and we could add check & check_n methods that simply perform a check. Then maybe also rename until_ready to simply wait for consistency?

created time in 2 months

started pravega/bincode2

started time in 2 months

push event jean-airoldie/governor

jean-airoldie

commit sha a48514e0356031f207d21d528b0c2a0551fe2b19

Added DirectRateLimiter::until_n_ready{,_with_jitter} Allows the user to wait asynchronously for n cells to be available.

view details

Andreas Fuchs

commit sha 45bcd48be960683f522d05474c94bee9720c3a3d

Fix off-by-one error in check_all's capacity check This fixes #11: I'd missed that the "weight" variable is _additional weight_ that gets added to the first cell, so check_all (when given one more element than fit in the burst capacity) would return a "denied" result instead of an "insufficient capacity" result - effectively preventing futures from ever completing. Now, we treat additional weight as what it is: * rename "weight" to "additional_weight" to make it clear what is going on * Add a comment over the capacity check, clarifying the off-by-one error potential * Add a test about for more capacity-excession scenarios, including the off-by-one, to prevent regressions.

view details

Andreas Fuchs

commit sha 3bf4b4f5fd61d1769eb277efbb18a1115978c348

Run benches on nightly Maybe this will succeed to build now (but I should probably just turn them off).

view details

bors[bot]

commit sha 74e04016306cadc9ad07b939f38b70e4dc16951f

Merge #12 12: Fix off-by-one error in check_all's capacity check r=antifuchs a=antifuchs This fixes #11: I'd missed that the "weight" variable is _additional weight_ that gets added to the first cell, so check_all (when given one more element than fit in the burst capacity) would return a "denied" result instead of an "insufficient capacity" result - effectively preventing futures from ever completing. Now, we treat additional weight as what it is: * rename "weight" to "additional_weight" to make it clear what is going on * Add a comment over the capacity check, clarifying the off-by-one error potential * Add a test about for more capacity-excession scenarios, including the off-by-one, to prevent regressions. Thanks to @jean-airoldie for discovering the bug & reporting it! Co-authored-by: Andreas Fuchs <asf@boinkor.net>

view details

bors[bot]

commit sha 2705fec938d27839d490a6662fbf0103391d1afe

Merge #10 10: Added DirectRateLimiter::until_n_ready{,_with_jitter} r=antifuchs a=jean-airoldie Allows the user to wait asynchronously for n cells to be available. Closes #9. Co-authored-by: jean-airoldie <25088801+jean-airoldie@users.noreply.github.com>

view details

Andreas Fuchs

commit sha 28a6b1b5adb64443dc69542d22d65fcb4f6b6444

Rename the main benchmark entrypoint module Naming it "criterion" confuses me, and looks weird. Let's call it something descriptive and see how that sits.

view details

Andreas Fuchs

commit sha 4210485e1071e95904acbb3e712aeafa81602536

Rename check_all -> check_n The old name was confusing to people - it seemed like a check whether the entire burst capacity was available. Instead, rename to check_n, which should make the intent clear.

view details

Andreas Fuchs

commit sha f63e6e19d92d820e6af098890bd648ba230b72ef

Fix a docstring reference to _n

view details

bors[bot]

commit sha 985b40275ab6f69b68eefbac64cd0513b0c1c05a

Merge #14 14: Rename check_all -> check_n r=antifuchs a=antifuchs The old name was confusing to people - it seemed like a check whether the entire burst capacity was available. Instead, rename to check_n, which should make the intent clear. Co-authored-by: Andreas Fuchs <asf@boinkor.net>

view details

bors[bot]

commit sha 89f58eaf06bddb7dd63a8d9f72ad7a60100dfccb

Merge #13 13: Rename the main benchmark entrypoint module r=antifuchs a=antifuchs Naming it "criterion" confuses me, and looks weird. Let's call it something descriptive and see how that sits. Co-authored-by: Andreas Fuchs <asf@boinkor.net>

view details

push time in 2 months

issue opened hyperium/hyper

Return ownership of the Request on failure

Use case

Currently, Client::request takes ownership of the request. This can be a problem if the request fails for some reason not caused by the user code and has to be retried: the user then has to manually recreate the request and retry it.

Possible solutions

Make Request impl Clone

I think this might not be possible in the case where the Body is a stream. Moreover this potentially adds significant overhead if the requests are large.

Return ownership of the Request on failure

It would work with no additional overhead. It might even be possible to return the collected Body if it is a stream (assuming that the stream is kept in its entirety).

Allow passing a Arc<Request<B>>

This would also work from the user's perspective at a small overhead cost. However this wouldn't work properly if the Body is a stream.

From the API's side, I'm thinking of something like this:

enum RequestKind<T> {
    Owned(Request<T>),
    Shared(Arc<Request<T>>),
}

pub fn request<T>(&self, req: T) -> ResponseFuture where T: Into<RequestKind<B>> { ... }

impl<T> From<Request<T>> for RequestKind<T> { ... }
impl<T> From<Arc<Request<T>>> for RequestKind<T> { ... }

created time in 2 months
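
A small sketch of the status quo described under "Use case": because the client consumes the Request, the caller has to rebuild it from its parts for every retry. The send closure stands in for a Client::request-style call and is an assumption, not hyper's API; the body here is a plain Vec<u8>.

use http::Request;

/// Rebuilds the request on each attempt, since `send` takes it by value.
async fn send_with_retries<F, Fut, E>(
    uri: &str,
    body: Vec<u8>,
    mut send: F,
    attempts: usize,
) -> Result<(), E>
where
    F: FnMut(Request<Vec<u8>>) -> Fut,
    Fut: std::future::Future<Output = Result<(), E>>,
{
    let mut last_err = None;
    for _ in 0..attempts {
        // The request has to be reconstructed from scratch on every iteration.
        let req = Request::builder()
            .method("POST")
            .uri(uri)
            .body(body.clone())
            .expect("valid request");
        match send(req).await {
            Ok(()) => return Ok(()),
            Err(e) => last_err = Some(e),
        }
    }
    Err(last_err.expect("attempts must be greater than zero"))
}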

fork jean-airoldie/hyper

An HTTP library for Rust

https://hyper.rs

fork in 2 months

delete branch jean-airoldie/governor

delete branch : until_n_ready

delete time in 2 months

pull request comment antifuchs/governor

WIP: Added DirectRateLimiter::until_n_ready{,_with_jitter}

Btw cool crate. Very well documented as well.

jean-airoldie

comment created time in 2 months

pull request comment antifuchs/governor

WIP: Added DirectRateLimiter::until_n_ready{,_with_jitter}

Alright, so here are my two cents:

For the sake of consistency between the check_all method and the until_ready variant for multiple cells, I think both should be named either:

  • check_all & until_all_ready
  • check_n & until_n_ready

Personally I think that the _n version is clearer because the _all version seems to imply that we wait for all the capacity to be available.

jean-airoldie

comment created time in 2 months

push event jean-airoldie/governor

jean-airoldie

commit sha a48514e0356031f207d21d528b0c2a0551fe2b19

Added DirectRateLimiter::until_n_ready{,_with_jitter} Allows the user to wait asynchronously for n cells to be available.

view details

push time in 2 months

Pull request review comment antifuchs/governor

WIP: Added DirectRateLimiter::until_n_ready{,_with_jitter}

 where
             delay.await;
         }
     }
+
+    /// Asynchronously resolves as soon as the rate limiter allows it.
+    pub async fn until_n_ready(&self, n: NonZeroU32) -> Result<(), InsufficientCapacity> {
+        self.until_n_ready_with_jitter(n, Jitter::NONE).await
+    }
+
+    /// Asynchronously resolves as soon as the rate limiter allows it, with a randomized wait
+    /// period.
+    pub async fn until_n_ready_with_jitter(
+        &self,
+        n: NonZeroU32,
+        jitter: Jitter,
+    ) -> Result<(), InsufficientCapacity> {
+        match self.check_all(n) {

Right.

jean-airoldie

comment created time in 2 months

Pull request review comment antifuchs/governor

WIP: Added DirectRateLimiter::until_n_ready{,_with_jitter}

+use std::num::NonZeroU32;
+
 use super::RateLimiter;
 use crate::{
     clock,
     state::{DirectStateStore, NotKeyed},
-    Jitter,
+    Jitter, NegativeMultiDecision,
 };
-use futures_timer::Delay;
+use {futures_timer::Delay, thiserror::Error};
+
+#[derive(Debug, Error, Clone)]
+#[error("required number of cell {} exceeds bucket's capacity", _0)]

Sure thing. What's the reasoning for avoiding the syn crate? Is it to reduce compile time?

jean-airoldie

comment created time in 2 months

pull request comment antifuchs/governor

WIP: Added DirectRateLimiter::until_n_ready{,_with_jitter}

Blocked by #11.

jean-airoldie

comment created time in 2 months

issue opened antifuchs/governor

DirectRateLimiter::check_all does not error on exceeded cap

If I correctly understand the DirectRateLimiter::check_all doc, it should error when an n greater than the max_burst used when creating the Quota is passed. However, this is not the case:

#[test]
fn errors_on_exceeded_cap() {
    let lim = RateLimiter::direct(Quota::per_second(nonzero!(10u32)));

    // This does not error even though we exceed the capacity.
    block_on(lim.check(nonzero!(11u32))).unwrap_err();
}

created time in 2 months

push event jean-airoldie/governor

jean-airoldie

commit sha 033bf9c32d8f368ee3664225216963d4b2d8156f

Added DirectRateLimiter::until_n_ready{,_with_jitter} Allows the user to wait asynchronously for n cells to be available.

view details

push time in 2 months

PR opened antifuchs/governor

WIP: Added DirectRateLimiter::until_n_ready{,_with_jitter}

Allows the user to wait asynchronously for n cells to be available.

Closes #9.

+32 -2

0 comment

2 changed files

pr created time in 2 months

create branch jean-airoldie/governor

branch : until_n_ready

created branch time in 2 months

fork jean-airoldie/governor

A rate-limiting library for Rust (formerly ratelimit_meter)

fork in 2 months

issue opened antifuchs/governor

Support waiting for an arbitrary weight

In some contexts the rate can be limited by a budget that gets refilled at a given rate where certain types of requests have a higher cost or weight than others.

Do you think this use case could be supported by the DirectRateLimiter?

I'm thinking of something along the lines of:

pub async fn until_n_ready(&self, n: NonZeroU32) -> Result<(), InsufficientCapacity> {
    loop {
        match self.check_all(n) {
            Ok(()) => return Ok(()),
            Err(NegativeMultiDecision::BatchNonConforming(_, deadline)) => {
                tokio::time::delay_until(deadline).await;
            }
            Err(NegativeMultiDecision::InsufficientCapacity(cap)) => {
                return Err(InsufficientCapacity(cap));
            }
        }
    }
}

This would allow the user to specify a weight for each request.

created time in 2 months

PR closed rust-lang/futures-rs

Added FusedStream impl to Buffered

Note that the buffered method seems to be missing tests.

+18 -1

3 comments

1 changed file

jean-airoldie

pr closed time in 2 months

pull request comment rust-lang/futures-rs

Added FusedStream impl to Buffered

Fair enough, I agree with this reasoning. I'll close the PR.

jean-airoldie

comment created time in 2 months

issue closed jean-airoldie/humantime-serde

Update humantime to 2.0.0

That has better error reporting and removes unsafe.

And while we're here, you can also add #![forbid(unsafe_code)]; people like this, too.

Also what stops you from declaring version 1.0 for this crate?

closed time in 2 months

tailhook

issue comment jean-airoldie/humantime-serde

Update humantime to 2.0.0

Closed by ca314cfdf8862b8c96a10264f24ccad19ac96ad5. I also published the crate on crates.io as 1.0.

tailhook

comment created time in 2 months

push event jean-airoldie/humantime-serde

jean-airoldie

commit sha d30a683ce5d1d0c3664eaff9d9f8b4cf0591035d

Prepare for release 1.0.0 Also added `#![forbid(unsafe_code)]` as per @tailhook's recommendation.

view details

push time in 2 months

issue comment jean-airoldie/humantime-serde

Update humantime to 2.0.0

Also, sorry for the delay, I didn't receive github notifications of your issue.

tailhook

comment created time in 2 months

push event jean-airoldie/humantime-serde

jean-airoldie

commit sha 4dfddc328efb7c54f3dce7215b93d189a6bc0229

Prepare for release 1.0.0 Also added `#![forbid(unsafe_code)]` as per @tailhook's recommandation.

view details

push time in 2 months

push event jean-airoldie/humantime-serde

mexus

commit sha ca314cfdf8862b8c96a10264f24ccad19ac96ad5

Update humentime to 2.0

view details

jean-airoldie

commit sha c5292f230ce90998604c19b79a362a1d1d842a20

Merge pull request #2 from mexus/master Update humantime to 2.0

view details

push time in 2 months

pull request comment jean-airoldie/humantime-serde

Update humentime to 2.0

Thanks mate. I'll bump the dep and release this as a 1.0 instead of 0.2.

mexus

comment created time in 2 months

issue comment jean-airoldie/humantime-serde

Update humantime to 2.0.0

Will do.

tailhook

comment created time in 2 months

started tokio-rs/tracing

started time in 2 months

push event jean-airoldie/tokio

Carl Lerche

commit sha a70f7203a46d471345128832987017612d8e4585

macros: add pin! macro (#2163) Used for stack pinning and based on `pin_mut!` from the pin-util crate. Pinning is used often when working with stream operators and the select! macro. Given the small size of `pin!` it makes more sense to include a version than re-export one from a separate crate or require the user to depend on `pin-util` themselves.

view details

Oleg Nosov

commit sha f9ddb93604a830d106475bd4c4cae436fafcc0da

docs: use third form in API docs (#2027)

view details

David Kellum

commit sha e35038ed79f31ed31050dbb4e16b8714014a63a4

rt: add feature flag for using `parking_lot` internally (#2164) `parking_lot` provides synchronization primitives that tend to be more efficient than the ones in `std`. However, depending on `parking_lot` pulls in a number of dependencies resulting in additional compilation time. Adding *optional* support for `parking_lot` allows the end user to opt-in when the trade offs make sense for their case.

view details

wqfish

commit sha 6fbaac91e01ca32de58a07d93bfe2f23580e7a2d

docs: typo fix in runtime doc (#2167)

view details

Daniel Fox Franke

commit sha 968c143acdde1905219880ba662cecb58c4aa82d

task: add methods for inspecting JoinErrors (#2051) Adds `is_cancelled()` and `is_panic()` methods to `JoinError`, as well as `into_panic()` and `try_into_panic()` methods which, when applicable, returns the payload of the panic.

view details

Dominic

commit sha f0bfebb7e1b1b7e86857781a6d730679b9761daf

fs: add fs::copy (#2079) Provides an asynchronous version of `std::fs::copy`. Closes: #2076

view details

Jon Gjengset

commit sha a16c9a5a018af21ce48895207564a74c7feacc8b

rt: test block_in_place followed by Pending (#2120)

view details

Avery Harnish

commit sha 9eca96aa214cb8e2fd695cbed179f93826b3ef46

rt: improve "no runtime" panic messages (#2145)

view details

Carl Lerche

commit sha 0d49e112b2a7fc3cc268c1c140d0130d865af760

sync: impl equality traits for oneshot::RecvError (#2168)

view details

Juan Alvarez

commit sha 12be90e3fff4041ea6398fc8cd834c3ec173bce5

stream: add StreamExt::timeout() (#2149)

view details

Carl Lerche

commit sha 5bf06f2b5a81ae7b5b8adfe4a44fab033f4156cf

future: provide try_join! macro (#2169) Provides a `try_join!` macro that supports concurrently driving multiple `Result` futures on the same task and await the completion of all the futures as `Ok` or the **first** `Err` future.

view details

daxpedda

commit sha 4996e276733a13fd16b2a08f617570ea201718ba

macros: fix skipping generics on #[tokio::main] (#2177) When using #[tokio::main] on a function with generics, the generics are skipped. Simply using #vis #sig instead of #vis fn #name(#inputs) #ret fixes the problem. Fixes #2176

view details

Carl Lerche

commit sha 71c47fabf4a8a450c3b41d58de304a6a49fb4061

chore: bump nightly version used in CI (#2178) This requires fixing a few warnings.

view details

Carl Lerche

commit sha bcba4aaa5414eeb12b57e86a3abaf61425cef22b

docs: write sync mod API docs (#2175) Fixes #2171

view details

Carl Lerche

commit sha 00e3c29e487ff05b6252be8052b7ba86f7c08202

chore: prepare v0.2.11 release (#2179) Also bumps: - tokio-macros: v0.2.4

view details

jean-airoldie

commit sha e016a3a471264d0c3c980999b3b5feb79e4e7ccd

Add split method to UnixDatagram This adds a split method for `UnixDatagram` as well as the SendHalf and RecvHalf types. This allows a user to both receive and send datagram at the same time on the same socket.

view details

jean-airoldie

commit sha 77908f13d91d623fa6d2dce21ab4755c6297b595

Added try_recv{,from} & try_send{,to} to UnixDatagram This allows nonblocking sync send & recv operations on the socket.

view details

jean-airoldie

commit sha 37ea8da14df5f4fad25f1ad3f07efa05fc7b564b

Added try_send{,_to} & try_recv{,_from} for split UnixDatagram

view details

push time in 2 months

pull request comment google/flatbuffers

[Rust]: enum as Option<T>

In the Rust generated code the Color enum doesn't declare a None discriminant.

I meant that you don't need an explicit value for the default (or unset) case in an enum anymore.

Should we distinguish these cases? If not then we can remove NONE and replace it by Option::None.

I don't think we should distinguish these cases and I think it would make sense to remove NONE. This way the behavior of the union and the enum would be consistent.

vglavnyy

comment created time in 2 months
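
A rough sketch of the two representations discussed above; the type and function names are made up for illustration and are not flatc's generated code:

/// Today's style: the schema's default/unset value is a dedicated variant.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ColorWithNone {
    None,
    Red,
    Green,
    Blue,
}

/// Proposed style: the enum only carries real variants and "unset" is
/// expressed at the accessor level, mirroring how union accessors behave.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Color {
    Red,
    Green,
    Blue,
}

/// A generated accessor could then return `Option<Color>` instead of
/// `ColorWithNone::None`.
pub fn color_from_raw(raw: u8) -> Option<Color> {
    match raw {
        1 => Some(Color::Red),
        2 => Some(Color::Green),
        3 => Some(Color::Blue),
        _ => None,
    }
}

fn main() {
    // The dedicated NONE variant maps to the schema default of 0.
    assert_eq!(ColorWithNone::None as u8, 0);
    // The Option-based accessor reports "unset" without a NONE variant.
    assert_eq!(color_from_raw(0), None);
    assert_eq!(color_from_raw(2), Some(Color::Green));
}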

PR opened google/flatbuffers

rust: pub export the VectorIter type

Fix the missing import.

+2 -1

0 comment

2 changed files

pr created time in 2 months

create branch jean-airoldie/flatbuffers

branch : iter-import

created branch time in 2 months

push event jean-airoldie/flatbuffers

jean-airoldie

commit sha 16aef8ac0d97e8ddf880386550c3f8e0856e4842

[rust] Derive Eq + PartialEq on FieldLoc and FlatBufferBuilder (#5394)

view details

jean-airoldie

commit sha b80ad7e4398a5f3a5bcab76636d0b5f2c1cf8862

[rust] Use read_scalar_at where possible (#5385) This slightly improves readability.

view details

Will Stott

commit sha a807fa9567c7302b2c446efb3bc7ee79d1a462b4

Remove out-dated -S option from the flatc syntax. (#5398) Looks like it's an older syntax for --strict-json which was long-ago removed in https://github.com/google/flatbuffers/commit/d38b9af243d8dcfc53ab69c79e0ce404759240d4

view details

Michael Seifert

commit sha 4eb3efc221d66ef02928d1b1860e097ab2e4ce16

[flatc][docs] Document rounding behavior of floats in JSON output (#5397) * [docs] Added an example on how to convert a FlatBuffer binary to JSON Slightly adjusted section on "Using flatc as a conversion tool". Signed-off-by: Michael Seifert <m.seifert@digitalernachschub.de> * [docs] Updated obsolete JSON data in example showing how to convert JSON to FlatBuffer binaries. Signed-off-by: Michael Seifert <m.seifert@digitalernachschub.de>

view details

Tiger Caldwell

commit sha a7e20b1996f08bc8a4600acc9957b69b88d86e3d

Excluded crtdbg.h from non-MSVC compilation (#5401)

view details

John Luxford

commit sha a6be1d0d749ebf27ecddc423b6a30fc841699c0a

[Go] Fix namespaces on enums (#5406)

view details

John Luxford

commit sha a80db8538cf49953fbbf88ac380472655acc089e

[C#, Java, C++] Fixes issue #5399 by always including namespaces (#5404) * [C#] Fixes issue #5399 by always including namespaces * Updated tests for code generator changes * Fixed 'As' method names

view details

Vladimir Glavnyy

commit sha 0d2cebccfeffae9df998f3ac819bf17b7ec7a6d0

Add FLATBUFFERS_COMPATIBILITY string (#5381) * Add FLATBUFFERS_COMPATIBILITY string - Add a new __reset method NET/JAVA which hides internal state * Resolve PR notes * Use operator new() to __init of Struct and Table * Restrict visibility of C# Table/Struct to internal level

view details

svenk177

commit sha e635141d5bc66f056c90bcc9da5fdd766610492f

Add support for fixed-size arrays (#5313)

view details

Wouter van Oortmerssen

commit sha 123c7a48907c5080a41768066cbd48b2574cdc00

Updated missing generated code for PR #5313 (fixed arrays) Change-Id: I249140119e6241beb5aec5670d0e5ccddc8f5251

view details

Austin Schuh

commit sha 7836e65dd4310fc7a4c1f60d7adf3f5e2d736aa1

Fix compatability with Bazel 0.27 (#5412) rules_go was too old and using deprecated features. Upgrade it.

view details

Bryan Furia

commit sha 9fb195cce81028dd9c199f400b3dbdaade71d5d8

Fix generating nested Flatbuffer accessors when they cross namespaces (#5414)

view details

Wouter van Oortmerssen

commit sha ff1a22a05f21d667345fcc79b3b089bbecc1bf3f

Fixed broken Utf8Old.java This would not correctly encode/decode strings when substituted for the default Utf8Safe.java Change-Id: Ib303697663b5b8cbf6888492f5255b2a45384c04

view details

Bryan Furia

commit sha 92e9f330363af45972fbed68cca919f6ce86a36b

Don't check ForceDefaults when adding Offfset values (#5415)

view details

Vladimir Glavnyy

commit sha b7012484f3f9caa54389d43b93a80fa672ef37f1

Set C# Struct/Table visibility to public (#5381) (#5416)

view details

Adrian Perez

commit sha 5479adc80fb12db9bf0ca014c21409c6cc92b119

Fix for FLATBUFFERS_PREFER_PRINTF writing zero-length strings (#5418)

view details

Edward

commit sha 550b3869951ae9c21644b5acb189a26f87fbd663

Update Utf8.java: more detailed exception message (#5421) Provide more detailed exception message for malformed 2 byte utf8 character

view details

Andrew Noyes

commit sha 7d7d796cd09dfaed3aed5cd1f973b86f8c2d79bb

Fix undefined behavior. Closes #5422 (#5423) * Fix undefined behavior. Closes #5422 * Move check into callers of make_space

view details

Vladimir Glavnyy

commit sha 7a63792929e2ab6edbbc2b6270a48c74e7022f96

Remove unused variables (#5382) - Fix GenerateTextFromTable (aliasing typo) - Fix unused variable in idl_gen_dart.cpp - Fix std::string passing (should be non-const value or const-reference) - Remove unused variables

view details

Thanabodee Charoenpiriyakij

commit sha 47c7aa0361fec7708fae95c19a5cf8063d1a5df4

Fix echo not interpret \n in GoTest.sh (#5426) When running GoTest.sh, the last step that checking go format files are print \n instead of new line: These files are not well gofmt'ed:\n\nMyGame/Example/Color.go MyGame/Example/MonsterStorage_grpc.go This changes fix it by echo NOT_FMT_FILES in separate line.

view details

push time in 2 months

push event jean-airoldie/tokio

Carl Lerche

commit sha 0ba6e9abdbe1b42997d183adf5a39488c9543200

stream: add `StreamExt::merge` (#2091) Provides an equivalent to stream `select()` from futures-rs. `merge` best describes the operation (vs. `select`). `futures-rs` named the operation "select" for historical reasons and did not rename it back to `merge` in 0.3. The operation is most commonly named `merge` else where as well (e.g. ReactiveX).

view details

Carl Lerche

commit sha 8471e0a0ee7f6c973fb517ccb7efcf6c7e2ddc6f

stream: add `empty()` and `pending()` (#2092) `stream::empty()` is the asynchronous equivalent to `std::iter::empty()`. `pending()` provides a stream that never becomes ready.

view details

Carl Lerche

commit sha 64d23899118dfc8f1d4d7a9b60c015e43260df80

stream: add stream::once (#2094) An async equivalent to `iter::once`

view details

Carl Lerche

commit sha 7c3f1cb4a3d6076cb5e1aedf2311f62c8a7a2fd7

stream: add `StreamExt::chain` (#2093) Asynchronous equivalent to `Iterator::chain`.

view details

John-John Tedro

commit sha 5b091fa3f0c3a06047d02ca6892f75c3e15040df

io: Drop AsyncBufRead bound on BufStream impl (#2108) fixes #2064, #2106

view details

Carl Lerche

commit sha eb1a8e1792b2c4b296be47a0681421c90bbdbf7a

stream: add `StreamExt::collect()` (#2109) Provides an asynchronous equivalent to `Iterator::collect()`. A sealed `FromStream` trait is added. Stabilization is pending Rust supporting `async` trait fns.

view details

Artem Vorotnikov

commit sha bd8971cd95b2c182c16d8bfc5ee43754ee4e9c96

chore: clippy fixes (#2110)

view details

Artem Vorotnikov

commit sha 476bf0084a7abb86ed2338095da4c7297453f00c

chore: minor fixes (#2121) * One more clippy fix, remove special instructions from CI * Fix Collect description

view details

Lucio Franco

commit sha 619d730d610bbfd0a13285178d6bf3d89a29d6a3

task: Introduce a new pattern for task-local storage (#2126) This PR introduces a new pattern for task-local storage. It allows for storage and retrieval of data in an asynchronous context. It does so using a new pattern based on past experience. A quick example: ```rust tokio::task_local! { static FOO: u32; } FOO.scope(1, async move { some_async_fn().await; assert_eq!(FOO.get(), 1); }).await; ``` ## Background of task-local storage The goal for task-local storage is to be able to provide some ambiant context in an asynchronous context. One primary use case is for distributed tracing style systems where a request identifier is made available during the context of a request / response exchange. In a synchronous context, thread-local storage would be used for this. However, with asynchronous Rust, logic is run in a "task", which is decoupled from an underlying thread. A task may run on many threads and many tasks may be multiplexed on a single thread. This hints at the need for task-local storage. ### Early attempt Futures 0.1 included a [task-local storage][01] strategy. This was based around using the "runtime task" (more on this later) as the scope. When a task was spawned with `tokio::spawn`, a task-local map would be created and assigned with that task. Any task-local value that was stored would be stored in this map. Whenever the runtime polled the task, it would set the task context enabling access to find the value. There are two main problems with this strategy which ultimetly lead to the removal of runtime task-local storage: 1) In asynchronous Rust, a "task" is not a clear-cut thing. 2) The implementation did not leverage the significant optimizations that the compiler provides for thread-local storage. ### What is a "task"? With synchronous Rust, a "thread" is a clear concept: the construct you get with `thread::spawn`. With asynchronous Rust, there is no strict definition of a "task". A task is most commonly the construct you get when calling `tokio::spawn`. The construct obtained with `tokio::spawn` will be referred to as the "runtime task". However, it is also possible to multiplex asynchronous logic within the context of a runtime task. APIs such as [`task::LocalSet`][local-set] , [`FuturesUnordered`][futures-unordered], [`select!`][select], and [`join!`][join] provide the ability to embed a mini scheduler within a single runtime task. Revisiting the primary use case, setting a request identifier for the duration of a request response exchange, here is a scenario in which using the "runtime task" as the scope for task-local storage would fail: ```rust task_local!(static REQUEST_ID: Cell<u64> = Cell::new(0)); let request1 = get_request().await; let request2 = get_request().await; let (response1, response2) = join!{ async { REQUEST_ID.with(|cell| cell.set(request1.identifier())); process(request1) }, async { REQUEST_ID.with(|cell| cell.set(request2.identifier())); process(request2) }, }; ``` `join!` multiplexes the execution of both branches on the same runtime task. Given this, if `REQUEST_ID` is scoped by the runtime task, the request ID would leak across the request / response exchange processing. This is not a theoretical problem, but was hit repeatedly in practice. For example, Hyper's HTTP/2.0 implementation multiplexes many request / response exchanges on the same runtime task. 
### Compiler thread-local optimizations A second smaller problem with the original task-local storage strategy is that it required re-implementing "thread-local storage" like constructs but without being able to get the compiler to help optimize. A discussion of how the compiler optimizes thread-local storage is out of scope for this PR description, but suffice to say a task-local storage implementation should be able to leverage thread-locals as much as possible. ## A new task-local strategy Introduced in this PR is a new strategy for dealing with task-local storage. Instead of using the runtime task as the thread-local scope, the proposed task-local API allows the user to define any arbitrary scope. This solves the problem of binding task-locals to the runtime task: ```rust tokio::task_local!(static FOO: u32); FOO.scope(1, async move { some_async_fn().await; assert_eq!(FOO.get(), 1); }).await; ``` The `scope` function establishes a task-local scope for the `FOO` variable. It takes a value to initialize `FOO` with and an async block. The `FOO` task-local is then available for the duration of the provided block. `scope` returns a new future that must then be awaited on. `tokio::task_local` will define a new thread-local. The future returned from `scope` will set this thread-local at the start of `poll` and unset it at the end of `poll`. `FOO.get` is a simple thread-local access with no special logic. This strategy solves both problems. Task-locals can be scoped at any level and can leverage thread-local compiler optimizations. Going back to the previous example: ```rust task_local! { static REQUEST_ID: u64; } let request1 = get_request().await; let request2 = get_request().await; let (response1, response2) = join!{ async { let identifier = request1.identifier(); REQUEST_ID.scope(identifier, async { process(request1).await }).await }, async { let identifier = request2.identifier(); REQUEST_ID.scope(identifier, async { process(request2).await }).await }, }; ``` There is no longer a problem with request identifiers leaking. ## Disadvantages The primary disadvantage of this strategy is that the "set and forget" pattern with thread-locals is not possible. ```rust thread_local! { static FOO: Cell<usize> = Cell::new(0); } thread::spawn(|| { FOO.with(|cell| cell.set(123)); do_work(); }); ``` In this example, `FOO` is set at the start of the thread and automatically cleared when the thread terminates. While this is nice in some cases, it only really logically makes sense because the scope of a "thread" is clear (the thread). A similar pattern can be done with the proposed stratgy but would require an explicit setting of the scope at the root of `tokio::spawn`. Additionally, one should only do this if the runtime task is the appropriate scope for the specific task-local variable. Another disadvantage is that this new method does not support lazy initialization but requires an explicit `LocalKey::scope` call to set the task-local value. In this case since task-local's are different from thread-locals it is fine. [01]: https://docs.rs/futures/0.1.29/futures/task/struct.LocalKey.html [local-set]: # [futures-unordered]: https://docs.rs/futures/0.3.1/futures/stream/struct.FuturesUnordered.html [select]: https://docs.rs/futures/0.3.1/futures/macro.select.html [join]: https://docs.rs/futures/0.3.1/futures/macro.join.html

view details

Pen Tree

commit sha 1222d817410629ce0a6c73afe226538d6ba7d45e

tokio-tls: rename echo.rs to tls-echo.rs (#2133)

view details

Pen Tree

commit sha 7eb8d447ade771e985dd06add72f6e67d96c64e2

tokio-tls: rename echo.rs to tls-echo.rs (#2133)

view details

Pierre Krieger

commit sha 1475448bdfa5f0bed35abb6e3d5620a22cc27f53

runtime: add Handle::try_current (#2118) * runtime: add Handle::try_current Makes it possible to get a Handle only if a Runtime has been started, without panicing if that isn't the case * Use an error instead
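
For illustration, a minimal sketch of how such a fallible accessor could be used (assuming the `Result`-returning signature this commit describes; the helper name is made up):

```rust
use tokio::runtime::Handle;

// Returns the handle of the runtime driving the current thread, if any,
// instead of panicking when called outside a runtime context.
fn current_handle() -> Option<Handle> {
    Handle::try_current().ok()
}
```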

view details

David Kellum

commit sha bb6c3839ef0491310f40e4570b465bcc6b09ae95

Yield now docs (#2129) * add subsections for the blocking and yielding examples in task mod * flesh out yield_now rustdoc * add a must_use for yield_now

view details

Vitor Enes

commit sha 3176d0a48a796c9d5437a76995bc11ca85390df4

io: add `BufStream::with_capacity` (#2125)

view details

Maarten de Vries

commit sha 5bf78d77ada685826b33fd62e69df179f3de8a1c

Add a method to test if split streams come from the same stream. (#1762) * Add a method to test if split streams come from the same stream. The exposed stream ID can also be used as key in associative containers. * Document the fact that split stream IDs can dangle.

view details

Lucio Franco

commit sha 5d82ac2d1e7739a789a6aa014301355d6c0550d9

readme: Add more related tokio projects (#2128)

view details

Carl Lerche

commit sha 9df805ff5449527d1fead3e9533152c4a357c24c

chore: do not depend on `loom` on windows (#2146) Loom currently does not compile on windows due to a transitive dependency on `generator`. The `generator` crate builds have started to fail on windows CI. Loom is not run under windows, however, so removing the loom dependency on windows is sufficient to fix CI. Refs: https://github.com/Xudong-Huang/generator-rs/issues/19

view details

Markus Westerlind

commit sha fbe143b142875977f49772d2905029b57b92e429

fix: Prevent undefined behaviour from malicious AsyncRead impl (#2030) `AsyncRead` is safe to implement but can be implemented so that it reports that it read more bytes than it actually did. `poll_read_buf` on the other head implicitly trusts that the returned length is actually correct which makes it possible to advance the buffer past what has actually been initialized. An alternative fix could be to avoid the panic and instead advance by `n.min(b.len())`

view details

Carl Lerche

commit sha 38bff0adda393f8121225727d93cb342d8363979

macros: fix `#[tokio::main]` without rt-core (#2139) The Tokio runtime provides a "shell" runtime when `rt-core` is not available. This shell runtime is enough to support `#[tokio::main`] and `#[tokio::test]. A previous change disabled these two attr macros when `rt-core` was not selected. This patch fixes this by re-enabling the `main` and `test` attr macros without `rt-core` and adds some integration tests to prevent future regressions.

view details

Lucio Franco

commit sha c7719a2d2962f83854193b4c9131ee275a5a4475

io: simplify split check (#2144) * io: Clean up split check * fix tests

view details

push time in 2 months

push eventjean-airoldie/tokio

Stephen Carman

commit sha 6ff4e349e28a4d89098f2587e70c86281c2ae182

doc: add additional Mutex example (#2019)

view details

Artem Vorotnikov

commit sha 101f770af33ae65820e1cc0e9b89d998b3c1317a

stream: add StreamExt::take (#2025)

view details

Gardner Vickers

commit sha 67bf9c36f347031ca05872d102a7f9abc8b465f0

rt: coalesce thread-locals used by the runtime (#1925) Previously, thread-locals used by the various drivers were situated with the driver code. This resulted in state being spread out and many thread-locals being required to run a runtime. This PR coalesces the thread-locals into a single struct.
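
A rough sketch of the general pattern this commit describes (the struct and field names below are made up for illustration and are not tokio's actual internals):

```rust
use std::cell::RefCell;

// Placeholder driver state, standing in for the real IO / time driver handles.
struct IoDriverHandle;
struct TimeDriverHandle;

// Instead of one `thread_local!` per driver scattered across modules,
// all per-thread runtime state lives in a single struct...
#[derive(Default)]
struct RuntimeContext {
    io: Option<IoDriverHandle>,
    time: Option<TimeDriverHandle>,
}

// ...behind one thread-local.
thread_local! {
    static CONTEXT: RefCell<RuntimeContext> = RefCell::new(RuntimeContext::default());
}

fn with_io_driver<R>(f: impl FnOnce(Option<&IoDriverHandle>) -> R) -> R {
    CONTEXT.with(|ctx| f(ctx.borrow().io.as_ref()))
}
```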

view details

Carl Lerche

commit sha 50b91c024718c13a5d3608c63b62b2627dea5fd7

chore: move benches to separate crate (#2028) This allows the `benches` crate to depend on `tokio` with all feature flags. This is a similar strategy used for `examples`.

view details

Artem Vorotnikov

commit sha a515f9c459d662b9c93d962812dc1fd8d1b32e08

stream: add StreamExt::take_while (#2029)

view details

Artem Vorotnikov

commit sha e8fcf55881f0d97177d7f5b2d5c0b00704c26fbe

Refactor proc macros, add more knobs (#2022) * Refactor proc macros, add more knobs * make macros work with rt-core

view details

Artem Vorotnikov

commit sha 3cf91db4b6429c88a74428ac29b99d3b822f0790

stream: add StreamExt::all (#2035)

view details

Artem Vorotnikov

commit sha e43f28f6a8290d8b452c707de06e2d95236f59e3

macros: do not automatically pull rt-core (#2038)

view details

Artem Vorotnikov

commit sha 3736467dbb74ea6d14091cf1cac3ce08e1fcb911

stream: correct trait bounds for all (#2043)

view details

Carl Lerche

commit sha efcbf9613f2d5048550f9c828e3be422644f1391

sync: add batch op support to internal semaphore (#2004) Extend internal semaphore to support batch operations. With this PR, consumers of the semaphore are able to atomically request more than one permit. This is useful for implementing a RwLock.

view details

João Oliveira

commit sha 32e15b3a24ac177c92a78eb04e233534583eae17

sync: add RwLock (#1699) Provides a `RwLock` based on a semaphore. The semaphore is initialized with 32 permits. A read acquires a single permit and a write acquires all 32 permits. This ensures that reads (up to 32) may happen concurrently and writes happen exclusively.
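
A rough illustration of the permit accounting described here (this is not tokio's implementation; it uses `tokio::sync::Semaphore` and `acquire_many`, which exist in current tokio, purely to show the idea):

```rust
use tokio::sync::Semaphore;

const MAX_READS: u32 = 32;

struct PermitRwLock {
    semaphore: Semaphore,
}

impl PermitRwLock {
    fn new() -> Self {
        Self { semaphore: Semaphore::new(MAX_READS as usize) }
    }

    /// A reader holds a single permit, so up to 32 reads proceed concurrently.
    async fn read(&self) {
        let _permit = self.semaphore.acquire().await.expect("semaphore closed");
        // ... access the protected data while `_permit` is alive ...
    }

    /// A writer holds all 32 permits, excluding readers and other writers.
    async fn write(&self) {
        let _permits = self
            .semaphore
            .acquire_many(MAX_READS)
            .await
            .expect("semaphore closed");
        // ... mutate the protected data while `_permits` is alive ...
    }
}
```

A real lock would return guard types that keep the permits alive for as long as the caller holds them; the method bodies above only hold them for the duration of the call.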

view details

John Van Enk

commit sha 84ff73e687932d77a1163167b938631b1104d54f

tokio: remove documentation stating `Receiver` is clone-able. (#2037) * tokio: remove documentation stating `Receiver` is clone-able. The documentation for `broadcast` stated that both `Sender` and `Receiver` are clonable. This isn't the case: `Receiver`s cannot be cloned (and shouldn't be cloned). In addition, mention that `Receiver` is `Sync`, and mention that both `Receiver` and `Sender` are `Send`. Fixes: #2032 * Clarify that Sender and Receiver are only Send and Sync if T is Send or Sync.

view details

Carl Lerche

commit sha f0006006ed9938115011c42f26cff16842eb534f

time: advance frozen time in `park_timeout` (#2059) This patch improves the behavior of frozen time (a testing utility made available with the `test-util` feature flag). Instead of of requiring `time::advance` to be called in order to advance the value returned by `Instant::now`, calls to `time::Driver::park_timeout` will use the provided duration to advance the time. This is the desired behavior as the timeout is used to indicate when the next scheduled delay needs to be fired.

view details

Linus Färnstrand

commit sha dcfa895b512e3ed522b81b18baf3e33fd78a600c

chore: use just std instead of ::std in paths (#2049)

view details

Stepan Koltsov

commit sha d45f61c183b2e0bb0da196bdd13d77461dd03477

doc: document `from_std` functions panic (#2056) Document that conversion from `std` types must be done from within the Tokio runtime context.

view details

Ivan Petkov

commit sha 188fc6e0d24acf2cf1b51209e058a5c1a1d50dca

process: deprecate Child stdio accessors in favor of pub fields (#2014) Fixes #2009

view details

Artem Vorotnikov

commit sha 3540c5b9ee23e29eb04bfefcf4500741555f2141

stream: Add StreamExt::any (#2034)

view details

Tomasz Miąsko

commit sha 5930acef736d45733dc182e420a2417a164c71ba

rt: share vtable between waker and waker ref (#2045) The `Waker::will_wake` compares both a data pointer and a vtable to decide if wakers are equivalent. To avoid false negatives during comparison, use the same vtable for a waker stored in `WakerRef`.

view details

Benjamin Fry

commit sha 0193df3a593cb69d23414109118784de2948024c

rt: add a Handle::current() (#2040) Adds `Handle::current()` for accessing a handle to the runtime associated with the current thread. This handle can then be passed to other threads in order to spawn or perform other runtime related tasks.
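
A small usage sketch of what this enables (assuming, as the commit message implies, that `Handle` can be moved to another thread and used to spawn):

```rust
use tokio::runtime::Handle;

async fn hand_off_to_os_thread() {
    // Grab a handle to the runtime this code is currently running on...
    let handle = Handle::current();

    // ...and move it to a plain OS thread, which can then spawn work
    // back onto the runtime it came from.
    std::thread::spawn(move || {
        handle.spawn(async {
            println!("spawned from outside the runtime threads");
        });
    });
}
```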

view details

Eliza Weisman

commit sha 798e86821f6e06fba552bd670c5887ce3b6ff698

task: add ways to run a `LocalSet` from within a rt context (#1971) Currently, the only way to run a `tokio::task::LocalSet` is to call its `block_on` method with a `&mut Runtime`, like ```rust let mut rt = tokio::runtime::Runtime::new(); let local = tokio::task::LocalSet::new(); local.block_on(&mut rt, async { // whatever... }); ``` Unfortunately, this means that `LocalSet` doesn't work with the `#[tokio::main]` and `#[tokio::test]` macros, since the `main` function is _already_ inside of a call to `block_on`. **Solution** This branch adds a `LocalSet::run` method, which takes a future and returns a new future that runs that future on the `LocalSet`. This is analogous to `LocalSet::block_on`, except that it can be called in an async context. Additionally, this branch implements `Future` for `LocalSet`. Awaiting a `LocalSet` will run all spawned local futures until they complete. This allows code like ```rust #[tokio::main] async fn main() { let local = tokio::task::LocalSet::new(); local.spawn_local(async { // ... }); local.spawn_local(async { // ... tokio::task::spawn_local(...); // ... }); local.await; } ``` The `LocalSet` docs have been updated to show the usage with `#[tokio::main]` rather than with manually created runtimes, where applicable. Closes #1906 Closes #1908 Fixes #2057

view details

push time in 2 months

push eventjean-airoldie/bincode

Justin Starry

commit sha e2e5ce40e86c3567aae0d206695ebb98faee6efe

Fix emscripten build failures due to lack of i128 support

view details

Josh Mcguigan

commit sha 44d7dcf4c6933becb51abe5c81665edc34723ca1

improve safety of fill_buffer - see issue #260

view details

jean-airoldie

commit sha 12f1415b9ec5d2eec7712784da7690571b56a7d6

Added Clone impl to Config

view details

David Tolnay

commit sha 8ef223116d3736640511640320879baaa86f0f93

Merge pull request #281 from jstarry/fix-emscripten-builds Fix emscripten build failures due to lack of i128 support

view details

David Tolnay

commit sha b676754eeee41d169e96bcaf211ed49a331b0d87

Release 1.2.1

view details

Joonatan Saarhelo

commit sha 1994c72ecdb79c2290bb831a777d43b4ab005e50

improve documentation of BincodeRead

view details

Joonatan Saarhelo

commit sha 7201be3b1eb48e284eb3e2f3c8a55f00ad17d678

deduplicate slicing logic SliceReader

view details

Joonatan Saarhelo

commit sha 3a7b018f248ac041e49f87584e3c4f106ed1e0f5

remove unnecessary let in ReadReader

view details

jean-airoldie

commit sha e30e91e3a7d24925f09fdbd96773245a72fd7537

Add contraints to {Serializer,Deserializer}Acceptor This allows the user to retreive concrete types from the serializer & deserializer output.

view details

Leonard Kramer

commit sha 2809eb484d87d1e0f527badea11a5005ccdd8c94

Fix compile warnings caused by deprecated macros.

view details

Leonard Kramer

commit sha 3b653fa3ee5a2698dff3ca00d8b3b51359a2a228

Remove dyn

view details

jean-airoldie

commit sha 725773fb5b779c0557ab912368ea52fee8c24ee6

Added Debug impl to Config

view details

push time in 2 months

push eventjean-airoldie/bincode

Justin Starry

commit sha e2e5ce40e86c3567aae0d206695ebb98faee6efe

Fix emscripten build failures due to lack of i128 support

view details

David Tolnay

commit sha 8ef223116d3736640511640320879baaa86f0f93

Merge pull request #281 from jstarry/fix-emscripten-builds Fix emscripten build failures due to lack of i128 support

view details

David Tolnay

commit sha b676754eeee41d169e96bcaf211ed49a331b0d87

Release 1.2.1

view details

Joonatan Saarhelo

commit sha 1994c72ecdb79c2290bb831a777d43b4ab005e50

improve documentation of BincodeRead

view details

Joonatan Saarhelo

commit sha 7201be3b1eb48e284eb3e2f3c8a55f00ad17d678

deduplicate slicing logic SliceReader

view details

Joonatan Saarhelo

commit sha 3a7b018f248ac041e49f87584e3c4f106ed1e0f5

remove unnecessary let in ReadReader

view details

jean-airoldie

commit sha e30e91e3a7d24925f09fdbd96773245a72fd7537

Add contraints to {Serializer,Deserializer}Acceptor This allows the user to retreive concrete types from the serializer & deserializer output.

view details

Leonard Kramer

commit sha 2809eb484d87d1e0f527badea11a5005ccdd8c94

Fix compile warnings caused by deprecated macros.

view details

Leonard Kramer

commit sha 3b653fa3ee5a2698dff3ca00d8b3b51359a2a228

Remove dyn

view details

push time in 2 months

pull request commentgoogle/flatbuffers

[Rust]: enum as Option<T>

Nice, the generated code looks good.

As an aside, this makes me realize that a flatbuffer enum that would previously be expressed as:

```
enum Color: uint8 {
    None,
    Red,
    Blue,
    Green,
}
```

Can now simply be expressed as:

```
enum Color: uint8 {
    Red = 1,
    Blue,
    Green,
}
```

which removes the need for the special logic that was previously required to handle the Color::None case.
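
For illustration, here is roughly what that buys consuming Rust code: absence becomes an `Option` rather than a sentinel variant (the types below are hand-written stand-ins, not actual flatc-generated code):

```rust
// Stand-in for a generated enum without a `None` variant.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum Color {
    Red = 1,
    Blue = 2,
    Green = 3,
}

// A missing field is now `None`, so no `Color::None` special case is needed.
fn describe(color: Option<Color>) -> &'static str {
    match color {
        None => "no color set",
        Some(Color::Red) => "red",
        Some(Color::Blue) => "blue",
        Some(Color::Green) => "green",
    }
}
```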

vglavnyy

comment created time in 2 months

issue commentgoogle/flatbuffers

[rust] Viability of using c++ verifier via ffi

> This certainly seems more complicated to me, and I can also see advantages to not having the project rely on something like rust-bindgen.

I agree that directly using the generated C++ code via a C FFI is way easier, but it's not really user friendly, which was my concern. I feel like it's more of a short-term solution until a pure-Rust verifier is implemented.

jean-airoldie

comment created time in 2 months

startedantifuchs/nonzero_ext

started time in 2 months

startedantifuchs/governor

started time in 2 months

pull request commentjean-airoldie/libzmq-rs

WIP: Fix compilation for windows

> I think somebody who is seriously interested in using libzmq on Windows should work with you on fixing the issue.

Indeed, I don't use Windows personally, nor do I own a Windows machine. Realistically, a Windows CI environment should also be set up.

> I am not even sure if I will be using ZeroMQ for my own project so for now I will try out rust-zmq to learn more about ZeroMQ.

I'm personally not using ZeroMQ anymore due to the many issues it has. Check out my comment in this issue: https://github.com/jean-airoldie/libzmq-rs/issues/125#issuecomment-570551319. It might save you some time.

jean-airoldie

comment created time in 3 months

push eventjean-airoldie/libzmq-rs

jean-airoldie

commit sha 239d3c27439204ed4b16e5890fc2a1e7d46d3888

f

view details

push time in 3 months
