
estk/log4rs 428

A highly configurable logging framework for Rust

a8m/pb 376

Console progress bar for Rust

env-logger-rs/env_logger 281

A logging implementation for `log` which is configured via an environment variable.

pyfisch/cbor 223

CBOR support for serde.

gnzlbg/jemallocator 208

Rust allocator using jemalloc as a backend

carllerche/syncbox 129

Concurrency utilities for Rust

kornelski/rust-security-framework 95

Bindings to the macOS Security.framework

rust-lang-nursery/unix-socket 51

Unix socket support for Rust

pull request comment rust-lang/rfcs

Allow "artifact dependencies" on bin, cdylib, and staticlib crates

@alexcrichton, I responded inline to several of your comments, but I also wanted to respond to your top-level comment:

Thanks for writing this up! I feel like a feature along these lines is good to add to Cargo, but I do have some concerns about the motivation and some of the specifics below.

One concern I have is related to depending on binaries. I'm personally not convinced that what's proposed in this RFC is necessary or necessarily something that Cargo needs. One of the major drawbacks I see of "simply depend on a binary" is that running binaries is often quite involved and nontrivial. Fiddling with Command, having good error reporting, and dealing with subprocess I/O is not trivial, and if a widely-used package simply provides a binary, it means that all consumers are duplicating all this logic for handling the subprocess management. The alternative of having a library, however, gives an author an opportunity to provide a much easier-to-use Rust API, document it, provide examples, etc. An author providing the binary would still be able to build the binary itself in the build script and reference it from the crate that runs the binary.

Some crates may want to wrap the usage of the binary in an API, but that isn't always the case. That assumes your primary use case is to invoke the binary in some standardized way from Rust code via Command or similar. You may, instead, want to:

  • Invoke a different binary (such as make or ./configure or ./do-build.sh) that expects to find the binary you depend on via the $PATH.
  • Embed the binary, rather than running it: firmware, plugins, etc.
  • Run the binary in some non-standard way, for which wrapping it in an API would be non-trivial.
  • Have multiple different crates providing different APIs on top of the same binary, but only one crate with the knowledge of how to build and supply that binary.
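For the `$PATH` case in the first bullet, the consuming build script only needs to put the artifact directory in front of `PATH` before invoking the external tool. A minimal sketch, assuming the `CARGO_BIN_DIR_CMAKE` variable proposed in this RFC; the `make` invocation and the helper are illustrative:

```rust
// Hypothetical build.rs: expose a binary dependency's output directory
// on $PATH, then run an external build system that looks it up there.
// CARGO_BIN_DIR_CMAKE is the variable proposed by the RFC; the rest is
// illustrative.
use std::env;
use std::process::Command;

/// Prepend `dir` to a Unix-style PATH string.
fn prepend_path(dir: &str, path: &str) -> String {
    if path.is_empty() {
        dir.to_string()
    } else {
        format!("{}:{}", dir, path)
    }
}

fn main() {
    if let Ok(bin_dir) = env::var("CARGO_BIN_DIR_CMAKE") {
        let path = prepend_path(&bin_dir, &env::var("PATH").unwrap_or_default());
        // The external script now finds the depended-on binary via $PATH.
        let status = Command::new("make")
            .env("PATH", path)
            .status()
            .expect("failed to run make");
        assert!(status.success());
    }
}
```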

An additional concern I would have about binaries is that it sort of feels like we're pushing Cargo towards becoming an entire package manager. The example given in the RFC is related to cmake, but as the maintainer of the cmake crate I would not want to be responsible for shipping cmake's source code and building it if it's not already in the environment (or somehow handling "the environment has too old a version of what you requested"). In general I can't ever really see myself managing build tools like that, especially for ubiquitous ones like make, cmake, clang, etc.

We can certainly change the example to a more hypothetical build tool, if you'd prefer. I'd like to talk to you at more length about the idea of building cmake from source, though. (I'd expect to put that in something like a cmake-sys or cmake-bin crate, rather than directly in the cmake crate; cmake could just have a vendored feature or similar, to optionally make use of that.) This would help many Windows users, for which it'd be substantially easier than installing cmake on the host system.
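Concretely, the split might look something like this in the `cmake` crate's manifest. This is only a sketch: `cmake-bin` is a hypothetical crate that builds cmake from source, the version number is a placeholder, and the artifact-dependency syntax is the one proposed by this RFC:

```toml
# Hypothetical wiring of an optional vendored cmake.
[dependencies]
cmake-bin = { version = "3.19", type = "bin", optional = true }

[features]
# `--features vendored` builds cmake from source instead of expecting
# it on the host system.
vendored = ["cmake-bin"]
```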

For something like clang, I'd love to provide a sysroot crate that supplies clang and related binaries from precisely the version of LLVM used to build Rust, which would make things like cross-language LTO trivial.

This would be even more helpful for less ubiquitous build tools that not everyone already has, particularly in environments where installing tools is not trivial.

Overall I feel that the most useful part of this RFC is being able to build multiple artifacts for possibly different targets, but I'm not sure that it's best served with a dependency-level directive rather than a package-level directive (commented more below). Otherwise, I can't personally think of a concrete use case where this would be an improvement for a build that's otherwise quite complicated to do today (e.g. one that would presumably be multiple cargo invocations driven by some sort of other build system).

Several of the use cases proposed in the RFC are specifically cases where today someone would have to drive Cargo with some other build system, and this RFC would allow just using cargo build instead.

jyn514

comment created time in 23 minutes

Pull request review comment rust-lang/rfcs

Allow "artifact dependencies" on bin, cdylib, and staticlib crates

- Feature Name: (`bindeps`)
- Start Date: 2020-11-30
- RFC PR: [rust-lang/rfcs#3028](https://github.com/rust-lang/rfcs/pull/3028)
- Rust Issue: [rust-lang/rust#0000](https://github.com/rust-lang/rust/issues/0000)

# Summary
[summary]: #summary

Allow Cargo packages to depend on `bin`, `cdylib`, and `staticlib` crates, and use the artifacts built by those crates.

# Motivation
[motivation]: #motivation

There are many different possible use cases.

- [Testing the behavior of a binary](https://github.com/rust-lang/cargo/issues/4316#issuecomment-361641639). Currently, this requires invoking `cargo build` recursively, or running `cargo build` before running `cargo test`.
- [Running a binary that depends on another](https://github.com/rust-lang/rustc-perf/tree/master/collector#how-to-benchmark-a-change-on-your-own-machine). Currently, this requires running `cargo build`, making it difficult to keep track of when the binary was rebuilt. The use case for `rustc-perf` is to have a main binary that acts as an 'executor', which executes `rustc` many times, and a smaller 'shim' which wraps `rustc` with additional environment variables and arguments.
- [Building tools needed at build time](https://github.com/rust-lang/rust/pull/79540#unresolved-questions). Currently, this requires either splitting the tool into a library crate (if written in Rust), or telling the user to install the tool on the host and detecting its availability. This feature would allow building the necessary tool from source and then invoking it from a `build.rs` script later in the build.
- Building and embedding binaries for another target, such as firmware or WebAssembly. This feature would allow a versioned dependency on an appropriate crate providing the firmware or WebAssembly binary, and then embedding the binary (or a compressed or otherwise transformed version of it) into the final crate. For instance, a virtual machine could build its system firmware, or a WebAssembly runtime could build helper libraries.
- Building and embedding a shared library for use at runtime. For instance, a binary could depend on a shared library used with [`LD_PRELOAD`](https://man7.org/linux/man-pages/man8/ld.so.8.html#ENVIRONMENT), or used in the style of the Linux kernel's [VDSO](https://man7.org/linux/man-pages/man7/vdso.7.html).

# Guide-level explanation
[guide-level-explanation]: #guide-level-explanation

Cargo allows you to depend on binary or C ABI artifacts of another package; this is known as a "binary dependency" or "artifact dependency". For example, you can depend on the `cmake` binary in your `build.rs` like so:

```toml
[build-dependencies]
cmake = { version = "1.0", type = "bin" }
```

Cargo will build the `cmake` binary, then make it available to your `build.rs` through an environment variable:

```rust
// build.rs
use std::{env, process::Command};

fn main() {
    let cmake_path = env::var_os("CARGO_BIN_FILE_CMAKE_cmake").expect("cmake binary");
    let mut cmake = Command::new(cmake_path);
    cmake.arg("--version");
    assert!(cmake.status().expect("cmake --version failed").success());
}
```

You can optionally specify specific binaries to depend on using `bins`:

```toml
[build-dependencies]
cmake = { version = "1.0", type = "bin", bins = ["cmake"] }
```

If no binaries are specified, all the binaries in the package will be built and made available.

You can obtain the directory containing all binaries built by the `cmake` crate with `CARGO_BIN_DIR_CMAKE`, such as to add it to `$PATH` before invoking another build system or a script.

Cargo also allows depending on `cdylib` or `staticlib` artifacts. For example, you can embed a dynamic library in your binary:

```rust
// main.rs
const MY_PRELOAD_LIB: &[u8] = include_bytes!(env!("CARGO_CDYLIB_FILE_MYPRELOAD"));
```

Note that cargo cannot help you ensure these artifacts are available at runtime for an installed version of a binary; cargo can only supply these artifacts at build time. Runtime requirements for installed crates are out of scope for this change.

If you need to depend on multiple variants of a crate, such as both the binary and library of a crate, you can supply an array of strings for `type`: `type = ["bin", "lib"]`.

# Reference-level explanation
[reference-level-explanation]: #reference-level-explanation

There are four `type`s available:

1. `"lib"`, the default
2. `"bin"`, a crate building one or more binaries
3. `"cdylib"`, a C-compatible dynamic library
4. `"staticlib"`, a C-compatible static library

`"lib"` corresponds to all crates that can be depended on currently, including `lib`, `rlib`, and `proc-macro` libraries. See [linkage](https://doc.rust-lang.org/reference/linkage.html) for more information.

Artifact dependencies can appear in any of the three sections of dependencies (or in target-specific versions of these sections):

- `[build-dependencies]`
- `[dependencies]`
- `[dev-dependencies]`

By default, `build-dependencies` are built for the host, while `dependencies` and `dev-dependencies` are built for the target. You can specify the `target` attribute to build for a specific target, such as `target = "wasm32-wasi"`; a literal `target = "target"` will build for the target even when specifying a build dependency. (If the target is not available, this will result in an error at build time, just as if building the specified crate with a `--target` option for an unavailable target.)

@alexcrichton A package-level directive for the default also makes sense, but that would be a separate proposal. If a package truly supports only one target, a package-level directive would suffice. The target attribute on a dependency is intended for cases where the dependency supports multiple targets.

For example, consider a crate that builds virtual machine firmware, and supports multiple architectures. If you're building a virtual machine for a specific architecture, you need firmware for that architecture.

Or, consider a kernel that supports both 64-bit and 32-bit userspace applications, and needs to build different versions of a VDSO for each case.

I also expect target = "target" on build dependencies to be quite common: "no, I don't want to run this as part of the build process, I want this to be runnable on the target system instead".

I would also expect to see cases like this:

[build-dependencies]
thing32 = { package = "thing", version = "...", target = "i686-unknown-linux-gnu" }
thing64 = { package = "thing", version = "...", target = "x86_64-unknown-linux-gnu" }
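A build script consuming these two dependencies would then look each one up under its own environment variable. A sketch of the naming, following the `CARGO_BIN_FILE_CMAKE_cmake` example from the RFC text; the hyphen-to-underscore normalization of the dependency key is my assumption:

```rust
// Compute the env var under which the RFC would expose a named binary
// of an artifact dependency. The dependency key is uppercased (as in
// CARGO_BIN_FILE_CMAKE_cmake); mapping '-' to '_' is an assumption.
fn bin_file_var(dep_key: &str, bin: &str) -> String {
    let dep = dep_key.to_uppercase().replace('-', "_");
    format!("CARGO_BIN_FILE_{}_{}", dep, bin)
}
```

With the manifest above, a build.rs would read the variables for the `thing32` and `thing64` keys to get paths to the two builds of the same package.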
jyn514

comment created time in 38 minutes

Pull request review comment rust-lang/rfcs

Allow "artifact dependencies" on bin, cdylib, and staticlib crates

There are four `type`s available:

1. `"lib"`, the default
2. `"bin"`, a crate building one or more binaries
3. `"cdylib"`, a C-compatible dynamic library

@alexcrichton This isn't intended for the use case where you just want to link your binary (or library) to the produced library. For that use case, I absolutely agree that cargo should provide more integration. This is intended for the use case where the library serves some other function, such as a runtime library for an interpreter (linked into interpreted programs rather than just the interpreter), or a VDSO for a kernel (linked into userspace programs run on that kernel), or a preloaded library for use with LD_PRELOAD.

jyn514

comment created time in an hour

issue comment rust-lang/project-error-handling

Tiny Box optimization

I don't know enough about this. We discussed alloc-based gates for Backtrace elsewhere; some options existed, but I'm not confident about their suitability across the board.

Ooh, well, the good news is I'm already working on this issue as part of moving Error to core. The backtrace type will be backed by a hook (or maybe hooks) similar to the panic hook functionality, which will let core own the interface while std provides the definition for how backtraces are handled, and that should work with no_std and no_alloc.

I probably need to take some time to do a better PoC and then we can more meaningfully discuss how it should align with everything else.

Sounds good.

Of course, this is very much going the opposite direction from most of the goals of this group, but if we could make this work alongside those, then we'd probably make everyone pretty happy... well, except folk who want to worry about fat pointer stability. ;)

I don't really feel like this is going against the goals of the group, though the tie in to error handling is a little tenuous. Either way we're happy to help with brainstorming and providing feedback. If you're interested you should definitely attend our biweekly meetings and give status updates on your progress here.

burdges

comment created time in an hour

PR closed tokio-rs/tokio

Add unwind feature flag

Specifying the unwind feature flag removes the catch_unwind from the task harness. This is especially useful for testing a program with detached JoinHandles.

Motivation

Solving https://github.com/tokio-rs/tokio/issues/2699 to make testing easier to perform. Especially system tests where we have no control over spawned and detached tasks. Not all systems are going to be running with catch_unwind on each task. Some want to fail fast.

Solution

Adds a feature unwind to disable the usage of catch_unwind.

+48 -8

3 comments

5 changed files

BourgondAries

pr closed time in an hour

pull request comment tokio-rs/tokio

Add unwind feature flag

If I understand the PR correctly, I don't think the feature flag is additive. Feature flags should not change the behavior of code, only add new APIs or change internal runtime details.

I would suggest writing up a proposal considering how unwinding fits with the various schedulers. I added additional thoughts here: https://github.com/tokio-rs/tokio/issues/2699#issuecomment-738458698

I appreciate the PR, there is definitely unexpected complexity. I am going to close this PR. If you want to continue the effort, I would recommend opening an issue containing a proposal addressing the edge cases. You can also ping people in #tokio-dev on Discord (https://discord.gg/tokio).

BourgondAries

comment created time in an hour

issue opened hyperium/hyper

v0.14 Release Checklist

The v0.14 milestone is complete, meaning all major features are merged. This is a checklist of some administrata to have a smooth release!

  • [ ] Release h2 v0.3
  • [ ] Release http-body v0.4
  • [ ] Release http with updated bytes (can be a minor version)
  • [ ] Blog post
  • [ ] Blast off 🚀

created time in an hour

pull request comment tokio-rs/tokio

Add unwind feature flag

@carllerche no idea, I suppose the thread just swallows the panic. This is not intended for use with the multithreaded scheduler, though, so it could be somehow disabled for that.

BourgondAries

comment created time in an hour

pull request comment tokio-rs/tokio

Add unwind feature flag

How does this work with the multi-threaded scheduler?

BourgondAries

comment created time in an hour

issue comment tokio-rs/tokio

Detached tasks in tests

In theory, I am OK w/ the idea of adding a catch_unwind configuration to the runtime builder. However, implementing this would not be trivial for the multi-threaded scheduler. The scheduler must maintain correctness, so the panic must be caught, the runtime cleanly shut down, and the panic bubbled up to the caller. Of course, what is the "caller"? Is it block_on? If so, which block_on is it? There can be multiple concurrent calls to Runtime::block_on.

I would read a proposal.
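For readers following along, the catch-and-rethrow mechanics under discussion can be sketched with std alone. This is not tokio's implementation, just the shape of the problem: the harness catches the panic to keep the worker alive, and the open question is where (and whether) to rethrow:

```rust
use std::panic::{catch_unwind, UnwindSafe};

// Run a "task", converting a panic into an error the caller can act on.
// A catch_unwind(false)-style builder option would instead let the
// payload propagate (e.g. via std::panic::resume_unwind) to some caller.
fn run_task<F>(task: F) -> Result<(), String>
where
    F: FnOnce() + UnwindSafe,
{
    catch_unwind(task).map_err(|payload| {
        payload
            .downcast_ref::<&str>()
            .map(|s| s.to_string())
            .unwrap_or_else(|| "non-string panic payload".to_string())
    })
}
```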

Darksonn

comment created time in an hour

issue comment rust-lang/cargo

MacOS deployment target

Using MACOSX_DEPLOYMENT_TARGET in any way results in messages like this (visible if the compile fails for other reasons):

  = note: ld: warning: object file (/Users/simon/code/agent-ui-test/target/debug/deps/libobjc_exception-fd1aed02df04e3f9.rlib(exception.o)) was built for newer macOS version (11.0) than being linked (10.10)

Not sure if that could be a problem.

pronebird

comment created time in 2 hours

PR opened tokio-rs/tokio

Add unwind feature flag

Specifying the unwind feature flag removes the catch_unwind from the task harness. This is especially useful for testing a program with detached JoinHandles.

Motivation

Solving https://github.com/tokio-rs/tokio/issues/2699 to make testing easier to perform. Especially system tests where we have no control over spawned and detached tasks. Not all systems are going to be running with catch_unwind on each task. Some want to fail fast.

Solution

Adds a feature unwind to disable the usage of catch_unwind.

+48 -8

0 comments

5 changed files

pr created time in 2 hours

issue comment rust-lang/project-error-handling

Dealing with process exit codes

FWIW, I think this is also true of ZST error types plus Box, though I'm pretty sure you'd still need an alloc dependency even if it doesn't allocate a real pointer.

The idea is the tinybox crate would have an alloc feature. If the alloc feature is enabled then it invokes box for types with alignment or size larger than usize. If the alloc feature is disabled, then tinybox panics for types with alignment or size larger than usize, meaning it still builds but pushes the error into runtime. If a type has alignment and size no larger than usize then tinybox always works with or without the alloc feature.
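The size/alignment dispatch described above can be written down directly. A sketch (names hypothetical; the actual tinybox crate may differ):

```rust
use std::mem::{align_of, size_of};

// A value can be stored inline in the pointer-sized slot only if both
// its size and alignment fit within usize; anything else needs a real
// allocation (or, with `alloc` disabled, a runtime panic).
fn fits_inline<T>() -> bool {
    size_of::<T>() <= size_of::<usize>() && align_of::<T>() <= align_of::<usize>()
}
```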

epage

comment created time in 2 hours

issue comment rust-lang/project-error-handling

Tiny Box optimization

Would it be possible to implement TinyBox in a 3rd party library similar to smallvec?

Yes. It depends upon dyn Trait fat pointer layout stability, but AFAIK they'll never change even if they're not technically stable.

Also I'm very apprehensive about adding a new error trait, this was a huge problem with the Fail trait. Adding a new trait will bring back error incompatibility issues all over again, though it might be avoidable if it can be a subtrait of std::error::Error.

I don't know enough about this. We discussed alloc-based gates for Backtrace elsewhere; some options existed, but I'm not confident about their suitability across the board.

I'm unsure if TinyBox<T: !Sized> makes sense, but initial experiments ran into hiccups there.

This seems like it might be pretty important, since otherwise it wouldn't be possible to create a TinyBox<dyn Trait>.

I misspoke... TinyBox<dyn Trait> makes sense. There also exist TinyBox<T> formulations that make sense when fat pointers to T contain a size, not a vtable, a la TinyBox<[T]>. I encountered a preliminary hiccup doing a formulation compatible with both use cases, however, so maybe TinyBox::<T>::new requires a T: Sized bound.

We'd need T: Sized for the conversion from TinyBox<T> to TinyBox<dyn Trait> no matter what, so this has zero impact upon error handling. It's just mildly annoying to tell people "oh, here is your small box type for Rust that does not waste space like the smallbox crate does", and then have it not work for slices.

Also, what does the SimpleError trait do in this scenario? I'm not sure how it changes things beyond what TinyBox is doing.

In my mind, the TinyError aka SimpleError trait would serve mostly to make this work entirely outside core/std.

I probably need to take some time to do a better PoC and then we can more meaningfully discuss how it should align with everything else.

Of course, this is very much going the opposite direction from most of the goals of this group, but if we could make this work alongside those, then we'd probably make everyone pretty happy... well, except folk who want to worry about fat pointer stability. ;)

burdges

comment created time in 2 hours

push event tokio-rs/tokio

github-action-benchmark

commit sha 58ec2c4a9c7cb64a8a26c291d4a856a777187780

add sync_semaphore (cargo) benchmark result for 00500d1b35f00c68117d8f4e7320303e967e92e3

view details

push time in 2 hours

push event tokio-rs/tokio

github-action-benchmark

commit sha 37b51dda607b1d545f64d358378546a2be2ced17

add rt_multi_threaded (cargo) benchmark result for 00500d1b35f00c68117d8f4e7320303e967e92e3

view details

push time in 2 hours

created tag tokio-rs/tokio

tag tokio-util-0.5.1

A runtime for writing reliable asynchronous applications with Rust. Provides I/O, networking, scheduling, timers, ...

created time in 2 hours

push event tokio-rs/tokio

Eliza Weisman

commit sha 00500d1b35f00c68117d8f4e7320303e967e92e3

util: prepare v0.5.1 release (#3210)

### Added
- io: `poll_read_buf` util fn (#2972).
- io: `poll_write_buf` util fn with vectored write support (#3156).

Signed-off-by: Eliza Weisman <eliza@buoyant.io>

view details

push time in 2 hours

delete branch tokio-rs/tokio

delete branch: eliza/tokio-util-0.5.1

delete time in 2 hours

PR merged tokio-rs/tokio

util: prepare v0.5.1 release (labels: A-tokio-util, C-maintenance)

Added

  • io: poll_read_buf util fn (#2972).
  • io: poll_write_buf util fn with vectored write support (#3156).
+6 -3

1 comment

3 changed files

hawkw

pr closed time in 2 hours

issue comment tokio-rs/tokio

stream: coordinating Tokio 1.0 with the availability of Stream in std

As of this comment, the Stream trait RFC is not yet merged. I expect it is mostly ready to go, but I am not exactly clear on the expected timeline.

If Stream lands in stable sometime in January 2021, I think we could consider delaying the 1.0 release, but probably not more than that. However, I would expect Stream to not be available in std until later.

Assuming that Stream lands in stable February 2021 or later, I propose:

  • Extract the contents of tokio::stream into the tokio-stream crate.
  • Leave an empty tokio::stream module documenting that Stream utilities exist in tokio-stream and will be merged back into tokio once the Stream trait lands in std.
  • Remove Stream impls for misc types and document how to use async-stream to get your own stream implementation. These types already have inherent fns to consume and do not depend on their Stream implementations.
    • tokio::fs::ReadDir
    • tokio::io::Lines
    • tokio::net::{TcpListener, UnixListener}
    • tokio::signal::{CtrlC, CtrlBreak}
    • tokio::sync::broadcast
    • tokio::time::Interval

Once Stream lands in std, we add tokio::stream back. However, we must do this in a way that preserves Tokio's MSRV guarantee of ~6 months. To do this, we can add a build script that detects the Rust version. If using a version of Rust that includes Stream, re-export that Stream and use it. Otherwise, define a Stream trait that matches the version in std and use that one. This is a similar strategy to the one proposed for futures-rs, but does not introduce Stream until after Stream is in the std stable channel. This prevents potential breaking changes from mismatched Stream versions.
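The version-detection half of such a build script is simple to sketch. The cfg name and the cutoff check are placeholders, since the stabilizing release was not known at the time; only the parsing is shown:

```rust
// Extract the minor version from `rustc --version` output. A build.rs
// would run the command, parse it, and print something like
// `cargo:rustc-cfg=has_std_stream` (placeholder cfg name) when the
// compiler is new enough to ship Stream in std.
fn rustc_minor_version(version_output: &str) -> Option<u32> {
    version_output
        .split_whitespace()
        .nth(1)?            // e.g. "1.48.0"
        .split('.')
        .nth(1)?            // "48"
        .parse()
        .ok()
}
```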

carllerche

comment created time in 2 hours

issue comment tokio-rs/tokio

Detached tasks in tests

Can we remove the catch_unwind from the harness if we set up a runtime with something like builder.catch_unwind(false)?

Darksonn

comment created time in 2 hours

issue comment pyfisch/cbor

Looking for Maintainers

@npmccallum except for the no_std no alloc use case it seems

The readme discusses a potential low level library which could target that case. I would merge patches for that. But it isn't my primary use case.

pyfisch

comment created time in 2 hours

issue comment pyfisch/cbor

Looking for Maintainers

@npmccallum except for the no_std no alloc use case it seems

pyfisch

comment created time in 2 hours

Pull request review comment palantir/conjure-rust-runtime

Implement simulations

[quoted diff truncated: new simulation harness code defining the SimulationBuilder0 through SimulationBuilder3 builder stages, Simulation, and Simulation::run]
client_provider();+                        async move {+                            num_sent.set(num_sent.get() + 1);+                            let response = client+                                .request(endpoint.method().clone(), endpoint.path())+                                .send()+                                .await;+                            num_received.set(num_received.get() + 1);++                            let status = match response {+                                Ok(response) => response.status(),+                                Err(error) => {+                                    if let Some(error) = error.cause().downcast_ref::<RemoteError>()+                                    {+                                        *error.status()+                                    } else if error.cause().is::<UnavailableError>() {+                                        StatusCode::SERVICE_UNAVAILABLE+                                    } else if error.cause().is::<ThrottledError>() {+                                        StatusCode::TOO_MANY_REQUESTS+                                    } else {+                                        panic!("unexpected client error {:?}", error);+                                    }+                                }+                            };++                            *status_codes+                                .borrow_mut()+                                .entry(status.as_u16())+                                .or_insert(0) += 1;+                        }+                    }+                });++                match abort_after {+                    Some(abort_after) => {+                        let _ = time::timeout(abort_after, run_requests).await;+                    }+                    None => run_requests.await,+                }++                recorder.lock().record();++                let status_codes = status_codes.into_inner();+                SimulationReport {+                    
end_time: start.elapsed(),+                    client_mean: Duration::from_nanos(+                        metrics::responses_timer(&metrics, SERVICE)+                            .snapshot()+                            .mean() as u64,+                    ),+                    success_percentage: f64::round(+                        status_codes.get(&200).copied().unwrap_or(0) as f64 * 1000.+                            / num_sent.get() as f64,+                    ) / 10.,+                    server_cpu: Duration::from_nanos(+                        metrics::global_server_time_nanos(&metrics).count() as u64,+                    ),+                    status_codes,+                    num_sent: num_sent.get(),+                    num_received: num_received.get(),+                    num_global_responses: metrics::global_responses(&metrics).count(),+                    record: recorder.lock().finish(),+                }+            }+        })+    }+}++#[derive(Copy, Clone)]+pub enum Strategy {+    ConcurrencyLimiterRoundRobin,+    ConcurrencyLimiterPinUntilError,+    UnlimitedRoundRobin,+}++impl fmt::Display for Strategy {+    fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {+        let s = match self {+            Strategy::ConcurrencyLimiterRoundRobin => "CONCURRENCY_LIMITER_ROUND_ROBIN",+            Strategy::ConcurrencyLimiterPinUntilError => "CONCURRENCY_LIMITER_PIN_UNTIL_ERROR",+            Strategy::UnlimitedRoundRobin => "UNLIMITED_ROUND_ROBIN",+        };++        fmt::Display::fmt(s, fmt)+    }+}++impl Strategy {+    fn apply(&self, builder: &mut Builder) {+        match self {+            Strategy::ConcurrencyLimiterRoundRobin => {+                builder.node_selection_strategy(NodeSelectionStrategy::Balanced);+            }+            Strategy::ConcurrencyLimiterPinUntilError => {+                builder.node_selection_strategy(NodeSelectionStrategy::PinUntilError);+            }+            Strategy::UnlimitedRoundRobin => {+                
// FIXME disable qos

What's preventing us from disabling qos here?

sfackler

comment created 2 hours ago

Pull request review comment palantir/conjure-rust-runtime

Implement simulations

impl Simulation {
    // ... (same diff as quoted in the review comment above; this comment is anchored at:)
    pub fn run(mut self) -> SimulationReport {

nit: this method is long and deeply nested. Can we factor out some of the logic into separate functions?
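One way to address the nit (a sketch only, not code from the PR): the deepest nesting in `run` comes from mapping a response result to a status code inside the `for_each_concurrent` closure, and that mapping can be extracted into a named helper. The stand-in `ClientError` enum below is hypothetical; the real code would match on the conjure_runtime error causes (`RemoteError`, `UnavailableError`, `ThrottledError`).

```rust
// Stand-in for the conjure_runtime error causes handled in `run`.
#[derive(Debug)]
enum ClientError {
    Remote(u16), // a RemoteError carrying the server's status code
    Unavailable, // an UnavailableError
    Throttled,   // a ThrottledError
}

/// Hypothetical helper extracted from `run`: collapse a response result
/// into the status code that should be tallied.
fn status_code(response: Result<u16, ClientError>) -> u16 {
    match response {
        Ok(status) => status,
        Err(ClientError::Remote(status)) => status,
        Err(ClientError::Unavailable) => 503, // SERVICE_UNAVAILABLE
        Err(ClientError::Throttled) => 429,   // TOO_MANY_REQUESTS
    }
}

fn main() {
    assert_eq!(status_code(Ok(200)), 200);
    assert_eq!(status_code(Err(ClientError::Remote(404))), 404);
    assert_eq!(status_code(Err(ClientError::Throttled)), 429);
    println!("ok");
}
```

With a helper like this, the closure body in `run` shrinks to a send, a call to the helper, and a map update, which removes two levels of nesting.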

sfackler

comment created 2 hours ago

pull request comment rust-lang/cargo

Implement experimental registry HTTP API from RFC

I'll also note, regarding the above, that not all of the time in the diagram is spent accessing the index. For rerun-locked, for example, it looks like the time is going to various resolver operations, not to the actual index access.

jonhoo

comment created 2 hours ago

issue comment pyfisch/cbor

Looking for Maintainers

This is just a note to everyone that there is now a new serde-enabled CBOR crate named ciborium (documentation here).

This was built by the Enarx project which is actively using CBOR and needed a robust, flexible implementation. I believe that I've hit many of the redesign points that @pyfisch wanted for this crate. So please check us out and let us know if you spot any problems. It should be mostly a drop-in replacement for serde_cbor and will be actively maintained.

pyfisch

comment created 3 hours ago

pull request comment tokio-rs/tokio

util: prepare v0.5.1 release

@Darksonn

You might need to remove a path dependency.

Hmm, not sure if that's the case --- the examples that use both tokio and tokio-util no longer compile with the path dep removed. Without the path dep, tokio-util depends on a different version of tokio (the crates.io version) than the examples themselves do (a path dep), and the traits don't line up.

I believe cargo just ignores path deps when publishing?
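For context, the pattern under discussion can be sketched as follows (an illustrative Cargo.toml fragment, not the actual tokio-util manifest): a dependency can carry both a `version` requirement and a `path`, and `cargo publish` strips the `path` key so the published crate resolves the dependency from the registry by version alone.

```toml
# Illustrative sketch of a dual path + version dependency.
[dependencies]
tokio = { version = "1", path = "../tokio" }
# Local builds use ../tokio; `cargo publish` drops `path` and the
# published crate depends only on `tokio = "1"` from crates.io.
```

This is why removing the path dependency can break local examples even though it makes no difference to the published artifact.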

hawkw

comment created 3 hours ago

pull request comment rust-lang/rfcs

RFC: Serve crates-io registry over HTTP as static files

For concrete evidence of the complexity required on the client side, here's the diff removing support for the changelog in my experimental implementation: https://github.com/rust-lang/cargo/pull/8890/commits/bda120ad837e6e71edb334a44e64533119402dee

kornelski

comment created 3 hours ago
