jean-airoldie

Sessions should be nested with care.

jean-airoldie/abi_stable_crates

Rust-to-Rust ffi, ffi-safe equivalents of std types, and creating libraries loaded at startup.

jean-airoldie/alexandrie

An alternative crate registry, implemented in Rust.

jean-airoldie/async-compression

Adaptors between compression crates and Rust's async IO types

jean-airoldie/async-tungstenite

Async binding for Tungstenite, the Lightweight stream-based WebSocket implementation

jean-airoldie/bincode

A binary encoder / decoder implementation in Rust.

jean-airoldie/caps-rs

A pure-Rust library to work with Linux capabilities

jean-airoldie/cargo

The Rust package manager

jean-airoldie/caring

The `Shared<T>` struct for cross-process shared-memory objects of type `T`

jean-airoldie/chrono

Date and time library for Rust

issue comment picoHz/lzzzz

Move the async lz4f compression & decompression to `async-compression`

That would require some major API rework, since the Compressor currently uses its own internal buffer so that output capacity is never a problem.

jean-airoldie

comment created time in 8 days

issue opened picoHz/lzzzz

Move the async lz4f compression & decompression to `async-compression`

As discussed in #4, the async-io part of the crate could be moved to async-compression directly.

I looked into it and this would work, but we would need to expose the raw Compressor & Decompressor so that they can be adapted to fit the following traits.

pub trait Encode {
    fn encode(
        &mut self,
        input: &mut PartialBuffer<impl AsRef<[u8]>>,
        output: &mut PartialBuffer<impl AsRef<[u8]> + AsMut<[u8]>>,
    ) -> Result<()>;

    /// Returns whether the internal buffers are flushed
    fn flush(&mut self, output: &mut PartialBuffer<impl AsRef<[u8]> + AsMut<[u8]>>)
        -> Result<bool>;

    /// Returns whether the internal buffers are flushed and the end of the stream is written
    fn finish(
        &mut self,
        output: &mut PartialBuffer<impl AsRef<[u8]> + AsMut<[u8]>>,
    ) -> Result<bool>;
}

pub trait Decode {
    /// Reinitializes this decoder ready to decode a new member/frame of data.
    fn reinit(&mut self) -> Result<()>;

    /// Returns whether the end of the stream has been read
    fn decode(
        &mut self,
        input: &mut PartialBuffer<impl AsRef<[u8]>>,
        output: &mut PartialBuffer<impl AsRef<[u8]> + AsMut<[u8]>>,
    ) -> Result<bool>;

    /// Returns whether the internal buffers are flushed
    fn flush(&mut self, output: &mut PartialBuffer<impl AsRef<[u8]> + AsMut<[u8]>>)
        -> Result<bool>;

    /// Returns whether the internal buffers are flushed
    fn finish(
        &mut self,
        output: &mut PartialBuffer<impl AsRef<[u8]> + AsMut<[u8]>>,
    ) -> Result<bool>;
}

So I'll work on a fork that exposes enough functions to fit this API, and I'll use it in the async-compression crate.
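For illustration, a rough sketch (hypothetical, not the actual fork) of what the encoding half could look like, assuming the raw Compressor grows an update-style method that reports bytes consumed and produced, and assuming PartialBuffer's unwritten/advance accessors:

pub struct Lz4fEncoder {
    // The raw lzzzz lz4f compressor, once it is exposed.
    raw: Compressor,
}

impl Encode for Lz4fEncoder {
    fn encode(
        &mut self,
        input: &mut PartialBuffer<impl AsRef<[u8]>>,
        output: &mut PartialBuffer<impl AsRef<[u8]> + AsMut<[u8]>>,
    ) -> Result<()> {
        // Hypothetical method: compresses from `input` into `output` and
        // returns how many bytes were consumed and produced.
        let (consumed, produced) = self.raw.update(input.unwritten(), output.unwritten_mut())?;
        input.advance(consumed);
        output.advance(produced);
        Ok(())
    }

    fn flush(
        &mut self,
        output: &mut PartialBuffer<impl AsRef<[u8]> + AsMut<[u8]>>,
    ) -> Result<bool> {
        // Hypothetical method: drains the compressor's internal buffer into
        // `output`; `done` is true once nothing is left buffered.
        let (produced, done) = self.raw.flush_into(output.unwritten_mut())?;
        output.advance(produced);
        Ok(done)
    }

    fn finish(
        &mut self,
        output: &mut PartialBuffer<impl AsRef<[u8]> + AsMut<[u8]>>,
    ) -> Result<bool> {
        // Hypothetical method: like flush_into, but also writes the lz4f
        // end-of-frame marker.
        let (produced, done) = self.raw.end_into(output.unwritten_mut())?;
        output.advance(produced);
        Ok(done)
    }
}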

created time in 8 days

fork jean-airoldie/async-compression

Adaptors between compression crates and Rust's async IO types

https://docs.rs/async-compression

fork in 8 days

issue comment picoHz/lzzzz

AsyncReadDecompressor reads `Ok(0)` before EOF is reached

I mean indirectly. We could base the code on their generic state machine to correctly implement the async encoding & decoding. But I'm wondering about the benefits this might have over submitting a patch to add lz4 support.

jean-airoldie

comment created time in 9 days

issue comment picoHz/lzzzz

AsyncReadDecompressor reads `Ok(0)` before EOF is reached

I found this crate that handles the encoding & decoding. I'm wondering whether submitting a patch would be simpler.

jean-airoldie

comment created time in 9 days

started salsa-rs/salsa

started time in 10 days

started Nemo157/async-compression

started time in 12 days

create branch jean-airoldie/lzzzz

branch : read_zero

created branch time in 12 days

push event jean-airoldie/lzzzz

picoHz

commit sha 335d595621701f084a7a931f86e765f0d91e8ad7

lz4f: implement Debug trait for compressors and decompressors

view details

picoHz

commit sha 066a097ab38d77f4104d259b3ba4b37ed0755796

add justfile

view details

picoHz

commit sha 47b599a3ecdf7bf6e23adc2b08a0a1c0292e6bc3

workflow: add clippy check

view details

picoHz

commit sha 8b58d544db8b857907d6a75f64f9b5cab3924890

lz4f: use consistent field names

view details

jean-airoldie

commit sha bdd4f92ef5ae4f31bed491ffe9be979a22acacd1

Added get_ref & get_mut to all compressors & decompressors

view details

picoHz

commit sha ef403954536dab87cdca90aee72b7b0e7e5cd0d8

Merge pull request #7 from jean-airoldie/get_mut Added get_ref & get_mut to all compressors & decompressors

view details

picoHz

commit sha 3ef35a7b0b99b196d5ab9ea6aeff83586268789b

bump v0.4.1

view details

picoHz

commit sha 012f8e0b074abcf3f9b2cb72ba1bffcca53a2fdd

workflow: add nightly toolchain

view details

picoHz

commit sha 19d3d6d06d99c90d3dc7dc754b4a3241795cb877

add rustfmt.toml

view details

picoHz

commit sha ad978a870e83e8e8b8f4b082ca5de362b355d41a

FrameInfo: fix typo

view details

picoHz

commit sha c740ff83a1aef40a361fa5be9636f3ba04e36fd4

Merge pull request #9 from picoHz/frameinfo-fix-typo FrameInfo: fix typo

view details

picoHz

commit sha 79840c061d96fbb26a7f2ce63cde1dbc3e496f07

FrameInfo: fix typo

view details

jean-airoldie

commit sha 3c74902ee8248a2432712c9071ca42e8d4e95eec

Fix AsyncWrite{Compressor,Decompressor}'s AsyncWrite impl * Fixed poll_{write,flush,close} impls. * Added new tests.

view details

picoHz

commit sha 45abd1d6eb063ba263bb478e197ba2890dafb6cb

Merge pull request #10 from picoHz/frameinfo-fix-typo-2 FrameInfo: fix typo

view details

picoHz

commit sha a302dc9a1fae4667aa3fb77a601400813ff31bb5

Merge branch 'master' into small_buf

view details

picoHz

commit sha dabc3a5e3fb8d21a33841923a6b56a8d69e0eec4

Merge pull request #8 from jean-airoldie/small_buf Fix async writers wakeups, flushing and closing

view details

push time in 12 days

pull request comment picoHz/lzzzz

Fix async writers wakeups, flushing and closing

Also, I shortened some of the unit tests by truncating the src dataset for the small_buffer tests, since they took forever.

jean-airoldie

comment created time in 12 days

pull request comment picoHz/lzzzz

Fix async writers wakeups, flushing and closing

I rebased & merged with master, so this should be good now. Also, I can confirm that #5 is now fixed. However, #4 is still an issue, which suggests that it's caused by the async reader impl. I'll look into it later.

jean-airoldie

comment created time in 12 days

push event jean-airoldie/lzzzz

picoHz

commit sha ad978a870e83e8e8b8f4b082ca5de362b355d41a

FrameInfo: fix typo

view details

picoHz

commit sha c740ff83a1aef40a361fa5be9636f3ba04e36fd4

Merge pull request #9 from picoHz/frameinfo-fix-typo FrameInfo: fix typo

view details

jean-airoldie

commit sha 3c74902ee8248a2432712c9071ca42e8d4e95eec

Fix AsyncWrite{Compressor,Decompressor}'s AsyncWrite impl * Fixed poll_{write,flush,close} impls. * Added new tests.

view details

push time in 12 days

push event jean-airoldie/lzzzz

jean-airoldie

commit sha 30d1d362a4d21bd5fcaa7b9e39776fa9bf000ee1

Fix AsyncWrite{Compressor,Decompressor}'s AsyncWrite impl * Fixed poll_{write,flush,close} impls. * Added new tests.

view details

push time in 12 days

issue comment picoHz/lzzzz

AsyncReadDecompressor reads `Ok(0)` before EOF is reached

This is not fixed. I'll open a PR later today.

jean-airoldie

comment created time in 12 days

push event jean-airoldie/lzzzz

jean-airoldie

commit sha 644858fa270e9b25d04528582de1303cff1e3acf

More AsyncWriteCompressor improvements

view details

jean-airoldie

commit sha 73d7757b709d40c4e4e9d09efc0662592513a8d3

Reworked AsyncWriteDecompressor

view details

jean-airoldie

commit sha 9c160085294ccce1b15ef93f36d7a4c66df60780

Speedup async bufread compressor small_buf test

view details

push time in 12 days

push event jean-airoldie/lzzzz

jean-airoldie

commit sha 558e2262f58d71ab8c700fa50a5e328b73c1475e

Improved AsyncWriteCompressor state machine

view details

push time in 12 days

push event jean-airoldie/lzzzz

picoHz

commit sha 012f8e0b074abcf3f9b2cb72ba1bffcca53a2fdd

workflow: add nightly toolchain

view details

picoHz

commit sha 19d3d6d06d99c90d3dc7dc754b4a3241795cb877

add rustfmt.toml

view details

jean-airoldie

commit sha 7e10dc10ce7eae75ef05c25cebd9dba003330275

Added failing compressor test for small buffers

view details

jean-airoldie

commit sha e0a4d7fc6ddaef849465fb679b7e1b9f77aea45e

Added AsyncWriteDecompressor small buffer test Also fixed get_ref method signatures.

view details

jean-airoldie

commit sha 1444b38f1f3e981773a9d95438a35a6eb1aa43c6

Reduced dataset size for tests

view details

push time in 12 days

push event jean-airoldie/lzzzz

jean-airoldie

commit sha 76c66520966b7c6cb63bb56edff9953489787ce1

Added AsyncWriteDecompressor small buffer test Also fixed get_ref method signatures.

view details

push time in 12 days

pull request comment picoHz/lzzzz

Added failing compressor test for small buffers

I'll start working on a fix tomorrow if I get the time.

jean-airoldie

comment created time in 13 days

push event jean-airoldie/lzzzz

jean-airoldie

commit sha b2d9148b2628b08f21d5188c0db1722060427c49

Added failing compressor test for small buffers

view details

push time in 13 days

PR opened picoHz/lzzzz

Added failing compressor test for small buffers

This is the simplest test I could think of. I tested without an AsyncWriteCompressor to make sure there weren't any bugs in the test itself, so it should be good.

+115 -1

0 comment

2 changed files

pr created time in 13 days

push event jean-airoldie/lzzzz

jean-airoldie

commit sha 7010be2772c427ea3ab4849a3861de55b44423c0

Added failing compressor test for small buffers

view details

push time in 13 days

create branch jean-airoldie/lzzzz

branch : small_buf

created branch time in 13 days

pull request comment picoHz/lzzzz

Added get_ref & get_mut to all compressors & decompressors

I'm guessing that the Write trait is pretty legacy and they forgot to add a close method.

jean-airoldie

comment created time in 13 days

delete branch jean-airoldie/lzzzz

delete branch : get_mut

delete time in 13 days

PR opened picoHz/lzzzz

Added get_ref & get_mut to all compressors & decompressors

Now that I think about it, I'm not sure how I feel about CompressorWriter::into_inner, because it's possible that the writer is blocking. This would make into_inner block, which would be unexpected for the user. https://github.com/picoHz/lzzzz/blob/8b58d544db8b857907d6a75f64f9b5cab3924890/src/lz4f/stream/comp/write.rs#L50-L54

I kind of feel the same about the Drop impl of CompressorWriter, which could also potentially block. https://github.com/picoHz/lzzzz/blob/8b58d544db8b857907d6a75f64f9b5cab3924890/src/lz4f/stream/comp/write.rs#L95-L99

A potential solution would be to expose a separate fn finish(self) -> Result<(), io::Error> method; however, it would be easy for the user to misuse the API, since Write doesn't have a close method like AsyncWrite does.
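For illustration, a self-contained toy of the proposed shape (names are hypothetical; the real CompressorWriter wraps an FFI compressor rather than a byte trailer):

use std::io::{self, Write};

// Toy stand-in for the writer type, just to show the finish() API.
struct CompressorWriter<W: Write> {
    inner: Option<W>,
    // Pending end-of-frame bytes that must be written on close.
    trailer: Vec<u8>,
}

impl<W: Write> CompressorWriter<W> {
    // Explicitly terminates the stream, surfacing any I/O error instead of
    // swallowing it in Drop, and hands back the inner writer.
    fn finish(mut self) -> io::Result<W> {
        let mut inner = self.inner.take().expect("finish called once");
        inner.write_all(&self.trailer)?;
        inner.flush()?;
        Ok(inner)
    }
}

impl<W: Write> Drop for CompressorWriter<W> {
    fn drop(&mut self) {
        // Best-effort close for users who never call finish(); errors are
        // ignored here, which is exactly why an explicit finish() is nicer.
        if let Some(inner) = self.inner.as_mut() {
            let _ = inner.write_all(&self.trailer);
            let _ = inner.flush();
        }
    }
}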

+150 -0

0 comment

12 changed files

pr created time in 13 days

create branch jean-airoldie/lzzzz

branch : get_mut

created branch time in 13 days

issue comment picoHz/lzzzz

AsyncReadCompressor is never woken up when the buffer is too small

Even with the latest version, the issue still persists. I'll submit a PR with some reproduction code tomorrow if I get the time.

jean-airoldie

comment created time in 14 days

push event jean-airoldie/lzzzz_async_read_decompressor

jean-airoldie

commit sha e4020cd7634be3037bc82f2ef1997a203f8aa972

Updated to latest version

view details

push time in 14 days

push event jean-airoldie/lzzzz

picoHz

commit sha a96482f80002a69c31193e48252428a98b9cf3ab

trigger workflows on all branches

view details

picoHz

commit sha 943aa16f9de901916d781992eb6dda99b3457786

Merge pull request #6 from picoHz/trigger-workflow trigger workflows on all branches

view details

picoHz

commit sha a318259516929ac9894b968c7f7d4784497d7e31

fix Async{BufRead,Read}Decompressor not working properly with small buffer

view details

picoHz

commit sha 5bfff1976a380e2bd896ca303f3df03afb0a44de

bump v0.3.3

view details

jean-airoldie

commit sha faf69e437376227a3a29ee7db8e244379d10daeb

Replaced tokio dependency by async_std & futures_io

view details

picoHz

commit sha 9d5e0c93b257e841d45c2cb6fe64ee0a33f2abb4

update dependency

view details

picoHz

commit sha 612574ce6f99e9ea744bb6f709a177af2c5ea433

bump v0.4.0

view details

push time in 14 days

delete branch jean-airoldie/lzzzz

delete branch : async_std

delete time in 15 days

push event jean-airoldie/lzzzz

picoHz

commit sha a96482f80002a69c31193e48252428a98b9cf3ab

trigger workflows on all branches

view details

picoHz

commit sha 943aa16f9de901916d781992eb6dda99b3457786

Merge pull request #6 from picoHz/trigger-workflow trigger workflows on all branches

view details

picoHz

commit sha a318259516929ac9894b968c7f7d4784497d7e31

fix Async{BufRead,Read}Decompressor not working properly with small buffer

view details

picoHz

commit sha 5bfff1976a380e2bd896ca303f3df03afb0a44de

bump v0.3.3

view details

jean-airoldie

commit sha d560c84bbe04594842796bc32b072d18b5eea613

Merge branch 'master' of https://github.com/picoHz/lzzzz into async_std

view details

push time in 15 days

push event jean-airoldie/lzzzz

push time in 15 days

push event jean-airoldie/lzzzz

picoHz

commit sha b02c12a23f804a17a1ea1bfc79d3d0ff240fef3a

trigger workflows on all branches

view details

picoHz

commit sha 8ed6ecaf8a3f3c6461491cecc7f81de6ccd6c0f1

fix Async{BufRead,Read}Decompressor not working properly with small buffer

view details

push time in 15 days

push event jean-airoldie/lzzzz

jean-airoldie

commit sha e7f55ba7cf640b4f133a6d3b365e4eca842fd157

Added into_inner method to the sync compressors

view details

picoHz

commit sha e47f4996e133d5610b4b5346dca9814658d86725

Fix typos

view details

picoHz

commit sha 888a3b4cd879cc86627dfd948acf4f5d4a17acef

Merge pull request #2 from jean-airoldie/into_inner Added into_inner method to the sync compressors

view details

picoHz

commit sha 8d6dae882d8b3e605bff40463308647107664f96

bump v0.3.0

view details

picoHz

commit sha 9cf7f71be649d6874baf0debcbf19edcab819ec0

lz4_hc: fix documentation

view details

picoHz

commit sha fac43585620b9da9beb713847a29341770430e13

better crate description

view details

picoHz

commit sha 14841d41ff85cbea4b2dda22a843cb725376f5b0

bump v0.3.1

view details

push time in 15 days

push event jean-airoldie/lzzzz

jean-airoldie

commit sha 80047a8b32bad42ab1c9177fa11d371f726c9bfe

Merge branch 'master' of https://github.com/picoHz/lzzzz into async_std

view details

jean-airoldie

commit sha b2ab60026b3a7ae2ac70600fb2c6d0bd61a52ccd

Merge branch 'async_std' of https://github.com/jean-airoldie/lzzzz into async_std

view details

push time in 15 days

PR closed djc/quinn

WIP: Make quinn runtime agnostic

This is my WIP attempt to fix #502, as outlined in my comment.

+99 -9

0 comment

4 changed files

jean-airoldie

pr closed time in 15 days

issue comment djc/quinn

async_std support

Unless I misunderstand how smol works these days? I'd prefer to be genuinely runtime-agnostic as outlined above.

Right. The only real benefit is that you can run your quinn sockets in a non-tokio runtime without crashing.

Demi-Marie

comment created time in 15 days

PR opened djc/quinn

WIP: Make quinn runtime agnostic

This is my WIP attempt to fix #502, as outlined in my comment.

+99 -9

0 comment

4 changed files

pr created time in 16 days

create branch jean-airoldie/quinn

branch : smol

created branch time in 16 days

issue comment djc/quinn

async_std support

From what I understand, you could replace the tokio dependency with smol & futures-io:

  • tokio::io::Async{Read,Write} becomes futures_io::Async{Read,Write}
  • tokio-rustls becomes async-tls
  • PollEvented<mio::net::UdpSocket> becomes smol::Async<std::net::UdpSocket>
  • tokio::time becomes either futures-timer or smol::Timer

However, you would need to find a non-tokio alternative to tokio_util::codec, although it wouldn't be too hard to adapt the code.

I think this should make quinn agnostic over the runtime.
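For the socket part, a minimal sketch of the swap (assuming smol's Async reactor wrapper and its async send_to / recv_from methods):

use std::net::UdpSocket;

fn main() -> std::io::Result<()> {
    smol::block_on(async {
        // Wrap a plain std socket in the reactor instead of tokio's
        // PollEvented; no tokio runtime is required to drive it.
        let socket = smol::Async::new(UdpSocket::bind("127.0.0.1:0")?)?;
        let addr = socket.get_ref().local_addr()?;
        socket.send_to(b"ping", addr).await?;
        let mut buf = [0u8; 4];
        let (n, _peer) = socket.recv_from(&mut buf).await?;
        assert_eq!(&buf[..n], b"ping");
        Ok(())
    })
}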

Demi-Marie

comment created time in 16 days

push event jean-airoldie/quinn

Dirkjan Ochtman

commit sha afc18262afc52a7ac00bc35f0e639fb742967d5c

Upgrade to err-derive 0.2

view details

jean-airoldie

commit sha 6d069ca8eb58de93be92f3c0496afcf630e81195

Impl Debug for various types Impl fmt::Debug for * {Send,Recv}Stream * ConnectionRef * ConnectionDriver

view details

Benjamin Saunders

commit sha a0d4b36a17afe578d6531c6b1bfc8a790ab914d4

Fix stream ID leak when stream opening futures are dropped By deferring allocation of the stream ID until the futures synchronously completes, we guarantee that either no ID is allocated or ownership is successfully taken by the Send/RecvStream struct(s).

view details

jean-airoldie

commit sha 1d59323eb6d9944f63a6d4930ae7f829676a92a0

Added debug impl to various structs

view details

dependabot-preview[bot]

commit sha 89cd5d06e5b59b2b8774dfa5a1508e572b149f78

Update webpki-roots requirement from 0.17 to 0.18 Updates the requirements on [webpki-roots](https://github.com/ctz/webpki-roots) to permit the latest version. - [Release notes](https://github.com/ctz/webpki-roots/releases) - [Commits](https://github.com/ctz/webpki-roots/compare/v/0.17.0...v/0.18.0) Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

view details

Benjamin Saunders

commit sha 458613e7bb1052d787085ac00a5b33c207ec076b

Allow PING at every encryption level See https://github.com/quicwg/base-drafts/pull/3035

view details

Benjamin Saunders

commit sha 06f0a3e292e63d93372483f7e140928bff33ec13

Fix stream ID flow control deadlock

view details

Benjamin Saunders

commit sha bc3e5af68b31e4b092150bb4d42f5079e06c4578

Refactor benchmarks

view details

Benjamin Saunders

commit sha 5fbbf4c8f30cba779135b269b1adceecaa9c4e99

Limit outgoing datagram buffer size

view details

Benjamin Saunders

commit sha 6ddcc51a58af166ed49259613c2061c98ab3990b

More consistent datagram-related parameter naming

view details

Benjamin Saunders

commit sha eba7b65bfff2a6ead1b3e9a51dfabf26f2b5edef

Benchmark datagrams

view details

Benjamin Saunders

commit sha 84d44e40a14a8caf0f40d34f5f8dbf082ab09db9

Clarify small stream benchmark character Hopefully this will reduce apples-to-oranges comparisons with datagrams.

view details

Benjamin Saunders

commit sha a957bf4aa2b08fcfecc99b72afe58d458775f5dd

Replace slog with tracing

view details

stammw

commit sha 0bec6eac8eb6458c950d9153ff40c7587b454197

H3: immediately close quic connection on fatal errors

view details

stammw

commit sha 46102d07f2986dd349792308a74d3786de0f6aef

H3: throw an error when client recieves a BiStream

view details

stammw

commit sha a570f17daab80c61ca6763aceb6d3241ff811cc5

H3: track ongoing requests

view details

stammw

commit sha f170a8992266abc6aae07b3aaa8fb268606e6175

H3: GO_AWAY implementation

view details

stammw

commit sha 0972700dab8681146abd3b6e43de4c8d1c20bd9e

H3: rename RecvRequestState finished variant

view details

stammw

commit sha 51e5aaebbce864a06e100e04a952658a1d78f41d

H3: typo in ResponseBuilder name

view details

stammw

commit sha d4caaf5cd4a6b7b4eee5633ac3244ec0ca0410a1

H3: issue quic stream errors and reset streams

view details

push time in 16 days

push event jean-airoldie/lzzzz_async_read_decompressor

jean-airoldie

commit sha 60c318f1950845d0058c97866692c9d1f20f8471

Fixed formatting

view details

push time in 16 days

push event jean-airoldie/lzzzz_async_read_decompressor

jean-airoldie

commit sha 6e4aa23b22e36c9d70d38702687e9c229e095605

Improved never_reads example

view details

jean-airoldie

commit sha 04bc122076adf4e278893b9915f975ca19ee4d6b

Added output to never_reads example

view details

push time in 16 days

issue opened picoHz/lzzzz

AsyncReadCompressor is never woken up when the buffer is too small

It seems that when the read / write buffer is smaller than the total size of the lz4f frame, the AsyncReadCompressor is not woken up properly.

Here is my reproduction example. I'm using my async_std branch for simplicity.

created time in 16 days

push event jean-airoldie/lzzzz_async_read_decompressor

jean-airoldie

commit sha 285fa774505fdd233728b23cb3ecf8c696ee455f

Added examples

view details

push time in 16 days

push event jean-airoldie/lzzzz_async_read_decompressor

jean-airoldie

commit sha ddddd4e65e3c54644ddaff1de4078665e14d258a

Added rng bytes

view details

push time in 16 days

issue opened picoHz/lzzzz

AsyncReadDecompressor reads `Ok(0)` before EOF is reached

Basically, if the read / write buffer is small enough (less than 54 bytes), AsyncReadDecompressor::poll_read returns Poll::Ready(Ok(0)), which is the sentinel for EOF, even though we haven't actually reached the end of the file.

Here is the reproduction repo. It uses my async_std branch because it makes the code simpler, but it's also an issue on master.
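A condensed version of the repro (the decompressor type is elided here; any futures-io AsyncRead wrapping the compressed frame exhibits it):

use futures::io::{AsyncRead, AsyncReadExt};

// Read everything through a buffer below the ~54-byte threshold.
async fn read_to_end_small<R: AsyncRead + Unpin>(mut decomp: R) -> std::io::Result<Vec<u8>> {
    let mut out = Vec::new();
    let mut buf = [0u8; 32];
    loop {
        let n = decomp.read(&mut buf).await?;
        if n == 0 {
            // With the bug, we land here before the whole frame has
            // actually been decompressed.
            return Ok(out);
        }
        out.extend_from_slice(&buf[..n]);
    }
}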

created time in 16 days

push event jean-airoldie/lzzzz_async_read_decompressor

jean-airoldie

commit sha 23bc8a266d8ea218e0f51732fcbd4894a1c872b1

determined min buffer size to reproduce issue

view details

push time in 16 days

create branch jean-airoldie/lzzzz_async_read_decompressor

branch : master

created branch time in 16 days

created repository jean-airoldie/lzzzz_async_read_decompressor

created time in 16 days

PR opened picoHz/lzzzz

Replaced tokio dependency by async_std & futures_io

This is arguable, but I think that using async_std and futures is preferable to tokio for the following two reasons:

Tokio futures are designed to be run in a tokio executor and tend to panic or not work properly elsewhere, which makes them pretty hard to use. For instance, the smol runtime creates its own tokio executor to circumvent this issue.

The tokio crate defines its own io traits, which are incompatible with the futures-io traits and therefore incompatible with any async library that isn't tokio (or based on it). However, this gap can be bridged via a tokio adapter. I'm wondering if it would make sense to add another feature flag that also implements the tokio-specific traits (we could reuse the tokio-io flag).

Also, should I file this PR against dev or master?
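For reference, one way such a gap gets bridged (assuming tokio-util built with its "compat" feature; not part of this PR):

use tokio_util::compat::FuturesAsyncReadCompatExt;

// Wraps any futures-io reader so that tokio-only consumers can use it.
fn bridge<R>(reader: R) -> impl tokio::io::AsyncRead
where
    R: futures::io::AsyncRead,
{
    reader.compat()
}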

+508 -473

0 comment

12 changed files

pr created time in 17 days

create branch jean-airoldie/lzzzz

branch : async_std

created branch time in 17 days

PR opened picoHz/lzzzz

Added into_inner method to the sync compressors

This useful when dealing with owned readers & writers.

+45 -10

0 comment

6 changed files

pr created time in 18 days

push event jean-airoldie/lzzzz

jean-airoldie

commit sha e7f55ba7cf640b4f133a6d3b365e4eca842fd157

Added into_inner method to the sync compressors

view details

push time in 18 days

create branch jean-airoldie/lzzzz

branch : into_inner

created branch time in 18 days

push event jean-airoldie/lzzzz

push time in 18 days

push event jean-airoldie/lzzzz

jean-airoldie

commit sha d127e9fb36112ab911c29e57c2eef50b981b594f

lzzzz: Added AsRawFd to ReadDecompressor & WriteCompressor This allows `ReadDecompressor` and `WriteCompressor` to be used along with `smol::Async` for async io, instead of tokio.

view details

picoHz

commit sha a54d119f9d6904bb20ecab60d8ce0d4f2f0dcf52

remove license-file field

view details

picoHz

commit sha 703a70a0f2354e054f4700b069e342146c441ad5

better documentation

view details

picoHz

commit sha a75423ba99fa5ed2a4101773b27e29c50a2c7f3d

add badges to README.md

view details

picoHz

commit sha c7e5846e60766c5964192fff465741339dc0673b

Create rust.yml

view details

picoHz

commit sha ba88aeb9bb10d69e32999afb3823d61bf8f03fb8

ensure that all the compressors and decompressors implement Send trait

view details

picoHz

commit sha 26c7bfe5a195a2cf22c4cdbdab67811f7f4a85f8

bump v0.2.0

view details

picoHz

commit sha d63b401625f6cf944900b874d7dfe1a9d3e89730

Update README.md

view details

picoHz

commit sha 4f6e18848c9a470e7fdda15180370621615bf9ef

Update rust.yml

view details

picoHz

commit sha a0d635362215c847fc075474eeaf00e13c9826b2

Update README.md

view details

picoHz

commit sha 22962561c473a5162c69ea00de2ecd5badad7464

Update rust.yml

view details

picoHz

commit sha 447c767e113d848e3c5d7f837eea94d8c5c0a3be

Update README.md

view details

picoHz

commit sha d286d20ec94c93d4d0043e0e5f74873613605226

Update Cargo.toml

view details

picoHz

commit sha f32293306f5c28c174c3616d02f26977f743fe5d

Update Cargo.toml

view details

picoHz

commit sha 208090b8753f2df50dec693af3fb243b5e9b7a9c

bump v0.2.1

view details

picoHz

commit sha 4e337ee4456eeb58601df4c20e19a7e4b9da06dd

better documentation

view details

picoHz

commit sha 902c642ddc000c313cb8e3b530dae1c5a2bc64d2

better documentation

view details

picoHz

commit sha ce26a5049ad3282959c918db4a179fadc8cf99f9

bump v0.2.2

view details

picoHz

commit sha b83bdb303eed9299bd99d0143df3852999c7f4a2

Update README.md

view details

picoHz

commit sha 68cc61f8689adda1dd961ae1722a02f0d11c9395

Update rust.yml

view details

push time in 18 days

issue comment picoHz/lzzzz

WriteCompressor does not implement Send

Confirmation that the compression / decompression context is Send-safe: https://github.com/lz4/lz4/issues/887#issuecomment-663653157.

jean-airoldie

comment created time in 20 days

issue closed picoHz/lzzzz

WriteCompressor does not implement Send

It seems like all the compressors and decompressors are not Send because of CompressionContext and DecompressionContext respectively.

Here is the minimal reproduction code.

use lzzzz::lz4f::*;

fn is_send<T: Send>(_: &T) {}

fn main() {
    let mut buf = vec![];
    let w = WriteCompressor::new(&mut buf, Preferences::default());
    is_send(&w);
}

Here is the compiler error for the Send bound.

error[E0277]: `std::ptr::NonNull<lzzzz::lz4f::binding::LZ4FCompressionCtx>` cannot be sent between threads safely
  --> tests/lz4f_stream.rs:20:9
   |
13 |     fn is_send<T: Send>(t: &T) {}
   |        -------    ---- required by this bound in `write_compressor::is_send`
...
20 |         is_send(&w);
   |         ^^^^^^^ `std::ptr::NonNull<lzzzz::lz4f::binding::LZ4FCompressionCtx>` cannot be sent between threads safely
   |
   = help: within `std::result::Result<lzzzz::lz4f::stream::comp::write::WriteCompressor<&mut std::vec::Vec<u8>>, lzzzz::lz4f::error::Error>`, the trait `std::marker::Send` is not implemented for `std::ptr::NonNull<lzzzz::lz4f::binding::LZ4FCompressionCtx>`
   = note: required because it appears within the type `lzzzz::lz4f::api::CompressionContext`
   = note: required because it appears within the type `lzzzz::lz4f::stream::comp::Compressor`
   = note: required because it appears within the type `lzzzz::lz4f::stream::comp::write::WriteCompressor<&mut std::vec::Vec<u8>>`
   = note: required because it appears within the type `std::result::Result<lzzzz::lz4f::stream::comp::write::WriteCompressor<&mut std::vec::Vec<u8>>, lzzzz::lz4f::error::Error>`

Is this a bug or a fundamental limitation of liblz4?

closed time in 20 days

jean-airoldie

issue comment lz4/lz4

Compression & decompression context thread safety

Here's an example of such a structure that we use inside Facebook: [folly::compression::CompressionCoreLocalContextPool](https://github.com/facebook/folly/blob/master/folly/compression/CompressionCoreLocalContextPool.h).

Cool, I'll look into it!

jean-airoldie

comment created time in 20 days

issue closed lz4/lz4

Compression & decompression context thread safety

Can the LZ4F_cctx returned from LZ4F_createCompressionContext and the LZ4F_dctx returned from LZ4F_createDecompressionContext be shared safely across threads? I couldn't find anything on the matter in the docs.

closed time in 20 days

jean-airoldie

issue comment lz4/lz4

Compression & decompression context thread safety

Would it be valid if thread A creates a context and then gives exclusive ownership of that context to thread B? By exclusive ownership I mean that thread A gives its only pointer to the context to thread B, and thread B inherits the responsibility to properly deallocate that context. One case where this wouldn't be valid is if the context relies on thread-local storage.

This is typically what happens when implementing a context pool.

Does reusing the context via a pool provide significant performance gains?

jean-airoldie

comment created time in 20 days

started rhysd/github-action-benchmark

started time in 20 days

create branch jean-airoldie/lzzzz

branch : is_send

created branch time in 20 days

issue opened lz4/lz4

Compression & decompression context thread safety

Can the LZ4F_cctx returned from LZ4F_createCompressionContext and the LZ4F_dctx returned from LZ4F_createDecompressionContext be shared safely across threads? I couldn't find anything on the matter in the docs.

created time in 20 days

issue comment picoHz/lzzzz

WriteCompressor does not implement Send

It seems like the lz4-sys crate does treat the contexts as Send; however, it's probably better to verify directly with the liblz4 devs.

jean-airoldie

comment created time in 20 days

issue opened picoHz/lzzzz

WriteCompressor does not implement Send or Sync

It seems like all the compressors and decompressors are not Send or Sync because of CompressionContext and DecompressionContext respectively.

Here is the minimal reproduction code.

use lzzzz::lz4f::*;

fn is_send<T: Send>(t: &T) {}
fn is_sync<T: Sync>(t: &T) {}

fn main() {
    let mut buf = vec![];
    let w = WriteCompressor::new(&mut buf, Preferences::default());
    is_send(&w);
    is_sync(&w);
}

Here is the compiler error for the Send bound. I omitted the Sync one because it's the same.

error[E0277]: `std::ptr::NonNull<lzzzz::lz4f::binding::LZ4FCompressionCtx>` cannot be sent between threads safely
  --> tests/lz4f_stream.rs:20:9
   |
13 |     fn is_send<T: Send>(t: &T) {}
   |        -------    ---- required by this bound in `write_compressor::is_send`
...
20 |         is_send(&w);
   |         ^^^^^^^ `std::ptr::NonNull<lzzzz::lz4f::binding::LZ4FCompressionCtx>` cannot be sent between threads safely
   |
   = help: within `std::result::Result<lzzzz::lz4f::stream::comp::write::WriteCompressor<&mut std::vec::Vec<u8>>, lzzzz::lz4f::error::Error>`, the trait `std::marker::Send` is not implemented for `std::ptr::NonNull<lzzzz::lz4f::binding::LZ4FCompressionCtx>`
   = note: required because it appears within the type `lzzzz::lz4f::api::CompressionContext`
   = note: required because it appears within the type `lzzzz::lz4f::stream::comp::Compressor`
   = note: required because it appears within the type `lzzzz::lz4f::stream::comp::write::WriteCompressor<&mut std::vec::Vec<u8>>`
   = note: required because it appears within the type `std::result::Result<lzzzz::lz4f::stream::comp::write::WriteCompressor<&mut std::vec::Vec<u8>>, lzzzz::lz4f::error::Error>`

Is this a bug or a fundamental limitation of liblz4?

created time in 20 days

create branch jean-airoldie/lzzzz

branch : smol-support

created branch time in 20 days

fork jean-airoldie/lzzzz

Yet another liblz4 binding for Rust.

fork in 20 days

started picoHz/lzzzz

started time in 20 days

started mozilla/sops

started time in 21 days

issue comment google/flatbuffers

Tracking issue: Rust buffer verification

@TheButlah

I do want to say though, that if the panicking version stays and isn't removed from the API, it is not ok to remove its bounds checks unless it requires the user to wrap the accessor in an unsafe block - it's considered a very large no-no to expose potentially unsafe behavior in a safe API. Instead, having it as an unsafe fn is the move there.

Just so we're clear, I'm talking about bounds checks that would not be required if the type is known, as opposed to bounds checks in general. Take, for instance, str::from_utf8_unchecked from the std lib. It returns a str over bytes that are assumed to be UTF-8. After that slice is returned, all the operations are unchecked with regard to the encoding (i.e. we don't re-validate that each character is UTF-8 every time), but there are still bounds-checked operations (get vs get_unchecked).
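Concretely, in std terms:

fn main() {
    let bytes = b"hello";
    // The encoding is asserted once, unsafely; it is never re-validated.
    let s = unsafe { std::str::from_utf8_unchecked(bytes) };
    // Bounds-checked access is still available...
    assert_eq!(s.get(0..2), Some("he"));
    // ...alongside an unchecked variant that the caller must vouch for.
    assert_eq!(unsafe { s.get_unchecked(0..2) }, "he");
}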

rw

comment created time in 24 days

issue comment google/flatbuffers

Tracking issue: Rust buffer verification

Moreover, I would argue that if you want data in a flatbuffer that should not be validated by the verifier for performance reasons, this data should be expressed as an array of bytes. This way you can do the type checking only when you access said data.

rw

comment created time in 24 days

issue comment google/flatbuffers

Tracking issue: Rust buffer verification

So, from what I'm gathering from this thread so far, a flatbuffer type could be constructed in two ways:

  • The current unchecked way, where the buffer might be invalid, which can lead to panics on access (and eventually to UB, if we remove the checks for better perf)
  • By first running the verifier, which ensures that accesses won't panic.

Assuming we had a working Rust verifier, the obvious approach would be:

impl Foo {
    pub fn new(bytes: &[u8]) -> Result<Self, Error> {
        verify::<Self>(bytes)?;
        Ok(get_root::<Self>(bytes))
    }

    pub unsafe fn new_unchecked(bytes: &[u8]) -> Self {
        // Currently this is not actually unsafe, because we do bounds
        // checking in the accessors, but marking it unsafe now would let
        // us make accesses unchecked (for performance) without it being
        // a breaking change.
        get_root::<Self>(bytes)
    }
}

So in that sense the Foo flatbuffer type is a contract that assumes a valid buffer, just like you assume that the underlying representation of a pure Rust struct is valid at the type-system level. Think of it as some funky, more complex alternative to #[repr(C)].

rw

comment created time in 24 days

pull request comment jean-airoldie/zeromq-src-rs

Fixing cmake version that was broken for iOS builds

I'm not following. The issue mentions the breaking changes being reverted. Is this still not fixed in the later release?

Assuming this is the case, I think the better solution would be to override the dependency with a [patch] in your own project, assuming this will eventually be fixed in the cmake crate. Otherwise this would pin the cmake crate version to exactly "0.1.43" for all the crates that have zeromq-src in their dependency tree, since, from what I recall, cargo allows only one version of a crate per semver-compatible range, which could lead to problems.

dr-orlovsky

comment created time in a month

delete branch jean-airoldie/caring

delete branch : prevent_resize

delete time in a month

push event jean-airoldie/caring

jean-airoldie

commit sha 2a682716380c8ca825b81e83dfbd8ae341310767

Improved documentation based on Ekleog's comments Co-authored-by: Léo Gaspard <github@leo.gaspard.ninja>

view details

push time in a month

push event jean-airoldie/caring

jean-airoldie

commit sha 002a5dc29fa7b62eb0a1ae72f7f54f55bebff85d

Update src/lib.rs Co-authored-by: Léo Gaspard <github@leo.gaspard.ninja>

view details

push time in a month

push event jean-airoldie/caring

jean-airoldie

commit sha 1349c6c267ba4359a0eb6761595cd0993bf9cb1a

Update src/lib.rs Co-authored-by: Léo Gaspard <github@leo.gaspard.ninja>

view details

push time in a month

push event jean-airoldie/caring

jean-airoldie

commit sha 79ffb157675d0ae826683d578c1a44f8380434ea

Document the resize protections

view details

push time in a month

push event jean-airoldie/caring

jean-airoldie

commit sha dc1597f2290feb08e5428f84976adddd70267f4e

Document the resize protections

view details

push time in a month

pull request comment standard-ai/caring

Prevent anonymous file resizing via seals if possible

Finally, do we document this as being part of the API? If you plan to make this crate cross-platform, you would need to make a breaking change if a target platform does not support something similar to sealing.

jean-airoldie

comment created time in a month

pull request comment standard-ai/caring

Prevent anonymous file resizing via seals if possible

So I did some more digging and found out that fcntl with F_ADD_SEALS will only fail if sealing is not supported by the filesystem, but that's not possible if the previous memfd_create call succeeded, since they were both introduced in Linux 3.17.

jean-airoldie

comment created time in a month

push event jean-airoldie/caring

jean-airoldie

commit sha c7fc169701151da443783c3519b333743c0cdf66

Prevent anonymous file resizing via seals if possible This prevents anonymous file truncating which could lead to SIGBUS for filesystems that support sealing.

view details

push time in a month

push event jean-airoldie/caring

jean-airoldie

commit sha fabf52c817cf5ee26f0e968c072b3198afc5fb2c

Prevent anonymous file resizing via seals if possible This prevents anonymous file truncating which could lead to SIGBUS for filesystems that support sealing.

view details

push time in a month

push event jean-airoldie/caring

jean-airoldie

commit sha 7ac19bcae5ca8e3864c03320637ef920fe86c15a

Added is_resizable method to shared

view details

push time in a month

pull request comment standard-ai/caring

Added Sharing::size() method to retrieve size

FWIW, manipulating the shared memory as &[AtomicU8] is probably going to be awfully slow.

In my specific use case I'm just memcpy'ing the shared bytes and then casting the local copy to &[u8], and vice versa. I benchmarked it, and doing so has no performance cost on x86 (at least on the 10-year-old hardware I tested) for my specific use case (a shared lock-free ringbuffer).

You may want to have a look at using raw pointers and the volatile series of functions, which are designed for this (memory-mapped IO) use case :)

Thanks, that looks like what I was looking for.

Also, you do need to not mutate &[u8] under the hood [...] it'd be Rust UB (and LLVM UB), and the compiler might do optimizations that assume this and will make monkeys fly out of your nose if it ever happens to not hold

Yeah, I'm not manipulating the shared memory directly, only memcpy'ing to and from it.
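A rough sketch of the volatile approach suggested above (assuming a raw pointer and length obtained from the shared mapping; the per-byte loop is deliberately naive):

use std::ptr;

// Copies `len` bytes out of a shared mapping into a local Vec using
// volatile reads, so the compiler can't assume the memory is unchanging.
unsafe fn copy_out(shared: *const u8, len: usize) -> Vec<u8> {
    let mut local = Vec::with_capacity(len);
    for i in 0..len {
        local.push(ptr::read_volatile(shared.add(i)));
    }
    local
}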

jean-airoldie

comment created time in a month

pull request comment standard-ai/caring

Prevent anonymous file resizing via seals if possible

While I agree this makes sense for cooperating processes, I think that if it is to be used as a security protection (as you appear to be planning to), I think it's worth it verifying the return value of fcntl.

Yeah, I agree with your point. I think the simplest way to solve this is to add a method to Shared that checks whether the grow and shrink seals are active.

jean-airoldie

comment created time in a month

delete branch jean-airoldie/caring

delete branch : get_size

delete time in a month

PR opened standard-ai/caring

Prevent anonymous file resizing via seals if possible

This prevents anonymous file truncation, which could lead to SIGBUS, on filesystems that support sealing, as mentioned here.

I think that this is a sane default, because I can't really think of a reason why the user would want to resize the shared memory: they would have to munmap first, and currently they can't get back ownership of the MmapRegion anyway.

Note that I'm ignoring the errors returned by fcntl because I've determined that most of them will never occur, and the only one that can occur is worth ignoring. Specifically, according to the fcntl manpage, these are the possible errors related to seals:

EBUSY [...] if flag includes F_SEAL_WRITE, [...]

Can't happen since we don't currently use that seal and it wouldn't make sense to add support for it in this context.

EINVAL [...] if flags are invalid

I've determined that the flags are valid.

EPERM [...] file already includes F_SEAL_SEAL.

This won't happen because of the allow_sealing option.

EINVAL [...] if the filesystem containing the inode referred to by fd does not support sealing.

This one could occur, in which case sealing fails. Realistically, unless the user absolutely needs a guarantee against resizing, I can't see a justification for erroring out.
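For reference, the sealing call itself boils down to something like this (raw libc sketch; the fd is assumed to come from memfd_create with MFD_ALLOW_SEALING):

use std::os::unix::io::RawFd;

// Best-effort: forbid the file from ever growing or shrinking, ignoring
// the fcntl result as discussed above.
fn seal_resizing(fd: RawFd) {
    unsafe {
        let _ = libc::fcntl(fd, libc::F_ADD_SEALS, libc::F_SEAL_GROW | libc::F_SEAL_SHRINK);
    }
}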

+7 -0

0 comment

1 changed file

pr created time in a month

create branch jean-airoldie/caring

branch : prevent_resize

created branch time in a month

create branch jean-airoldie/caring

branch : allow_sealing

created branch time in a month

push event jean-airoldie/caring

jean-airoldie

commit sha b94128168e8f53ca476f287bfee178fc23ae58c5

Add Sharing::size() method to retrieve size (#5)

view details

push time in a month

pull request comment standard-ai/caring

Added Sharing::size() method to retrieve size

@Ekleog I plan on using memfd seals (F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_SEAL) to prevent an untrusted peer from causing SIGBUS by truncating the file, like you mentioned. I'm also manipulating the shared memory as &[AtomicU8] instead of &[u8] to prevent data races (although that's not necessary on x86 AFAIK). Hopefully that should be enough.

jean-airoldie

comment created time in a month

PR opened udoprog/unicycle

Added FusedStream impl for Unordered

This simply calls is_empty on the Unordered.

This is especially useful when the Unordered is polled inside the futures_util::select! macro.
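The change presumably boils down to this shape (self-contained toy; unicycle's real Unordered is generic and polls actual futures):

use futures_core::stream::{FusedStream, Stream};
use std::pin::Pin;
use std::task::{Context, Poll};

// Toy stand-in for unicycle's Unordered.
struct Unordered {
    remaining: usize,
}

impl Unordered {
    fn is_empty(&self) -> bool {
        self.remaining == 0
    }
}

impl Stream for Unordered {
    type Item = ();

    fn poll_next(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<()>> {
        if self.remaining == 0 {
            Poll::Ready(None)
        } else {
            self.remaining -= 1;
            Poll::Ready(Some(()))
        }
    }
}

// The PR's addition, in essence: the set is terminated exactly when it is
// empty, which lets select! know not to poll it again.
impl FusedStream for Unordered {
    fn is_terminated(&self) -> bool {
        self.is_empty()
    }
}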

+7 -1

0 comment

1 changed file

pr created time in a month

create branch jean-airoldie/unicycle

branch : fused_stream

created branch time in a month

fork jean-airoldie/unicycle

A futures abstraction that runs a set of futures which may complete in any order

fork in a month

started standard-ai/sendfd

started time in a month

started standard-ai/caring

started time in a month
