Nikhil Benesch (benesch) · @MaterializeInc · New York, NY · Systems engineer

ballista-compute/sqlparser-rs 676

Extensible SQL Lexer and Parser for Rust

benesch/backport 4

automatically backport pull requests

benesch/autouseradd 3

👨‍🚒 put out fires started by `docker run --user`

benesch/adspygoogle.dfp 1

setuptools fork of Google DoubleClick for Publishers API Python Client

benesch/backboard 1

have you backported your PRs today?

benesch/abomonation 0

A mortifying serialization library for Rust

benesch/abomonation_derive 0

A macros 1.1 #[derive(Abomonation)] implementation for the abomonation crate

benesch/Amethyst 0

Tiling window manager for OS X à la xmonad.

issue comment fede1024/rust-rdkafka

Cooperative Incremental sticky rebalance

Any update on this please?

oronsh

comment created time in a day

PR opened MaterializeInc/homebrew-materialize

update --threads param to --workers

Currently, `brew services` isn't able to start materialized because this parameter was renamed.

+3 -3

0 comment

1 changed file

pr created time in 3 days

issue closed fede1024/rust-rdkafka

Impossible to send a FutureRecord without key

Hi, I'm trying to send a payload without a key to Kafka and I'm getting this error:

cannot infer type for type parameter K

My IDE shows me this error:

type inside `async fn` body must be known in this context [E0698]
cannot infer type for type parameter `K` declared on the associated function `send`
note: the type is part of the `async fn` body because of this `await`

Without the key:

async fn create_json(data: String, producer: Data<FutureProducer>) {
    producer
        .send(
            FutureRecord::to("test").payload(&format!("{}", Json(AddResp { result: data }).result)),
            Timeout::Never,
        )
        .await;
}

But it works with the key:

async fn create_json(data: String, producer: Data<FutureProducer>) {
    producer
        .send(
            FutureRecord::to("test")
                .payload(&format!(
                    "{}",
                    Json(AddResp {
                        result: data.clone()
                    })
                    .result
                ))
                .key(&data),
            Timeout::Never,
        )
        .await;
}

Versions:

  • rdkafka: 0.26 (cmake-build)
  • rustc: 1.52.1 (9bc8c42bb 2021-05-09)

Do you have any ideas? Thanks

closed time in 4 days

sycured

issue comment fede1024/rust-rdkafka

Impossible to send a FutureRecord without key

Ah, it was just that 😕 I can tell I'm really new to Rust… Thank you a lot

sycured

comment created time in 4 days

issue comment fede1024/rust-rdkafka

Impossible to send a FutureRecord without key

Look at the definition of `FutureRecord`:

pub struct FutureRecord<'a, K: ToBytes + ?Sized, P: ToBytes + ?Sized> {
    /// Required destination topic.
    pub topic: &'a str,
    /// Optional destination partition.
    pub partition: Option<i32>,
    /// Optional payload.
    pub payload: Option<&'a P>,
    /// Optional key.
    pub key: Option<&'a K>,
    /// Optional timestamp.
    pub timestamp: Option<i64>,
    /// Optional message headers.
    pub headers: Option<OwnedHeaders>,
}
[...]

    /// Sets the destination payload of the record.
    pub fn payload(mut self, payload: &'a P) -> FutureRecord<'a, K, P> {
        self.payload = Some(payload);
        self
    }

    /// Sets the destination key of the record.
    pub fn key(mut self, key: &'a K) -> FutureRecord<'a, K, P> {
        self.key = Some(key);
        self
    }

As you never set the key, the compiler can't infer the type of `K` when compiling the program. One way to solve this is to provide an explicit type when initialising the variable.

So in your case you could solve it like this:

async fn create_json(data: String, producer: Data<FutureProducer>) {
    let record: FutureRecord<String, String> = FutureRecord::to("test")
        .payload(&format!("{}", Json(AddResp { result: data }).result));
    producer
        .send(
            record,
            Timeout::Never,
        )
        .await;
}

Now `K` has a known type at compile time.
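
Another way to pin `K` down without binding a separate variable, reusing the types from the snippet above, is to name the unused key type in a turbofish. This is a hedged sketch: it assumes this version of rust-rdkafka provides a `ToBytes` impl for `()`, the usual placeholder for keyless records.

async fn create_json(data: String, producer: Data<FutureProducer>) {
    producer
        .send(
            // `()` stands in for the never-set key type; the payload type is
            // still inferred from the argument to `payload`.
            FutureRecord::<(), _>::to("test")
                .payload(&format!("{}", Json(AddResp { result: data }).result)),
            Timeout::Never,
        )
        .await;
}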

sycured

comment created time in 4 days

delete branch MaterializeInc/homebrew-materialize

delete branch: v0.8.0

delete time in 5 days

push event MaterializeInc/homebrew-materialize

Brandon W Maister

commit sha e683500523c0395e97f0be00de7adff0b9574b56

Release v0.8.0 (#44)

view details

push time in 5 days

PR merged MaterializeInc/homebrew-materialize

Release v0.8.0

Part of https://github.com/MaterializeInc/materialize/issues/6957

+4 -4

0 comment

1 changed file

quodlibetor

pr closed time in 5 days

create branch MaterializeInc/homebrew-materialize

branch: v0.8.0

created branch time in 5 days

PR opened fede1024/rust-rdkafka

FEAT: Added threaded producer example

This example produces 1_000_000 messages in around 1-2 seconds, depending on the machine it is run on.

Closes #369

+104 -0

0 comment

1 changed file

pr created time in 6 days
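
The PR above adds a threaded producer example to rust-rdkafka. As a rough sketch of what such a producer loop can look like (assuming rdkafka 0.26's `ThreadedProducer`/`BaseRecord` API; this is not the code from the PR), the key idea is to enqueue records without awaiting each delivery and flush once at the end:

use std::time::Duration;

use rdkafka::config::ClientConfig;
use rdkafka::producer::{BaseRecord, DefaultProducerContext, Producer, ThreadedProducer};

fn main() {
    let producer: ThreadedProducer<DefaultProducerContext> = ClientConfig::new()
        .set("bootstrap.servers", "localhost:9092")
        .create()
        .expect("producer creation error");

    for i in 0..1_000_000 {
        // `send` only enqueues into librdkafka's buffer; a background thread
        // polls for delivery events, so the loop never awaits a message.
        if let Err((e, _record)) = producer.send(
            BaseRecord::to("topic-test")
                .payload(&format!("Message {}", i))
                .key(&format!("Key {}", i)),
        ) {
            eprintln!("failed to enqueue message {}: {}", i, e);
        }
    }

    // Wait for outstanding deliveries before exiting.
    producer.flush(Duration::from_secs(30));
}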

created tag MaterializeInc/rust-dec

tag dec-0.4.2

libdecnumber bindings for the Rust programming language

created time in 7 days

push event MaterializeInc/rust-dec

Sean Loiselle

commit sha 56ca68136fdd2e870b130a60fe759000e2eddc97

dec: refactor raw part functions to return &[u8]

view details

Sean Loiselle

commit sha a111d6ed94d2bb29f1e28b1d9e6df49227e66862

dec: prepare 0.4.2 release

view details

push time in 7 days

issue opened fede1024/rust-rdkafka

Impossible to send a FutureRecord without key

Hi, I'm trying to send a payload without a key to Kafka and I'm getting this error:

cannot infer type for type parameter K

My IDE shows me this error:

type inside `async fn` body must be known in this context [E0698]
cannot infer type for type parameter `K` declared on the associated function `send`
note: the type is part of the `async fn` body because of this `await`

Without the key:

async fn create_json(data: String, producer: Data<FutureProducer>) {
    producer
        .send(
            FutureRecord::to("test").payload(&format!("{}", Json(AddResp { result: data }).result)),
            Timeout::Never,
        )
        .await;
}

But it's working with the key:

async fn create_json(data: String, producer: Data<FutureProducer>) {
    producer
        .send(
            FutureRecord::to("test")
                .payload(&format!(
                    "{}",
                    Json(AddResp {
                        result: data.clone()
                    })
                    .result
                ))
                .key(&data),
            Timeout::Never,
        )
        .await;
}

Do you have any ideas? Thanks

created time in 8 days

Pull request review comment MaterializeInc/rust-dec

dec: refactor raw parts + 0.4.2 release

 impl<const N: usize> Decimal<N> {
         Context::<Decimal128>::default().from_decimal(self)
     }
 
-    /// Returns the raw parts of this decimal.
+    /// Returns the raw parts of this decimal, with the `u16` elements of `lsu`
+    /// converted to `u8`.
     ///
     /// The meaning of these parts are unspecified and subject to change.
-    pub fn to_raw_parts(&self) -> (u32, i32, u8, &[u16]) {
-        (self.digits, self.exponent, self.bits, &self.lsu())
-    }
-
-    /// Returns a `Decimal::<N>` with the supplied raw parts.
+    pub fn to_raw_parts(&self) -> (u32, i32, u8, &[u8]) {
+        // SAFETY: `lsu` (returned by `coefficient_units()`) is a `&[u16]`, so
+        // each element can safely be transmuted into two `u8`s.
+        let (prefix, lsu, suffix) = unsafe { self.coefficient_units().align_to::<u8>() };
+        // Each element of the LSU is 3-digits wide, so the raw LSU has
+        // essentially ceil(digits / 3) elements; the u8 aligned version should
+        // have twice that many.
+        assert!(
+            lsu.len() == Self::digits_to_lsu_elements_len(self.digits) * 2,
+            "u8 version of LSU contained the wrong number of elements; expected {}, but got {}",
+            Self::digits_to_lsu_elements_len(self.digits) * 2,
+            lsu.len()
+        );
+        // There should be no unaligned elements in the prefix or suffix.
+        assert!(prefix.is_empty() && suffix.is_empty());
+        (self.digits, self.exponent, self.bits, lsu)
+    }
+
+    /// Returns a `Decimal::<N>` with the supplied raw parts, which should be
+    /// generated using [`Decimal::to_raw_parts`].
     ///
     /// # Safety
     ///
     /// The raw parts must be valid according to the guarantees required by the
     /// underlying C library, or undefined behavior can result. The easiest way
     /// to uphold these guarantees is to ensure the raw parts originate from a
     /// call to `Decimal::to_raw_parts`.
-    pub unsafe fn from_raw_parts(digits: u32, exponent: i32, bits: u8, lsu_in: &[u16]) -> Self {
+    ///
+    /// # Panics
+    ///
+    /// If `lsu_u8` is not a slice that can be recomposed into a `&[16]` with
+    /// the number of digits implicitly specified by the `digits` parameter,
+    /// i.e. essentially `ceil(digits / 3)`. You can determine the appropriate
+    /// number of elements in `lsu_u8` using 2 *
+    /// [`Decimal::digits_to_lsu_elements_len`].
+    pub unsafe fn from_raw_parts(digits: u32, exponent: i32, bits: u8, lsu_u8: &[u8]) -> Self {
+        let lsu_u16: &[u16] = std::slice::from_raw_parts(
+            lsu_u8.as_ptr() as *const u16,
+            lsu_u8.len() / (std::mem::size_of::<u16>() / std::mem::size_of::<u8>()),
+        );

Ah, I really learned a lot here. I clearly didn't understand the alignment issue, but I've since read the docs closely and come up with some test cases that fail in the old implementation. Thank you for your patience; I hope this moves the PR forward.
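
For reference, one alignment-safe way to rebuild the `u16` LSU from a byte slice is to copy rather than cast; this is a hypothetical alternative sketch, not the approach the PR takes:

// Rebuild the u16 LSU by copying, which is valid for any alignment of the
// input byte slice (unlike a pointer cast, which requires 2-byte alignment).
fn lsu_from_bytes(lsu_u8: &[u8]) -> Vec<u16> {
    assert!(lsu_u8.len() % 2 == 0, "LSU byte slice must have an even length");
    lsu_u8
        .chunks_exact(2)
        .map(|pair| u16::from_ne_bytes([pair[0], pair[1]]))
        .collect()
}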

sploiselle

comment created time in 9 days

Pull request review comment MaterializeInc/rust-dec

dec: refactor raw parts + 0.4.2 release

 impl<const N: usize> Decimal<N> {
         }
     }
 
+    /// Returns the number of elements required in the `lsu` to represent some
+    /// number of digits.
+    ///
+    /// This function is public and accepts a `u32` instead of a `Decimal` to
+    /// aid in decomposing ([`Self::to_raw_parts`]) and recomposing
+    /// ([`Self::from_raw_parts`]) values.
+    pub fn digits_to_lsu_elements_len(digits: u32) -> usize {
+        (usize::try_from(digits).unwrap() + decnumber_sys::DECDPUN - 1) / decnumber_sys::DECDPUN
+    }

If you take the output of `to_raw_parts` and write it down somewhere else, e.g. in a row, this function gives you a way to determine, as a function of `digits`, how many bytes you need to read back out of the row to provide valid input to `from_raw_parts`. This is what my local fork of MZ uses for deserializing rows back to `Datum::APD`.

This doesn't need to be exported from the library, but we would need the same logic somewhere else; I figured this was the most ergonomic place and addressed the oddity in the doc comment.
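
A minimal sketch of the row-deserialization use described above, assuming `digits_to_lsu_elements_len` and the `&[u8]`-returning `to_raw_parts` land with the signatures shown in the review diff (the function and buffer layout here are illustrative, not the actual Materialize code):

// Given a buffer that begins with the LSU bytes written out by `to_raw_parts`,
// return the slice holding exactly the LSU for a value with `digits` digits.
fn lsu_bytes<const N: usize>(buf: &[u8], digits: u32) -> &[u8] {
    let len = 2 * dec::Decimal::<N>::digits_to_lsu_elements_len(digits);
    &buf[..len]
}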

sploiselle

comment created time in 9 days

PR opened MaterializeInc/rust-dec

dec: refactor raw parts + 0.4.2 release
+70 -26

0 comment

5 changed files

pr created time in 10 days

created tag MaterializeInc/rust-dec

tag dec-0.4.1

libdecnumber bindings for the Rust programming language

created time in 10 days

push event MaterializeInc/rust-dec

Sean Loiselle

commit sha 265b22a043fe5c3e3126e738627603346662a402

dec: prepare 0.4.1 release

view details

push time in 10 days

PR opened MaterializeInc/rust-dec

dec: prepare 0.4.1 release
+9 -4

0 comment

3 changed files

pr created time in 10 days

push event MaterializeInc/rust-dec

Sean Loiselle

commit sha dd506446fb192ce0960e4273c471698b075f1303

dec: refactor to_raw_parts to only return valid digits

view details

Sean Loiselle

commit sha 00eb085e200a928fa4a38ec115ef4dfe215f6898

dec: add trim function

view details

push time in 10 days

PR merged MaterializeInc/rust-dec

Raw parts fix

I want to apologize because last week you pointed out the issues I tried to fix this morning (MaterializeInc/materialize#6883). I thought this approach was an optimization and hadn't realized you filed an issue because it actually fixed something! Nonetheless, this code makes implementing the fix very straightforward.

+19 -7

0 comment

2 changed files

sploiselle

pr closed time in 10 days

PR opened MaterializeInc/rust-dec

Raw parts fix

I want to apologize because last week you pointed out the issues I tried to fix this morning (MaterializeInc/materialize#6883). I thought this approach was an optimization and hadn't realized you filed an issue because it actually fixed something! Nonetheless, this code makes implementing the fix very straightforward.

+19 -7

0 comment

2 changed files

pr created time in 11 days

issue opened fede1024/rust-rdkafka

Kafka simple producer slow for 250_000 messages, add example?

Hey there,

Hope you all are doing fine!

Since kafka-benchmark can no longer be built (https://github.com/fede1024/kafka-benchmark/issues/10), is there any chance the examples could be extended with one showing how to produce around 1,000,000 messages per second?

I tried running the simple producer from the examples but stopped it after it had run for 1 minute on my machine. Then I updated it to buffer the futures, which resulted in 4814ms for 250_000 messages, but that is still far off 1,000,000 messages per second.

I would really appreciate an example for producing a lot of messages, or a fix for kafka-benchmark so it can be run locally!

Best regards,

Dario

Code snippets, using the following Cargo.toml dependencies:

[dependencies]
rdkafka = { version = "0.26", features = ["cmake-build"] }
tokio = { version = "1", features = ["full"] }
futures = "0.3.15"

Producer with futures buffering (4814ms for 250_000 messages):

use std::time::{Duration, Instant};

use rdkafka::{ClientConfig, producer::{FutureProducer, FutureRecord}, util::get_rdkafka_version};

use futures::{StreamExt, stream::{self}};

#[tokio::main]
async fn main() {

let (version_n, version_s) = get_rdkafka_version();
println!("rd_kafka_version: 0x{:08x}, {}", version_n, version_s);

let producer: &FutureProducer = &ClientConfig::new()
    .set("bootstrap.servers", "localhost:9092")
    .set("message.timeout.ms", "5000")
    .create()
    .expect("Kafka Producer creation error");

let start = Instant::now();

let amount = 250_000;

let futures = (0..amount)
    .map(|i| async move {
        let delivery_status = producer
            .send(
                FutureRecord::to("topic-test")
                    .payload(&format!("Message {}", i))
                    .key(&format!("Key {}", i)),
                Duration::from_secs(0),
            )
            .await;

        delivery_status
    })
    .collect::<Vec<_>>();


let mut futures = stream::iter(futures).buffered(500);

while let Some(future) = futures.next().await {
    future.unwrap();
}

println!("Took {}ms to publish {} msg", start.elapsed().as_millis(), amount);

}


Running target/debug/rust_playground
rd_kafka_version: 0x000001ff, 1.6.1
Took 4814ms to publish 250000 msg


Producer as in the examples (time unknown; stopped after 1 minute) for 250_000 messages:

use std::time::{Duration, Instant};

use rdkafka::{ClientConfig, producer::{FutureProducer, FutureRecord}, util::get_rdkafka_version};

use futures::{StreamExt, stream::{self}};

#[tokio::main]
async fn main() {

let (version_n, version_s) = get_rdkafka_version();
println!("rd_kafka_version: 0x{:08x}, {}", version_n, version_s);

let producer: &FutureProducer = &ClientConfig::new()
    .set("bootstrap.servers", "localhost:9092")
    .set("message.timeout.ms", "5000")
    .create()
    .expect("Kafka Producer creation error");

let start = Instant::now();

let amount = 250_000;

let futures = (0..amount)
    .map(|i| async move {
        let delivery_status = producer
            .send(
                FutureRecord::to("topic-test")
                    .payload(&format!("Message {}", i))
                    .key(&format!("Key {}", i)),
                Duration::from_secs(0),
            )
            .await;

        delivery_status
    })
    .collect::<Vec<_>>();

// This loop will wait until all delivery statuses have been received.
for future in futures {
    future.await.unwrap();
}

println!("Took {}ms to publish {} msg", start.elapsed().as_millis(), amount);

}



created time in 12 days

push event MaterializeInc/rust-dec

Sean Loiselle

commit sha 3b9135167119e846bf0a4b8f43107079c3e07da4

dec: support TryFrom<Decimal<N>> for f32/f64

view details

Sean Loiselle

commit sha 42d24e33109d0187c06caf817c4cd7a577a07c26

dec: support From<f32/f64> for Decimal

view details

push time in 12 days

PR merged MaterializeInc/rust-dec

to/from floats
+280 -1

0 comment

3 changed files

sploiselle

pr closed time in 12 days

Pull request review comment MaterializeInc/rust-dec

to/from floats

 impl<const N: usize> Context<Decimal<N>> {
         decnum_tryinto_primitive_uint!(u128, self, 39, d)
     }
 
+    /// Attempts to convert `d` to `f32` or fails if not possible.
+    ///
+    /// Note that this function:
+    /// - Errors for values that over- or underflow `f32`, rather than returning
+    ///   infinity or `0.0`, respectively.
+    /// - Errors if `self.status()` is set to `invalid_operation` irrespective
+    ///   of whether or not this specific invocation of the function set that
+    ///   status.

Oh derp; thanks for catching that.

sploiselle

comment created time in 12 days