Nick Cameron (nrc)
@pingcap - Christchurch, New Zealand
https://www.ncameron.org
Software engineer at PingCAP; @rust-lang core team alumnus.

nrc/derive-new 269

derive simple constructor functions for Rust structs

nrc/apr-intro 62

An alternate introduction to the APR book

nrc/callgraph.rs 26

Callgraphs for Rust programs

nrc/find-work 23

find something Rusty to work on

GSam/rust-refactor 19

Rust refactoring project

nrc/box-error 4

A library for error handling using boxed errors

nrc/clyde 3

wip

nrc/cargo-edit 1

A utility for managing cargo dependencies from the command line.

nrc/chalk 1

A PROLOG-ish interpreter written in Rust, intended eventually for use in the compiler

push event tikv/sig-transaction

cfzjywxk

commit sha a77c5c335d663541f78e0904cbba86d497dffc8a

Create weekly-2020-10-23.md

view details

Nick Cameron

commit sha 10ece0350fbf88ba127687583046309844b3d66c

Update meetings/minutes/weekly-2020-10-23.md Co-authored-by: Lei Zhao <zlwgx1023@gmail.com>

view details

Nick Cameron

commit sha cc83642679aca216b15ec33b42a269dc9a287439

Update weekly-2020-10-23.md

view details

cfzjywxk

commit sha d58e550777680d4911d412a615806dc1798d8a80

Update meetings/minutes/weekly-2020-10-23.md Co-authored-by: Yilin Chen <sticnarf@gmail.com>

view details

cfzjywxk

commit sha ff0205786ad275f0eb796d0a98d7e466a9154022

Update meetings/minutes/weekly-2020-10-23.md Co-authored-by: 龙方淞 <longfangsong@icloud.com>

view details

Nick Cameron

commit sha 97183cc36945697e70f8114b541043d7ffe0edd9

Update meetings/minutes/weekly-2020-10-23.md Co-authored-by: MyonKeminta <9948422+MyonKeminta@users.noreply.github.com>

view details

Nick Cameron

commit sha 1157b22eb4d1f29a4d86620b7dbf614cdaf8678b

Merge pull request #62 from cfzjywxk/patch-19 Create weekly-2020-10-23.md

view details

push time in 4 days

PR merged tikv/sig-transaction

Create weekly-2020-10-23.md

Create weekly-2020-10-23.md

+20 -0

0 comment

1 changed file

cfzjywxk

pr closed time in 4 days

push event cfzjywxk/sig-transaction

Nick Cameron

commit sha 97183cc36945697e70f8114b541043d7ffe0edd9

Update meetings/minutes/weekly-2020-10-23.md Co-authored-by: MyonKeminta <9948422+MyonKeminta@users.noreply.github.com>

view details

push time in 4 days

push event cfzjywxk/sig-transaction

Nick Cameron

commit sha cc83642679aca216b15ec33b42a269dc9a287439

Update weekly-2020-10-23.md

view details

push time in 4 days

push event cfzjywxk/sig-transaction

Nick Cameron

commit sha 10ece0350fbf88ba127687583046309844b3d66c

Update meetings/minutes/weekly-2020-10-23.md Co-authored-by: Lei Zhao <zlwgx1023@gmail.com>

view details

push time in 4 days

push event tikv/client-rust

Nick Cameron

commit sha a399f8157aa678e503846d89a4336b5e19da7489

Update README.md

view details

push time in 4 days

push event tikv/client-rust

Nick Cameron

commit sha 20666c84e850bf517dd92d093c8a5dac6a805d8e

Update README.md Change docs link

view details

push time in 4 days

issue opened tikv/client-rust

Fix doc warnings

Run cargo doc --document-private-items and we get the following warnings:

warning: unresolved link to `transaction`
  --> src/transaction/snapshot.rs:11:27
   |
11 | /// See the [Transaction](transaction) docs for more information on the methods.
   |                           ^^^^^^^^^^^ no item named `transaction` in scope
   |
   = note: `#[warn(broken_intra_doc_links)]` on by default
   = help: to escape `[` and `]` characters, add '\' before them like `\[` or `\]`

warning: unresolved link to `TransactionClient`
 --> src/transaction/mod.rs:5:37
  |
5 | //! Using the [`TransactionClient`](TransactionClient) you can utilize TiKV's transactional interface.
  |                                     ^^^^^^^^^^^^^^^^^ no item named `TransactionClient` in scope
  |
  = help: to escape `[` and `]` characters, add '\' before them like `\[` or `\]`

warning: unresolved link to `transaction`
  --> src/transaction/snapshot.rs:11:27
   |
11 | /// See the [Transaction](transaction) docs for more information on the methods.
   |                           ^^^^^^^^^^^ no item named `transaction` in scope
   |
   = help: to escape `[` and `]` characters, add '\' before them like `\[` or `\]`

warning: unresolved link to `Connect`
  --> src/transaction/client.rs:26:63
   |
26 |     /// Creates a new [`Client`](Client) once the [`Connect`](Connect) resolves.
   |                                                               ^^^^^^^ no item named `Connect` in scope
   |
   = help: to escape `[` and `]` characters, add '\' before them like `\[` or `\]`

warning: unresolved link to `Transaction::set`
  --> src/transaction/client.rs:57:96
   |
57 |     /// Using the transaction you can issue commands like [`get`](Transaction::get) or [`set`](Transaction::set).
   |                                                                                                ^^^^^^^^^^^^^^^^ the struct `Transaction` has no field or associated item named `set`

warning: unresolved link to `raw::Client`
 --> src/raw/mod.rs:5:31
  |
5 | //! Using the [`raw::Client`](raw::Client) you can utilize TiKV's raw interface.
  |                               ^^^^^^^^^^^ no item named `raw` in scope

warning: 6 warnings emitted
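These warnings are usually fixed by making each intra-doc link a path that actually resolves from the comment's scope. A minimal sketch of the pattern (standalone stand-in types, not the client's real code):

```rust
/// Linking by full path from the crate root, e.g.
/// [`TransactionClient`](crate::TransactionClient), resolves even when the
/// item is not in scope at the comment site.
pub struct TransactionClient;

pub struct Transaction;

impl Transaction {
    /// A method that exists, so [`Transaction::get`] is a valid link target;
    /// linking to a nonexistent `Transaction::set` triggers the warning above.
    pub fn get(&self, _key: &[u8]) -> Option<Vec<u8>> {
        None
    }
}
```

Note that rustdoc only checks these links when docs are built, so `cargo doc` (with `broken_intra_doc_links` warnings on) is the check, not `cargo build`.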

created time in 4 days

push event tikv/client-rust

Nick Cameron

commit sha d3cecad351bf2dbb8340e1d6ada263ea666e4fb5

Update doc.yml

view details

push time in 4 days

push event tikv/client-rust

Nick Cameron

commit sha 58b04b766736187fe8d787d44123256863753efa

Update doc.yml

view details

push time in 4 days

push event tikv/client-rust

Nick Cameron

commit sha 6583c532f84d1321010a45a761dfaad36891cbe0

Update doc.yml

view details

push time in 5 days

push event tikv/client-rust

Nick Cameron

commit sha 258c9ee8b5ab176cdfdb5aaee9d9393b5c70a8b2

Update doc.yml

view details

push time in 5 days

push event tikv/client-rust

Nick Cameron

commit sha 491c1fa47f1acec785561c1a52e75cbd7a304624

Update doc.yml Fix action

view details

push time in 5 days

push event tikv/client-rust

Nick Cameron

commit sha e3cca9ce6e11130bb14f8e3f6ae718cfbf225bd3

Create doc.yml Add an action to build rustdoc and upload it

view details

push time in 5 days

push event tikv/client-rust

Nick Cameron

commit sha c2aa25311293ee9bae6205451e135fcbcb191286

Add .git-ftp-include file Signed-off-by: Nick Cameron <nrc@ncameron.org>

view details

push time in 5 days

pull request comment tikv/client-rust

Basic support of pessimistic transaction

@ekexium @sticnarf PTAL

longfangsong

comment created time in 5 days


pull request comment tikv/tikv

txn: Move acquire_pessimistic_lock action

/merge

longfangsong

comment created time in 5 days


issue comment tikv/client-rust

Resolving blockers for publishing

Actually, it seems that protobuf-build does not depend on our fork of protobuf; it depends on upstream. So maybe we can just copy the protos we need and we're done :-)

Hoverbear

comment created time in 5 days

issue comment tikv/client-rust

Resolving blockers for publishing

To summarise the problem:

  • All our deps must be published
  • kvproto (unpublished) depends on protobuf-build (unpublished)
    • publishing kvproto also opens questions about what to do with the Go code, etc.
  • protobuf-build depends on PingCAP's fork of rust-protobuf
    • it would be a relatively large amount of work, and uncertain, to try and upstream our changes
    • rust-client only uses Prost, so this is kind of a limitation of Cargo

So, possible solutions:

  • publish kvproto, which requires one of:
    • upstreaming our changes to rust-protobuf
    • publishing our fork of rust-protobuf (easiest, but not popular; see tikv/protobuf-build#31)
    • removing the rust-protobuf dependency from protobuf-build, either by removing rust-protobuf support from TiKV (hard) or by implementing some kind of pre-build step
    • implementing a separate version of protobuf-build, as a new project, which only supports Prost
  • Copy the protos we require to the client repo. However, we'd still need to fix the dependency of protobuf-build, as above. We could do our own protobuf codegen.

The easiest solution seems to me to be to copy protos to the client repo and to do our own codegen rather than rely on protobuf-build.
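If we go the own-codegen route, the build script could be as small as this sketch (hypothetical file names; assumes the protos we need have been vendored under proto/ and that prost-build is listed as a build-dependency):

```rust
// build.rs - hypothetical sketch, not the client's current build script.
// Generates Prost code from vendored .proto files, removing the
// protobuf-build dependency entirely.
fn main() {
    prost_build::compile_protos(
        &["proto/kvrpcpb.proto"], // vendored copies of the protos we need
        &["proto/"],              // include path for imports between protos
    )
    .expect("failed to generate Prost code from vendored protos");
}
```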

Hoverbear

comment created time in 5 days

Pull request review comment tikv/tikv

txn: make use of near_seek for write-cf

 impl<S: Snapshot> MvccReader<S> {
         self.key_only = key_only;
     }
 
+    pub fn set_single_key(&mut self, single_key: bool) {

This doesn't need to be a function, just make the field pub
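A tiny illustration of the suggestion (stand-in types, not TiKV's real `MvccReader`):

```rust
// Version under review: a setter that only assigns, enforcing no invariant:
struct ReaderWithSetter {
    single_key: bool,
}

impl ReaderWithSetter {
    pub fn set_single_key(&mut self, single_key: bool) {
        self.single_key = single_key;
    }
}

// Suggested version: the field is simply `pub`, and callers assign directly:
pub struct ReaderWithPubField {
    pub single_key: bool,
}
```

At the call site `reader.single_key = true;` reads just as well and drops a function; a setter only earns its keep once setting the flag has to maintain an invariant.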

youjiali1995

comment created time in 6 days


Pull request review comment pingcap/rust-protobuf

less allocate and memcpy in serialization

 pub trait Message: fmt::Debug + Clear + Any + Send + Sync {
     /// Results in error if message is not fully initialized.
     fn write_to(&self, os: &mut CodedOutputStream) -> ProtobufResult<()> {
         self.check_initialized()?;
-
-        // cache sizes
-        self.compute_size();

I think this is necessary?

hicqu

comment created time in 6 days

Pull request review comment pingcap/rust-protobuf

less allocate and memcpy in serialization

 pub trait Message: fmt::Debug + Clear + Any + Send + Sync {
         }
     }
 
-    /// Write the message to the writer.
+    /// Write the message to the writer; an internal buffer is used.
     fn write_to_writer(&self, w: &mut Write) -> ProtobufResult<()> {
         w.with_coded_output_stream(|os| self.write_to(os))
     }
 
+    /// Write the message to the writer.
+    fn write_to_writer_without_buffer(&self, w: &mut Write) -> ProtobufResult<()> {
+        // TODO: add a unbuffer version.
+        w.with_coded_output_stream(|os| {
+            os.use_internal_buffer = false;
+            self.write_to(os)
+        })
+    }
+
     /// Write the message to bytes vec.
     fn write_to_vec(&self, v: &mut Vec<u8>) -> ProtobufResult<()> {
+        let size = self.compute_size() as usize;
+        if v.capacity() - v.len() < size {
+            v.reserve(size - v.len());

I think something is off here. Does this function end up writing to the start of v or to v.len() - 1? It seems that line 115 assumes the latter and line 116 assumes the former.
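A sketch of the `Vec` reserve semantics at issue here (plain `Vec<u8>`, no protobuf types): `Vec::reserve(n)` guarantees room for `n` more elements *beyond* the current length, and is already a no-op when capacity suffices. So if a message of `size` bytes is appended after the existing contents, the straightforward call is:

```rust
// Reserve room to append `size` more bytes after v's current contents.
// No `capacity() - len()` pre-check and no `size - v.len()` adjustment is
// needed; reserve() handles both cases itself.
fn reserve_for_append(v: &mut Vec<u8>, size: usize) {
    v.reserve(size);
}
```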

hicqu

comment created time in 6 days


push event tikv/sig-transaction

lysu

commit sha 60644009d8e8443c605b2c1a8d92898268ee027d

Create weekly-2020-10-16.md Signed-off-by: lysu <sulifx@gmail.com>

view details

lysu

commit sha b648a07dbebdde49a3beeea8ea509aade657ffd0

Update meetings/minutes/weekly-2020-10-16.md Co-authored-by: Yilin Chen <sticnarf@gmail.com>

view details

lysu

commit sha 753a0b4bb83dff8f6f3b781c16dd5b9a79b3b328

Update meetings/minutes/weekly-2020-10-16.md Co-authored-by: Yilin Chen <sticnarf@gmail.com>

view details

lysu

commit sha 898edf62123ff7895a5b04bd613b6d07848d6b7d

Update meetings/minutes/weekly-2020-10-16.md Co-authored-by: MyonKeminta <9948422+MyonKeminta@users.noreply.github.com>

view details

Nick Cameron

commit sha 8fa3563e7a995c0bc860df3f3ff59c9ee87fd33a

Merge pull request #61 from lysu/lysu-patch-1 Create weekly-2020-10-16.md

view details

push time in 8 days

PR merged tikv/sig-transaction

Create weekly-2020-10-16.md


+17 -0

0 comment

1 changed file

lysu

pr closed time in 8 days

Pull request review comment tikv/client-rust

Basic support of pessimistic transaction

 impl TwoPhaseCommitter {
         }
     }
 
+    async fn pessimistic_commit(mut self, for_update_ts: u64) -> Result<()> {

If possible, it would be nice for these two methods to share code with the optimistic versions.

longfangsong

comment created time in 8 days

Pull request review comment tikv/client-rust

Basic support of pessimistic transaction

 impl Transaction {
         .commit()
         .await
     }
+
+    /// Pessimisticly lock the keys
+    ///
+    /// ```rust,no_run
+    /// # use tikv_client::{Config, transaction::Client};
+    /// # use futures::prelude::*;
+    /// # use tikv_client_common::Key;
+    /// # futures::executor::block_on(async {
+    /// # let client = Client::new(Config::default()).await.unwrap();
+    /// let mut txn = client.begin().await.unwrap();
+    /// // ... Do some actions.
+    /// let key1: Key = b"key1".to_vec().into();
+    /// let result: () = txn.pessimistic_lock(vec![key1.clone()]).await.unwrap();
+    /// # });
+    /// ```
+    pub async fn pessimistic_lock(
+        &mut self,
+        keys: impl IntoIterator<Item = impl Into<Key>>,
+    ) -> Result<()> {
+        let mut keys: Vec<Vec<u8>> = keys
+            .into_iter()
+            .map(|it| it.into())
+            .map(|it: Key| it.into())
+            .collect();
+        keys.sort();
+        let primary_lock = keys[0].clone();
+        let lock_ttl = DEFAULT_LOCK_TTL;
+        let for_update_ts = self.rpc.clone().get_timestamp().await.unwrap().version();
+        self.for_update_ts = std::cmp::max(self.for_update_ts, for_update_ts);
+        new_pessimistic_lock_request(
+            keys,
+            primary_lock,
+            self.timestamp.version(),
+            lock_ttl,
+            for_update_ts,
+        )
+        .execute(self.rpc.clone())
+        .await
+    }
+
+    /// Commits the actions of the pessimistic transaction.
+    ///
+    /// ```rust,no_run
+    /// # use tikv_client::{Config, transaction::Client};
+    /// # use futures::prelude::*;
+    /// # futures::executor::block_on(async {
+    /// # let client = Client::new(Config::default()).await.unwrap();
+    /// let mut txn = client.begin().await.unwrap();
+    /// // ... Do some actions.
+    /// let req = txn.pessimistic_commit();
+    /// let result: () = req.await.unwrap();
+    /// # });
+    /// ```
+    pub async fn pessimistic_commit(&mut self) -> Result<()> {

Similarly, we shouldn't have a separate pessimistic commit API; commit should do the right thing based on whether the transaction is optimistic or pessimistic.

longfangsong

comment created time in 8 days

Pull request review comment tikv/client-rust

Basic support of pessimistic transaction

 pub fn new_batch_rollback_request(
     req
 }
 
+impl KvRequest for kvrpcpb::PessimisticLockRequest {
+    type Result = ();
+    type RpcResponse = kvrpcpb::PessimisticLockResponse;
+    type KeyData = Vec<kvrpcpb::Mutation>;
+    const REQUEST_NAME: &'static str = "kv_pessimistic_lock";
+    const RPC_FN: RpcFnType<Self, Self::RpcResponse> = TikvClient::kv_pessimistic_lock_async_opt;
+
+    fn store_stream<PdC: PdClient>(
+        &mut self,
+        pd_client: Arc<PdC>,
+    ) -> BoxStream<'static, Result<(Self::KeyData, Store<PdC::KvClient>)>> {
+        self.mutations.sort_by(|a, b| a.key.cmp(&b.key));
+        let mutations = mem::take(&mut self.mutations);
+        store_stream_for_keys(mutations, pd_client)
+    }
+
+    fn make_rpc_request<KvC: KvClient>(
+        &self,
+        mutations: Self::KeyData,
+        store: &Store<KvC>,
+    ) -> Self {
+        let mut req = self.request_from_store(store);
+        req.set_mutations(mutations);
+        req.set_primary_lock(self.primary_lock.clone());
+        req.set_start_version(self.start_version);
+        req.set_lock_ttl(self.lock_ttl);
+        req.set_for_update_ts(self.for_update_ts);
+        req.set_is_first_lock(self.is_first_lock);
+        req.set_wait_timeout(self.wait_timeout);
+        req.set_force(self.force);
+        req.set_return_values(self.return_values);
+        req.set_min_commit_ts(self.min_commit_ts);
+
+        req
+    }
+
+    fn map_result(_result: Self::RpcResponse) -> Self::Result {}
+
+    fn reduce(
+        results: BoxStream<'static, Result<Self::Result>>,
+    ) -> BoxFuture<'static, Result<Self::Result>> {
+        results.try_for_each(|_| future::ready(Ok(()))).boxed()
        results.try_for_each_concurrent(None, |_| future::ready(Ok(()))).boxed()
longfangsong

comment created time in 8 days

Pull request review comment tikv/client-rust

Basic support of pessimistic transaction

 pub trait KvRequest: Sync + Send + 'static + Sized {
                 // Resolve locks
                 let locks = response.take_locks();
                 if !locks.is_empty() {
+                    // todo: exit directly when found PessimisticLock
                     let pd_client = pd_client.clone();
                     return resolve_locks(locks, pd_client.clone())
                         .map_ok(|resolved| {
                             // TODO: backoff
                             let delay_ms = if resolved { 0 } else { LOCK_RETRY_DELAY_MS };
                             futures_timer::Delay::new(Duration::from_millis(delay_ms))
                         })
-                        .map_ok(move |_| request.response_stream(pd_client))
+                        .map_ok(move |_| {
+                            if remaining_retry_count != 0 {
+                                // todo: when met up with a long-time lock, this might be called

Can you file an issue for this please?

longfangsong

comment created time in 8 days

Pull request review comment tikv/client-rust

Basic support of pessimistic transaction

 impl Transaction {
         .commit()
         .await
     }
+
+    /// Pessimisticly lock the keys
+    ///
+    /// ```rust,no_run
+    /// # use tikv_client::{Config, transaction::Client};
+    /// # use futures::prelude::*;
+    /// # use tikv_client_common::Key;
+    /// # futures::executor::block_on(async {
+    /// # let client = Client::new(Config::default()).await.unwrap();
+    /// let mut txn = client.begin().await.unwrap();
+    /// // ... Do some actions.
+    /// let key1: Key = b"key1".to_vec().into();
+    /// let result: () = txn.pessimistic_lock(vec![key1.clone()]).await.unwrap();
+    /// # });
+    /// ```
+    pub async fn pessimistic_lock(

I think this should not be part of the public API. Instead we should set the transaction to be optimistic or pessimistic, and if pessimistic, take the lock on each update/insert/delete; we can also add an option to make reads locking reads.
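A sketch of the API shape being suggested (hypothetical names, not the client's real types): the mode is chosen when the transaction begins, and locking happens inside the mutating methods rather than through a public lock call.

```rust
#[derive(Clone, Copy, PartialEq)]
enum TransactionKind {
    Optimistic,
    Pessimistic,
}

struct Txn {
    kind: TransactionKind,
    locked_keys: Vec<Vec<u8>>,
}

impl Txn {
    fn new(kind: TransactionKind) -> Self {
        Txn { kind, locked_keys: Vec::new() }
    }

    // put/delete acquire the pessimistic lock implicitly; callers never
    // lock by hand.
    fn put(&mut self, key: Vec<u8>, _value: Vec<u8>) {
        if self.kind == TransactionKind::Pessimistic {
            self.locked_keys.push(key); // stand-in for the pessimistic-lock RPC
        }
        // ... buffer the mutation as usual ...
    }
}
```

With this shape, commit can also "do the right thing" by inspecting the kind, so no separate pessimistic commit entry point is needed.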

longfangsong

comment created time in 8 days


push event nrc/rustaceans.org

Stefan Schindler

commit sha d642daa8e505f6f80f4b00aca2629d265761184f

Update dns2utf8.json

view details

Nick Cameron

commit sha bfc7fc81f89156ff318d3d065cdf1c8df9e91898

Merge pull request #648 from dns2utf8/patch-1 Update dns2utf8.json

view details

push time in 8 days

PR merged nrc/rustaceans.org

Update dns2utf8.json

Remove IRC and update blog

+6 -6

0 comment

1 changed file

dns2utf8

pr closed time in 8 days


pull request comment tikv/tikv

txn: Move pessimistic prewrite action

/merge

longfangsong

comment created time in 12 days


Pull request review comment tikv/sig-transaction

Add document about single region 1PC

+# Single Region 1PC
+
+Transactions that affect only one region or, more strictly speaking, that can be prewritten with a single prewrite request, can be committed directly while the prewrite request is being handled, so the commit phase can be removed entirely. This gives lower latency and higher throughput. For TiDB, indices and rows are usually not in the same region, **so this optimization only works in a limited set of scenarios, like sysbench oltp_update_non_index**. But in a suitable scenario, it gains significantly better performance.
+
+We considered supporting 1PC before, but stopped after running into many difficulties. Now that async commit, which faces and solves mostly the same problems as 1PC, is implemented, we can support 1PC with much less effort.
+
+* Problems already solved by supporting async commit:
+  * Commit ts calculation
+  * Non-unique commit ts and rollback record overlapping
+  * Memory lock
+  * Replica read problem
+* Problems that we do not care anymore:
+  * Binlog incompatibility
+
+## Basic Design
+
+Since async commit is implemented, it's not difficult to implement a working 1PC. However, it's still hard to make it perfectly correct and compatible with other components.
+
+* When committing a transaction, if TiDB finds that the prewrite phase can be done with a single request, the transaction is allowed to be committed with the 1PC protocol. A field named `try_one_pc` in the prewrite request will be set to let TiKV know that 1PC is available for this transaction.

When would this be set or not set? If and only if the transaction only touches one region/store?

MyonKeminta

comment created time in 13 days

Pull request review comment tikv/sig-transaction

Add document about single region 1PC

+# Single Region 1PC
+
+Transactions that affect only one region or, more strictly speaking, that can be prewritten with a single prewrite request, can be committed directly while the prewrite request is being handled, so the commit phase can be removed entirely. This gives lower latency and higher throughput. For TiDB, indices and rows are usually not in the same region, **so this optimization only works in a limited set of scenarios, like sysbench oltp_update_non_index**. But in a suitable scenario, it gains significantly better performance.
+
+We considered supporting 1PC before, but stopped after running into many difficulties. Now that async commit, which faces and solves mostly the same problems as 1PC, is implemented, we can support 1PC with much less effort.
+
+* Problems already solved by supporting async commit:
+  * Commit ts calculation
+  * Non-unique commit ts and rollback record overlapping
+  * Memory lock
+  * Replica read problem
+* Problems that we do not care anymore:
+  * Binlog incompatibility
+
+## Basic Design
+
+Since async commit is implemented, it's not difficult to implement a working 1PC. However, it's still hard to make it perfectly correct and compatible with other components.
+
+* When committing a transaction, if TiDB finds that the prewrite phase can be done with a single request, the transaction is allowed to be committed with the 1PC protocol. A field named `try_one_pc` in the prewrite request will be set to let TiKV know that 1PC is available for this transaction.
+* When TiKV receives a request with `try_one_pc` set, it first handles it just like a normal prewrite request. But after generating the write buffer and before writing it down to RocksDB, it additionally checks whether the prewrite fully succeeded, and if so converts the locks into commit records; finally it writes them down to RocksDB. The `commit_ts` is `max(max_ts, start_ts, for_update_ts) + 1`. TiKV fetches the `max_ts` while acquiring the memory lock, and the memory lock is released after applying, just as in async commit. The final `commit_ts` is sent back to TiDB via the prewrite response.
+* 1PC and async commit are independent. When TiKV refuses to commit a transaction with 1PC, the transaction can fall back to a normal transaction, becoming either a normal 2PC transaction or an async commit transaction according to whether the async commit flag is set.
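The commit-ts rule in the bullets above is simple enough to state as code (a sketch of the formula only, not TiKV's implementation):

```rust
/// commit_ts for a 1PC transaction, per the design above:
/// max(max_ts, start_ts, for_update_ts) + 1
fn one_pc_commit_ts(max_ts: u64, start_ts: u64, for_update_ts: u64) -> u64 {
    max_ts.max(start_ts).max(for_update_ts) + 1
}
```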
+
+## Problems that need to be solved
+
+### Schema version checking problem
+
+[This problem exists in async commit too](https://github.com/tikv/sig-transaction/blob/master/design/async-commit/parallel-commit-known-issues-and-solutions.md#schema-version-checking). But it's even harder to solve for 1PC: if the transaction is committed inside TiKV, there is no chance to check whether the schema version changed between `start_ts` and `commit_ts`, while for async commit this can be checked after prewrite finishes.
+
+Possible solution: when trying to commit a transaction with 1PC, find a ts `one_pc_max_commit_ts` before which we can guarantee that the schema version cannot change, and send it to TiKV. TiKV rejects the 1PC commit if the calculated `commit_ts` exceeds `one_pc_max_commit_ts`.
+
+### CDC compatibility problem
+
+CDC syncs data from TiKV by observing apply events. Prewrites and commits are distinguished, and CDC uses these events to assemble complete transactions and then send them to the downstream. So when 1PC is enabled, there needs to be some way for CDC to tell whether an apply event was produced by a 1PC commit; otherwise CDC will expect every commit event to have a corresponding prewrite event to form a complete transaction.
+
+Possible solutions:
+1. Passing an `is_1pc` flag from the txn layer to apply, just like how `TxnExtra` (which is used to support CDC outputting old values) was written previously. It would be ugly.

I think this is the least ugly solution :-) Option 2 is very fragile, and option 3 seems wasteful and makes write records more complex.

MyonKeminta

comment created time in 13 days


push event tikv/client-rust

ekexium

commit sha a23b160f9168e5338eac50b9992c1dec17f620fa

readme: remove gc workaround Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha 8f40ed061a8be43e2b5c9953c20f2be938a45562

Merge branch 'master' into update-readme

view details

Nick Cameron

commit sha 540b50ace407301625ebccd5c311f67408234a67

Merge pull request #183 from ekexium/update-readme readme: remove gc workaround

view details

push time in 14 days

PR merged tikv/client-rust

readme: remove gc workaround (status/LGT1)

Remove GC workaround from readme.md since we implemented GC

+12 -16

0 comment

1 changed file

ekexium

pr closed time in 14 days


Pull request review comment tikv/client-rust

GC: initial implementation

 impl Client {
         self.pd.clone().get_timestamp().await
     }
 
+    /// Cleans stale MVCC records in TiKV.
+    ///
+    /// It is done by:
+    /// 1. resolve all locks with ts <= safepoint
+    /// 2. update safepoint to PD
+    ///
+    /// This is a simplified version of [GC in TiDB](https://docs.pingcap.com/tidb/stable/garbage-collection-overview).

Could you state how it is simplified please?

ekexium

comment created time in 14 days


push event tikv/sig-transaction

longfangsong

commit sha 053b314c7f6a83805e551acee0128574aa94c62e

add transaction-handling-newbie-perspective Signed-off-by: longfangsong <longfangsong@icloud.com>

view details

longfangsong

commit sha bd81592cd9cfa3724c65da4d150e08e9026d0642

Update transaction-handling-newbie-perspective Signed-off-by: longfangsong <longfangsong@icloud.com>

view details

longfangsong

commit sha fd7600df247437042dc44d2158648dbe36b2a232

Update transaction-handling-newbie-perspective.md Signed-off-by: longfangsong <longfangsong@icloud.com>

view details

longfangsong

commit sha 7495e826a29f07b66ccc3d3dedf20d43ba4cc940

Upload an image Signed-off-by: longfangsong <longfangsong@icloud.com>

view details

longfangsong

commit sha 9919fa16a05571cb057405dad9ed112ac25ea0c6

Apply suggestions from code review Signed-off-by: longfangsong <longfangsong@icloud.com>

view details

龙方淞

commit sha d6890087ccb35862ab4c02f76f568f32382f7c80

Merge branch 'master' into master Signed-off-by: longfangsong <longfangsong@icloud.com>

view details

Nick Cameron

commit sha 2fed337c8e1fa167728ff6b6218e79c467c802d5

Merge pull request #12 from longfangsong/master Transaction Handling Newbie Perspective

view details

push time in 14 days

PR merged tikv/sig-transaction

Transaction Handling Newbie Perspective (documentation)

Signed-off-by: longfangsong longfangsong@icloud.com

This PR adds a transaction-handling-newbie-perspective document.

The document focuses on telling newbies to TiKV how a transaction is handled in the whole system, and where they can find the function corresponding to a certain operation.

+879 -0

1 comment

4 changed files

longfangsong

pr closed time in 14 days


push event tikv/client-rust

ekexium

commit sha 3079e2ada014c4d82162997a724b5873fad975de

CI: use `latest` instead of `nightly` PD and TiKV Signed-off-by: ekexium <ekexium@gmail.com>

view details

Nick Cameron

commit sha 91d7a838226046c85cbd0565d4d8206a47e9bbc6

Merge branch 'master' into ci

view details

Nick Cameron

commit sha 592089f1d744376bc55c806e863769c93564e3e7

Merge pull request #181 from ekexium/ci CI: use latest instead of nightly PD and TiKV

view details

push time in 14 days

PR merged tikv/client-rust

CI: use latest instead of nightly PD and TiKV

CI failed for nightly TiKV and PD. Use latest version instead.

The problem in TiKV should be fixed soon; we can wait until then and don't need to change the config.

+2 -2

1 comment

1 changed file

ekexium

pr closed time in 14 days

push event ekexium/client-rust

ekexium

commit sha 1c383ae2e18af059e636364034e301b144a551c2

fix txn.batch_get() signature; now it returns Iter<KvPair>, and skips non-existent entries Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha ca160b8fa8d4c71545c32c64de168052709c0505

Merge branch 'master' into fix-txn-batch-scan

view details

ekexium

commit sha a22b385c1ce49061b6c14fc6991f903f7966e5f7

use delete_range to clear tikv Signed-off-by: ekexium <ekexium@gmail.com>

view details

Yilin Chen

commit sha a1d80250e9cc071dc7edf4ce9e6eaf5afc92c5ac

Merge branch 'master' into fix-integration-test

view details

ekexium

commit sha da7d0d9d1f35b7f90fcf3e7940ceb98ec622d2f7

Merge branch 'master' into fix-txn-batch-scan

view details

ekexium

commit sha bcd18b3e53c602a392bf9e7565245dd68c991844

fix limit of raw_scan Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha cd9c87e24bd173087ae42e33839f7012eaaab8a4

fix Some(empty) == unbouned problem in group_ranges_by_region Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha 10b6f44b45763a7b4cb4f62e6e19a8691498723d

format code Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha 319ec5254516150d100e6e26490d10fa404ca9ae

fix typo Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha 7c584fbb80c7d10b499e0193e3bdb25eb39c6aef

Merge branch 'master' into fix-txn-batch-scan

view details

ekexium

commit sha 3dbf1567fef67007f94e6f1dc3ea7445d88842ce

Merge branch 'master' into fix-integration-test Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha 02629f820556ac0f13614ca7a6261799c0163422

Merge branch 'master' into fix-raw-scan-limit

view details

Nick Cameron

commit sha eab2ce978be5aaa3c36b96e4f123ee8d09e2ee78

Merge pull request #173 from ekexium/fix-txn-batch-scan Fix txn.batch_get

view details

Nick Cameron

commit sha c6bb3fac31db844c476e3bfc2d13aa7979a74397

Merge branch 'master' into fix-integration-test

view details

Nick Cameron

commit sha cc43fbdcf47e0060f7167ddccd9ada36f5ec44ab

Merge pull request #175 from ekexium/fix-integration-test Use delete_range to clear TiKV in integration tests

view details

Nick Cameron

commit sha d9935c29368185e0b0451b0ab12bcf08e434beea

Merge pull request #179 from ekexium/fix-raw-scan-limit Fix limit problem in raw_scan and unbouned problem in batch_scan

view details

Nick Cameron

commit sha 91d7a838226046c85cbd0565d4d8206a47e9bbc6

Merge branch 'master' into ci

view details

push time in 14 days

push event tikv/client-rust

ekexium

commit sha bcd18b3e53c602a392bf9e7565245dd68c991844

fix limit of raw_scan Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha cd9c87e24bd173087ae42e33839f7012eaaab8a4

fix Some(empty) == unbouned problem in group_ranges_by_region Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha 10b6f44b45763a7b4cb4f62e6e19a8691498723d

format code Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha 02629f820556ac0f13614ca7a6261799c0163422

Merge branch 'master' into fix-raw-scan-limit

view details

Nick Cameron

commit sha d9935c29368185e0b0451b0ab12bcf08e434beea

Merge pull request #179 from ekexium/fix-raw-scan-limit Fix limit problem in raw_scan and unbouned problem in batch_scan

view details

push time in 14 days

PR merged tikv/client-rust

Fix limit problem in raw_scan and unbounded problem in batch_scan status/LGT1

Changes:

  • raw_scan limits the final results.
  • end_key=Some(empty) is treated as upper unbounded. The problem is also found in other places #171 .

Remaining problem:

  • The parameter each_limit of raw_batch_scan does not behave as we expect. It applies to each region of each range, not to each range as a whole. It's not convenient to fix this problem, so I suggest we keep this request but mark it as experimental for users #170 .
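The per-region vs per-range semantics described above can be sketched with a std-only simulation (region contents and limits here are hypothetical, not the client's real types):

```rust
// Sketch: why `each_limit` applied per region can return more results than a
// caller expects. Each inner Vec stands in for the keys one region yields.
fn scan_with_per_region_limit(regions: &[Vec<u32>], each_limit: usize) -> Vec<u32> {
    // Current raw_batch_scan behavior: the limit applies to every region a
    // range overlaps, so one range may yield more than `each_limit` keys.
    regions
        .iter()
        .flat_map(|r| r.iter().take(each_limit).copied())
        .collect()
}

fn scan_with_per_range_limit(regions: &[Vec<u32>], each_limit: usize) -> Vec<u32> {
    // What a caller might expect: the limit applies to the range as a whole.
    regions
        .iter()
        .flat_map(|r| r.iter().copied())
        .take(each_limit)
        .collect()
}

fn main() {
    // One range that spans two regions.
    let regions = vec![vec![1, 2, 3], vec![4, 5, 6]];
    let per_region = scan_with_per_region_limit(&regions, 2);
    let per_range = scan_with_per_range_limit(&regions, 2);
    assert_eq!(per_region, vec![1, 2, 4, 5]); // 4 results for a limit of 2
    assert_eq!(per_range, vec![1, 2]);        // 2 results, as one might expect
}
```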
+72 -5

1 comment

3 changed files

ekexium

pr closed time in 14 days

issue commentpingcap/tiup

playground: playground should not take over console

The final decision is that we should keep it foreground because it's a playground

I think this would make sense if it were interactive, e.g., it started up and gave you the mysql client, and when you exited the client, it stopped the playground. However, since it is not interactive, there doesn't seem to be much advantage to being a foreground process - it actually takes longer to get up and running because you have to open a new terminal to do anything.

nrc

comment created time in 14 days

PullRequestReviewEvent

push eventtikv/client-rust

ekexium

commit sha a22b385c1ce49061b6c14fc6991f903f7966e5f7

use delete_range to clear tikv Signed-off-by: ekexium <ekexium@gmail.com>

view details

Yilin Chen

commit sha a1d80250e9cc071dc7edf4ce9e6eaf5afc92c5ac

Merge branch 'master' into fix-integration-test

view details

ekexium

commit sha 3dbf1567fef67007f94e6f1dc3ea7445d88842ce

Merge branch 'master' into fix-integration-test Signed-off-by: ekexium <ekexium@gmail.com>

view details

Nick Cameron

commit sha c6bb3fac31db844c476e3bfc2d13aa7979a74397

Merge branch 'master' into fix-integration-test

view details

Nick Cameron

commit sha cc43fbdcf47e0060f7167ddccd9ada36f5ec44ab

Merge pull request #175 from ekexium/fix-integration-test Use delete_range to clear TiKV in integration tests

view details

push time in 14 days

PR merged tikv/client-rust

Use delete_range to clear TiKV in integration tests

We can use delete_range on {Default, Lock, Write} CFs to clear TiKV.

Needs #176 to work correctly.
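The clearing strategy above can be sketched with a std-only stand-in, where a BTreeMap plays the role of a column family and `None` as the end key means unbounded (the real client's delete_range operates on TiKV, not an in-memory map):

```rust
use std::collections::BTreeMap;

// Sketch of delete_range semantics: remove every key in [start, end), where
// an end of None means unbounded, i.e. the whole CF from `start` onward.
fn delete_range(cf: &mut BTreeMap<Vec<u8>, Vec<u8>>, start: &[u8], end: Option<&[u8]>) {
    let doomed: Vec<Vec<u8>> = cf
        .keys()
        .filter(|k| k.as_slice() >= start && end.map_or(true, |e| k.as_slice() < e))
        .cloned()
        .collect();
    for k in doomed {
        cf.remove(&k);
    }
}

fn main() {
    // Hypothetical contents; TiKV's CFs are "default", "lock", and "write".
    let mut default_cf = BTreeMap::new();
    default_cf.insert(b"k1".to_vec(), b"v1".to_vec());
    default_cf.insert(b"k2".to_vec(), b"v2".to_vec());
    // An empty start key plus an unbounded end key clears the whole CF.
    delete_range(&mut default_cf, b"", None);
    assert!(default_cf.is_empty());
}
```

Running the same full-range delete over each of the three CFs is what clears TiKV between integration tests.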

+0 -0

0 comment

0 changed file

ekexium

pr closed time in 14 days

push eventekexium/client-rust

ekexium

commit sha 1c383ae2e18af059e636364034e301b144a551c2

fix txn.batch_get() signature; now it returns Iter<KvPair>, and skips non-existent entries Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha ca160b8fa8d4c71545c32c64de168052709c0505

Merge branch 'master' into fix-txn-batch-scan

view details

ekexium

commit sha da7d0d9d1f35b7f90fcf3e7940ceb98ec622d2f7

Merge branch 'master' into fix-txn-batch-scan

view details

ekexium

commit sha 319ec5254516150d100e6e26490d10fa404ca9ae

fix typo Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha 7c584fbb80c7d10b499e0193e3bdb25eb39c6aef

Merge branch 'master' into fix-txn-batch-scan

view details

Nick Cameron

commit sha eab2ce978be5aaa3c36b96e4f123ee8d09e2ee78

Merge pull request #173 from ekexium/fix-txn-batch-scan Fix txn.batch_get

view details

Nick Cameron

commit sha c6bb3fac31db844c476e3bfc2d13aa7979a74397

Merge branch 'master' into fix-integration-test

view details

push time in 14 days

push eventtikv/client-rust

ekexium

commit sha 1c383ae2e18af059e636364034e301b144a551c2

fix txn.batch_get() signature; now it returns Iter<KvPair>, and skips non-existent entries Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha ca160b8fa8d4c71545c32c64de168052709c0505

Merge branch 'master' into fix-txn-batch-scan

view details

ekexium

commit sha da7d0d9d1f35b7f90fcf3e7940ceb98ec622d2f7

Merge branch 'master' into fix-txn-batch-scan

view details

ekexium

commit sha 319ec5254516150d100e6e26490d10fa404ca9ae

fix typo Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha 7c584fbb80c7d10b499e0193e3bdb25eb39c6aef

Merge branch 'master' into fix-txn-batch-scan

view details

Nick Cameron

commit sha eab2ce978be5aaa3c36b96e4f123ee8d09e2ee78

Merge pull request #173 from ekexium/fix-txn-batch-scan Fix txn.batch_get

view details

push time in 14 days

PR merged tikv/client-rust

Fix txn.batch_get

Previously txn.batch_get returned an iterator of (Key, Option<Value>). However, it did not always return all non-existent entries, which was confusing.

Changes: change the signature and behavior to match raw.batch_get: it returns an iterator of KvPair and skips non-existent entries.
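The new contract can be sketched std-only (String keys and values are simplified stand-ins for the client's Key and Value types):

```rust
// Sketch: lookups that found no value are skipped, so the iterator yields
// only existing (key, value) pairs, matching raw.batch_get's behavior.
fn batch_get(results: Vec<(String, Option<String>)>) -> impl Iterator<Item = (String, String)> {
    results
        .into_iter()
        .filter_map(|(k, v)| v.map(|v| (k, v))) // drop non-existent entries
}

fn main() {
    let fetched = vec![
        ("a".to_string(), Some("1".to_string())),
        ("b".to_string(), None), // non-existent key: absent from the output
        ("c".to_string(), Some("3".to_string())),
    ];
    let pairs: Vec<_> = batch_get(fetched).collect();
    assert_eq!(pairs.len(), 2);
    assert_eq!(pairs[0], ("a".to_string(), "1".to_string()));
}
```

Callers that need to distinguish "missing" from "present" can diff the returned keys against the requested ones.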

+28 -23

0 comment

4 changed files

ekexium

pr closed time in 14 days

push eventpingcap/tiup

Nick Cameron

commit sha cc3f596ce71f4a5dca40fc5733360a1cda7fbe1d

Update doc/user/README.md Co-authored-by: SIGSEGV <gnu.crazier@gmail.com>

view details

push time in 14 days

pull request commenttikv/rfcs

Error handling

What's the status of the RFC? Do you need help on this?

I've not had time to update it, but still intend to. I should have time this quarter.

nrc

comment created time in 15 days

PullRequestReviewEvent

PR closed tikv/client-rust

raw: Fix scan range in raw scan request

Previously the scan range was ignored when building requests. Propagate each region's range and pass it into the requests.

Signed-off-by: Tzu-Chiao Yeh su3g4284zo6y7@gmail.com

+32 -33

1 comment

5 changed files

tz70s

pr closed time in 15 days

push eventtikv/client-rust

ekexium

commit sha 344f7cce759f810a99c83498272a3a1d581e7607

use try_for_each_concurrent in reduce() Signed-off-by: ekexium <ekexium@gmail.com>

view details

Nick Cameron

commit sha caa869e041cf3d408af14030526810d84e449eb1

Merge pull request #177 from ekexium/concurrent-reduce Use try_for_each_concurrent in reduce()

view details

push time in 15 days

PR merged tikv/client-rust

Use try_for_each_concurrent in reduce()

Use try_for_each_concurrent instead of try_for_each

+12 -4

0 comment

2 changed files

ekexium

pr closed time in 15 days

PullRequestReviewEvent

Pull request review commenttikv/client-rust

Fix txn.batch_get

 impl Transaction {             .await     } -    /// Gets the values associated with the given keys.+    /// Gets the values associated with the given keys, skipping non-existent entris.
    /// Gets the values associated with the given keys, skipping non-existent entries.
ekexium

comment created time in 15 days

Pull request review commenttikv/client-rust

Fix txn.batch_get

 impl Buffer {                 })                 .partition(|(_, v)| *v == MutationValue::Undetermined); -            let cached_results = cached_results.into_iter().map(|(k, v)| (k, v.unwrap()));+            let cached_results = cached_results+                .into_iter()+                .filter_map(|(k, v)| match v.unwrap() {

could be v.map(...) rather than matching

ekexium

comment created time in 15 days
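The review suggestion in miniature, on a generic Option rather than the buffer's actual types:

```rust
// The explicit match and Option::map are equivalent; map is the idiomatic form.
fn with_match(v: Option<i32>) -> Option<i32> {
    match v {
        Some(x) => Some(x * 2),
        None => None,
    }
}

fn with_map(v: Option<i32>) -> Option<i32> {
    v.map(|x| x * 2)
}

fn main() {
    assert_eq!(with_match(Some(21)), with_map(Some(21)));
    assert_eq!(with_match(None), with_map(None));
}
```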

Pull request review commenttikv/client-rust

Fix txn.batch_get

 impl Client {     /// Create a new 'batch get' request.     ///     /// Once resolved this request will result in the fetching of the values associated with the-    /// given keys.+    /// given keys, skipping non-existent entris.
    /// given keys, skipping non-existent entries.
ekexium

comment created time in 15 days

PullRequestReviewEvent

push eventtikv/client-rust

ekexium

commit sha 316a194002fd7dbf85b90cb41671a3cf21d6660d

update readme: dependency, limit and code snippet Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha 2765d7528a9263cd0900893b1abfa75ae997196a

Merge branch 'master' into update-readme

view details

ekexium

commit sha 914ed7238933e9930c50636855417462a9ae7eee

readme: add API list and intro to types Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha 653d133c4d7478a41be4dce63599ce98f0fa7798

Merge branch 'master' into update-readme

view details

ekexium

commit sha 398a673a5fa925820a3acf317338f7e75a6c1a1a

readme: separate raw and txn API table Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha 3ffcb6f0ed5c24b4c05502595b01f1681d1de559

add some descriptions on noteworthy behavior of requests Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha 59d8307d0856880e51fa69e103a2310deb464a2c

Merge branch 'master' into update-readme

view details

ekexium

commit sha 46d20a6c42ab388a1154130d9733e8414b35ca72

move raw_batch_scan to experimental Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha f4f86b18f99bf0568b0f1882d76098f803717876

add a workaround of GC Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha 06283c0ad3772b1802164ad67800652d2b62b034

fix typo Signed-off-by: ekexium <ekexium@gmail.com>

view details

ekexium

commit sha 5483922eecb6c9e4eea4eb69caf51a85c91c5531

Merge branch 'master' into update-readme

view details

Nick Cameron

commit sha aa10da847f0f87f4966205582af1b2b8e04410b0

Merge pull request #170 from ekexium/update-readme Update readme

view details

push time in 15 days

PR merged tikv/client-rust

Update readme

Update dependencies and limits, and add simple code examples.

+95 -11

1 comment

2 changed files

ekexium

pr closed time in 15 days

Pull request review commenttikv/tikv

Make raftstore testable with arbitrary combinations of storage engines

+// Copyright 2020 TiKV Project Authors. Licensed under Apache-2.0.++//! Engines for use in the test suite, implementing both the KvEngine+//! and RaftEngine traits.+//!+//! These engines link to all other engines, providing concrete single storage+//! engine type to run tests against.+//!+//! This provides a simple way to integrate non-RocksDB engines into the+//! existing test suite without too much disruption.+//!+//! Engines presently supported by this crate are+//!+//! - RocksEngine from engine_rocks+//! - PanicEngine from engine_panic+//!+//! TiKV uses two different storage engine instances,+//! the "raft" engine, for storing consensus data,+//! and the "kv" engine, for storing user data.+//!+//! The types and constructors for these two engines are located in the `raft`+//! and `kv` modules respectively.+//!+//! The engine for each module is chosen at compile time with feature flags:+//!+//! - `--features test-engine-kv-rocksdb`+//! - `--features test-engine-raft-rocksdb`+//! - `--features test-engine-kv-panic`+//! - `--features test-engine-raft-panic`+//!+//! By default, the `tikv` crate turns on `test-engine-kv-rocksdb`,+//! and `test-engine-raft-rocksdb`. This behavior can be disabled+//! with `--disable-default-features`.+//!+//! The `tikv` crate additionally provides two feature flags that+//! contral both the `kv` and `raft` engines at the same time:+//!+//! - `--features test-engines-rocksdb`+//! - `--features test-engines-panic`+//!+//! So, e.g., to run the test suite with the panic engine:+//!+//! ```+//! cargo test --all --disable-default-features --features=protobuf_codec,test-engines-panic+//! ```+//!+//! We'll probably revisit the engine-testing strategy in the future,+//! e.g. by using engine-parameterized tests instead.+//!+//! This create also contains a `ctor` module that contains constructor methods+//! appropriate for constructing storage engines of any type. It is intended+//! 
that this module is _the only_ module within TiKV that knows about concrete+//! storage engines, and that it be extracted into its own crate for use in+//! TiKV, once the full requirements are better understood.++/// Types and constructors for the "raft" engine+pub mod raft {+    use crate::ctor::{CFOptions, DBOptions, EngineConstructorExt};+    use engine_traits::Result;++    #[cfg(feature = "test-engine-raft-panic")]+    pub use engine_panic::{+        PanicEngine as RaftTestEngine, PanicSnapshot as RaftTestSnapshot,+        PanicWriteBatch as RaftTestWriteBatch,+    };++    #[cfg(feature = "test-engine-raft-rocksdb")]+    pub use engine_rocks::{+        RocksEngine as RaftTestEngine, RocksSnapshot as RaftTestSnapshot,+        RocksWriteBatch as RaftTestWriteBatch,+    };++    pub fn new_engine(+        path: &str,+        db_opt: Option<DBOptions>,+        cfs: &[&str],+        opts: Option<Vec<CFOptions>>,+    ) -> Result<RaftTestEngine> {+        RaftTestEngine::new_engine(path, db_opt, cfs, opts)+    }++    pub fn new_engine_opt(

what does _opt mean here? Both functions return a Result (not Option) and both take both db and cf options

brson

comment created time in 15 days

Pull request review commenttikv/tikv

Make raftstore testable with arbitrary combinations of storage engines

+// Copyright 2020 TiKV Project Authors. Licensed under Apache-2.0.++//! Engines for use in the test suite, implementing both the KvEngine+//! and RaftEngine traits.+//!+//! These engines link to all other engines, providing concrete single storage+//! engine type to run tests against.+//!+//! This provides a simple way to integrate non-RocksDB engines into the+//! existing test suite without too much disruption.+//!+//! Engines presently supported by this crate are+//!+//! - RocksEngine from engine_rocks+//! - PanicEngine from engine_panic+//!+//! TiKV uses two different storage engine instances,+//! the "raft" engine, for storing consensus data,+//! and the "kv" engine, for storing user data.+//!+//! The types and constructors for these two engines are located in the `raft`+//! and `kv` modules respectively.+//!+//! The engine for each module is chosen at compile time with feature flags:+//!+//! - `--features test-engine-kv-rocksdb`+//! - `--features test-engine-raft-rocksdb`+//! - `--features test-engine-kv-panic`+//! - `--features test-engine-raft-panic`+//!+//! By default, the `tikv` crate turns on `test-engine-kv-rocksdb`,+//! and `test-engine-raft-rocksdb`. This behavior can be disabled+//! with `--disable-default-features`.+//!+//! The `tikv` crate additionally provides two feature flags that+//! contral both the `kv` and `raft` engines at the same time:+//!+//! - `--features test-engines-rocksdb`+//! - `--features test-engines-panic`+//!+//! So, e.g., to run the test suite with the panic engine:+//!+//! ```+//! cargo test --all --disable-default-features --features=protobuf_codec,test-engines-panic+//! ```+//!+//! We'll probably revisit the engine-testing strategy in the future,+//! e.g. by using engine-parameterized tests instead.+//!+//! This create also contains a `ctor` module that contains constructor methods+//! appropriate for constructing storage engines of any type. It is intended+//! 
that this module is _the only_ module within TiKV that knows about concrete+//! storage engines, and that it be extracted into its own crate for use in+//! TiKV, once the full requirements are better understood.++/// Types and constructors for the "raft" engine+pub mod raft {+    use crate::ctor::{CFOptions, DBOptions, EngineConstructorExt};+    use engine_traits::Result;++    #[cfg(feature = "test-engine-raft-panic")]+    pub use engine_panic::{+        PanicEngine as RaftTestEngine, PanicSnapshot as RaftTestSnapshot,+        PanicWriteBatch as RaftTestWriteBatch,+    };++    #[cfg(feature = "test-engine-raft-rocksdb")]+    pub use engine_rocks::{+        RocksEngine as RaftTestEngine, RocksSnapshot as RaftTestSnapshot,+        RocksWriteBatch as RaftTestWriteBatch,+    };++    pub fn new_engine(+        path: &str,+        db_opt: Option<DBOptions>,+        cfs: &[&str],+        opts: Option<Vec<CFOptions>>,+    ) -> Result<RaftTestEngine> {+        RaftTestEngine::new_engine(path, db_opt, cfs, opts)+    }++    pub fn new_engine_opt(+        path: &str,+        db_opt: DBOptions,+        cfs_opts: Vec<CFOptions>,+    ) -> Result<RaftTestEngine> {+        RaftTestEngine::new_engine_opt(path, db_opt, cfs_opts)+    }+}++/// Types and constructors for the "kv" engine+pub mod kv {+    use crate::ctor::{CFOptions, DBOptions, EngineConstructorExt};+    use engine_traits::Result;++    #[cfg(feature = "test-engine-kv-panic")]+    pub use engine_panic::{+        PanicEngine as KvTestEngine, PanicSnapshot as KvTestSnapshot,+        PanicWriteBatch as KvTestWriteBatch,+    };++    #[cfg(feature = "test-engine-kv-rocksdb")]+    pub use engine_rocks::{+        RocksEngine as KvTestEngine, RocksSnapshot as KvTestSnapshot,+        RocksWriteBatch as KvTestWriteBatch,+    };++    pub fn new_engine(+        path: &str,+        db_opt: Option<DBOptions>,+        cfs: &[&str],+        opts: Option<Vec<CFOptions>>,+    ) -> Result<KvTestEngine> {+        
KvTestEngine::new_engine(path, db_opt, cfs, opts)+    }++    pub fn new_engine_opt(+        path: &str,+        db_opt: DBOptions,+        cfs_opts: Vec<CFOptions>,+    ) -> Result<KvTestEngine> {+        KvTestEngine::new_engine_opt(path, db_opt, cfs_opts)+    }+}++/// Create a storage engine with a concrete type. This should ultimately be the+/// only module within TiKv that needs to know about concrete engines. Other+/// code only uses the `engine_traits` abstractions.+///+/// At the moment this has a lot of open-coding of engine-specific+/// initialization, but in the future more constructor abstractions should be+/// pushed down into engine_traits.+///+/// This module itself is intended to be extracted from this crate into its own+/// crate, once the requirements for engine construction are better understood.+pub mod ctor {+    use engine_traits::Result;++    /// Engine construction+    ///+    /// For simplicity, all engine constructors are expected to configure every+    /// engine such that all of TiKV and its tests work correctly, for the+    /// constructed column families.+    ///+    /// Specifically, this means that RocksDB constructors should set up+    /// all properties collectors, always.+    pub trait EngineConstructorExt: Sized {+        /// Create a new engine with either:+        ///+        /// - The column families specified as `cfs`, with default options, or+        /// - The column families specified as `opts`, with options.+        ///+        /// Note that if `opts` is not `None` then the `cfs` argument is completely ignored.+        fn new_engine(+            path: &str,+            db_opt: Option<DBOptions>,+            cfs: &[&str],+            opts: Option<Vec<CFOptions>>,+        ) -> Result<Self>;++        /// Create a new engine with specified column families and options+        fn new_engine_opt(path: &str, db_opt: DBOptions, cfs_opts: Vec<CFOptions>) -> Result<Self>;+    }++    #[derive(Clone)]+    pub enum CryptoOpts {

I think CryptoOptions would be a better name - it's only a few more chars and means it follows db options and cf options.

brson

comment created time in 15 days

Pull request review commenttikv/tikv

Make raftstore testable with arbitrary combinations of storage engines

+// Copyright 2020 TiKV Project Authors. Licensed under Apache-2.0.++//! Engines for use in the test suite, implementing both the KvEngine+//! and RaftEngine traits.+//!+//! These engines link to all other engines, providing concrete single storage+//! engine type to run tests against.+//!+//! This provides a simple way to integrate non-RocksDB engines into the+//! existing test suite without too much disruption.+//!+//! Engines presently supported by this crate are+//!+//! - RocksEngine from engine_rocks+//! - PanicEngine from engine_panic+//!+//! TiKV uses two different storage engine instances,+//! the "raft" engine, for storing consensus data,+//! and the "kv" engine, for storing user data.+//!+//! The types and constructors for these two engines are located in the `raft`+//! and `kv` modules respectively.+//!+//! The engine for each module is chosen at compile time with feature flags:+//!+//! - `--features test-engine-kv-rocksdb`+//! - `--features test-engine-raft-rocksdb`+//! - `--features test-engine-kv-panic`+//! - `--features test-engine-raft-panic`+//!+//! By default, the `tikv` crate turns on `test-engine-kv-rocksdb`,+//! and `test-engine-raft-rocksdb`. This behavior can be disabled+//! with `--disable-default-features`.+//!+//! The `tikv` crate additionally provides two feature flags that+//! contral both the `kv` and `raft` engines at the same time:+//!+//! - `--features test-engines-rocksdb`+//! - `--features test-engines-panic`+//!+//! So, e.g., to run the test suite with the panic engine:+//!+//! ```+//! cargo test --all --disable-default-features --features=protobuf_codec,test-engines-panic+//! ```+//!+//! We'll probably revisit the engine-testing strategy in the future,+//! e.g. by using engine-parameterized tests instead.+//!+//! This create also contains a `ctor` module that contains constructor methods+//! appropriate for constructing storage engines of any type. It is intended+//! 
that this module is _the only_ module within TiKV that knows about concrete+//! storage engines, and that it be extracted into its own crate for use in+//! TiKV, once the full requirements are better understood.++/// Types and constructors for the "raft" engine+pub mod raft {+    use crate::ctor::{CFOptions, DBOptions, EngineConstructorExt};+    use engine_traits::Result;++    #[cfg(feature = "test-engine-raft-panic")]+    pub use engine_panic::{+        PanicEngine as RaftTestEngine, PanicSnapshot as RaftTestSnapshot,+        PanicWriteBatch as RaftTestWriteBatch,+    };++    #[cfg(feature = "test-engine-raft-rocksdb")]+    pub use engine_rocks::{+        RocksEngine as RaftTestEngine, RocksSnapshot as RaftTestSnapshot,+        RocksWriteBatch as RaftTestWriteBatch,+    };++    pub fn new_engine(+        path: &str,+        db_opt: Option<DBOptions>,+        cfs: &[&str],+        opts: Option<Vec<CFOptions>>,+    ) -> Result<RaftTestEngine> {+        RaftTestEngine::new_engine(path, db_opt, cfs, opts)+    }++    pub fn new_engine_opt(+        path: &str,+        db_opt: DBOptions,+        cfs_opts: Vec<CFOptions>,+    ) -> Result<RaftTestEngine> {+        RaftTestEngine::new_engine_opt(path, db_opt, cfs_opts)+    }+}++/// Types and constructors for the "kv" engine+pub mod kv {+    use crate::ctor::{CFOptions, DBOptions, EngineConstructorExt};+    use engine_traits::Result;++    #[cfg(feature = "test-engine-kv-panic")]+    pub use engine_panic::{+        PanicEngine as KvTestEngine, PanicSnapshot as KvTestSnapshot,+        PanicWriteBatch as KvTestWriteBatch,+    };++    #[cfg(feature = "test-engine-kv-rocksdb")]+    pub use engine_rocks::{+        RocksEngine as KvTestEngine, RocksSnapshot as KvTestSnapshot,+        RocksWriteBatch as KvTestWriteBatch,+    };++    pub fn new_engine(+        path: &str,+        db_opt: Option<DBOptions>,+        cfs: &[&str],+        opts: Option<Vec<CFOptions>>,+    ) -> Result<KvTestEngine> {+        
KvTestEngine::new_engine(path, db_opt, cfs, opts)+    }++    pub fn new_engine_opt(+        path: &str,+        db_opt: DBOptions,+        cfs_opts: Vec<CFOptions>,+    ) -> Result<KvTestEngine> {+        KvTestEngine::new_engine_opt(path, db_opt, cfs_opts)+    }+}++/// Create a storage engine with a concrete type. This should ultimately be the+/// only module within TiKv that needs to know about concrete engines. Other+/// code only uses the `engine_traits` abstractions.+///+/// At the moment this has a lot of open-coding of engine-specific+/// initialization, but in the future more constructor abstractions should be+/// pushed down into engine_traits.+///+/// This module itself is intended to be extracted from this crate into its own+/// crate, once the requirements for engine construction are better understood.+pub mod ctor {+    use engine_traits::Result;++    /// Engine construction+    ///+    /// For simplicity, all engine constructors are expected to configure every+    /// engine such that all of TiKV and its tests work correctly, for the+    /// constructed column families.+    ///+    /// Specifically, this means that RocksDB constructors should set up+    /// all properties collectors, always.+    pub trait EngineConstructorExt: Sized {+        /// Create a new engine with either:+        ///+        /// - The column families specified as `cfs`, with default options, or+        /// - The column families specified as `opts`, with options.+        ///+        /// Note that if `opts` is not `None` then the `cfs` argument is completely ignored.+        fn new_engine(+            path: &str,+            db_opt: Option<DBOptions>,+            cfs: &[&str],+            opts: Option<Vec<CFOptions>>,+        ) -> Result<Self>;++        /// Create a new engine with specified column families and options+        fn new_engine_opt(path: &str, db_opt: DBOptions, cfs_opts: Vec<CFOptions>) -> Result<Self>;+    }++    #[derive(Clone)]+    pub enum CryptoOpts {+        
None,+        DefaultCtrEncryptedEnv(Vec<u8>),+    }++    #[derive(Clone)]+    pub struct DBOptions {+        encryption: CryptoOpts,+    }++    impl DBOptions {+        pub fn new() -> DBOptions {+            DBOptions {+                encryption: CryptoOpts::None,+            }+        }++        pub fn with_default_ctr_encrypted_env(&mut self, ciphertext: Vec<u8>) {+            self.encryption = CryptoOpts::DefaultCtrEncryptedEnv(ciphertext);+        }+    }++    pub struct CFOptions<'a> {+        pub cf: &'a str,

perhaps name instead of cf?

brson

comment created time in 15 days

Pull request review commenttikv/tikv

Make raftstore testable with arbitrary combinations of storage engines

+// Copyright 2020 TiKV Project Authors. Licensed under Apache-2.0.++//! Engines for use in the test suite, implementing both the KvEngine+//! and RaftEngine traits.+//!+//! These engines link to all other engines, providing concrete single storage+//! engine type to run tests against.+//!+//! This provides a simple way to integrate non-RocksDB engines into the+//! existing test suite without too much disruption.+//!+//! Engines presently supported by this crate are+//!+//! - RocksEngine from engine_rocks+//! - PanicEngine from engine_panic+//!+//! TiKV uses two different storage engine instances,+//! the "raft" engine, for storing consensus data,+//! and the "kv" engine, for storing user data.+//!+//! The types and constructors for these two engines are located in the `raft`+//! and `kv` modules respectively.+//!+//! The engine for each module is chosen at compile time with feature flags:+//!+//! - `--features test-engine-kv-rocksdb`+//! - `--features test-engine-raft-rocksdb`+//! - `--features test-engine-kv-panic`+//! - `--features test-engine-raft-panic`+//!+//! By default, the `tikv` crate turns on `test-engine-kv-rocksdb`,+//! and `test-engine-raft-rocksdb`. This behavior can be disabled+//! with `--disable-default-features`.+//!+//! The `tikv` crate additionally provides two feature flags that+//! contral both the `kv` and `raft` engines at the same time:+//!+//! - `--features test-engines-rocksdb`+//! - `--features test-engines-panic`+//!+//! So, e.g., to run the test suite with the panic engine:+//!+//! ```+//! cargo test --all --disable-default-features --features=protobuf_codec,test-engines-panic+//! ```+//!+//! We'll probably revisit the engine-testing strategy in the future,+//! e.g. by using engine-parameterized tests instead.+//!+//! This create also contains a `ctor` module that contains constructor methods+//! appropriate for constructing storage engines of any type. It is intended+//! 
that this module is _the only_ module within TiKV that knows about concrete+//! storage engines, and that it be extracted into its own crate for use in+//! TiKV, once the full requirements are better understood.++/// Types and constructors for the "raft" engine+pub mod raft {+    use crate::ctor::{CFOptions, DBOptions, EngineConstructorExt};

style: should be CfOptions and DbOptions

brson

comment created time in 15 days

PullRequestReviewEvent

Pull request review commenttikv/rfcs

RFC for QoS: Quality of Service

+# QoS: Quality of Service++## Motivation++Queries compete for resources and thus interfere with each other. Currently users can only deal with this in very time consuming ways by either increasing cluster capacity or altering their applications, the latter of which may take hours or days.++Users want to ensure a quality of service for their queries. Some queries should be prioritized above others. For queries of the same priority, resources should be divided fairly among queries. When there are multiple tenants, provide resource isolation but still allow for high utilization.++## Summary++This solution provide QoS at the level of the TiKV node. QoS is configured both globally in PD and dynamically by clients.++  * QoS Policy is set in PD for region groups such as key spaces (tenant) and tables+    * Larger region groups have more capacity allocated+  * Allow an application (TiDB) to create its own policies by sending a QoS-Request that further prioritizes its own capacity.+  * Analytics queries can request a low QoS+  * Apply local back pressure on a TiKV node by rejecting queries using too much capacity++![QoS Architecture](../media/qos-architecture.png)++![QoS Capacity Slicing](../media/media/QoS-capacity-slice.png)+++## Terminology++* QoS: a relative priority setting. This is not a quota: usage is always “bursting” to achieve high utilization.+* Capacity: the total resources available to be prioritized+* Key Space: in a multi-tenant setup, every tenant gets a distinct key space. More generally a key space is designed for applications with different data ownership.+++## Detailed design++### Architectural and Implementation advantages++Ti Components are loosely coupled:+  * PD stores policies and communicates them to TiKV+  * TiKV performs query admission, providing localized back pressure+  * TiDB can create its own QoS policies for its users/tables just by sending a header++Iterative. 
We can try to produce a useful first version without:+  * Bursting+  * Global Fairness with adjusted weighting and a PD placement policy+  * Back Pressure fairness with detailed resource usage measurements+++This is designed to be a minimal step towards supporting QoS sensitive workloads such as multi-tenant. Future work will be needed to create an improved scheduler and to improve global fairness.++### TiKV Back Pressure++#### Local Back Pressure at TiKV++TiKV will have an admission controller component. This component will track the QoS status and reject queries before they are accepted.++The downside to following this approach is that TiKV does not understand a multi-node query. One node blocking a query can slow down a larger transaction and end up slowing down the system as a whole. Trying to give TiKV global information won’t scale up well for a large cluster.++#### Query inhibition++Queries should be inhibited based on+* The total capacity available on the node+* The QoS policies that apply to the query+* The estimated resources needed for the queries++#### Resource Estimation++The amount of inhibition required depends on the number of requests and amount of resources being requested. Effectively when resources are highly utilized we build up a queue of pending requests with a limited size where the overflow is rejected.++Policy application is allowed to take into account resources that will be used

Here you talk about prioritisation of queries, but in the above section it sounds like TiKV just makes a binary run/reject decision for queries.

gregwebs

comment created time in 15 days

Pull request review commenttikv/rfcs

RFC for QoS: Quality of Service

+# QoS: Quality of Service++## Motivation++Queries compete for resources and thus interfere with each other. Currently users can only deal with this in very time consuming ways by either increasing cluster capacity or altering their applications, the latter of which may take hours or days.++Users want to ensure a quality of service for their queries. Some queries should be prioritized above others. For queries of the same priority, resources should be divided fairly among queries. When there are multiple tenants, provide resource isolation but still allow for high utilization.++## Summary++This solution provide QoS at the level of the TiKV node. QoS is configured both globally in PD and dynamically by clients.++  * QoS Policy is set in PD for region groups such as key spaces (tenant) and tables+    * Larger region groups have more capacity allocated+  * Allow an application (TiDB) to create its own policies by sending a QoS-Request that further prioritizes its own capacity.+  * Analytics queries can request a low QoS+  * Apply local back pressure on a TiKV node by rejecting queries using too much capacity++![QoS Architecture](../media/qos-architecture.png)++![QoS Capacity Slicing](../media/media/QoS-capacity-slice.png)+++## Terminology++* QoS: a relative priority setting. This is not a quota: usage is always “bursting” to achieve high utilization.+* Capacity: the total resources available to be prioritized+* Key Space: in a multi-tenant setup, every tenant gets a distinct key space. More generally a key space is designed for applications with different data ownership.+++## Detailed design++### Architectural and Implementation advantages++Ti Components are loosely coupled:+  * PD stores policies and communicates them to TiKV+  * TiKV performs query admission, providing localized back pressure

Presumably TiFlash would work in the same way as TiKV

gregwebs

comment created time in 15 days
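The review comments above contrast prioritisation with a pure run/reject binary. One way to reconcile the two is an admission controller that admits, queues, or rejects based on estimated cost and a QoS weight. The sketch below is hypothetical: `AdmissionController`, `QueryRequest`, and the interpretation of negative QoS values as fractions are assumptions, not TiKV APIs.

```rust
/// Estimated cost and QoS value for one incoming query (hypothetical types).
#[derive(Debug, Clone)]
pub struct QueryRequest {
    /// QoS value from the applicable policy; negative values are treated as
    /// fractional priorities (an assumption about the RFC's linear scale).
    pub qos: i32,
    /// Estimated cost of the query in abstract capacity units.
    pub estimated_cost: u64,
}

#[derive(Debug, PartialEq)]
pub enum Admission { Admitted, Queued, Rejected }

pub struct AdmissionController {
    /// Total capacity of this TiKV node, in the same abstract units.
    capacity: u64,
    /// Capacity currently committed to admitted queries.
    in_use: u64,
    /// Pending queries, kept ordered so higher-priority work runs first.
    queue: Vec<QueryRequest>,
    /// Bounded queue: overflow is rejected, providing local back pressure.
    max_queue_len: usize,
}

impl AdmissionController {
    pub fn new(capacity: u64, max_queue_len: usize) -> Self {
        AdmissionController { capacity, in_use: 0, queue: Vec::new(), max_queue_len }
    }

    /// Linear weight: positive values scale priority up; negative values
    /// become a fraction between 0 and 1.
    fn weight(qos: i32) -> f64 {
        if qos >= 0 { (qos as f64).max(1.0) } else { 1.0 / (1.0 + (-qos) as f64) }
    }

    /// Admit immediately if capacity allows, queue if there is room,
    /// otherwise reject.
    pub fn submit(&mut self, req: QueryRequest) -> Admission {
        if self.in_use + req.estimated_cost <= self.capacity {
            self.in_use += req.estimated_cost;
            Admission::Admitted
        } else if self.queue.len() < self.max_queue_len {
            self.queue.push(req);
            // Highest effective priority first.
            self.queue.sort_by(|a, b| {
                Self::weight(b.qos).partial_cmp(&Self::weight(a.qos)).unwrap()
            });
            Admission::Queued
        } else {
            Admission::Rejected
        }
    }
}
```

With a bounded queue, saturation degrades from "admit" to "queue by priority" to "reject", which is a prioritised scheme rather than a binary one.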

Pull request review comment tikv/rfcs

RFC for QoS: Quality of Service

+# QoS: Quality of Service
+
+## Motivation
+
+Queries compete for resources and thus interfere with each other. Currently users can only deal with this in very time-consuming ways, by either increasing cluster capacity or altering their applications, the latter of which may take hours or days.
+
+Users want to ensure a quality of service for their queries. Some queries should be prioritized above others. For queries of the same priority, resources should be divided fairly among queries. When there are multiple tenants, provide resource isolation but still allow for high utilization.
+
+## Summary
+
+This solution provides QoS at the level of the TiKV node. QoS is configured both globally in PD and dynamically by clients.
+
+  * QoS Policy is set in PD for region groups such as key spaces (tenant) and tables
+    * Larger region groups have more capacity allocated
+  * Allow an application (TiDB) to create its own policies by sending a QoS-Request that further prioritizes its own capacity.
+  * Analytics queries can request a low QoS
+  * Apply local back pressure on a TiKV node by rejecting queries using too much capacity
+
+![QoS Architecture](../media/qos-architecture.png)
+
+![QoS Capacity Slicing](../media/media/QoS-capacity-slice.png)
+
+
+## Terminology
+
+* QoS: a relative priority setting. This is not a quota: usage is always “bursting” to achieve high utilization.
+* Capacity: the total resources available to be prioritized
+* Key Space: in a multi-tenant setup, every tenant gets a distinct key space. More generally, a key space is designed for applications with different data ownership.
+
+
+## Detailed design
+
+### Architectural and Implementation advantages
+
+Ti Components are loosely coupled:
+  * PD stores policies and communicates them to TiKV
+  * TiKV performs query admission, providing localized back pressure
+  * TiDB can create its own QoS policies for its users/tables just by sending a header
+
+Iterative. We can try to produce a useful first version without:
+  * Bursting
+  * Global Fairness with adjusted weighting and a PD placement policy
+  * Back Pressure fairness with detailed resource usage measurements
+
+
+This is designed to be a minimal step towards supporting QoS-sensitive workloads such as multi-tenancy. Future work will be needed to create an improved scheduler and to improve global fairness.
+
+### TiKV Back Pressure
+
+#### Local Back Pressure at TiKV
+
+TiKV will have an admission controller component. This component will track the QoS status and reject queries before they are accepted.
+
+The downside to following this approach is that TiKV does not understand a multi-node query. One node blocking a query can slow down a larger transaction and end up slowing down the system as a whole. Trying to give TiKV global information won’t scale up well for a large cluster.
+
+#### Query inhibition
+
+Queries should be inhibited based on:
+* The total capacity available on the node
+* The QoS policies that apply to the query
+* The estimated resources needed for the queries
+
+#### Resource Estimation
+
+The amount of inhibition required depends on the number of requests and the amount of resources being requested. Effectively, when resources are highly utilized we build up a queue of pending requests with a limited size, where the overflow is rejected.
+
+Policy application is allowed to take into account the resources that will be used:
+
+* less intensive queries can be prioritized above more intensive queries, particularly for bursting
+* queries can be prioritized that together make for better resource utilization given the multiple dimensions of resource usage.
+
+#### Resource measurement
+
+TiKV must measure the resource usage of the node as a whole.
+However, in our first version we do not take into account the actual usage of different policies. To improve our ability to estimate resource usage, we will need to develop the ability to measure the actual resources used by the policies being applied. These measurements can eventually be used to apply QoS more intelligently. For example, the effects of bad estimates can be corrected.
+
+### QoS Policy
+
+#### QoS Value and composition
+
+QoS is specified as an integer value on a linear scale. A greater value reflects a greater priority, and a value twice as large is twice the priority. Negative values are effectively treated as a fraction between 0 and 1.
+
+QoS values can compose in two different ways (these are also discussed in later sections):
+* Inner Override (replace): a table QoS value overrides a keyspace QoS setting
+* Inner Prioritization (greater specificity): a custom application request QoS value is a priority relative to other application requests. The application as a whole is still governed by the keyspace QoS value
+
+#### QoS Policy stored in PD
+
+A QoS policy is set by an administrator in PD. It is a combination of a region group and a QoS value. The main region group is a key space. Smaller regions within a key space may be specified, such as a table, and this QoS setting will take precedence over that of the key space. These groups are dynamic (new regions can be added) and translated to regions by PD, which has knowledge of tenant and table groupings.

Presumably when a region splits, it inherits the QoS parameters from its parents. What happens when two regions with different QoS are merged?

Does PD have knowledge of how tables/tenants are represented within a key space? My assumption is that only TiDB knows this.

gregwebs

comment created time in 15 days
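The two composition rules quoted in the review above (Inner Override and Inner Prioritization) can be illustrated with a small sketch. The `Policy` type, the lookup fallback, and the pair-of-values result are assumptions for illustration, not the RFC's actual design.

```rust
/// Hypothetical policy record as it might be stored in PD.
#[derive(Debug, Clone, Copy)]
pub struct Policy {
    /// QoS value for the whole key space (tenant).
    pub keyspace_qos: i32,
    /// Inner Override: a table-level value, if set, replaces the
    /// key-space value entirely.
    pub table_qos: Option<i32>,
}

/// Compute the effective QoS for one request as (outer, inner).
///
/// `request_qos` is the Inner Prioritization value sent by the application
/// (e.g. TiDB in a request header): it orders requests *within* the
/// policy's capacity slice, but does not let the application escape its
/// key-space allocation.
pub fn effective_qos(policy: Policy, request_qos: Option<i32>) -> (i32, i32) {
    // Inner Override: the table value replaces the key-space value.
    let outer = policy.table_qos.unwrap_or(policy.keyspace_qos);
    // Inner Prioritization: relative ordering among the application's
    // own requests; 0 is a neutral default.
    let inner = request_qos.unwrap_or(0);
    (outer, inner)
}
```

Keeping the two values separate makes the "still governed by the keyspace QoS value" rule explicit: the scheduler would allocate capacity by `outer` and only then order an application's own requests by `inner`.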

PullRequestReviewEvent

push event nrc/stupid-stats

Yoshiki Takashima

commit sha 156bfeb32cdda76c4d182d58c41888b4976d8506

Made compile continue to allow for use in cargo. If you don't continue, this cargo build with replaced rustc will not work because some artifacts are missing.

view details

Nick Cameron

commit sha 94902a8c0559a45f4a504eeb899687889992af64

Merge pull request #18 from YoshikiTakashima/pr-156bfeb Made compile continue to allow for use in cargo.

view details

push time in 15 days

PR merged nrc/stupid-stats

Made compile continue to allow for use in cargo.

If you don't continue, some build artifacts will be missing when using this as a drop-in replacement for rustc during cargo build

I think this is a fairly easy 1-line fix for #17.

+3 -3

1 comment

1 changed file

YoshikiTakashima

pr closed time in 15 days

PullRequestReviewEvent

push event tikv/client-rust

ekexium

commit sha 3f8c3a7200a2479d860cc5ab122bab95f3299c27

fix a bug in examples/raw Signed-off-by: ekexium <ekexium@gmail.com>

view details

Nick Cameron

commit sha e07e973cbffe4e2c7d9b3785ab24ecd9150eab9a

Merge branch 'master' into fix-example

view details

Nick Cameron

commit sha 43248ac007f8335540ae8543229b1f3cf1bf75f0

Merge pull request #178 from ekexium/fix-example Fix a mistake in examples/raw

view details

push time in 15 days

PR merged tikv/client-rust

Fix a mistake in examples/raw

#134 introduced the wrong assertion.

+1 -5

0 comments

1 changed file

ekexium

pr closed time in 15 days

push event ekexium/client-rust

George Teo

commit sha ad8ef075af4ca27c231203162e21f08c2e639ea6

Add codec for encoding region for transaction client (#162) Add codec for encoding region for transaction client. Fix some other bugs. Signed-off-by: George Teo <george.c.teo@gmail.com> Co-authored-by: ekexium <ekexium@gmail.com>

view details

Nick Cameron

commit sha e07e973cbffe4e2c7d9b3785ab24ecd9150eab9a

Merge branch 'master' into fix-example

view details

push time in 15 days
