
kennytm/cargo-kcov 102

Cargo subcommand to run kcov to get coverage report on Linux

kennytm/cov 96

LLVM-GCOV Source coverage for Rust

kennytm/CoCCalc 49

THIS PROJECT HAS BEEN ABANDONED.

kennytm/dbgen 9

Generate random test cases for databases

kennytm/CatSaver 8

Automatically save logcat

auraht/gamepad 4

A cross-platform gamepad library to supplement HIDDriver

kennytm/borsholder 4

Combined status board of rust-lang/rust's Homu queue and GitHub PR status.

kennytm/711cov 3

Coverage reporting software for gcov-4.7

kennytm/aar-to-eclipse 3

Convert *.aar to Android library project for Eclipse ADT

kennytm/BinarySpec.swift 3

Parsing binary protocols (for Swift)

pull request comment rust-lang/rfcs

Unsafe statics

The unsafe keyword in Rust can either define (unsafe trait) or "discharge" (unsafe impl, unsafe {} block) an unsafe obligation. Currently unsafe fn is in both categories; this is now considered a mistake in #2585, and unsafe fn eventually won't discharge the unsafe effect in some future edition.
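For concreteness, a minimal sketch of the two roles in today's Rust (`Zeroable` is a made-up trait, purely for illustration):

// Define an obligation: implementors promise that all-zero bytes are a valid value.
unsafe trait Zeroable {}

// Discharge it: the author asserts the promise holds for u32 (it does).
unsafe impl Zeroable for u32 {}

fn main() {
    // An unsafe block discharges the obligation of calling `mem::zeroed`.
    let x: u32 = unsafe { std::mem::zeroed() };
    assert_eq!(x, 0);
}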

In this RFC, unsafe static again takes on both the define and discharge roles:

  • It discharges the safety invariant that a static must be Sync.
  • It defines an unsafe effect: every use of the item must be inside unsafe.

Perhaps this is what causes the surprise in the comments above.


Is it possible to split the two roles? Here is an alternative:

  1. unsafe static just declares the item unsafe to use, and nothing more.
  2. static no longer errors on non-Sync types. The check is converted to detect unsafe code usage, similar to accessing a static mut or a union field: trying to access a non-Sync static would require an unsafe block.

This means the following is allowed (perhaps with a lint)...

use std::cell::Cell;

static X: Cell<i32> = Cell::new(0);

but doing these would require unsafe:

// not ok:
let x = &X;
// ok:
let x = unsafe { &X };

// not ok:
let x = X.get();
// ok:
let x = unsafe { X.get() };

// not ok:
X.set(10);
// ok:
unsafe { X.set(10) };

// not ok:
let x = &mut X;
// never ok:
let x = unsafe { &mut X };

// not ok even if Y is Copy (??) (does `static Y: *mut T = ...` make sense?):
let y = Y;
// ok if Y is Copy (??):
let y = unsafe { Y };
error[E0133]: use of non-Sync static is unsafe and requires unsafe block
 --> src/main.rs:4:13
  |
4 |     let x = &X;
  |             ^^ use of non-Sync static
  |
  = note: non-Sync statics cannot be safely shared by multiple threads: aliasing violations or data races will cause undefined behavior

One big drawback of such a proposal is that the Sync check is deferred from declaration time to usage time, which is the opposite of how Rust usually performs its checks. It also does not explicitly tell the reader that using X is unsafe, which is one of the rationales of this RFC.
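For comparison, a minimal sketch of how today's stable Rust expresses the same intent: the Sync obligation is discharged once at the declaration via `unsafe impl`, rather than at every use site (`SyncCell` here is a hypothetical wrapper, not an existing API):

use std::cell::Cell;

// Hypothetical wrapper whose author asserts, unsafely and at declaration
// time, that sharing across threads is sound (e.g. because the program is
// known to be single-threaded).
struct SyncCell<T>(Cell<T>);
unsafe impl<T> Sync for SyncCell<T> {}

static X: SyncCell<i32> = SyncCell(Cell::new(0));

fn main() {
    // Every use is then "safe", even though the soundness argument lives
    // far away at the `unsafe impl`.
    X.0.set(10);
    assert_eq!(X.0.get(), 10);
}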

withoutboats

comment created time in 6 hours

pull request comment rust-lang/rfcs

Unsafe statics

cc rust-lang/rust#53639

withoutboats

comment created time in 6 hours

push event kennytm/tzfile

kennytm

commit sha 184a3920fa8b75d991d313181cba435d14b2f1f5

Implement From<Tz> for ArcTz and RcTz.

view details

push time in a day

push event kennytm/rustup-toolchain-install-master

kennytm

commit sha 662f7e6d7209514943faf824fbc63882cd461689

Run cargo update.

view details

push time in a day

push event tikv/crc64fast

kennytm

commit sha 43b07a06caa820f17b4929bac0f42eba130824ca

Cargo.toml: update proptest dev-dependency Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in a day

created tag kennytm/async-ctrlc

tag v1.2.0

`async-ctrlc` is an async wrapper of the `ctrlc` crate in Rust

created time in a day

release kennytm/async-ctrlc

v1.2.0

released time in a day

Pull request review comment rust-lang/rfcs

RFC: Placement by return

+- Feature Name: placement-by-return
+- Start Date: 2020-01-23
+- RFC PR: [rust-lang/rfcs#0000](https://github.com/rust-lang/rfcs/pull/0000)
+- Rust Issue: [rust-lang/rust#0000](https://github.com/rust-lang/rust/issues/0000)
+
+# Summary
+[summary]: #summary
+
+Implement ["placement"](https://internals.rust-lang.org/t/removal-of-all-unstable-placement-features/7223) with no new syntax, by extending the existing capabilities of ordinary `return`. This involves [copying Guaranteed Copy Elision rules pretty much wholesale from C++17](https://jonasdevlieghere.com/guaranteed-copy-elision/), adding functions like `fn new_with<F: FnOnce() -> T, T: ?Sized>(f: F) -> Self` to Box and `fn push_with<F: FnOnce() -> T>(&mut self, f: F)` to Vec to allow performing the allocation before evaluating F, providing raw access to the return slot for functions as an unsafe feature, and allowing functions to directly return **Dynamically-Sized Types** (DSTs) by compiling such functions into a special kind of "generator".
+
+Starting with the questions given at the end of the [old RFC's mortician's note](https://github.com/rust-lang/rust/issues/27779):
+
+ * **Does the new RFC support DSTs? serde and fallible creation?** Yes on DSTs. On fallible creation, it punts it into the future section.
+ * **Does the new RFC have a lot of traits? Is it justified?** It introduces no new traits at all.
+ * **Can the new RFC handle cases where allocation fails? Does this align with wider language plans (if any) for fallible allocation?** Yes.
+ * **are there upcoming/potential language features that could affect the design of the new RFC? e.g. custom allocators, NoMove, HKTs? What would the implications be?** Not really. `Pin` can have a `new_with` function just like anyone else, custom allocators would happen entirely behind this, true HKT's are probably never going to be added, and associated type constructors aren't going to affect this proposal since the proposal defines no new traits or types that would use them.
+
+## Glossary
+
+- **GCE:** [Guaranteed Copy Elision](https://stackoverflow.com/questions/38043319/how-does-guaranteed-copy-elision-work).
+- **NRVO:** [Named Return Value Optimization](https://shaharmike.com/cpp/rvo/).
+- **DST:** [Dynamically-Sized Type](https://doc.rust-lang.org/reference/dynamically-sized-types.html).
+- **HKT:** [Higher-Kinded Type](https://stackoverflow.com/a/6417328/3511753).
+
+# Motivation
+[motivation]: #motivation
+
+Rust has a dysfunctional relationship with objects that are large or variable in size. It can accept them as parameters pretty well using references, but creating them is unwieldy and inneficient:
+
+* A function pretty much has to use `Vec` to create huge arrays, even if the array is fixed size. The way you'd want to do it, `Box::new([0; 1_000_000])`, will allocate the array on the stack and then copy it into the Box. This same form of copying shows up in tons of API's, like serde's Serialize trait.
+* There's no safe way to create gigantic, singular structs without overhead. If your 1M array is wrapped in a struct, then the only safe way to dynamically allocate one is to use `Box::new(MyStruct::new())`, which ends up creating an instance of `MyStruct` on the stack and copying it to the box, 1M array included.
+* You can't return bare unsized types. [RFC-1909](https://github.com/rust-lang/rfcs/blob/master/text/1909-unsized-rvalues.md) allows you to create them locally, and pass them as arguments, but not return them.
+
+As far as existing emplacement proposals go, this one was written with the following requirements in mind:
+
+* **It needs to be possible to wrap it in a safe API.** Safe API examples are given for built-in data structures, including a full sketch of the implementation for Box, including exception safety.
+* **It needs to support already-idiomatic constructors like `fn new() -> GiantStruct { GiantStruct { ... } }`** Since this proposal is defined in terms of Guaranteed Copy Elision, this is a gimme.
+* **It needs to be possible to in-place populate data structures that cannot be written using a single literal expression.** The `write_return_with` intrinsic suggested in this proposal allows this to be done in an unsafe way. Sketches for APIs built on top of them are also given in the [future-possibilities] section.
+* **It needs to avoid adding egregious overhead in cases where the values being populated are small (in other words, if the value being initialized is the size of a pointer or smaller, it needs to be possible for the compiler to optimize away the outptr).** Since this proposal is written in terms of Guaranteed Copy Elision, this is a gimme. The exception of the "weird bypass functions" `read_return_with` and `write_return_with` may seem to break this; see the [example desugarings here](#How-do-the-return-slot-functions-work-when-the-copy-is-not-actually-elided) for info on how these functions work when the copy is not actually elided.
+* **It needs to solve most of the listed problems with the old proposals.** Since this one actually goes the distance and defines when copy elision will kick in, it fixes the biggest problems that the old `box` literal system had. It is also written in terms of present-day Rust, using `impl Trait`, and with the current direction of Rust in mind.
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+If you need to allocate a very large data structure, or a DST, you should prefer using `Box::new_with` over `Box::new`. For example:
+
+```rust
+let boxed_array = Box::new_with(|| [0; 1_000_000]); // instead of Box::new([0; 1_000_000])
+let boxed_data_struct = Box::new_with(DataStruct::new); // instead of Box::new(DataStruct::new())
+```
+
+The `new_with` function will perform the allocation first, then evaluate the closure, placing the result directly within it. The `new` function, on the other hand, evaluates the argument *then* performs the allocation and copies the value into it.

There is no "shared global state" in Arc::new; those ref counters are still "private" before the return. Perhaps Vec::push is a better example?

But even if LLVM is free to reorder the calls to eliminate the memcpy, it is also free to be unable to perform that optimization. I'd still prefer to have this RFC guarantee that the evaluation happens in place.
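To make the in-place requirement concrete, here is a rough sketch (not the RFC's implementation) of the `Vec::push_with` shape quoted above, written as a free function: the reservation happens before the closure runs, but without guaranteed copy elision the compiler is still allowed to build `f()` on the stack and copy it into place, which is exactly why a mere optimization is not enough:

// Sketch only: reserve first, then evaluate the closure "into" the vector.
fn push_with<T, F: FnOnce() -> T>(v: &mut Vec<T>, f: F) {
    v.reserve(1); // the allocation happens before `f` is evaluated
    unsafe {
        let end = v.as_mut_ptr().add(v.len());
        // Without GCE, `f()` may still be materialized on the stack here.
        std::ptr::write(end, f());
        v.set_len(v.len() + 1);
    }
}

fn main() {
    let mut v: Vec<[u8; 16]> = Vec::new();
    push_with(&mut v, || [0; 16]);
    assert_eq!(v.len(), 1);
}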

PoignardAzur

comment created time in a day

push event kennytm/async-ctrlc

kennytm

commit sha 243eec45c84f7c58ce0175fd14e4adf6f1372490

Version = 1.2.0

view details

push time in a day

delete tag kennytm/async-ctrlc

delete tag : v1.1.1

delete time in a day

created tag kennytm/async-ctrlc

tag v1.1.1

`async-ctrlc` is an async wrapper of the `ctrlc` crate in Rust

created time in a day

release kennytm/async-ctrlc

v1.1.1

released time in a day

push event kennytm/async-ctrlc

kennytm

commit sha 43e45ee2c622d969f268562a7e19287f4d14cc24

Version = 1.1.1

view details

push time in a day

issue closed kennytm/extprim

Publish 1.7.1?

Would it be possible to release an extprim 1.7.1 soon so that we can get #18 out there? Currently ctest is broken downstream on nightly because of this.

closed time in a day

sagebind

issue comment kennytm/extprim

Publish 1.7.1?

https://crates.io/crates/extprim/1.7.1

please consider using the built-in i128 and u128 types instead 😕

sagebind

comment created time in a day

created tag kennytm/extprim

tag v1.7.1

Extra primitive types (u128, i128) for Rust.

created time in a day

release kennytm/extprim

v1.7.1

released time in a day

push event pingcap/tidb-tools

kennytm

commit sha cdec436356259a802708840312c40b9b08037572

table-filter: added filter.All() to conveniently make a match-all filter (#347) * table-filter: added filter.All() to conveniently make a match-all filter * table-filter: fix: NewSchemasFilter() should return the public interface

view details

push time in a day

PR merged pingcap/tidb-tools

table-filter: added filter.All() to conveniently make a match-all filter priority/unimportant status/LGT2 type/enhancement


What problem does this PR solve?

Add a function filter.All() which creates a match-all filter. This is used as the default value of filter.Filter in a config, which has the same effect as f, _ := filter.Parse([]string{"*.*"}).

What is changed and how it works?

Check List

Tests

  • Unit test

Code changes

Side effects

Related changes

+30 -1

0 comment

3 changed files

kennytm

pr closed time in a day

push event pingcap/dumpling

Chunzhu Li

commit sha f4b78ed00b0174bb5306e4c2df48a4e37bca81f5

Quote tables, databases, columns in output sqls (#87) * quote db, table and field * fix bug * add integration test * fix bug * fix again * fix * fix

view details

push time in a day

PR merged pingcap/dumpling

Reviewers
Quote tables, databases, columns in output sqls status/PTAL

Quote tables, databases, and columns in the output SQL files to make sure tables/databases/columns with special names can be output correctly.

+103 -62

1 comment

9 changed files

lichunzhu

pr closed time in a day

issue closed pingcap/dumpling

Support dump databases and tables with special name

Dumpling needs to support dumping databases and tables with special names. When I try to run ./dumpling -u root -P 3306 -H 127.0.0.1 -B "live-test" to dump the database live-test, dumpling returns the error below. We can escape database and table names to solve this problem.

Release version:
Git commit hash: a36198fef0eff1752f6bd645bb5f884e3c30ed98
Git branch:      master
Build timestamp: 2020-04-21 02:46:31Z
Go version:      go version go1.13.4 linux/amd64

[2020/04/21 22:46:41.256 +08:00] [INFO] [config.go:111] ["detect server type"] [type=MySQL]
[2020/04/21 22:46:41.256 +08:00] [INFO] [config.go:129] ["detect server version"] [version=5.7.26-log]
dump failed: SHOW CREATE DATABASE live-test: err = Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '-test' at line 1
goroutine 1 [running]:
runtime/debug.Stack(0xa2c9c0, 0xc000120280, 0xa2d680)
        /usr/local/go/src/runtime/debug/stack.go:24 +0x9d
github.com/pingcap/dumpling/v4/export.withStack(0xa2c9c0, 0xc000120280, 0xc0000c8078, 0xc000128080)
        /home/pingcap/goPath/src/github.com/pingcap/dumpling/v4/export/error.go:40 +0x8d
github.com/pingcap/dumpling/v4/export.simpleQuery(0xc0001540c0, 0xc000128080, 0x1e, 0xc0001198c0, 0x1, 0xc000128080)
        /home/pingcap/goPath/src/github.com/pingcap/dumpling/v4/export/sql.go:345 +0x152
github.com/pingcap/dumpling/v4/export.ShowCreateDatabase(0xc0001540c0, 0x7ffce0e6c37e, 0x9, 0x1ed, 0x0, 0x1ed, 0x0)
        /home/pingcap/goPath/src/github.com/pingcap/dumpling/v4/export/sql.go:38 +0x10c
github.com/pingcap/dumpling/v4/export.dumpDatabases(0xa371c0, 0xc0000c8078, 0xc0000fea00, 0xc0001540c0, 0xa35540, 0xc000124030, 0x0, 0xc0)
        /home/pingcap/goPath/src/github.com/pingcap/dumpling/v4/export/dump.go:102 +0x2e3
github.com/pingcap/dumpling/v4/export.Dump(0xc0000fea00, 0x0, 0x0)
        /home/pingcap/goPath/src/github.com/pingcap/dumpling/v4/export/dump.go:90 +0x83e
main.run()
        /home/pingcap/goPath/src/github.com/pingcap/dumpling/cmd/dumpling/main.go:119 +0x64b
main.glob..func1(0xd80aa0, 0xc000152aa0, 0x0, 0xa)
        /home/pingcap/goPath/src/github.com/pingcap/dumpling/cmd/dumpling/main.go:57 +0x20
github.com/spf13/cobra.(*Command).execute(0xd80aa0, 0xc0000d0010, 0xa, 0xa, 0xd80aa0, 0xc0000d0010)
        /home/pingcap/goPath/pkg/mod/github.com/spf13/cobra@v0.0.6/command.go:844 +0x2aa
github.com/spf13/cobra.(*Command).ExecuteC(0xd80aa0, 0x0, 0x1, 0xc0000a6058)
        /home/pingcap/goPath/pkg/mod/github.com/spf13/cobra@v0.0.6/command.go:945 +0x317
github.com/spf13/cobra.(*Command).Execute(...)
        /home/pingcap/goPath/pkg/mod/github.com/spf13/cobra@v0.0.6/command.go:885
main.main()
        /home/pingcap/goPath/src/github.com/pingcap/dumpling/cmd/dumpling/main.go:128 +0x2d

closed time in a day

lichunzhu

push event pingcap/tidb-tools

qixiaobin

commit sha 523af52c17577fa807d1fa9b7d4bb17c55e12816

Two improvements of sync_diff_inspector (#346)

view details

kennytm

commit sha f043b56c2540aa212e795acadcd7faba9e0d7194

Merge branch 'master' into kennytm/all-filter

view details

push time in a day

push event lichunzhu/dumpling

kennytm

commit sha d7d52969d5a52a466305de2b90f49479e7c6de4e

Adjust CLI parameters to make them the same as mydumper (#86) * cmd: align CLI with mydumper * -F without unit means MiB. also allow including units * swapped -s and -S * changed -H to -h * cmd: added the -T flag

view details

kennytm

commit sha e6328456e91d8221be5b9217b4c93a82fc5b7f39

Merge branch 'master' into quoteTables

view details

push time in a day

push event pingcap/dumpling

kennytm

commit sha d7d52969d5a52a466305de2b90f49479e7c6de4e

Adjust CLI parameters to make them the same as mydumper (#86) * cmd: align CLI with mydumper * -F without unit means MiB. also allow including units * swapped -s and -S * changed -H to -h * cmd: added the -T flag

view details

push time in a day

delete branch pingcap/dumpling

delete branch : kennytm/align-cli-with-mydumper

delete time in a day

PR merged pingcap/dumpling

Reviewers
Adjust CLI parameters to make them the same as mydumper status/LGT1

Fix #85, fix the -F part of #76.

  • -F's unit is now MiB rather than Bytes. Additionally, allow explicitly setting the unit (e.g. -F 64MiB).
  • --statement-size is now -s rather than -S.
  • --sql is now -S rather than -s (cc @AndrewDi)
  • --host is now -h rather than -H
  • Added -T
+56 -13

1 comment

8 changed files

kennytm

pr closed time in a day

issue closed pingcap/dumpling

dumpling use `-H` to set host, but mydumper use `-h`

Users may have used mydumper before and then switch to dumpling as a replacement, but dumpling's flags differ from mydumper's: for the host, dumpling uses -H while mydumper uses -h.

closed time in a day

WangXiangUSTC

pull request comment rust-lang/rfcs

Destructuring assignment

We could first add a clippy lint for this and see what the effect is. The lint could be added independently of this RFC, and could be silenced by using {...} to force the content of &mut to be an rvalue.

let a = [0_u8; 64];
do_something(&mut [a[1], a[0]]);
warning: this expression is moved, so its mutations will not be visible to the surroundings
 --> src/main.rs:6:22
  |
6 |     do_something(&mut [a[1], a[0]]);
  |                       ^^^^^^^^^^^^ help: to move explicitly: `{[a[1], a[0]]}`
  |
  = note: `#[warn(clippy::???????)]` on by default
  = help: for further information visit ....

The lint should be emitted when the expression given to &mut can be ambiguous under this RFC, so &mut () and &mut [0; 256] should not be linted.
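A minimal sketch of the silenced form under this suggestion (`do_something` is just a placeholder):

fn do_something(_buf: &mut [u8; 2]) {}

fn main() {
    let a = [0_u8; 64];
    // The braces make the temporary explicit: we knowingly mutate an
    // rvalue whose changes are then discarded, so the lint stays quiet.
    do_something(&mut { [a[1], a[0]] });
}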

varkor

comment created time in a day

issue opened pingcap/go-tpc

Design a fast TPCC test data generation tool: Generate TPCC SST data, then use br to complete a quick import

Feature Request

Describe your feature request related problem:

We do not have a simple tool to generate large-scale example archives. For large-scale tests, we need to use dbgen to produce a SQL dump and then use TiDB Lightning to import it into the cluster. This is very time-consuming: for a 10T-scale test we need almost 2 days for this preparation step.

Describe the feature you'd like:

We should be able to directly generate the backup archive (create SSTs directly and populate the corresponding backupmeta).

Either we create a dedicated tool (focusing on a few selected schemas, e.g. sysbench or TPC-C), or extend dbgen to create SSTs (hard, since dbgen is schema-less and won't generate indices).

Describe alternatives you've considered:


Teachability, Documentation, Adoption, Migration Strategy:


created time in a day

Pull request review comment tikv/tikv

sst_importer: add import mode timeout

 // Copyright 2018 TiKV Project Authors. Licensed under Apache-2.0.
 
+use std::sync::{Arc, Mutex};
+use std::time::Duration;
+use std::time::Instant;
+
 use engine_traits::{ColumnFamilyOptions, DBOptions, KvEngine};
+use futures_cpupool::CpuPool;
+use futures_util::compat::{Compat, Future01CompatExt};
+use futures_util::future::FutureExt;
 use kvproto::import_sstpb::*;
+use tikv_util::timer::GLOBAL_TIMER_HANDLE;
 
+use super::Config;
 use super::Result;
 
 type RocksDBMetricsFn = fn(cf: &str, name: &str, v: f64);
 
-pub struct ImportModeSwitcher {
+struct ImportModeSwitcherInner<T: KvEngine> {
struct ImportModeSwitcherInner<E: KvEngine> {

TiKV typically uses E for the engine.

codeworm96

comment created time in a day

Pull request review comment tikv/tikv

sst_importer: add import mode timeout

 mod tests {
 
         fn mf(_cf: &str, _name: &str, _v: f64) {}
 
-        let mut switcher = ImportModeSwitcher::new();
+        let cfg = Config::default();
+        let threads = futures_cpupool::Builder::new()
+            .name_prefix("sst-importer")
+            .pool_size(cfg.num_threads)
+            .create();
+
+        let mut switcher = ImportModeSwitcher::new(&cfg, &threads, db.clone());
         check_import_options(&db, &normal_db_options, &normal_cf_options);
-        switcher.enter_import_mode(&db, mf).unwrap();
+        switcher.enter_import_mode(mf).unwrap();
         check_import_options(&db, &import_db_options, &import_cf_options);
-        switcher.enter_import_mode(&db, mf).unwrap();
+        switcher.enter_import_mode(mf).unwrap();
         check_import_options(&db, &import_db_options, &import_cf_options);
-        switcher.enter_normal_mode(&db, mf).unwrap();
+        switcher.enter_normal_mode(mf).unwrap();
+        check_import_options(&db, &normal_db_options, &normal_cf_options);
+        switcher.enter_normal_mode(mf).unwrap();
         check_import_options(&db, &normal_db_options, &normal_cf_options);
-        switcher.enter_normal_mode(&db, mf).unwrap();
+    }
+
+    #[test]
+    fn test_import_mode_timeout() {
+        let temp_dir = Builder::new()
+            .prefix("test_import_mode_timeout")
+            .tempdir()
+            .unwrap();
+        let db = new_test_engine(temp_dir.path().to_str().unwrap(), &["a", "b"]);
+
+        let import_db_options = ImportModeDBOptions::new();
+        let normal_db_options = ImportModeDBOptions::new_options(&db);
+        let import_cf_options = ImportModeCFOptions::new();
+        let normal_cf_options = ImportModeCFOptions::new_options(&db, "default");
+
+        fn mf(_cf: &str, _name: &str, _v: f64) {}
+
+        let cfg = Config {
+            import_mode_timeout: ReadableDuration::secs(5),
+            ..Config::default()
+        };
+        let threads = futures_cpupool::Builder::new()
+            .name_prefix("sst-importer")
+            .pool_size(cfg.num_threads)
+            .create();
+
+        let mut switcher = ImportModeSwitcher::new(&cfg, &threads, db.clone());
+        check_import_options(&db, &normal_db_options, &normal_cf_options);
+        switcher.enter_import_mode(mf).unwrap();
+        check_import_options(&db, &import_db_options, &import_cf_options);
+
+        thread::sleep(Duration::from_secs(10));

Please use a shorter duration for this test (e.g. 300ms timeout and sleep for 1s)
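That is, something along these lines (a sketch of the suggestion; assuming a `ReadableDuration::millis` constructor is available here):

let cfg = Config {
    import_mode_timeout: ReadableDuration::millis(300),
    ..Config::default()
};
// ...
// Sleep just long enough for the switcher to fall back to normal mode.
thread::sleep(Duration::from_secs(1));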

codeworm96

comment created time in a day

PR opened pingcap/dumpling

Reviewers
Adjust CLI parameters to make them the same as mydumper status/PTAL

Fix #85, fix #76.

  • -F's unit is now MiB rather than Bytes. Additionally, allow explicitly setting the unit (e.g. -F 64MiB).
  • --statement-size is now -s rather than -S.
  • --sql is now -S rather than -s (cc @AndrewDi)
  • --host is now -h rather than -H
  • Added -T
+56 -13

0 comment

8 changed files

pr created time in 2 days

create branch pingcap/dumpling

branch : kennytm/align-cli-with-mydumper

created branch time in 2 days

pull request comment rust-lang/rfcs

Destructuring assignment

🤔 So a side effect of this is that *&mut $EXPR = ... is not equivalent to $EXPR = ...: &mut (x, y) borrows a temporary tuple built from copies of x and y, while destructuring assignment writes through the places directly:

let mut x = 5;
let mut y = 6;
*&mut (x, y) = (7, 8);
dbg!(x, y); // x = 5, y = 6
(x, y) = (9, 10);
dbg!(x, y); // x = 9, y = 10
varkor

comment created time in 2 days

pull request comment rust-lang/rfcs

Destructuring assignment

@joshtriplett The x and y in (x, y) = &(a, b) are not patterns. I don't see how ref could meaningfully be applied to places (lvalue expressions).

varkor

comment created time in 2 days

pull request comment tikv/tikv

external_storage: fix GCS download error, support GCS endpoints, and refactoring (#7734)

Because of the flipped merge order, the essence of this cherry-pick is already contained in #7965, so this PR is much less urgent now 🤷

sre-bot

comment created time in 2 days

Pull request review comment pingcap/parser

support `require SAN` clause

 import (
 	columnFormat          "COLUMN_FORMAT"
 	columns               "COLUMNS"
 	config                "CONFIG"
+	san                   "SAN"

ditto

lysu

comment created time in 2 days

Pull request review comment pingcap/parser

support `require SAN` clause

 var tokenMap = map[string]int{
 	"COMPRESSION":              compression,
 	"CONCURRENCY":              concurrency,
 	"CONFIG":                   config,
+	"SAN":                      san,

Please put this alphabetically.

lysu

comment created time in 2 days

push event sre-bot/tikv

pingcap-github-bot

commit sha 768711dae26efd55f2371ff5c32de0d3cceaae85

Improve robustness of Backup/Restore involving external_storage (#7917) (#7965) Signed-off-by: sre-bot <sre-bot@pingcap.com> Signed-off-by: kennytm <kennytm@gmail.com>

view details

pingcap-github-bot

commit sha 7c84fb50adb5ccc3f83c1b362b6a42c74001e060

raftstore: don't gc snapshot files when shutting down (#7877) (#7926) Signed-off-by: sre-bot <sre-bot@pingcap.com> Signed-off-by: qupeng <qupeng@pingcap.com>

view details

pingcap-github-bot

commit sha e927870da224126529a900101c9932d456b9cc75

kv_service: fix batch empty request deadlock (#7535) (#7539) Signed-off-by: sre-bot <sre-bot@pingcap.com> Signed-off-by: Jay Lee <BusyJayLee@gmail.com>

view details

kennytm

commit sha 58fae54588f2011a6e57ad3ccf987787032a8fbb

Merge branch 'release-3.1' into release-3.1-1bdab7bf652b

view details

push time in 2 days

Pull request review comment pingcap/parser

let function names like a.b() be parsed successfully

 func (ts *testDMLSuite) TestWindowFuncExprRestore(c *C) {
 	}
 	RunNodeRestoreTest(c, testCases, "select %s from t", extractNodeFunc)
 }
+
+func (ts *testFunctionsSuite) TestGenericFuncRestore(c *C) {
+	testCases := []NodeRestoreTestCase{
+		{"s.a()", "`s`.`a`()"},
+		{"`s`.`a`()", "`s`.`a`()"},
+		{"now()", "NOW()"},
+		{"`s`.`now`()", "`s`.`now`()"},
+		// FIXME: expectSQL should be `generic_func()`.
+		{"generic_func()", "GENERIC_FUNC()"},

🤔 That's gonna break a lot of test cases.

tangenta

comment created time in 2 days

Pull request review comment pingcap/parser

let function names like a.b() be parsed successfully

 func (ts *testDMLSuite) TestWindowFuncExprRestore(c *C) {
 	}
 	RunNodeRestoreTest(c, testCases, "select %s from t", extractNodeFunc)
 }
+
+func (ts *testFunctionsSuite) TestGenericFuncRestore(c *C) {
+	testCases := []NodeRestoreTestCase{
+		{"s.a()", "`s`.A()"},
+		{"`s`.`a`()", "`s`.A()"},
+		{"now()", "NOW()"},
+		{"`s`.`now`()", "`s`.NOW()"},
+		{"generic_func()", "GENERIC_FUNC()"},

Perhaps we need one more field to indicate whether the function name is an identifier or a keyword (according to the rules in https://dev.mysql.com/doc/refman/8.0/en/function-resolution.html)

tangenta

comment created time in 2 days

push event sre-bot/tikv

pingcap-github-bot

commit sha 20f58dcedec721bf327ad9ae6ea4cf07a9f5dbbd

external_storage: pass sse_kms_key_id to S3 (#7627) (#7749) Signed-off-by: sre-bot <sre-bot@pingcap.com> Signed-off-by: Yi Wu <yiwu@pingcap.com>

view details

pingcap-github-bot

commit sha 275904ea064d250385d506deec7dd339c5070869

raft client: make grpc message size limit configurable (#7816) (#7823) Signed-off-by: qupeng <qupeng@pingcap.com> Co-authored-by: qupeng <qupeng@pingcap.com>

view details

kennytm

commit sha 7725bfd3c45b7757c9e2fe808ff4a716ca9db63b

Merge branch 'release-3.1' into release-3.1-3c667df06126

view details

push time in 2 days

delete branch kennytm/br

delete branch : cherry-pick-298-to-3.1

delete time in 2 days

pull request comment pingcap/br

[3.1] restore: Make all download error as retryable (#298)

/merge

kennytm

comment created time in 2 days

push event kennytm/br

kennytm

commit sha e89eb5ccd114f4db72dff4b218a55e5cd244688d

Merge branch 'release-3.1' into cherry-pick-298-to-3.1

view details

push time in 2 days

push event kennytm/br

3pointer

commit sha 5437159cd864131988aab6864432d54b454f1540

Fix row data lost (#315) (#323) * try fix lost row data

view details

kennytm

commit sha 0c0ce6e83e6392c5483f17a0cf4f512083260fed

Merge branch 'release-3.1' into cherry-pick-298-to-3.1

view details

push time in 2 days

pull request comment pingcap/br

[3.1] restore: Make all download error as retryable (#298)

/merge

kennytm

comment created time in 2 days

pull request comment tikv/tikv

Improve robustness of Backup/Restore involving external_storage (#7917)

/run-integration-ddl-test

/run-integration-compatibility-test

sre-bot

comment created time in 2 days

pull request comment tikv/tikv

Improve robustness of Backup/Restore involving external_storage (#7917)

/run-integration-ddl-test

/run-integration-compatibility-test

sre-bot

comment created time in 2 days

pull request comment tikv/tikv

external_storage: fix GCS download error, support GCS endpoints, and refactoring (#7734)

/run-integration-ddl-test

/run-integration-compatibility-test

sre-bot

comment created time in 2 days

push event sre-bot/tikv

kennytm

commit sha 8f604e12aaae914297a9c2e39ecf831f3354b0b9

*: fix merge conflict Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in 2 days

push event sre-bot/tikv

kennytm

commit sha 1a39666ed2614743ccae7c49faf01c23e2afd2e8

*: fix merge conflict Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in 2 days

push event sre-bot/tikv

kennytm

commit sha 0d829d5ad3aafbc414766c73ca0b9f9051910ff1

*: fix merge conflict Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in 2 days

push event sre-bot/tikv

kennytm

commit sha 65594312648b24a3b6e96352ff620e4f54728dc4

*: fix merge conflict Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in 2 days

pull request comment pingcap/br

tasks: print error to log after fail. (#310)

/merge

sre-bot

comment created time in 2 days

push event sre-bot/tikv

kennytm

commit sha fe78ec421eb81c816402930f0a6a5504e94b4354

*: fix merge conflict Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in 2 days

issue comment pingcap/br

[5.0] Support Point-in-Time Recovery (PITR)

There are currently two solutions we may choose:

  • 3. Perform BR full backup regularly at some sparse interval, and use CDC to capture the incremental changes between two BR full backups.
  • 5. Use RocksDB Checkpoints.

We will decide which to use after investigating the viability of Solution 5.

kennytm

comment created time in 2 days

push event sre-bot/tikv

kennytm

commit sha 7750c0380efe7d5c14a45159b3cac2f9190291d8

*: fix merge conflict Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in 2 days

pull request comment pingcap/br

tasks: print error to log after fail. (#310)

/merge

sre-bot

comment created time in 2 days

push event sre-bot/tikv

kennytm

commit sha 14a38c9fed50da6026afd6670a65e512aabe3751

*: fix merge conflict Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in 2 days

pull request comment pingcap/br

tasks: print error to log after fail. (#310)

/merge

sre-bot

comment created time in 2 days

push event pingcap/dumpling

kennytm

commit sha 7e161c7e84f324633045f01067aacc6a51e52413

Implement --filter and --case-sensitive (#82) * black_white_list: use table-filter instead of legacy BW list * cmd: changed -B from a String to StringSlice * cmd: added --filter and --case-sensitive flags * docs: document -f

view details

push time in 2 days

delete branch pingcap/dumpling

delete branch : kennytm/table-filter

delete time in 2 days

PR merged pingcap/dumpling

Implement --filter and --case-sensitive status/LGT1

Fix #64, fix #65, close #81.

Replaced the black-white-list filter with the new table-filter. Use it like:

./dumpling -u root -P 14000 -H 172.16.5.189 -f db1.t1 -f db1.t2
+61 -159

3 comments

12 changed files

kennytm

pr closed time in 2 days

PR closed pingcap/dumpling

Support -T/--tables-list option

Support -T/--tables-list option : issue#64

Test:

prepare data

create database db1;
create database db2;
use db1;
create table t1(id int primary key auto_increment,name varchar(20));
insert into t1(name) values("zhangsan"),("lisi"),("wangwu");
create table t2(id int primary key auto_increment,age int);
insert into t2(age) values(10),(20),(30);
create view user as select t1.id,t1.name,t2.age from t1 left join t2 on t1.id = t2.id;

use db2;
create table t1(id int primary key auto_increment,name varchar(20));
insert into t1(name) values("Jack"),("Mark"),("Tom");
create table t2(id int primary key auto_increment,age int);
insert into t2(age) values(5),(15),(25);

dumpling test cases

  • ./dumpling -u root -P 14000 -H 172.16.5.189 -B db1 -o dumpling_B_db1
$ ./dumpling -u root -P 14000 -H 172.16.5.189 -B db1 -o dumpling_B_db1
Release version:
Git commit hash: 0eb24b8cb0ae3a2823b77c8e6efd34870a940c6c
Git branch:      sg-table-list
Build timestamp: 2020-05-24 06:30:27Z
Go version:      go version go1.13 darwin/amd64

[2020/05/24 14:55:36.726 +08:00] [INFO] [config.go:117] ["detect server type"] [type=TiDB]
[2020/05/24 14:55:36.727 +08:00] [INFO] [config.go:135] ["detect server version"] [version=4.0.0-rc.2]

$ ll dumpling_B_db1
total 48
drwxr-xr-x  8 shengang  staff  256  5 24 14:55 .
drwxr-xr-x  4 shengang  staff  128  5 24 14:55 ..
-rwxr-xr-x  1 shengang  staff   65  5 24 14:55 db1-schema-create.sql
-rwxr-xr-x  1 shengang  staff  200  5 24 14:55 db1.t1-schema.sql
-rwxr-xr-x  1 shengang  staff   95  5 24 14:55 db1.t1.0.sql
-rwxr-xr-x  1 shengang  staff  195  5 24 14:55 db1.t2-schema.sql
-rwxr-xr-x  1 shengang  staff   77  5 24 14:55 db1.t2.0.sql
-rwxr-xr-x  1 shengang  staff  140  5 24 14:55 metadata
  • ./dumpling -u root -P 14000 -H 172.16.5.189 -B db1 -T t1,t2 -o dumpling_B_db1_T_t1_t2
$ ./dumpling -u root -P 14000 -H 172.16.5.189 -B db1 -T t1,t2 -o dumpling_B_db1_T_t1_t2
Release version:
Git commit hash: 0eb24b8cb0ae3a2823b77c8e6efd34870a940c6c
Git branch:      sg-table-list
Build timestamp: 2020-05-24 06:30:27Z
Go version:      go version go1.13 darwin/amd64

[2020/05/24 14:56:19.006 +08:00] [INFO] [config.go:117] ["detect server type"] [type=TiDB]
[2020/05/24 14:56:19.006 +08:00] [INFO] [config.go:135] ["detect server version"] [version=4.0.0-rc.2]

$ ll dumpling_B_db1_T_t1_t2
total 48
drwxr-xr-x  8 shengang  staff  256  5 24 14:56 .
drwxr-xr-x  5 shengang  staff  160  5 24 14:56 ..
-rwxr-xr-x  1 shengang  staff   65  5 24 14:56 db1-schema-create.sql
-rwxr-xr-x  1 shengang  staff  200  5 24 14:56 db1.t1-schema.sql
-rwxr-xr-x  1 shengang  staff   95  5 24 14:56 db1.t1.0.sql
-rwxr-xr-x  1 shengang  staff  195  5 24 14:56 db1.t2-schema.sql
-rwxr-xr-x  1 shengang  staff   77  5 24 14:56 db1.t2.0.sql
-rwxr-xr-x  1 shengang  staff  140  5 24 14:56 metadata
  • ./dumpling -u root -P 14000 -H 172.16.5.189 -B db1 -T db1.t1,db1.t2 -o dumpling_B_db1_T_db1.t1_db2._t2
$ ./dumpling -u root -P 14000 -H 172.16.5.189 -B db1 -T db1.t1,db1.t2 -o dumpling_B_db1_T_db1.t1_db2._t2
Release version:
Git commit hash: 0eb24b8cb0ae3a2823b77c8e6efd34870a940c6c
Git branch:      sg-table-list
Build timestamp: 2020-05-24 06:30:27Z
Go version:      go version go1.13 darwin/amd64

[2020/05/24 14:56:52.769 +08:00] [INFO] [config.go:117] ["detect server type"] [type=TiDB]
[2020/05/24 14:56:52.769 +08:00] [INFO] [config.go:135] ["detect server version"] [version=4.0.0-rc.2]

$ ll dumpling_B_db1_T_db1.t1_db2._t2
total 48
drwxr-xr-x  8 shengang  staff  256  5 24 14:56 .
drwxr-xr-x  6 shengang  staff  192  5 24 14:56 ..
-rwxr-xr-x  1 shengang  staff   65  5 24 14:56 db1-schema-create.sql
-rwxr-xr-x  1 shengang  staff  200  5 24 14:56 db1.t1-schema.sql
-rwxr-xr-x  1 shengang  staff   95  5 24 14:56 db1.t1.0.sql
-rwxr-xr-x  1 shengang  staff  195  5 24 14:56 db1.t2-schema.sql
-rwxr-xr-x  1 shengang  staff   77  5 24 14:56 db1.t2.0.sql
-rwxr-xr-x  1 shengang  staff  140  5 24 14:56 metadata
  • ./dumpling -u root -P 14000 -H 172.16.5.189 -B db1,db2 -T t1 -o dumpling_B_db1_db2_T_t1
$ ./dumpling -u root -P 14000 -H 172.16.5.189 -B db1,db2 -T t1 -o dumpling_B_db1_db2_T_t1
Release version:
Git commit hash: 0eb24b8cb0ae3a2823b77c8e6efd34870a940c6c
Git branch:      sg-table-list
Build timestamp: 2020-05-24 06:30:27Z
Go version:      go version go1.13 darwin/amd64

[2020/05/24 14:57:41.821 +08:00] [INFO] [config.go:117] ["detect server type"] [type=TiDB]
[2020/05/24 14:57:41.821 +08:00] [INFO] [config.go:135] ["detect server version"] [version=4.0.0-rc.2]
dump failed: SHOW CREATE DATABASE db1,db2: err = Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your TiDB version for the right syntax to use line 1 column 25 near ",db2"
goroutine 1 [running]:
runtime/debug.Stack(0x16144e0, 0xc0001aa020, 0x1615280)
	/usr/local/go/src/runtime/debug/stack.go:24 +0x9d
github.com/pingcap/dumpling/v4/export.withStack(0x16144e0, 0xc0001aa020, 0xc0000ca070, 0xc000028320)
	/Users/shengang/Documents/002-workspace/git-workspace/pingcap/dumpling/v4/export/error.go:40 +0x8d
github.com/pingcap/dumpling/v4/export.simpleQuery(0xc0001820c0, 0xc000028320, 0x1c, 0xc000127ab0, 0x1, 0xc000028320)
	/Users/shengang/Documents/002-workspace/git-workspace/pingcap/dumpling/v4/export/sql.go:367 +0x152
github.com/pingcap/dumpling/v4/export.ShowCreateDatabase(0xc0001820c0, 0x7ffeefbff94f, 0x7, 0xc000127bd0, 0x1428e94, 0x7ffeefbff960, 0x17)
	/Users/shengang/Documents/002-workspace/git-workspace/pingcap/dumpling/v4/export/sql.go:38 +0x10c
github.com/pingcap/dumpling/v4/export.dumpDatabases(0x161e960, 0xc0000ca070, 0xc00009e300, 0xc0001820c0, 0x161cd20, 0xc00012c088, 0x0, 0x400)
	/Users/shengang/Documents/002-workspace/git-workspace/pingcap/dumpling/v4/export/dump.go:117 +0x39a
github.com/pingcap/dumpling/v4/export.Dump(0xc00009e300, 0x0, 0x0)
	/Users/shengang/Documents/002-workspace/git-workspace/pingcap/dumpling/v4/export/dump.go:99 +0x83d
main.main()
	/Users/shengang/Documents/002-workspace/git-workspace/pingcap/dumpling/cmd/dumpling/main.go:136 +0x10e0

$ ll dumpling_B_db1_db2_T_t1
total 8
drwxr-xr-x  3 shengang  staff   96  5 24 14:57 .
drwxr-xr-x  7 shengang  staff  224  5 24 14:57 ..
-rwxr-xr-x  1 shengang  staff  102  5 24 14:57 metadata
  • ./dumpling -u root -P 14000 -H 172.16.5.189 -T db1.t1,db1.t2 -o dumpling_T_db1.t1_db2.t2
$ ./dumpling -u root -P 14000 -H 172.16.5.189 -T db1.t1,db1.t2 -o dumpling_T_db1.t1_db2.t2
Release version:
Git commit hash: 0eb24b8cb0ae3a2823b77c8e6efd34870a940c6c
Git branch:      sg-table-list
Build timestamp: 2020-05-24 06:30:27Z
Go version:      go version go1.13 darwin/amd64

[2020/05/24 14:58:16.656 +08:00] [INFO] [config.go:117] ["detect server type"] [type=TiDB]
[2020/05/24 14:58:16.656 +08:00] [INFO] [config.go:135] ["detect server version"] [version=4.0.0-rc.2]

$ ll dumpling_T_db1.t1_db2.t2
total 40
drwxr-xr-x  7 shengang  staff  224  5 24 14:58 .
drwxr-xr-x  8 shengang  staff  256  5 24 14:58 ..
-rwxr-xr-x  1 shengang  staff  200  5 24 14:58 db1.t1-schema.sql
-rwxr-xr-x  1 shengang  staff   95  5 24 14:58 db1.t1.0.sql
-rwxr-xr-x  1 shengang  staff  195  5 24 14:58 db1.t2-schema.sql
-rwxr-xr-x  1 shengang  staff   77  5 24 14:58 db1.t2.0.sql
-rwxr-xr-x  1 shengang  staff  140  5 24 14:58 metadata
  • ./dumpling -u root -P 14000 -H 172.16.5.189 -T t1,t2 -o dumpling_T_t1_t2
$ ./dumpling -u root -P 14000 -H 172.16.5.189 -T t1,t2 -o dumpling_T_t1_t2
Release version:
Git commit hash: 0eb24b8cb0ae3a2823b77c8e6efd34870a940c6c
Git branch:      sg-table-list
Build timestamp: 2020-05-24 06:30:27Z
Go version:      go version go1.13 darwin/amd64

[2020/05/24 14:58:46.364 +08:00] [INFO] [config.go:117] ["detect server type"] [type=TiDB]
[2020/05/24 14:58:46.364 +08:00] [INFO] [config.go:135] ["detect server version"] [version=4.0.0-rc.2]
dump failed: SHOW CREATE TABLE .t2: err = Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your TiDB version for the right syntax to use line 1 column 19 near ".t2"
goroutine 7 [running]:
runtime/debug.Stack(0x16144e0, 0xc000138240, 0x1615280)
	/usr/local/go/src/runtime/debug/stack.go:24 +0x9d
github.com/pingcap/dumpling/v4/export.withStack(0x16144e0, 0xc000138240, 0xc0000e6068, 0xc0000285a0)
	/Users/shengang/Documents/002-workspace/git-workspace/pingcap/dumpling/v4/export/error.go:40 +0x8d
github.com/pingcap/dumpling/v4/export.simpleQuery(0xc0001b2000, 0xc0000285a0, 0x15, 0xc0001f1da8, 0x2, 0xc0000285a0)
	/Users/shengang/Documents/002-workspace/git-workspace/pingcap/dumpling/v4/export/sql.go:367 +0x152
github.com/pingcap/dumpling/v4/export.ShowCreateTable(0xc0001b2000, 0x0, 0x0, 0x7ffeefbff95a, 0x2, 0xc000032118, 0x0, 0x0, 0xc000058e60)
	/Users/shengang/Documents/002-workspace/git-workspace/pingcap/dumpling/v4/export/sql.go:51 +0x154
github.com/pingcap/dumpling/v4/export.dumpTable(0x161e960, 0xc0000e6068, 0xc0000bc180, 0xc0001b2000, 0x0, 0x0, 0xc00000e660, 0x161cd20, 0xc0000b8040, 0x0, ...)
	/Users/shengang/Documents/002-workspace/git-workspace/pingcap/dumpling/v4/export/dump.go:166 +0x340
github.com/pingcap/dumpling/v4/export.dumpDatabases.func1(0x0, 0x0)
	/Users/shengang/Documents/002-workspace/git-workspace/pingcap/dumpling/v4/export/dump.go:136 +0x166
golang.org/x/sync/errgroup.(*Group).Go.func1(0xc00013cf30, 0xc000158370)
	/Users/shengang/go/pkg/mod/golang.org/x/sync@v0.0.0-20190911185100-cd5d95a43a6e/errgroup/errgroup.go:57 +0x64
created by golang.org/x/sync/errgroup.(*Group).Go
	/Users/shengang/go/pkg/mod/golang.org/x/sync@v0.0.0-20190911185100-cd5d95a43a6e/errgroup/errgroup.go:54 +0x66

$ ll dumpling_T_t1_t2
total 8
drwxr-xr-x  3 shengang  staff   96  5 24 14:58 .
drwxr-xr-x  9 shengang  staff  288  5 24 14:58 ..
-rwxr-xr-x  1 shengang  staff  102  5 24 14:58 metadata
  • ./dumpling -u root -P 14000 -H 172.16.5.189 -B db2 -T db1.t1,db1.t2 -o dumpling_B_db2_T_db1.t1_db2._t2
$ ./dumpling -u root -P 14000 -H 172.16.5.189 -B db2 -T db1.t1,db1.t2 -o dumpling_B_db2_T_db1.t1_db2._t2
Release version:
Git commit hash: 0eb24b8cb0ae3a2823b77c8e6efd34870a940c6c
Git branch:      sg-table-list
Build timestamp: 2020-05-24 06:30:27Z
Go version:      go version go1.13 darwin/amd64

[2020/05/24 14:59:20.782 +08:00] [INFO] [config.go:117] ["detect server type"] [type=TiDB]
[2020/05/24 14:59:20.782 +08:00] [INFO] [config.go:135] ["detect server version"] [version=4.0.0-rc.2]

$ ll dumpling_B_db2_T_db1.t1_db2._t2
total 48
drwxr-xr-x   8 shengang  staff  256  5 24 14:59 .
drwxr-xr-x  10 shengang  staff  320  5 24 14:59 ..
-rwxr-xr-x   1 shengang  staff   65  5 24 14:59 db1-schema-create.sql
-rwxr-xr-x   1 shengang  staff  200  5 24 14:59 db1.t1-schema.sql
-rwxr-xr-x   1 shengang  staff   95  5 24 14:59 db1.t1.0.sql
-rwxr-xr-x   1 shengang  staff  195  5 24 14:59 db1.t2-schema.sql
-rwxr-xr-x   1 shengang  staff   77  5 24 14:59 db1.t2.0.sql
-rwxr-xr-x   1 shengang  staff  140  5 24 14:59 metadata
  • ./dumpling -u root -P 14000 -H 172.16.5.189 -T db1.t1,db2.t2 -o dumpling_T_db1.t1_db2.t2
$ ./dumpling -u root -P 14000 -H 172.16.5.189 -T db1.t1,db2.t2 -o dumpling_T_db1.t1_db2.t2
Release version:
Git commit hash: 22471656da09ee550f08530558c9fae53ae2d808
Git branch:      sg-table-list
Build timestamp: 2020-05-24 07:14:37Z
Go version:      go version go1.13 darwin/amd64

[2020/05/24 15:16:02.797 +08:00] [INFO] [config.go:117] ["detect server type"] [type=TiDB]
[2020/05/24 15:16:02.798 +08:00] [INFO] [config.go:135] ["detect server version"] [version=4.0.0-rc.2]
[2020/05/24 15:16:02.798 +08:00] [INFO] [prepare.go:92] ["dbTables:map[db1:[0xc0000f0780] db2:[0xc0000f0800]]"]

$ ll dumpling_T_db1.t1_db2.t2
total 40
drwxr-xr-x   7 shengang  staff  224  5 24 15:16 .
drwxr-xr-x  11 shengang  staff  352  5 24 15:16 ..
-rwxr-xr-x   1 shengang  staff  200  5 24 15:16 db1.t1-schema.sql
-rwxr-xr-x   1 shengang  staff   95  5 24 15:16 db1.t1.0.sql
-rwxr-xr-x   1 shengang  staff  194  5 24 15:16 db2.t2-schema.sql
-rwxr-xr-x   1 shengang  staff   76  5 24 15:16 db2.t2.0.sql
-rwxr-xr-x   1 shengang  staff  140  5 24 15:16 metadata
+49 -17

3 comments

4 changed files

Win-Man

pr closed time in 2 days

issue closed pingcap/dumpling

Support -T/--table-list args to dump single tables

support -T/--table-list args to dump single tables

closed time in 2 days

lichunzhu

pull request comment pingcap/dumpling

Implement --filter and --case-sensitive

  1. -T is not needed. But it depends on how much we want to mimic the CLI of mydumper.
  2. DM can convert a black-white list filter into the new filter interface with https://pkg.go.dev/github.com/pingcap/tidb-tools@v4.0.0-rc.2.0.20200521050818-6dd445d83fe0+incompatible/pkg/table-filter?tab=doc#ParseMySQLReplicationRules
kennytm

comment created time in 2 days

issue comment tidb-challenge-program/bug-hunting-issue

P2-[4.0 bug hunting]-[BR]-Backup/Restore to S3 consistently fails

(This should be reopened until https://github.com/tidb-challenge-program/bug-hunting-issue/issues/72#issuecomment-635176501 is completed...)

wwar

comment created time in 2 days

pull request comment pingcap/tidb-lightning

WIP: Alter random

OK let's just ignore that issue at the moment.

3pointer

comment created time in 2 days

push event sre-bot/tikv

kennytm

commit sha f2473204bcd9a5bfe56a4a4dd774934167decdf5

*: fix merge conflict Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in 2 days

delete branch kennytm/tikv

delete branch : fix-various-dbaas-errors

delete time in 2 days

pull request comment tikv/tikv

Improve robustness of Backup/Restore involving external_storage

/merge

kennytm

comment created time in 2 days

pull request comment tikv/tikv

Improve robustness of Backup/Restore involving external_storage

copr-test failing as usual.

https://internal.pingcap.net/idc-jenkins/blue/organizations/jenkins/tikv_ghpr_integration-copr-test/detail/tikv_ghpr_integration-copr-test/2919/pipeline


[2020-05-29T06:40:26.686Z] 2020/05/29 14:40:26 2020/05/29 14:39:43 Test fail: Outputs are not matching.
[2020-05-29T06:40:26.686Z] Test case: sql/randgen-topn/3_compare_2.sql
[2020-05-29T06:40:26.686Z] Statement: #507 -  SELECT COALESCE( `col_tinyint_unsigned`, ( COALESCE( ( LEAST( `col_double_unsigned_key`, 4556, ( '2000-01-09' >= ( STRCMP( `col_time_key`, `col_smallint_unsigned_key` ) ) ) ) ), ( ( STRCMP( ( `col_float_unsigned` IS NOT TRUE ), `col_char_255_key` ) ) > ( 'nmo' != -15482 ) ), '1990-06-16 17:22:56.005534' ) ), '2007-07-03', `col_float`, ( `col_float` BETWEEN 0 AND ( ( GREATEST( ( ( `col_set` <=> '2027-11-11' ) <> ( ( ( COALESCE( 'moxqnhbnkyxksjwoaajpbxxobggqewsbvtlqqjkkakmuskosyzsuhdlvfrhobgixtbeqjisgazsdqtccshcxarzvuxsjteeyxmzfxpu', 'p' ) ) IS NULL ) < ( STRCMP( ( `col_int_key` >= 'oxqnhbnkyxksjwoaajpbxxobggqewsbvtlqqjkkakmuskosyzsuhdlvfrhobgixtbeqjisgazsdqtccshcxarzvuxsjteeyxmzfxpuenwuwdczcyaumvpzhxuzftshcwrycsohwtdkrljskkbbbpvsxhzmlpqxozpcrpvqevuvdosgxtlunuhjyomjbucywsqgvbwromzotrdldqfpqvjaxfyhndbzyuuupvchaxe' ), ( -24369 != 'xqnhbnkyxksjwoaajpbxxobggqewsbvtlqqjkkakmuskosyzsuhdlvfrhobgixtbeqjisgazsdqtccshcxarzvuxsjteeyxmzfxpuenwuwdczcyaumvpzhxuzftshcwrycsohwtdkrljskkbbbpvsxhzmlpqxozpcrpvqevuvdosgxtlunuhjyomjbucywsqgvbwromz' ) ) ) ) ), '20:14:46.019164' ) ) <> 'j' ) ) ) AS field1, ISNULL( ( ( ( STRCMP( ( `col_year` IS FALSE ), 7334674943126274048 ) ) IN ( -607985949695016960, `col_varbinary_32`, '2023-03-02 19:05:21.008216', ( 0 < 0 ), `col_set_key` ) ) NOT BETWEEN ( ( 51 IS NOT NULL ) NOT IN ( NULL, ( ISNULL( ( `col_varbinary_32_key` IS UNKNOWN ) ) ) ) ) AND `col_binary_8_key` ) ) AS field2, LEAST( `col_varbinary_32_key`, `col_text_key` ) AS field3 FROM `table1000_int_autoinc` WHERE ( 0 <> `col_decimal_unsigned_key` ) IN ( ( COALESCE( ( `col_double_unsigned` = `col_char_2` ), `col_datetime_key` ) ), ( INTERVAL( ( COALESCE( '04:24:43.033324' ) ), `col_text` ) ) ) ORDER BY field1, field2, field3 LIMIT 7 /* QNO 509 CON_ID 196 */ ;
[2020-05-29T06:40:26.686Z] NoPushDown Output: 
[2020-05-29T06:40:26.686Z] field1	field2	field3
[2020-05-29T06:40:26.686Z] 0	0	NULL
[2020-05-29T06:40:26.686Z] 0	0	NULL
[2020-05-29T06:40:26.686Z] 0	0	NULL
[2020-05-29T06:40:26.686Z] 0	0	NULL
[2020-05-29T06:40:26.686Z] 0	0	NULL
[2020-05-29T06:40:26.686Z] 0	0	0
[2020-05-29T06:40:26.686Z] 0	0	0
[2020-05-29T06:40:26.686Z] 
[2020-05-29T06:40:26.686Z] 
[2020-05-29T06:40:26.686Z] WithPushDown Output: 
[2020-05-29T06:40:26.686Z] field1	field2	field3
[2020-05-29T06:40:26.686Z] 0	0	NULL
[2020-05-29T06:40:26.686Z] 0	0	NULL
[2020-05-29T06:40:26.686Z] 0	0	NULL
[2020-05-29T06:40:26.686Z] 0	0	NULL
[2020-05-29T06:40:26.686Z] 0	0	NULL
[2020-05-29T06:40:26.686Z] 0	0	NULL
[2020-05-29T06:40:26.686Z] 0	0	NULL
[2020-05-29T06:40:26.686Z] 
[2020-05-29T06:40:26.686Z] 
[2020-05-29T06:40:26.686Z] 
[2020-05-29T06:40:26.686Z] NoPushDown Plan: 
[2020-05-29T06:40:26.686Z] id	estRows	task	access object	operator info
[2020-05-29T06:40:26.686Z] Projection_7	7.00	root		coalesce(cast(push_down_test_db.table1000_int_autoinc.col_tinyint_unsigned, var_string(20)), coalesce(cast(least(push_down_test_db.table1000_int_autoinc.col_double_unsigned_key, 4556, cast(ge(2000, strcmp(cast(push_down_test_db.table1000_int_autoinc.col_time_key, var_string(10)), cast(push_down_test_db.table1000_int_autoinc.col_smallint_unsigned_key, var_string(20)))), double BINARY)), var_string(23)), cast(gt(strcmp(cast(not(istrue(push_down_test_db.table1000_int_autoinc.col_float_unsigned)), var_string(20)), push_down_test_db.table1000_int_autoinc.col_char_255_key), 1), var_string(20)), 1990-06-16 17:22:56.005534), 2007-07-03, cast(push_down_test_db.table1000_int_autoinc.col_float, var_string(12)), cast(and(ge(push_down_test_db.table1000_int_autoinc.col_float, 0), le(push_down_test_db.table1000_int_autoinc.col_float, cast(ne(greatest(cast(ne(nulleq(push_down_test_db.table1000_int_autoinc.col_set, 2027-11-11), lt(0, strcmp(cast(ge(push_down_test_db.table1000_int_autoinc.col_int_key, 0), var_string(20)), 1))), double BINARY), 20), 0), double BINARY))), var_string(20)))->Column#62, isnull(not(and(ge(or(or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), -607985949695016960), eq(cast(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), double BINARY), cast(push_down_test_db.table1000_int_autoinc.col_varbinary_32, double BINARY))), or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), 2023), or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), 0), eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), cast(push_down_test_db.table1000_int_autoinc.col_set_key, bigint(20) BINARY))))), not(in(1, <nil>, isnull(isnull(push_down_test_db.table1000_int_autoinc.col_varbinary_32_key))))), le(cast(or(or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), -607985949695016960), eq(cast(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), double BINARY), cast(push_down_test_db.table1000_int_autoinc.col_varbinary_32, double BINARY))), or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), 2023), or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), 0), eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), cast(push_down_test_db.table1000_int_autoinc.col_set_key, bigint(20) BINARY))))), double BINARY), cast(push_down_test_db.table1000_int_autoinc.col_binary_8_key, double BINARY)))))->Column#63, least(push_down_test_db.table1000_int_autoinc.col_varbinary_32_key, push_down_test_db.table1000_int_autoinc.col_text_key)->Column#64
[2020-05-29T06:40:26.687Z] └─Projection_14	7.00	root		push_down_test_db.table1000_int_autoinc.col_decimal_unsigned_key, push_down_test_db.table1000_int_autoinc.col_int_key, push_down_test_db.table1000_int_autoinc.col_double_unsigned_key, push_down_test_db.table1000_int_autoinc.col_tinyint_unsigned, push_down_test_db.table1000_int_autoinc.col_char_2, push_down_test_db.table1000_int_autoinc.col_set_key, push_down_test_db.table1000_int_autoinc.col_datetime_key, push_down_test_db.table1000_int_autoinc.col_float_unsigned, push_down_test_db.table1000_int_autoinc.col_binary_8_key, push_down_test_db.table1000_int_autoinc.col_float, push_down_test_db.table1000_int_autoinc.col_char_255_key, push_down_test_db.table1000_int_autoinc.col_double_unsigned, push_down_test_db.table1000_int_autoinc.col_set, push_down_test_db.table1000_int_autoinc.col_smallint_unsigned_key, push_down_test_db.table1000_int_autoinc.col_time_key, push_down_test_db.table1000_int_autoinc.col_varbinary_32_key, push_down_test_db.table1000_int_autoinc.col_text, push_down_test_db.table1000_int_autoinc.col_text_key, push_down_test_db.table1000_int_autoinc.col_year, push_down_test_db.table1000_int_autoinc.col_varbinary_32
[2020-05-29T06:40:26.687Z]   └─TopN_10	7.00	root		Column#65, Column#66, Column#67, offset:0, count:7
[2020-05-29T06:40:26.687Z]     └─Projection_15	8000.00	root		push_down_test_db.table1000_int_autoinc.col_decimal_unsigned_key, push_down_test_db.table1000_int_autoinc.col_int_key, push_down_test_db.table1000_int_autoinc.col_double_unsigned_key, push_down_test_db.table1000_int_autoinc.col_tinyint_unsigned, push_down_test_db.table1000_int_autoinc.col_char_2, push_down_test_db.table1000_int_autoinc.col_set_key, push_down_test_db.table1000_int_autoinc.col_datetime_key, push_down_test_db.table1000_int_autoinc.col_float_unsigned, push_down_test_db.table1000_int_autoinc.col_binary_8_key, push_down_test_db.table1000_int_autoinc.col_float, push_down_test_db.table1000_int_autoinc.col_char_255_key, push_down_test_db.table1000_int_autoinc.col_double_unsigned, push_down_test_db.table1000_int_autoinc.col_set, push_down_test_db.table1000_int_autoinc.col_smallint_unsigned_key, push_down_test_db.table1000_int_autoinc.col_time_key, push_down_test_db.table1000_int_autoinc.col_varbinary_32_key, push_down_test_db.table1000_int_autoinc.col_text, push_down_test_db.table1000_int_autoinc.col_text_key, push_down_test_db.table1000_int_autoinc.col_year, push_down_test_db.table1000_int_autoinc.col_varbinary_32, coalesce(cast(push_down_test_db.table1000_int_autoinc.col_tinyint_unsigned, var_string(20)), coalesce(cast(least(push_down_test_db.table1000_int_autoinc.col_double_unsigned_key, 4556, cast(ge(2000, strcmp(cast(push_down_test_db.table1000_int_autoinc.col_time_key, var_string(10)), cast(push_down_test_db.table1000_int_autoinc.col_smallint_unsigned_key, var_string(20)))), double BINARY)), var_string(23)), cast(gt(strcmp(cast(not(istrue(push_down_test_db.table1000_int_autoinc.col_float_unsigned)), var_string(20)), push_down_test_db.table1000_int_autoinc.col_char_255_key), 1), var_string(20)), 1990-06-16 17:22:56.005534), 2007-07-03, cast(push_down_test_db.table1000_int_autoinc.col_float, var_string(12)), cast(and(ge(push_down_test_db.table1000_int_autoinc.col_float, 0), le(push_down_test_db.table1000_int_autoinc.col_float, cast(ne(greatest(cast(ne(nulleq(push_down_test_db.table1000_int_autoinc.col_set, 2027-11-11), lt(0, strcmp(cast(ge(push_down_test_db.table1000_int_autoinc.col_int_key, 0), var_string(20)), 1))), double BINARY), 20), 0), double BINARY))), var_string(20)))->Column#65, isnull(not(and(ge(or(or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), -607985949695016960), eq(cast(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), double BINARY), cast(push_down_test_db.table1000_int_autoinc.col_varbinary_32, double BINARY))), or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), 2023), or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), 0), eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), cast(push_down_test_db.table1000_int_autoinc.col_set_key, bigint(20) BINARY))))), not(in(1, <nil>, isnull(isnull(push_down_test_db.table1000_int_autoinc.col_varbinary_32_key))))), le(cast(or(or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), -607985949695016960), eq(cast(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), double BINARY), 
cast(push_down_test_db.table1000_int_autoinc.col_varbinary_32, double BINARY))), or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), 2023), or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), 0), eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), cast(push_down_test_db.table1000_int_autoinc.col_set_key, bigint(20) BINARY))))), double BINARY), cast(push_down_test_db.table1000_int_autoinc.col_binary_8_key, double BINARY)))))->Column#66, least(push_down_test_db.table1000_int_autoinc.col_varbinary_32_key, push_down_test_db.table1000_int_autoinc.col_text_key)->Column#67
[2020-05-29T06:40:26.687Z]       └─Selection_11	8000.00	root		or(eq(cast(ne(0, push_down_test_db.table1000_int_autoinc.col_decimal_unsigned_key)), cast(coalesce(cast(eq(push_down_test_db.table1000_int_autoinc.col_double_unsigned, cast(push_down_test_db.table1000_int_autoinc.col_char_2))), cast(push_down_test_db.table1000_int_autoinc.col_datetime_key)))), eq(ne(0, push_down_test_db.table1000_int_autoinc.col_decimal_unsigned_key), interval(4, cast(push_down_test_db.table1000_int_autoinc.col_text))))
[2020-05-29T06:40:26.687Z]         └─TableReader_13	10000.00	root		data:TableFullScan_12
[2020-05-29T06:40:26.687Z]           └─TableFullScan_12	10000.00	cop[tikv]	table:table1000_int_autoinc	keep order:false, stats:pseudo
[2020-05-29T06:40:26.687Z] 
[2020-05-29T06:40:26.687Z] 
[2020-05-29T06:40:26.687Z] WithPushDown Plan: 
[2020-05-29T06:40:26.687Z] id	estRows	task	access object	operator info
[2020-05-29T06:40:26.687Z] Projection_7	7.00	root		coalesce(cast(push_down_test_db.table1000_int_autoinc.col_tinyint_unsigned, var_string(20)), coalesce(cast(least(push_down_test_db.table1000_int_autoinc.col_double_unsigned_key, 4556, cast(ge(2000, strcmp(cast(push_down_test_db.table1000_int_autoinc.col_time_key, var_string(10)), cast(push_down_test_db.table1000_int_autoinc.col_smallint_unsigned_key, var_string(20)))), double BINARY)), var_string(23)), cast(gt(strcmp(cast(not(istrue(push_down_test_db.table1000_int_autoinc.col_float_unsigned)), var_string(20)), push_down_test_db.table1000_int_autoinc.col_char_255_key), 1), var_string(20)), 1990-06-16 17:22:56.005534), 2007-07-03, cast(push_down_test_db.table1000_int_autoinc.col_float, var_string(12)), cast(and(ge(push_down_test_db.table1000_int_autoinc.col_float, 0), le(push_down_test_db.table1000_int_autoinc.col_float, cast(ne(greatest(cast(ne(nulleq(push_down_test_db.table1000_int_autoinc.col_set, 2027-11-11), lt(0, strcmp(cast(ge(push_down_test_db.table1000_int_autoinc.col_int_key, 0), var_string(20)), 1))), double BINARY), 20), 0), double BINARY))), var_string(20)))->Column#62, isnull(not(and(ge(or(or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), -607985949695016960), eq(cast(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), double BINARY), cast(push_down_test_db.table1000_int_autoinc.col_varbinary_32, double BINARY))), or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), 2023), or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), 0), eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), cast(push_down_test_db.table1000_int_autoinc.col_set_key, bigint(20) BINARY))))), not(in(1, <nil>, isnull(isnull(push_down_test_db.table1000_int_autoinc.col_varbinary_32_key))))), le(cast(or(or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), -607985949695016960), eq(cast(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), double BINARY), cast(push_down_test_db.table1000_int_autoinc.col_varbinary_32, double BINARY))), or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), 2023), or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), 0), eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), cast(push_down_test_db.table1000_int_autoinc.col_set_key, bigint(20) BINARY))))), double BINARY), cast(push_down_test_db.table1000_int_autoinc.col_binary_8_key, double BINARY)))))->Column#63, least(push_down_test_db.table1000_int_autoinc.col_varbinary_32_key, push_down_test_db.table1000_int_autoinc.col_text_key)->Column#64
[2020-05-29T06:40:26.688Z] └─Projection_14	7.00	root		push_down_test_db.table1000_int_autoinc.col_decimal_unsigned_key, push_down_test_db.table1000_int_autoinc.col_int_key, push_down_test_db.table1000_int_autoinc.col_double_unsigned_key, push_down_test_db.table1000_int_autoinc.col_tinyint_unsigned, push_down_test_db.table1000_int_autoinc.col_char_2, push_down_test_db.table1000_int_autoinc.col_set_key, push_down_test_db.table1000_int_autoinc.col_datetime_key, push_down_test_db.table1000_int_autoinc.col_float_unsigned, push_down_test_db.table1000_int_autoinc.col_binary_8_key, push_down_test_db.table1000_int_autoinc.col_float, push_down_test_db.table1000_int_autoinc.col_char_255_key, push_down_test_db.table1000_int_autoinc.col_double_unsigned, push_down_test_db.table1000_int_autoinc.col_set, push_down_test_db.table1000_int_autoinc.col_smallint_unsigned_key, push_down_test_db.table1000_int_autoinc.col_time_key, push_down_test_db.table1000_int_autoinc.col_varbinary_32_key, push_down_test_db.table1000_int_autoinc.col_text, push_down_test_db.table1000_int_autoinc.col_text_key, push_down_test_db.table1000_int_autoinc.col_year, push_down_test_db.table1000_int_autoinc.col_varbinary_32
[2020-05-29T06:40:26.688Z]   └─TopN_10	7.00	root		Column#65, Column#66, Column#67, offset:0, count:7
[2020-05-29T06:40:26.688Z]     └─Projection_15	8000.00	root		push_down_test_db.table1000_int_autoinc.col_decimal_unsigned_key, push_down_test_db.table1000_int_autoinc.col_int_key, push_down_test_db.table1000_int_autoinc.col_double_unsigned_key, push_down_test_db.table1000_int_autoinc.col_tinyint_unsigned, push_down_test_db.table1000_int_autoinc.col_char_2, push_down_test_db.table1000_int_autoinc.col_set_key, push_down_test_db.table1000_int_autoinc.col_datetime_key, push_down_test_db.table1000_int_autoinc.col_float_unsigned, push_down_test_db.table1000_int_autoinc.col_binary_8_key, push_down_test_db.table1000_int_autoinc.col_float, push_down_test_db.table1000_int_autoinc.col_char_255_key, push_down_test_db.table1000_int_autoinc.col_double_unsigned, push_down_test_db.table1000_int_autoinc.col_set, push_down_test_db.table1000_int_autoinc.col_smallint_unsigned_key, push_down_test_db.table1000_int_autoinc.col_time_key, push_down_test_db.table1000_int_autoinc.col_varbinary_32_key, push_down_test_db.table1000_int_autoinc.col_text, push_down_test_db.table1000_int_autoinc.col_text_key, push_down_test_db.table1000_int_autoinc.col_year, push_down_test_db.table1000_int_autoinc.col_varbinary_32, coalesce(cast(push_down_test_db.table1000_int_autoinc.col_tinyint_unsigned, var_string(20)), coalesce(cast(least(push_down_test_db.table1000_int_autoinc.col_double_unsigned_key, 4556, cast(ge(2000, strcmp(cast(push_down_test_db.table1000_int_autoinc.col_time_key, var_string(10)), cast(push_down_test_db.table1000_int_autoinc.col_smallint_unsigned_key, var_string(20)))), double BINARY)), var_string(23)), cast(gt(strcmp(cast(not(istrue(push_down_test_db.table1000_int_autoinc.col_float_unsigned)), var_string(20)), push_down_test_db.table1000_int_autoinc.col_char_255_key), 1), var_string(20)), 1990-06-16 17:22:56.005534), 2007-07-03, cast(push_down_test_db.table1000_int_autoinc.col_float, var_string(12)), cast(and(ge(push_down_test_db.table1000_int_autoinc.col_float, 0), le(push_down_test_db.table1000_int_autoinc.col_float, cast(ne(greatest(cast(ne(nulleq(push_down_test_db.table1000_int_autoinc.col_set, 2027-11-11), lt(0, strcmp(cast(ge(push_down_test_db.table1000_int_autoinc.col_int_key, 0), var_string(20)), 1))), double BINARY), 20), 0), double BINARY))), var_string(20)))->Column#65, isnull(not(and(ge(or(or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), -607985949695016960), eq(cast(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), double BINARY), cast(push_down_test_db.table1000_int_autoinc.col_varbinary_32, double BINARY))), or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), 2023), or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), 0), eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), cast(push_down_test_db.table1000_int_autoinc.col_set_key, bigint(20) BINARY))))), not(in(1, <nil>, isnull(isnull(push_down_test_db.table1000_int_autoinc.col_varbinary_32_key))))), le(cast(or(or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), -607985949695016960), eq(cast(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), double BINARY), 
cast(push_down_test_db.table1000_int_autoinc.col_varbinary_32, double BINARY))), or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), 2023), or(eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), 0), eq(strcmp(cast(isfalse(push_down_test_db.table1000_int_autoinc.col_year), var_string(20)), 7334674943126274048), cast(push_down_test_db.table1000_int_autoinc.col_set_key, bigint(20) BINARY))))), double BINARY), cast(push_down_test_db.table1000_int_autoinc.col_binary_8_key, double BINARY)))))->Column#66, least(push_down_test_db.table1000_int_autoinc.col_varbinary_32_key, push_down_test_db.table1000_int_autoinc.col_text_key)->Column#67
[2020-05-29T06:40:26.688Z]       └─Selection_11	8000.00	root		or(eq(cast(ne(0, push_down_test_db.table1000_int_autoinc.col_decimal_unsigned_key)), cast(coalesce(cast(eq(push_down_test_db.table1000_int_autoinc.col_double_unsigned, cast(push_down_test_db.table1000_int_autoinc.col_char_2))), cast(push_down_test_db.table1000_int_autoinc.col_datetime_key)))), eq(ne(0, push_down_test_db.table1000_int_autoinc.col_decimal_unsigned_key), interval(4, cast(push_down_test_db.table1000_int_autoinc.col_text))))
[2020-05-29T06:40:26.688Z]         └─TableReader_13	10000.00	root		data:TableFullScan_12
[2020-05-29T06:40:26.688Z]           └─TableFullScan_12	10000.00	cop[tikv]	table:table1000_int_autoinc	keep order:false, stats:pseudo
[2020-05-29T06:40:26.688Z] 
[2020-05-29T06:40:26.688Z] 
[2020-05-29T06:40:26.688Z] 
[2020-05-29T06:40:26.688Z] 2020/05/29 14:40:26 Test summary: non-matching queries: 1, success queries: 2046, skipped queries: 951
[2020-05-29T06:40:26.688Z] 2020/05/29 14:40:26 Test summary(sql/randgen-topn/3_compare_2.sql): Test case FAIL

</details>

kennytm

comment created time in 2 days

push eventkennytm/br

Neil Shen

commit sha 2bd3248f62289bfcc2cb9cfdbe8032b1c1b37e28

restore: support restore empty databases and tables (#317)


3pointer

commit sha 25e0ee8468254ab58af2b98da0fdafd0ad32aad2

Fix row data lost (#315) * try fix lost row data


Lonng

commit sha 428dcb724846e1353618af18e5d3ef105474caea

fix typo in cmd/cmd.go (#316) Signed-off-by: Lonng <heng@lonng.org>


山岚

commit sha 1edbe26e1a380c347287f567c48b01d257b939c8

tasks: print error to log after fail. (#310) * tasks: print error to log after fail. * Update restore.go Co-authored-by: 3pointer <luancheng@pingcap.com>


3pointer

commit sha 3e05ea604c608b758f7c66078bf23fdadc35b7c0

*: add version check before start (#311) * add version check * Update pkg/utils/version.go Co-authored-by: kennytm <kennytm@gmail.com> * fix test * add flag to control check * address comment * fix ci * remove DS_Store Co-authored-by: kennytm <kennytm@gmail.com> Co-authored-by: Neil Shen <overvenus@gmail.com>


kennytm

commit sha 79c941dc28d3760f857bd058a380377ec20e998b

Merge branch 'master' into table-filter


push time in 2 days

push eventsre-bot/tikv

kennytm

commit sha ad483b8734420c18e4d7c663aa6ad955800bba87

import: fix merge conflict Signed-off-by: kennytm <kennytm@gmail.com>


push time in 2 days

pull request commentpingcap/br

[3.1] restore: Make all download error as retryable (#298)

/merge

kennytm

comment created time in 2 days

pull request commenttikv/tikv

Improve robustness of Backup/Restore involving external_storage

/merge

kennytm

comment created time in 2 days

pull request commentpingcap/br

*: add version check before start

/run-cherry-picker

3pointer

comment created time in 2 days

push eventpingcap/br

3pointer

commit sha 3e05ea604c608b758f7c66078bf23fdadc35b7c0

*: add version check before start (#311) * add version check * Update pkg/utils/version.go Co-authored-by: kennytm <kennytm@gmail.com> * fix test * add flag to control check * address comment * fix ci * remove DS_Store Co-authored-by: kennytm <kennytm@gmail.com> Co-authored-by: Neil Shen <overvenus@gmail.com>


push time in 2 days

PR merged pingcap/br

*: add version check before start LGT1 enhancement needs-cherry-pick-3.1 needs-cherry-pick-4.0

<!-- Thank you for working on BR! Please read BR's CONTRIBUTING document BEFORE filing this PR. -->

What problem does this PR solve? <!--add issue link with summary if exists-->

Resolve #301

What is changed and how it works?

Add a cluster version check when BR starts.

Check List <!--REMOVE the items that are not applicable-->

Tests <!-- At least one of them must be included. -->

  • Unit test

Related changes

  • Need to cherry-pick to the release branch

Release Note

  • Feature: Add a version check at startup to avoid version mismatches.

<!-- fill in the release note, or just write "No release note" -->

+204 -13

14 comments

10 changed files

3pointer

pr closed time in 2 days

issue closedpingcap/br

BR add version check at beginning.

Feature Request

Describe your feature request related problem:

<!-- A description of what the problem is. --> Currently, not all BR versions are compatible with all TiDB versions. For example, BR 3.1 is not compatible with TiDB 4.0.

Describe the feature you'd like:

<!-- A description of what you want to happen. --> A version check when BR starts.

Describe alternatives you've considered:

<!-- A description of any alternative solutions or features you've considered. --> Use PD to get the backup/restore cluster's version; if it is not compatible, log a warning.
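
For illustration, a minimal sketch of such a check in Go, assuming the cluster version string has already been fetched from PD (the function names and the minimum-version constant are hypothetical, not BR's actual implementation):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseVersion splits a version string like "4.0.0" into its numeric parts.
func parseVersion(v string) ([]int, error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	nums := make([]int, len(parts))
	for i, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil {
			return nil, err
		}
		nums[i] = n
	}
	return nums, nil
}

// checkVersion returns an error when the cluster version is older than the
// minimum version this BR build supports.
func checkVersion(cluster, minimum string) error {
	c, err := parseVersion(cluster)
	if err != nil {
		return err
	}
	m, err := parseVersion(minimum)
	if err != nil {
		return err
	}
	for i := 0; i < len(c) && i < len(m); i++ {
		if c[i] < m[i] {
			return fmt.Errorf("cluster version %s is older than the minimum supported %s", cluster, minimum)
		}
		if c[i] > m[i] {
			return nil
		}
	}
	return nil
}

func main() {
	// In BR the cluster version would come from PD; it is hard-coded here.
	if err := checkVersion("3.1.2", "4.0.0"); err != nil {
		fmt.Println("version check failed:", err)
	}
}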

Teachability, Documentation, Adoption, Migration Strategy:

<!-- If you can, explain some scenarios how users might use this, or situations in which it would be helpful. Any API designs, mockups, or diagrams are also helpful. -->

closed time in 2 days

3pointer

issue openedpingcap/tidb-lightning

[5.0] Online import

Description

Lightning currently requires the cluster to be offline and empty because

  1. SST import violates ACID
  2. Import mode trades space amplification for write speed
  3. Region-split works better on empty ranges
  4. Import reduces the performance of a serving cluster

We may support online import based on pingcap/br#87.

Category

  • Feature
  • Performance

Value

Value description

This is needed for data-hub and data middle platform cases. In many cases, users have not only real-time data but also third-party data to import and analyze together with transactional data. This might as well be merged with the batch scenario to speed them all up together.

Value score

  • (TBD) / 5

Workload estimation

  • 30 person-day

created time in 3 days

issue openedpingcap/br

[5.0] Support Point-in-Time Recovery (PITR)

Description

PITR allows users to recover the database to any point in time.

In principle, given an initial full backup archive and several incremental backup archives, it is enough to restore the database to its state at any given point in time.

The PITR feature should balance the cluster's performance while taking continuous backups against the restore speed when performing PITR.
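
For illustration, a minimal Go sketch of how the restore chain could be resolved for a target timestamp, assuming each incremental archive covers a half-open (StartTS, EndTS] interval (the Archive struct and field names are hypothetical, not BR's actual format):

package main

import "fmt"

// Archive describes one backup archive; an incremental archive covers the
// change stream in the half-open interval (StartTS, EndTS].
type Archive struct {
	StartTS uint64 // 0 for a full backup
	EndTS   uint64 // snapshot timestamp of the archive
	Full    bool
}

// planRestore picks the archives needed to reach the target timestamp: the
// newest full backup taken at or before target, followed by every incremental
// archive continuing the chain. This simplified version restores to the
// latest covered timestamp <= target; real PITR would also replay the tail of
// the change log up to the exact target.
func planRestore(archives []Archive, target uint64) []Archive {
	var plan []Archive
	var current uint64
	// archives are assumed sorted by EndTS.
	for _, a := range archives {
		switch {
		case a.Full && a.EndTS <= target:
			plan = []Archive{a} // restart the chain from a newer full backup
			current = a.EndTS
		case !a.Full && a.StartTS == current && a.EndTS <= target:
			plan = append(plan, a)
			current = a.EndTS
		}
	}
	return plan
}

func main() {
	archives := []Archive{
		{0, 100, true},
		{100, 200, false},
		{200, 300, false},
	}
	// Prints the full backup at ts=100 plus the incremental (100, 200].
	fmt.Println(planRestore(archives, 250))
}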

Category

  • Feature
  • Reliability

Value

Value description

(TBD)

Value score

  • (TBD) / 5

Workload estimation

  • (TBD) person-day

created time in 3 days

issue commentpingcap/br

Design a fast TPCC test data generation tool: Generate TPCC SST data, then use br to complete a quick import

This should probably be transferred to https://github.com/pingcap/go-tpc/ (but I've no permission 🙃)

kennytm

comment created time in 3 days

issue closedpingcap/br

Support timestamp parameter on backup

Feature Request

Describe your feature request related problem:

Sometimes users need one specific timestamp across different backups, such as:

  • two different backups need a snapshot at the same timestamp

The current timeago option can't solve the above problem.

Describe the feature you'd like:

Support specifying an exact timestamp on backup.
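
For illustration, a minimal Go sketch of converting a wall-clock time into a TiDB TSO, which is what such a timestamp option would ultimately resolve to; the 18-bit logical suffix matches TiDB's TSO encoding, while the helper name is hypothetical:

package main

import (
	"fmt"
	"time"
)

// physicalShift is the number of bits reserved for the logical part of a
// TiDB TSO: tso = physical_milliseconds << 18 | logical.
const physicalShift = 18

// tsoFromTime converts a wall-clock time into a TSO with logical part 0.
func tsoFromTime(t time.Time) uint64 {
	physical := t.UnixNano() / int64(time.Millisecond)
	return uint64(physical) << physicalShift
}

func main() {
	ts, _ := time.Parse("2006-01-02 15:04:05", "2020-05-29 14:40:26")
	fmt.Printf("backup ts: %d\n", tsoFromTime(ts))
}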

Describe alternatives you've considered:

<!-- A description of any alternative solutions or features you've considered. -->

Teachability, Documentation, Adoption, Migration Strategy:

<!-- If you can, explain some scenarios how users might use this, or situations in which it would be helpful. Any API designs, mockups, or diagrams are also helpful. -->

closed time in 3 days

3pointer

issue commentpingcap/br

Support timestamp parameter on backup

Already implemented in #172. Closing.

3pointer

comment created time in 3 days

issue commentpingcap/br

UCP: Add backup retry when some tikv get down

This should be much less needed after tikv/tikv#7917 is merged.

3pointer

comment created time in 3 days

issue commentpingcap/br

Provide a local S3 server to replace local storage

We may provide an HTTP server instead (easier to implement 😉) if #308 is implemented.
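
For illustration, a minimal Go sketch of such a server using only the standard library (the directory path and port are placeholders; a real replacement for local storage would also need a PUT handler for uploads):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a local backup directory over HTTP so BR can read it remotely.
	// http.FileServer already answers both GET and HEAD requests.
	http.Handle("/", http.FileServer(http.Dir("/data/backup")))
	log.Fatal(http.ListenAndServe(":8080", nil))
}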

kennytm

comment created time in 3 days

issue closedpingcap/br

make concurrency setting more clear

Feature Request

Describe your feature request related problem:

<!-- A description of what the problem is. --> Users may be confused about the concurrency settings, because the default restore concurrency is 128, but the default backup concurrency is 4.

Describe the feature you'd like:

<!-- A description of what you want to happen. -->

  1. Give a warning when the user doesn't set a proper concurrency value.

Describe alternatives you've considered:

<!-- A description of any alternative solutions or features you've considered. -->

  1. Update the documentation to make it clearer:
  • remove the concurrency introduction from the document
  • remove the rate-limit introduction from the document

Teachability, Documentation, Adoption, Migration Strategy:

<!-- If you can, explain some scenarios how users might use this, or situations in which it would be helpful. Any API designs, mockups, or diagrams are also helpful. -->

closed time in 3 days

3pointer

issue commentpingcap/br

make concurrency setting more clear

Everything causing this issue has been fixed. Closing.

3pointer

comment created time in 3 days

issue commentpingcap/br

br support http download

Note that in the current design, an external storage cannot "just support download". It must define all three of these operations:

  1. Download a file
  2. Upload a file
  3. Check if a file exists

An HTTP storage clearly supports operations 1 and 3 via GET and HEAD requests. The question is what operation 2 should do.

(In CrDB's BACKUP statement, it will issue a PUT request with the body being the file.)
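
For illustration, a minimal Go sketch of an HTTP storage mapping the three operations onto GET, PUT, and HEAD (the HTTPStorage type and method names are hypothetical simplifications, not BR's actual ExternalStorage interface):

package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
)

// HTTPStorage maps the three required operations onto plain HTTP verbs.
type HTTPStorage struct {
	Base string // e.g. "http://backup-host:8080/archive"
}

// Read downloads a file with a GET request.
func (s *HTTPStorage) Read(name string) ([]byte, error) {
	resp, err := http.Get(s.Base + "/" + name)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", name, resp.Status)
	}
	return ioutil.ReadAll(resp.Body)
}

// Write uploads a file with a PUT request, following CrDB's approach.
func (s *HTTPStorage) Write(name string, data []byte) error {
	req, err := http.NewRequest(http.MethodPut, s.Base+"/"+name, bytes.NewReader(data))
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode/100 != 2 {
		return fmt.Errorf("PUT %s: %s", name, resp.Status)
	}
	return nil
}

// FileExists checks existence with a HEAD request.
func (s *HTTPStorage) FileExists(name string) (bool, error) {
	resp, err := http.Head(s.Base + "/" + name)
	if err != nil {
		return false, err
	}
	resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	s := &HTTPStorage{Base: "http://localhost:8080/backup"}
	ok, err := s.FileExists("backupmeta")
	fmt.Println(ok, err)
}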

zhouqiang-cl

comment created time in 3 days

issue commentpingcap/tidb-lightning

streaming from cloud object storage

🤔 I assume this is more than #69?

gregwebs

comment created time in 3 days

issue commentpingcap/tidb-lightning

Implement split & scatter in lightning

Note: This is part of #300. Close?

3pointer

comment created time in 3 days

issue commentpingcap/tidb-lightning

Implement data and index kv sort in lightning

Note: This is part of #300. Close?

3pointer

comment created time in 3 days

issue closedpingcap/tidb-lightning

Improve usability: tikv-importer addr config check

Feature Request

Is your feature request related to a problem? Please describe: Users get an error when starting tikv-importer with addr = "0.0.0.0:{port}" set in tikv-importer.toml. The nohup.out log file shows the reason:

invalid configuration: Other("[/rust/git/checkouts/tikv-71ea2042335c4528/ac6f026/src/server/config.rs:164]: invalid advertise-addr: \"0.0.0.0:18287\"")

but the tikv-importer.log file didn't show the error reason clearly:

[2020/03/04 11:11:21.002 +08:00] [INFO] [tikv-importer.rs:41] ["Welcome to TiKV Importer."]
[2020/03/04 11:11:21.003 +08:00] [INFO] [tikv-importer.rs:43] []
[2020/03/04 11:11:21.003 +08:00] [INFO] [tikv-importer.rs:43] ["Release Version:   3.0.8"]
[2020/03/04 11:11:21.003 +08:00] [INFO] [tikv-importer.rs:43] ["Git Commit Hash:   a9f1e2dc6d20284a2c61b57f7bcfd84d161268f2"]
[2020/03/04 11:11:21.003 +08:00] [INFO] [tikv-importer.rs:43] ["Git Commit Branch: release-3.0"]
[2020/03/04 11:11:21.003 +08:00] [INFO] [tikv-importer.rs:43] ["UTC Build Time:    2019-12-31 01:01:07"]
[2020/03/04 11:11:21.003 +08:00] [INFO] [tikv-importer.rs:43] ["Rust Version:      rustc 1.37.0-nightly (0e4a56b4b 2019-06-13)"]
[2020/03/04 11:11:21.003 +08:00] [INFO] [tikv-importer.rs:45] []
[2020/03/04 11:11:21.003 +08:00] [WARN] [lib.rs:545] ["environment variable `TZ` is missing, using `/etc/localtime`"]
[2020/03/04 11:11:21.024 +08:00] [FATAL] [lib.rs:499] ["called `Result::unwrap()` on an `Err` value: AddrParseError(())"] [backtrace="stack backtrace:\n   0:     0x55e0b882c3bd - backtrace::backtrace::trace::hcb2647c6d67dfa4f"] [location=src/libcore/result.rs:999] [thread_name=main]

Describe the feature you'd like:

  1. Log the error reason into tikv-importer.log
  2. The tikv-importer.toml could note that users should not use ‘0.0.0.0’ or a domain name as the value of addr (see the sketch below).
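
For illustration, a minimal sketch of such a startup check, written in Go for brevity even though tikv-importer itself is in Rust (the function name is hypothetical):

package main

import (
	"fmt"
	"net"
)

// validateAdvertiseAddr rejects addresses that peers cannot dial back, such
// as "0.0.0.0:18287", before the server starts, so the failure is explicit.
func validateAdvertiseAddr(addr string) error {
	host, _, err := net.SplitHostPort(addr)
	if err != nil {
		return fmt.Errorf("invalid advertise-addr %q: %v", addr, err)
	}
	ip := net.ParseIP(host)
	if ip != nil && ip.IsUnspecified() {
		return fmt.Errorf("invalid advertise-addr %q: 0.0.0.0 is not routable, use a real IP", addr)
	}
	return nil
}

func main() {
	fmt.Println(validateAdvertiseAddr("0.0.0.0:18287"))     // error, logged clearly
	fmt.Println(validateAdvertiseAddr("192.168.1.5:18287")) // nil
}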

Describe alternatives you've considered:

Teachability, Documentation, Adoption, Optimization:

closed time in 3 days

Win-Man

issue commentpingcap/tidb-lightning

Improve usability:tikv-importer addr config check

Closing, since TiKV-Importer can listen to 0.0.0.0.

Win-Man

comment created time in 3 days

issue closedpingcap/dumpling

Maybe we need to quote table names when querying DB info.

Currently, we inject table names directly into the SQL query.

https://github.com/pingcap/dumpling/blob/62b84147a330b6975a80d3133b320190a3c7a304/v4/export/sql.go#L37

https://github.com/pingcap/dumpling/blob/62b84147a330b6975a80d3133b320190a3c7a304/v4/export/sql.go#L50

Even though those table names are fetched directly from the database connection, so we don't need to fear SQL injection attacks, dumpling will still fail when a table name uses an SQL keyword. An example is running integration_test on MariaDB:

dump failed: SHOW CREATE DATABASE rows: err = Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'rows' at line 1
goroutine 1 [running]:
runtime/debug.Stack(0x1615be0, 0xc00000e4e0, 0x1616980)
	/usr/local/Cellar/go/1.14/libexec/src/runtime/debug/stack.go:24 +0x9d
github.com/pingcap/dumpling/v4/export.withStack(0x1615be0, 0xc00000e4e0, 0xc0000260c0, 0xc0000243e0)
	/Users/hillium/Developer/dumpling/v4/export/error.go:40 +0x8d
github.com/pingcap/dumpling/v4/export.simpleQuery(0xc0001f4180, 0xc0000243e0, 0x19, 0xc0001858c0, 0x1, 0xc0000243e0)
	/Users/hillium/Developer/dumpling/v4/export/sql.go:367 +0x152
github.com/pingcap/dumpling/v4/export.ShowCreateDatabase(0xc0001f4180, 0x7ffeefbff7b5, 0x4, 0xc0001859e0, 0x14200df, 0x7ffeefbff7c1, 0x26)
	/Users/hillium/Developer/dumpling/v4/export/sql.go:38 +0x10c
github.com/pingcap/dumpling/v4/export.dumpDatabases(0x1620180, 0xc0000260c0, 0xc0001b4780, 0xc0001f4180, 0x161e3c0, 0xc000010078, 0x0, 0x100)
	/Users/hillium/Developer/dumpling/v4/export/dump.go:109 +0x124
github.com/pingcap/dumpling/v4/export.Dump(0xc0001b4780, 0x0, 0x0)
	/Users/hillium/Developer/dumpling/v4/export/dump.go:92 +0x84f
main.run()
	/Users/hillium/Developer/dumpling/cmd/dumpling/main.go:122 +0x5b9
main.glob..func1(0x19774e0, 0xc0001ec600, 0x0, 0x10)
	/Users/hillium/Developer/dumpling/cmd/dumpling/main.go:58 +0x20
github.com/spf13/cobra.(*Command).execute(0x19774e0, 0xc000020130, 0x10, 0x11, 0x19774e0, 0xc000020130)
	/Users/hillium/go/pkg/mod/github.com/spf13/cobra@v0.0.6/command.go:844 +0x29d
github.com/spf13/cobra.(*Command).ExecuteC(0x19774e0, 0x0, 0x1, 0xc00007a058)
	/Users/hillium/go/pkg/mod/github.com/spf13/cobra@v0.0.6/command.go:945 +0x317
github.com/spf13/cobra.(*Command).Execute(...)
	/Users/hillium/go/pkg/mod/github.com/spf13/cobra@v0.0.6/command.go:885
main.main()
	/Users/hillium/Developer/dumpling/cmd/dumpling/main.go:131 +0x2d
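
For illustration, a minimal Go sketch of the quoting fix (escapeIdentifier is a hypothetical helper, not dumpling's actual code):

package main

import (
	"fmt"
	"strings"
)

// escapeIdentifier wraps a name in backticks so that reserved words such as
// `rows` are safe to embed in a query; literal backticks are doubled.
func escapeIdentifier(name string) string {
	return "`" + strings.ReplaceAll(name, "`", "``") + "`"
}

func main() {
	query := fmt.Sprintf("SHOW CREATE DATABASE %s", escapeIdentifier("rows"))
	fmt.Println(query) // SHOW CREATE DATABASE `rows`
}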

closed time in 3 days

YuJuncen
more