
kennytm/cargo-kcov 101

Cargo subcommand to run kcov to get coverage report on Linux

kennytm/cov 87

LLVM-GCOV Source coverage for Rust

kennytm/CoCCalc 49

THIS PROJECT HAS BEEN ABANDONED.

kennytm/CatSaver 8

Automatically save logcat

kennytm/dbgen 5

Generate random test cases for databases

auraht/gamepad 4

A cross-platform gamepad library to supplement HIDDriver

kennytm/711cov 3

Coverage reporting software for gcov-4.7

kennytm/aar-to-eclipse 3

Convert *.aar to Android library project for Eclipse ADT

kennytm/BinarySpec.swift 3

Parsing binary protocols (for Swift)

kennytm/borsholder 3

Combined status board of rust-lang/rust's Homu queue and GitHub PR status.

create branch pingcap/tidb-tools

branch : kennytm/filter-disable-regex-and-wildcard

created branch time in 7 minutes

create branch pingcap/br

branch : reduce-tidb-deps

created branch time in 2 hours

pull request comment pingcap/br

Release 3.1.0.beta2

CI is broken 🤔

3pointer

comment created time in 6 hours

pull request comment pingcap/br

Release 3.1.0.beta2

/run-all-tests

[2020-02-18T13:05:27.555Z] Error: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:20161: i/o timeout"
[2020-02-18T13:05:27.555Z] rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:20161: i/o timeout"
3pointer

comment created time in 6 hours

Pull request review comment pingcap/tidb

planner: adapt the generic hint struct TableOptimizerHint

 func (b *PlanBuilder) pushTableHints(hints []*ast.TableOptimizerHint, nodeType n
 				})
 			}
 		case HintReadFromStorage:
-			if hint.StoreType.L == HintTiFlash {
+			if hint.HintData.(model.CIStr).L == HintTiFlash {
 				tiflashTables = tableNames2HintTableInfo(b.ctx, hint.Tables, b.hintProcessor, nodeType, currentLevel)
 			}
-			if hint.StoreType.L == HintTiKV {
+			if hint.HintData.(model.CIStr).L == HintTiKV {
 				tikvTables = tableNames2HintTableInfo(b.ctx, hint.Tables, b.hintProcessor, nodeType, currentLevel)
 			}

Use a switch here

lonng

comment created time in 9 hours
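
For context, a minimal sketch of the switch the reviewer is asking for, using the names from the quoted diff (illustrative only, not the actual patch):

// Sketch: assert the hint payload once and switch on the storage name,
// instead of two separate if-statements.
switch hint.HintData.(model.CIStr).L {
case HintTiFlash:
	tiflashTables = tableNames2HintTableInfo(b.ctx, hint.Tables, b.hintProcessor, nodeType, currentLevel)
case HintTiKV:
	tikvTables = tableNames2HintTableInfo(b.ctx, hint.Tables, b.hintProcessor, nodeType, currentLevel)
}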

push event pingcap/parser

Yuanjia Zhang

commit sha d376009d7d9f99c480e0cf6e1b1f2d6969405ba5

parser: support the `SELECT ... INTO OUTFILE` syntax (#745) * SELECT OUTFILE * fix CI * update * add more tests

view details

Lonng

commit sha 517beb2e39c20e78cca59b63351aacb5c1ef5845

refine table optimizer hint and make it more generic (#747) * refine table optimizer hint and make it more generic Signed-off-by: Lonng <heng@lonng.org> * implement the restore Signed-off-by: Lonng <heng@lonng.org> * address comment Signed-off-by: Lonng <heng@lonng.org> * address comment Signed-off-by: Lonng <heng@lonng.org> * fix typo Signed-off-by: Lonng <heng@lonng.org>

view details

kennytm

commit sha eb7159364f9c4daeefa5ab82172081cc292441a0

Merge branch 'master' into kennytm/backup-restore-statements

view details

push time in 10 hours

delete branch pingcap/parser

delete branch : lonng/generic-hint

delete time in 10 hours

push event pingcap/parser

Lonng

commit sha 517beb2e39c20e78cca59b63351aacb5c1ef5845

refine table optimizer hint and make it more generic (#747) * refine table optimizer hint and make it more generic Signed-off-by: Lonng <heng@lonng.org> * implement the restore Signed-off-by: Lonng <heng@lonng.org> * address comment Signed-off-by: Lonng <heng@lonng.org> * address comment Signed-off-by: Lonng <heng@lonng.org> * fix typo Signed-off-by: Lonng <heng@lonng.org>

view details

push time in 10 hours

PR merged pingcap/parser

refine table optimizer hint and make it more generic status/LGT2 type/enhancement

Signed-off-by: Lonng heng@lonng.org


What problem does this PR solve?

The struct TableOptimizerHint is used to save optimizer hint information, but its field count grows as new types of hints are added.

What is changed and how it works?

  1. This PR introduces a new HintData field to hold the generic hint payload and removes the extra fields to reduce the struct size.
  2. This PR adds a new hint TIME_RANGE, which is used to hint the time range of the inspection_result/inspection_summary system tables.

Check List

Tests

  • Unit test
+647 -567

1 comment

7 changed files

lonng

pr closed time in 10 hours

Pull request review comment pingcap/br

BR support TLS

 func NewMgr(
 	failure := errors.Errorf("pd address (%s) has wrong format", pdAddrs)
 	cli := &http.Client{Timeout: 30 * time.Second}
 	if tlsConf != nil {
-		cli = &http.Client{
-			Timeout: 30 * time.Second,
-			Transport: &http.Transport{
-				TLSClientConfig: tlsConf,
-			},
-		}
+		defaultTransport := http.DefaultTransport
+		defaultTransport.(*http.Transport).TLSClientConfig = tlsConf

No,

defaultTransport := *http.DefaultTransport.(*http.Transport)
defaultTransport.TLSClientConfig = tlsConf
cli.Transport = &defaultTransport

we don't want to modify everyone's DefaultTransport.

3pointer

comment created time in 12 hours

Pull request review comment pingcap/parser

*: support parsing BACKUP and RESTORE statements

 func (s *testParserSuite) TestIndexAdviseStmt(c *C) {
 
 	s.RunTest(c, table)
 }
+
+// For BRIE
+func (s *testParserSuite) TestBRIE(c *C) {
+	table := []testCase{
+		{"BACKUP DATABASE a TO 'local:///tmp/archive01/'", true, "BACKUP DATABASE `a` TO 'local:///tmp/archive01/'"},
+		{"BACKUP SCHEMA a TO 'local:///tmp/archive01/'", true, "BACKUP DATABASE `a` TO 'local:///tmp/archive01/'"},
+		{"BACKUP DATABASE a,b,c FULL TO 'noop://'", true, "BACKUP DATABASE `a`, `b`, `c` TO 'noop://'"},
+		{"BACKUP DATABASE a.b FULL TO 'noop://'", false, ""},
+		{"BACKUP DATABASE * TO 'noop://'", true, "BACKUP DATABASE * TO 'noop://'"},
+		{"BACKUP DATABASE *, a TO 'noop://'", false, ""},
+		{"BACKUP DATABASE a, * TO 'noop://'", false, ""},
+		{"BACKUP DATABASE TO 'noop://'", false, ""},
+		{"BACKUP TABLE a TO 'noop://'", true, "BACKUP TABLE `a` TO 'noop://'"},
+		{"BACKUP TABLE a.b TO 'noop://'", true, "BACKUP TABLE `a`.`b` TO 'noop://'"},
+		{"BACKUP TABLE a.b,c.d,e TO 'noop://'", true, "BACKUP TABLE `a`.`b`, `c`.`d`, `e` TO 'noop://'"},
+		{"BACKUP TABLE a.* TO 'noop://'", false, ""},
+		{"BACKUP TABLE * TO 'noop://'", false, ""},
+		{"BACKUP TABLE TO 'noop://'", false, ""},
+		{"RESTORE DATABASE * FROM 's3://bucket/path/'", true, "RESTORE DATABASE * FROM 's3://bucket/path/'"},
+
+		{"BACKUP DATABASE * INCREMENTAL UNTIL TIMESTAMP '2020-02-02 14:14:14' TO 'noop://'", true, "BACKUP DATABASE * INCREMENTAL UNTIL TIMESTAMP '2020-02-02 14:14:14' TO 'noop://'"},
+		{"BACKUP DATABASE * INCREMENTAL UNTIL TIMESTAMP_ORACLE 1234567890 TO 'noop://'", true, "BACKUP DATABASE * INCREMENTAL UNTIL TIMESTAMP_ORACLE 1234567890 TO 'noop://'"},
+		{"BACKUP DATABASE * INCREMENTAL UNTIL TIMESTAMP_ORACLE '2020-02-02 14:14:14' TO 'noop://'", false, ""},
+		{"BACKUP DATABASE * INCREMENTAL UNTIL TIMESTAMP 1234567890 TO 'noop://'", false, ""},
+
+		{"backup database * to 'noop://' rate_limit 500 MB/second snapshot 5 minute ago", true, "BACKUP DATABASE * TO 'noop://' RATE_LIMIT = 500 MB/SECOND SNAPSHOT = 300000000 MICROSECOND AGO"},
+		{"restore table g from 'noop://' concurrency 40 checksum 0 online 1", true, "RESTORE TABLE `g` FROM 'noop://' CONCURRENCY = 40 CHECKSUM = 0 ONLINE = 1"},
+		{
+			// FIXME: should we really include the access key in the Restore() text???

😱. Perhaps deal with this later (e.g. add a WriteSecureString() method). This needs to be considered together with how TiDB uses Restore().

kennytm

comment created time in 12 hours

Pull request review comment pingcap/parser

*: support parsing BACKUP and RESTORE statements

 ExplainFormatType:
 		$$ = "json"
 	}
 
+/*******************************************************************
+ * Backup / restore statements
+ *
+ *	BACKUP DATABASE [ * | db1, db2, db3 ] [ FULL ] TO 'scheme://location' [ options... ]
+ *	BACKUP TABLE [ db1.tbl1, db2.tbl2 ] [ FULL ] TO 'scheme://location' [ options... ]
+ *	RESTORE DATABASE [ * | db1, db2, db3 ] [ FULL ] FROM 'scheme://location' [ options... ]
+ *	RESTORE TABLE [ db1.tbl1, db2.tbl2 ] [ FULL ] FROM 'scheme://location' [ options... ]
+ */
+BRIEStmt:
+	"BACKUP" BRIETables BackupTypeSpec "TO" stringLit BRIEOptions
+	{
+		stmt := $2.(*ast.BRIEStmt)
+		stmt.Kind = ast.BRIEKindBackup
+		stmt.Storage = $5
+		stmt.Options = $6.([]*ast.BRIEOption)
+		if $3 != nil {
+			stmt.Incremental = $3.(*ast.BRIEOption)
+		}
+		$$ = stmt
+	}
+|	"RESTORE" BRIETables BackupTypeSpec "FROM" stringLit BRIEOptions
+	{
+		stmt := $2.(*ast.BRIEStmt)
+		stmt.Kind = ast.BRIEKindRestore
+		stmt.Storage = $5
+		stmt.Options = $6.([]*ast.BRIEOption)
+		if $3 != nil {
+			stmt.Incremental = $3.(*ast.BRIEOption)
+		}
+		$$ = stmt
+	}
+
+BRIETables:
+	DatabaseSym '*'
+	{
+		$$ = &ast.BRIEStmt{}
+	}
+|	DatabaseSym DBNameList
+	{
+		$$ = &ast.BRIEStmt{Schemas: $2.([]string)}
+	}
+|	"TABLE" TableNameList
+	{
+		$$ = &ast.BRIEStmt{Tables: $2.([]*ast.TableName)}
+	}
+
+DBNameList:
+	DBName
+	{
+		$$ = []string{$1.(string)}
+	}
+|	DBNameList ',' DBName
+	{
+		$$ = append($1.([]string), $3.(string))
+	}
+
+BackupTypeSpec:
+	%prec empty
+	{
+		$$ = nil
+	}
+|	"FULL"
+	{
+		$$ = nil
+	}
+|	"INCREMENTAL" "UNTIL" "TIMESTAMP" stringLit
+	{
+		$$ = &ast.BRIEOption{
+			Tp:       ast.BRIEOptionLastBackupTS,
+			StrValue: $4,
+		}
+	}
+|	"INCREMENTAL" "UNTIL" "TIMESTAMP_ORACLE" LengthNum
+	{
+		$$ = &ast.BRIEOption{
+			Tp:        ast.BRIEOptionLastBackupTSO,
+			UintValue: $4.(uint64),
+		}
+	}
+
+BRIEOptions:
+	%prec empty
+	{
+		$$ = []*ast.BRIEOption{}
+	}
+|	BRIEOptions BRIEOption
+	{
+		$$ = append($1.([]*ast.BRIEOption), $2.(*ast.BRIEOption))
+	}
+
+BRIEIntegerOptionName:
+	"CONCURRENCY"
+	{
+		$$ = ast.BRIEOptionConcurrency
+	}
+|	"CHECKSUM"
+	{
+		$$ = ast.BRIEOptionChecksum
+	}
+|	"SEND_CREDENTIALS_TO_TIKV"
+	{
+		$$ = ast.BRIEOptionSendCreds
+	}
+|	"ONLINE"
+	{
+		$$ = ast.BRIEOptionOnline
+	}
+|	"S3_FORCE_PATH_STYLE"
+	{
+		$$ = ast.BRIEOptionS3ForcePathStyle
+	}
+|	"S3_USE_ACCELERATE_ENDPOINT"
+	{
+		$$ = ast.BRIEOptionS3UseAccelerateEndpoint
+	}
+
+BRIEStringOptionName:
+	"S3_ENDPOINT"
+	{
+		$$ = ast.BRIEOptionS3Endpoint
+	}
+|	"S3_REGION"
+	{
+		$$ = ast.BRIEOptionS3Region
+	}
+|	"S3_STORAGE_CLASS"
+	{
+		$$ = ast.BRIEOptionS3StorageClass
+	}
+|	"S3_SSE"
+	{
+		$$ = ast.BRIEOptionS3SSE
+	}
+|	"S3_ACL"
+	{
+		$$ = ast.BRIEOptionS3ACL
+	}
+|	"S3_ACCESS_KEY"
+	{
+		$$ = ast.BRIEOptionS3AccessKey
+	}
+|	"S3_SECRET_ACCESS_KEY"
+	{
+		$$ = ast.BRIEOptionS3SecretAccessKey
+	}
+|	"S3_PROVIDER"
+	{
+		$$ = ast.BRIEOptionS3Provider
+	}
+|	"GCS_ENDPOINT"
+	{
+		$$ = ast.BRIEOptionGCSEndpoint
+	}
+|	"GCS_STORAGE_CLASS"
+	{
+		$$ = ast.BRIEOptionGCSStorageClass
+	}
+|	"GCS_PREDEFINED_ACL"
+	{
+		$$ = ast.BRIEOptionGCSPredefinedACL
+	}
+|	"GCS_CREDENTIALS_FILE"
+	{
+		$$ = ast.BRIEOptionGCSCredentialsFile
+	}
+
+BRIEOption:
+	BRIEIntegerOptionName EqOpt LengthNum
+	{
+		$$ = &ast.BRIEOption{
+			Tp:        $1.(ast.BRIEOptionType),
+			UintValue: $3.(uint64),
+		}
+	}
+|	BRIEStringOptionName EqOpt stringLit
+	{
+		$$ = &ast.BRIEOption{
+			Tp:       $1.(ast.BRIEOptionType),
+			StrValue: $3,
+		}
+	}
+|	"SNAPSHOT" EqOpt LengthNum TimestampUnit "AGO"
+	{
+		unit, err := $4.(ast.TimeUnitType).Duration()
+		if err != nil {
+			yylex.AppendError(err)
+			return 1
+		}
+		// TODO: check overflow?
+		$$ = &ast.BRIEOption{
+			Tp:        ast.BRIEOptionBackupTimeAgo,
+			UintValue: $3.(uint64) * uint64(unit),
+		}
+	}
+|	"RATE_LIMIT" EqOpt LengthNum "MB" '/' "SECOND"
+	{
+		// TODO: check overflow?

@tangenta RATE_LIMIT = 9223372036854775807 MB/SECOND... admittedly this is not a real issue 😅

kennytm

comment created time in 12 hours
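
The TODO in the quoted hunk is about multiplication overflow when converting the option value. A hedged sketch of the kind of check it hints at (a hypothetical helper, not the actual parser code):

package main

import (
	"fmt"
	"math"
)

// mbToBytes converts a RATE_LIMIT value given in MB/second into
// bytes/second, returning an error instead of silently wrapping
// when the multiplication would overflow a uint64.
func mbToBytes(n uint64) (uint64, error) {
	const bytesPerMB = uint64(1) << 20
	if n > math.MaxUint64/bytesPerMB {
		return 0, fmt.Errorf("RATE_LIMIT value %d MB/second overflows uint64", n)
	}
	return n * bytesPerMB, nil
}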

push event pingcap/parser

Yuanjia Zhang

commit sha d376009d7d9f99c480e0cf6e1b1f2d6969405ba5

parser: support the `SELECT ... INTO OUTFILE` syntax (#745) * SELECT OUTFILE * fix CI * update * add more tests

view details

push time in 13 hours

PR merged pingcap/parser

parser: support the `SELECT ... INTO OUTFILE` syntax status/LGT2 type/new-feature


What problem does this PR solve?

Fix #722.

Check List

Tests

  • Unit test
+7107 -6942

2 comments

5 changed files

qw4990

pr closed time in 13 hours

issue closed pingcap/parser

Support the `SELECT ... INTO OUTFILE` syntax

Feature Request

Is your feature request related to a problem? Please describe: No.

Describe the feature you'd like: From MySQL documentation 13.2.10.1 SELECT ... INTO OUTFILE Syntax:

SELECT ... INTO OUTFILE writes the selected rows to a file. Column and line terminators can be specified to produce a specific output format.

It helps us store the query result in CSV form.

Here is a discarded PR that may serve as a reference.

NOTE: do not support INTO DUMPFILE and INTO VARIABLES; supporting the INTO OUTFILE syntax is enough for this issue.

closed time in 13 hours

qw4990

push event kennytm/br

kennytm

commit sha 9c3b0ae58a3f62b522ac16006bfea0be99fa9c25

restore: fix test build failure

view details

push time in 13 hours

PR opened tikv/sysinfo

sysinfo: build cache-size on x86 only

Works around lovesegfault/cache-size#3.

+19 -0

0 comments

2 changed files

pr created time in 13 hours

create branch tikv/sysinfo

branch : kennytm/make-cache-size-test-only

created branch time in 13 hours

issue comment tikv/tikv

Cannot compile TiKV on ARM64

Thanks for the report.

The build failure is caused by lovesegfault/cache-size#3. We'll work around it by making cache-size (and thus raw-cpuid) an x86-only dependency.

n0vad3v

comment created time in 13 hours
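
For illustration, the workaround can be expressed as a target-specific dependency in Cargo.toml; a sketch under that assumption (the stanza and version are illustrative, not necessarily the actual tikv/sysinfo patch):

# Hypothetical Cargo.toml stanza: only depend on cache-size (and thus
# raw-cpuid) when building for x86, so other architectures skip the
# crate entirely. The version number is illustrative.
[target.'cfg(any(target_arch = "x86", target_arch = "x86_64"))'.dependencies]
cache-size = "0.4"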

issue opened lovesegfault/cache-size

Using `cache-size` on a non-x86 target causes it to fail to build, not just be unusable.

The README mentions:

Currently this crate only supports x86 CPUs, since it relies on the CPUID instruction, via the raw_cpuid crate. It is a goal to support other architectures; PRs are welcome!

We expected this to mean that outside of x86 the crate would simply return None for everything. Instead, it fails to build due to the hard dependency on raw_cpuid:

error[E0433]: failed to resolve: could not find `arch` in `self`
  --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/raw-cpuid-7.0.3/src/lib.rs:50:37
   |
50 |         let result = unsafe { self::arch::__cpuid_count(a, c) };
   |                                     ^^^^ could not find `arch` in `self`

It would be nice if depending on cache-size were still possible outside x86.

created time in 13 hours

Pull request review comment pingcap/br

BR support TLS

+# config of tidb
+
+# Schema lease duration
+# There are lot of ddl in the tests, setting this
+# to 360s to test whther BR is gracefully shutdown.
# to 360s to test whether BR is gracefully shutdown.
3pointer

comment created time in 14 hours

Pull request review comment pingcap/br

BR support TLS

 const (
 )
 
 // ResetTS resets the timestamp of PD to a bigger value
-func ResetTS(pdAddr string, ts uint64) error {
+func ResetTS(pdAddr string, ts uint64, tlsConf *tls.Config) error {
 	req, err := json.Marshal(struct {
 		TSO string `json:"tso,omitempty"`
 	}{TSO: fmt.Sprintf("%d", ts)})
 	if err != nil {
 		return err
 	}
-	// TODO: Support TLS
-	reqURL := "http://" + pdAddr + resetTSURL
+	prefix := "http://"
+	if tlsConf != nil {
+		prefix = "https://"
+		http.DefaultClient.Transport = &http.Transport{
+			TLSClientConfig: tlsConf,
+		}

Do not change global variables outside your module!

3pointer

comment created time in 14 hours
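
One way to address this comment is to give ResetTS a request-scoped client instead of touching http.DefaultClient; a minimal sketch under that assumption (the POST call and URL layout are illustrative, not the actual fix):

package main

import (
	"bytes"
	"crypto/tls"
	"net/http"
	"time"
)

// resetTS sketches a TLS-aware ResetTS that leaves the process-wide
// http.DefaultClient untouched by building its own client.
func resetTS(pdAddr, resetTSURL string, body []byte, tlsConf *tls.Config) error {
	cli := &http.Client{Timeout: 30 * time.Second}
	prefix := "http://"
	if tlsConf != nil {
		prefix = "https://"
		cli.Transport = &http.Transport{TLSClientConfig: tlsConf}
	}
	resp, err := cli.Post(prefix+pdAddr+resetTSURL, "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	return resp.Body.Close()
}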

Pull request review comment pingcap/br

BR support TLS

+-----BEGIN CERTIFICATE-----

This certificate expires in 5 years at Feb 11 04:12:00 2025 GMT.

If you're going to hard-code the certs, please make the expiry date longer (for comparison, TiKV's test cert expires in 100 years, at Oct 31 21:12:44 2117 GMT).

3pointer

comment created time in 14 hours

Pull request review comment pingcap/br

BR support TLS

+-----BEGIN CERTIFICATE REQUEST-----

any reason we need to commit the CSRs?

3pointer

comment created time in 14 hours

Pull request review comment pingcap/br

BR support TLS

 func pdRequest(
 }
 
 // NewMgr creates a new Mgr.
-func NewMgr(ctx context.Context, pdAddrs string, storage tikv.Storage) (*Mgr, error) {
+func NewMgr(
+	ctx context.Context,
+	pdAddrs string,
+	storage tikv.Storage,
+	tlsConf *tls.Config,
+	securityOption pd.SecurityOption) (*Mgr, error) {
 	addrs := strings.Split(pdAddrs, ",")
 
 	failure := errors.Errorf("pd address (%s) has wrong format", pdAddrs)
 	cli := &http.Client{Timeout: 30 * time.Second}
+	if tlsConf != nil {
+		cli = &http.Client{
+			Timeout: 30 * time.Second,
+			Transport: &http.Transport{
+				TLSClientConfig: tlsConf,
+			},
+		}

You could just mutate cli here.

cli.Transport = &http.Transport{TLSClientConfig: tlsConf}

But I suggest cloning the DefaultTransport because of those non-trivial default settings.

3pointer

comment created time in 15 hours
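
Since Go 1.13 the clone the reviewer suggests can be written with http.Transport's Clone method, which copies those non-trivial defaults (proxy, connection pooling, timeouts); a sketch assuming that Go version:

package main

import (
	"crypto/tls"
	"net/http"
	"time"
)

// newTLSClient builds a client whose transport inherits
// http.DefaultTransport's defaults and only overrides TLS.
func newTLSClient(tlsConf *tls.Config) *http.Client {
	transport := http.DefaultTransport.(*http.Transport).Clone()
	transport.TLSClientConfig = tlsConf
	return &http.Client{Timeout: 30 * time.Second, Transport: transport}
}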

Pull request review comment pingcap/br

BR support TLS

 func (cfg *Config) ParseFromFlags(flags *pflag.FlagSet) error {
 }
 
 // newMgr creates a new mgr at the given PD address.
-func newMgr(ctx context.Context, pds []string) (*conn.Mgr, error) {
+func newMgr(ctx context.Context, pds []string, tlsConfig TLSConfig) (*conn.Mgr, error) {
+	var (
+		tlsConf *tls.Config
+		err     error
+	)
 	pdAddress := strings.Join(pds, ",")
 	if len(pdAddress) == 0 {
 		return nil, errors.New("pd address can not be empty")
 	}
 
+	securityOption := pd.SecurityOption{}
+	if tlsConfig.Enable() {
+		conf := config.GetGlobalConfig()
+		conf.Security.ClusterSSLCA = tlsConfig.CA
+		conf.Security.ClusterSSLCert = tlsConfig.Cert
+		conf.Security.ClusterSSLKey = tlsConfig.Key
+		config.StoreGlobalConfig(conf)

Please put these in the glue; we don't need to touch TiDB's own global config when integrating BR into TiDB.

3pointer

comment created time in 14 hours

Pull request review comment pingcap/br

BR support TLS

 type TLSConfig struct {
 	Key  string `json:"key" toml:"key"`
 }
 
+// Enable checks if TLS open or not
+func (tls *TLSConfig) Enable() bool {
func (tls *TLSConfig) IsEnabled() bool {

The current name sounds like you can enable TLS with this method.

3pointer

comment created time in 14 hours

created tag tikv/importer

tag v3.0.10

tikv-importer is a front-end that helps ingest large numbers of KV pairs into a TiKV cluster

created time in 15 hours

release tikv/importer

v3.0.10

released time in 15 hours

delete branch tikv/importer

delete branch : kennytm/set-version-to-3.0.10

delete time in 15 hours

push event tikv/importer

kennytm

commit sha f81848414e5c703a74fb27a354ae431628cce833

Cargo.toml: set version to 3.0.10 and up tikv deps to 3.0.9 (#42) Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in 15 hours

PR merged tikv/importer

Cargo.toml: set version to 3.0.10 and up tikv deps to 3.0.9 status/LGT1


What have you changed? (mandatory)

  • Set version to 3.0.10
  • Update TiKV deps to 3.0.9

What are the type of the changes? (mandatory)

  • Improvement (change which is an improvement to an existing feature)

How has this PR been tested? (mandatory)

N/A

Does this PR affect TiDB Lightning? (mandatory)

No

Refer to a related PR or issue link (optional)

Benchmark result if necessary (optional)

Add a few positive/negative examples (optional)

+58 -58

1 comment

2 changed files

kennytm

pr closed time in 15 hours

PR opened tikv/importer

Cargo.toml: set version to 3.0.10 and up tikv deps to 3.0.9 status/PTAL


What have you changed? (mandatory)

  • Set version to 3.0.10
  • Update TiKV deps to 3.0.9

What are the type of the changes? (mandatory)

  • Improvement (change which is an improvement to an existing feature)

How has this PR been tested? (mandatory)

N/A

Does this PR affect TiDB Lightning? (mandatory)

No

Refer to a related PR or issue link (optional)

Benchmark result if necessary (optional)

Add a few positive/negative examples (optional)

+58 -58

0 comments

2 changed files

pr created time in 15 hours

create branch tikv/importer

branch : kennytm/set-version-to-3.0.10

created branch time in 15 hours

push event pingcap/parser

Jack Yu

commit sha 604cb05cfbe97467f37383085f134444897b3349

mysql: sort the error code (#737)

view details

crazycs

commit sha d65f5147dd9fe24f6f479451caacb0f3cf072b14

*: remove until timestamp syntax in flashback since we not support this (#733)

view details

张某

commit sha ca9c6dfc8b3ed473b8316ed2bb6b3b8dfdee4ce8

add `show table $table_name next_row_id` syntax (#738) * add `show table $table_name next_row_id` syntax Signed-off-by: zhang555 <4598181@qq.com> add show table next row id Signed-off-by: zhang555 <4598181@qq.com> add show table next row id Signed-off-by: zhang555 <4598181@qq.com> add show table next row id Signed-off-by: zhang555 <4598181@qq.com> add show table next row id Signed-off-by: zhang555 <4598181@qq.com> add show table next row id Signed-off-by: zhang555 <4598181@qq.com> add `show table next_row_id` syntax Signed-off-by: zhang555 <4598181@qq.com> add `show table $table_name next_row_id` syntax Signed-off-by: zhang555 <4598181@qq.com> add `show table next_row_id` syntax Signed-off-by: zhang555 <4598181@qq.com> add `show table $table_name next_row_id` syntax Signed-off-by: zhang555 <4598181@qq.com> add `show table next_row_id` syntax Signed-off-by: zhang555 <4598181@qq.com> * add `show table $table_name next_row_id` syntax Signed-off-by: zhang555 <4598181@qq.com> * add `show table $table_name next_row_id` syntax Signed-off-by: zhang555 <4598181@qq.com> * add syntax Signed-off-by: zhang555 <4598181@qq.com> * Merge branch 'master' into mytidb1 Signed-off-by: zhang555 <4598181@qq.com> # Conflicts: # parser.go

view details

Arenatlx

commit sha ed18580a5c44a67d54acb2c3ad791ebda9f4d77a

add errcode and errname of unsupported sequence default value for column type (#739)

view details

tangenta

commit sha 77fd7e3e8fa098b58bb97a81d23174f944615f9d

mysql/errname.go: update error message of ErrCantGetValidID (#741)

view details

lysu

commit sha e45da55eb72982ce565caed843b59e74d624b0f7

support alter instance reload tls (#740)

view details

crazycs

commit sha 0829643f461ced7f80f9160824630d14d33ed957

model: add partition replica available info to support partition table in tiflash (#742) Signed-off-by: crazycs <crazycs520@gmail.com>

view details

bb7133

commit sha 68af3da1f0528304e686ea0e06dd911876a4d634

parser: add WEIGHT_STRING() function (#743)

view details

kennytm

commit sha f9e078c007ec0f056ea6f5b7ccf78b933645c9d7

*: support parsing BACKUP and RESTORE statements

view details

push time in a day

PR opened pingcap/parser

*: support parsing BACKUP and RESTORE statements status/PTAL type/enhancement


What problem does this PR solve?

Added the BACKUP, RESTORE and SHOW BACKUP/RESTORE statements to the parser.

BACKUP DATABASE * TO 'local:///tmp/archive01/';
BACKUP DATABASE `a` TO 'local:///tmp/archive02/';
BACKUP TABLE `a`.`b` TO 'local:///tmp/archive03/';
RESTORE DATABASE * FROM 'local:///tmp/archive01/';
RESTORE DATABASE `a` FROM 'local:///tmp/archive02/';
RESTORE TABLE `a`.`b` FROM 'local:///tmp/archive03/';
BACKUP DATABASE * FULL TO 'local:///tmp/archive04/';
BACKUP DATABASE * INCREMENTAL UNTIL TIMESTAMP '2020-02-02 14:15:16' TO 'local:///tmp/archive05/';
BACKUP DATABASE * INCREMENTAL UNTIL TIMESTAMP_ORACLE 1234567890 TO 'local:///tmp/archive06/';

BACKUP DATABASE * TO 's3://my_bucket/path' 
    RATE_LIMIT = 100 MB/SECOND 
    SNAPSHOT = 2 HOUR AGO 
    CONCURRENCY = 32 
    S3_REGION = 'us-west-1' 
    S3_USE_ACCELERATE_ENDPOINT = 1;

SHOW BACKUP;
SHOW RESTORE;
SHOW BACKUP LIKE 'b01234';
SHOW BACKUP WHERE status <> 'running';

These are needed for integrating BR into TiDB (PR will come later).

(Note: because the new unresolved keyword S3_USE_ACCELERATE_ENDPOINT is longer than all existing tokens, forcing gofmt to replace the entire token map, I've taken advantage of this to sort the entire tokenMap)

What is changed and how it works?

Check List

Tests

  • Unit test

Code changes

Side effects

Related changes

  • Need to update the documentation
  • Need to be included in the release note
+9790 -8882

0 comments

7 changed files

pr created time in a day

push event pingcap/parser

kennytm

commit sha e62fb52d80420174e2a80ddfc5a723c779637126

*: support parsing BACKUP and RESTORE statements

view details

push time in a day

create branch pingcap/parser

branch : kennytm/backup-restore-statements

created branch time in a day

Pull request review comment pingcap/parser

parser: Support the `SELECT ... INTO OUTFILE` syntax

 func (s *testParserSuite) TestDMLStmt(c *C) {
 		{"SELECT * from t lock in share mode", true, "SELECT * FROM `t` LOCK IN SHARE MODE"},
 		{"SELECT * from t for update nowait", true, "SELECT * FROM `t` FOR UPDATE NOWAIT"},
 
+		// select into outfile
+		{"select a, b from t into outfile '/tmp/result.txt'", true, "SELECT `a`,`b` FROM `t` INTO OUTFILE '/tmp/result.txt'"},
+		{"select a,b,a+b from t into outfile '/tmp/result.txt' fields terminated BY ','", true, "SELECT `a`,`b`,`a`+`b` FROM `t` INTO OUTFILE '/tmp/result.txt' FIELDS TERMINATED BY ','"},
+		{"select a,b,a+b from t into outfile '/tmp/result.txt' fields terminated BY ',' enclosed BY '\"'", true, "SELECT `a`,`b`,`a`+`b` FROM `t` INTO OUTFILE '/tmp/result.txt' FIELDS TERMINATED BY ',' ENCLOSED BY '\"'"},
+		{"select a,b,a+b from t into outfile '/tmp/result.txt' fields terminated BY ',' optionally enclosed BY '\"'", true, "SELECT `a`,`b`,`a`+`b` FROM `t` INTO OUTFILE '/tmp/result.txt' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'"},
+		{"select a,b,a+b from t into outfile '/tmp/result.txt' lines terminated BY '\n'", true, "SELECT `a`,`b`,`a`+`b` FROM `t` INTO OUTFILE '/tmp/result.txt'"},
+		{"select a,b,a+b from t into outfile '/tmp/result.txt' fields terminated BY ',' optionally enclosed BY '\"' lines terminated BY '\r'", true, "SELECT `a`,`b`,`a`+`b` FROM `t` INTO OUTFILE '/tmp/result.txt' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' LINES TERMINATED BY '\r'"},
+		{"select a,b,a+b from t into outfile '/tmp/result.txt' fields terminated BY ',' enclosed BY '\"' lines terminated BY '\r'", true, "SELECT `a`,`b`,`a`+`b` FROM `t` INTO OUTFILE '/tmp/result.txt' FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\r'"},
+		{"select a,b,a+b from t into outfile '/tmp/result.txt' fields terminated BY ',' optionally enclosed BY '\"' lines starting by 'xy' terminated BY '\r'", true, "SELECT `a`,`b`,`a`+`b` FROM `t` INTO OUTFILE '/tmp/result.txt' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' LINES STARTING BY 'xy' TERMINATED BY '\r'"},
+		{"select a,b,a+b from t into outfile '/tmp/result.txt' fields terminated BY ',' enclosed BY '\"' lines starting by 'xy' terminated BY '\r'", true, "SELECT `a`,`b`,`a`+`b` FROM `t` INTO OUTFILE '/tmp/result.txt' FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES STARTING BY 'xy' TERMINATED BY '\r'"},

please add a test case involving both INTO OUTFILE and FOR UPDATE clauses.

select 1 for update into outfile '/tmp/1.csv';
qw4990

comment created time in a day

pull request comment rust-lang/rfcs

RFC: Add a new attribute, `#[isa]`

Before reading the content I thought #[isa] means "is a" 😓.

ketsuban

comment created time in 2 days

Pull request review comment pingcap/tidb

expression: add builtin function `WEIGHT_STRING()`

 type loadFileFunctionClass struct {
 func (c *loadFileFunctionClass) getFunction(ctx sessionctx.Context, args []Expression) (builtinFunc, error) {
 	return nil, errFunctionNotExists.GenWithStackByArgs("FUNCTION", "load_file")
 }
+
+type weightStringPadding byte
+
+const (
+	// weightStringPaddingNone is used for WEIGHT_STRING(expr) if the expr is non-numeric.
+	weightStringPaddingNone weightStringPadding = 0xFF
+	// weightStringPaddingAsChar is used for WEIGHT_STRING(expr AS CHAR(x)) and the expr is non-numeric.
+	weightStringPaddingAsChar = 0x20
+	// weightStringPaddingAsBinary is used for WEIGHT_STRING(expr as BINARY(x)) and the expr is non-numeric.
+	weightStringPaddingAsBinary = 0x00
+	// weightStringPaddingNull is used for WEIGHT_STRING(expr [AS (CHAR|BINARY)]) if the expr is numeric.
+	weightStringPaddingNull = 0xFE

On MySQL 8.0 this "NULL" transformation only applies to CHAR.

mysql> select weight_string(456789 as binary(4)), weight_string(3.1415926535e0 as binary(4)), weight_string(456789 as char(4)), weight_string(3.1415926535e0 as char(4))\G
*************************** 1. row ***************************
        weight_string(456789 as binary(4)): 4567
weight_string(3.1415926535e0 as binary(4)): 3.14
          weight_string(456789 as char(4)): NULL
  weight_string(3.1415926535e0 as char(4)): NULL
1 row in set, 2 warnings (0.00 sec)

mysql> show warnings;
+---------+------+-----------------------------------------------------+
| Level   | Code | Message                                             |
+---------+------+-----------------------------------------------------+
| Warning | 1292 | Truncated incorrect BINARY(4) value: '456789'       |
| Warning | 1292 | Truncated incorrect BINARY(4) value: '3.1415926535' |
+---------+------+-----------------------------------------------------+
2 rows in set (0.00 sec)

(The same for MariaDB.)

bb7133

comment created time in 2 days

pull request comment pingcap/docs-cn

tidb-lightning: add glossary

But is it still [DNM]?

anotherrachel

comment created time in 2 days

push event pingcap/tidb-lightning

kennytm

commit sha 6066db29ad39e2017956ba1b878f9dbef3056915

config: the default csv.null should be a capital \N not small \n

view details

push time in 2 days

PR opened pingcap/tidb-lightning

Support TLS; Reduce the need of config.toml in integration tests Should Update Ansible Should Update Docs priority/important status/DNM type/feature

DNM: Blocked on tikv/importer#40.

What problem does this PR solve?

Fix #262.

What is changed and how it works?

  1. Code changes:

    • Added the [security] section in the config to read CA, cert and key. These are used to construct the standard *tls.Config (see the sketch after this list).
    • This config is used to:
    • All of these operations are grouped into a struct common.TLS for simplified management. This struct mainly acts as an http.Client to fetch JSON objects, plus methods to produce options for securing gRPC and MySQL protocols.
  2. Test changes:

    • The entire integration test is changed to use TLS for all communications.
    • The certs need to be passed into every tidb-lightning and tidb-lightning-ctl invocation. To simplify future changes, the run_lightning and run_lightning_ctl helper scripts now define the most common settings on the command line.
    • Existing config.tomls are simplified to retain only the essential settings.
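
A minimal sketch of how a [security] section's CA/cert/key might be turned into the standard *tls.Config mentioned above (the helper name and error handling are assumptions, not Lightning's actual code):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io/ioutil"
)

// toTLSConfig is a hypothetical helper building a *tls.Config from the
// CA, certificate and key paths of a [security] config section.
func toTLSConfig(caPath, certPath, keyPath string) (*tls.Config, error) {
	caPEM, err := ioutil.ReadFile(caPath)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, fmt.Errorf("failed to parse CA certificate %s", caPath)
	}
	cert, err := tls.LoadX509KeyPair(certPath, keyPath)
	if err != nil {
		return nil, err
	}
	return &tls.Config{
		RootCAs:      pool,
		Certificates: []tls.Certificate{cert},
	}, nil
}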

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
    • check that the web interface serves HTTPS if required.

Side effects

Related changes

  • Need to update the documentation
    • Describe the new config and CLI parameters
  • Need to update the tidb-ansible repository
    • Copy the global TLS settings to config
  • Need to be included in the release note
    • Note TLS support
+700 -1439

0 comments

105 changed files

pr created time in 2 days

push event pingcap/tidb-lightning

kennytm

commit sha bd1c1f85417a774e34343e46dbfbf0f901319b6e

*: go fmt

view details

kennytm

commit sha 25582d14adca72993c1aeb19765e2e89a991366c

*: support TLS

view details

kennytm

commit sha 2b3e0eebbe88ea676169e9b705be2f02258e57a3

tests: enable TLS for all components in the integration test

view details

kennytm

commit sha a3b1f06b66a4e0bad6675635c858eefe063f5ae1

tests: specify TLS and most default arguments via command line * refactored the tests so only essential settings remained in config.toml

view details

push time in 2 days

push event pingcap/tidb-lightning

kennytm

commit sha 6a12cce0e2c54e534ce828cc7a016ec789fd8104

tests: use TLS connection for everything

view details

kennytm

commit sha 1972ff38317b839a372e8357f5d43ee4585a54c0

tls2

view details

push time in 2 days

Pull request review comment tikv/tikv

Migrate to abstract TablePropertiesCollection types

 impl TablePropertiesExt for RocksEngine {
         let ranges: Vec<_> = ranges.iter().map(util::range_to_rocks_range).collect();
         let raw = self
             .as_inner()
-            .get_properties_of_tables_in_range(cf.as_inner(), &ranges);
+            .get_properties_of_tables_in_range_rc(cf.as_inner(), &ranges);
         let raw = raw.map_err(Error::Engine)?;
         Ok(RocksTablePropertiesCollection::from_raw(raw))
     }
 }
 
-pub struct RocksTablePropertiesCollection(RawTablePropertiesCollection);
+type IA = RocksTablePropertiesCollectionIter;
+type PKeyA = RocksTablePropertiesKey;
+type PA = RocksTableProperties;
+type UCPA = RocksUserCollectedProperties;
+
+pub struct RocksTablePropertiesCollection(raw::TablePropertiesCollection);
 
 impl RocksTablePropertiesCollection {
-    pub fn from_raw(raw: RawTablePropertiesCollection) -> RocksTablePropertiesCollection {
+    fn from_raw(raw: raw::TablePropertiesCollection) -> RocksTablePropertiesCollection {
         RocksTablePropertiesCollection(raw)
     }
+}
+
+impl TablePropertiesCollection<IA, PKeyA, PA, UCPA> for RocksTablePropertiesCollection {
+    fn iter(&self) -> RocksTablePropertiesCollectionIter {
+        RocksTablePropertiesCollectionIter(self.0.iter())
+    }
+
+    fn len(&self) -> usize {
+        self.0.len()
+    }
+}
+
+pub struct RocksTablePropertiesCollectionIter(raw::TablePropertiesCollectionIter);
+
+impl TablePropertiesCollectionIter<PKeyA, PA, UCPA> for RocksTablePropertiesCollectionIter {}
+
+impl Iterator for RocksTablePropertiesCollectionIter {
+    type Item = (RocksTablePropertiesKey, RocksTableProperties);
+
+    fn next(&mut self) -> Option<Self::Item> {
+        self.0
+            .next()
+            .map(|(key, props)| (RocksTablePropertiesKey(key), RocksTableProperties(props)))
+    }
+}
+
+pub struct RocksTablePropertiesKey(raw::TablePropertiesKey);
+
+impl TablePropertiesKey for RocksTablePropertiesKey {}
+
+impl Deref for RocksTablePropertiesKey {
+    type Target = str;
+
+    fn deref(&self) -> &str {
+        self.0.deref()
+    }
+}
+
+pub struct RocksTableProperties(raw::TableProperties);
+
+impl TableProperties<UCPA> for RocksTableProperties {
+    fn num_entries(&self) -> u64 {
+        self.0.num_entries()
+    }
+
+    fn user_collected_properties(&self) -> RocksUserCollectedProperties {
+        RocksUserCollectedProperties(self.0.user_collected_properties())
+    }
+}
+
+pub struct RocksUserCollectedProperties(raw::UserCollectedProperties);
+
+impl UserCollectedProperties for RocksUserCollectedProperties {
+    fn get<Q: AsRef<[u8]>>(&self, index: Q) -> Option<&[u8]> {
+        self.0.get(index)
+    }
 
-    // for test
-    pub fn get_raw(&self) -> &RawTablePropertiesCollection {
-        &self.0
+    fn len(&self) -> usize {
+        self.0.len()
     }
 }
 
-impl TablePropertiesCollection for RocksTablePropertiesCollection {}
+// FIXME: DecodeProperties doesn't belong in this crate,
+// and it looks like the properties module has functional overlap
+// with this module.
+use crate::properties::DecodeProperties;
+
+impl DecodeProperties for RocksUserCollectedProperties {
+    fn decode(&self, k: &str) -> tikv_util::codec::Result<&[u8]> {
+        match self.get(k.as_bytes()) {
+            Some(v) => Ok(v),
+            None => Err(tikv_util::codec::Error::KeyNotFound),
+        }
        self.get(k.as_bytes())
            .ok_or(tikv_util::codec::Error::KeyNotFound)
brson

comment created time in 2 days

Pull request review comment tikv/tikv

Migrate to abstract TablePropertiesCollection types

 impl TablePropertiesExt for RocksEngine {
         let ranges: Vec<_> = ranges.iter().map(util::range_to_rocks_range).collect();
         let raw = self
             .as_inner()
-            .get_properties_of_tables_in_range(cf.as_inner(), &ranges);
+            .get_properties_of_tables_in_range_rc(cf.as_inner(), &ranges);
         let raw = raw.map_err(Error::Engine)?;
         Ok(RocksTablePropertiesCollection::from_raw(raw))
     }
 }
 
-pub struct RocksTablePropertiesCollection(RawTablePropertiesCollection);
+type IA = RocksTablePropertiesCollectionIter;
+type PKeyA = RocksTablePropertiesKey;
+type PA = RocksTableProperties;
+type UCPA = RocksUserCollectedProperties;

ditto

brson

comment created time in 2 days

Pull request review comment tikv/tikv

Migrate to abstract TablePropertiesCollection types

 pub trait TablePropertiesExt: CFHandleExt {
     }
 }
 
-pub trait TablePropertiesCollection {}
+pub trait TablePropertiesCollection<I, PKey, P, UCP>
+where
+    I: TablePropertiesCollectionIter<PKey, P, UCP>,
+    PKey: TablePropertiesKey,
+    P: TableProperties<UCP>,
+    UCP: UserCollectedProperties,
+{
+    fn iter(&self) -> I;
+
+    fn len(&self) -> usize;
+
+    fn is_empty(&self) -> bool {
+        self.len() == 0
+    }
+}
+
+pub trait TablePropertiesCollectionIter<PKey, P, UCP>
+where
+    Self: Iterator<Item = (PKey, P)>,

ditto

brson

comment created time in 2 days

Pull request review comment tikv/tikv

Migrate to abstract TablePropertiesCollection types

 pub trait TablePropertiesExt: CFHandleExt {
     }
 }
 
-pub trait TablePropertiesCollection {}
+pub trait TablePropertiesCollection<I, PKey, P, UCP>
+where
+    I: TablePropertiesCollectionIter<PKey, P, UCP>,
+    PKey: TablePropertiesKey,
+    P: TableProperties<UCP>,
+    UCP: UserCollectedProperties,
+{
+    fn iter(&self) -> I;
+
+    fn len(&self) -> usize;
+
+    fn is_empty(&self) -> bool {
+        self.len() == 0
+    }
+}
+
+pub trait TablePropertiesCollectionIter<PKey, P, UCP>
+where
+    Self: Iterator<Item = (PKey, P)>,
+    PKey: TablePropertiesKey,
+    P: TableProperties<UCP>,
+    UCP: UserCollectedProperties,
+{
+}
+
+pub trait TablePropertiesKey
+where
+    Self: Deref<Target = str>,
+{
+}

why not write it like

pub trait TablePropertiesKey: Deref<Target = str> {}
brson

comment created time in 2 days

Pull request review comment tikv/tikv

Migrate to abstract TablePropertiesCollection types

 impl TablePropertiesExt for PanicEngine {
     }
 }
 
+type IA = PanicTablePropertiesCollectionIter;
+type PKeyA = PanicTablePropertiesKey;
+type PA = PanicTableProperties;
+type UCPA = PanicUserCollectedProperties;

I find it odd that these aliases are only used in generics but not elsewhere. These aliases are introduced so that the two impl at lines 33 and 45 can be kept as single lines, but the unaliased versions are probably not really bad enough for these short aliases?

impl
    TablePropertiesCollection<
        PanicTablePropertiesCollectionIter,
        PanicTablePropertiesKey,
        PanicTableProperties,
        PanicUserCollectedProperties,
    > for PanicTablePropertiesCollection
{
}

impl
    TablePropertiesCollectionIter<
        PanicTablePropertiesKey,
        PanicTableProperties,
        PanicUserCollectedProperties,
    > for PanicTablePropertiesCollectionIter
{
}
brson

comment created time in 2 days

issue comment rust-lang/rfcs

Possible new feature: Option::is_any()

I don't see how .is_some(x) is "sweeter" than == Some(x)

let is_manager = taxpayer.manager.is_some(true);
let is_manager = taxpayer.manager == Some(true);
let is_manager = taxpayer.manager.eq(&Some(true)); // another spelling of ==

If the method takes a closure instead of values then it could have the advantage of lazily evaluating the compared term.

let is_manager = taxpayer.is_some_with(|tp| tp.is_manager());
let is_manager = taxpayer.map_or_default(|tp| tp.is_manager()); // a better name?
let is_manager = taxpayer.map_or(false, |tp| tp.is_manager());
silvioprog

comment created time in 3 days

delete branch pingcap/tidb-tools

delete branch : WangXiangUSTC-patch-1

delete time in 3 days

push event pingcap/tidb-tools

WangXiangUSTC

commit sha 41c903a5bb6e88761d54edad810d8d925ee32e48

Update common.go (#317)

view details

push time in 3 days

PR merged pingcap/tidb-tools

diff: increase the default timeout for db operate status/LGT2

What problem does this PR solve?

Users report that 5 seconds is sometimes not enough for database operations.

What is changed and how it works?

Increase it to 10 seconds.

+1 -1

1 comment

1 changed file

WangXiangUSTC

pr closed time in 3 days

pull request comment pingcap/tidb-tools

diff: increase the default timeout for db operate

/run-integration-tests

WangXiangUSTC

comment created time in 4 days

push event kennytm/tikv

gengliqi

commit sha e4807144f495d7d514501ef2d8852c5d4c67cf37

raftstore: set wait_merge_state to none after resuming pending state (#6615) Signed-off-by: Liqi Geng <gengliqiii@gmail.com>

view details

Jay

commit sha 8c3f41a99ed163d731a3a55415b0fa0b0b5f3c5b

fix test target (#6619) Signed-off-by: Jay Lee <BusyJayLee@gmail.com>

view details

gengliqi

commit sha a7af9469d4af92b409ddf75cadcbbdd356c9a144

raftstore: learner load merge target & fix a merge network recovery bug (#6598) Signed-off-by: Liqi Geng <gengliqiii@gmail.com> Signed-off-by: Jay Lee <BusyJayLee@gmail.com>

view details

kennytm

commit sha a96e89143a32dea7fb91563449604000b846bd42

Merge branch 'master' into sst-importer-write-to-memory

view details

push time in 4 days

pull request comment tikv/tikv

sst_importer: perform key rewrite in memory rather than on disk

/release

kennytm

comment created time in 4 days

PR opened tikv/tikv

sst_importer: perform key rewrite in memory rather than on disk C: Backup-Restore S: WIP


What have you changed?

During BR restore, we may need to rewrite the entire SST file because the prefix is changed. Previously we wrote directly to an SstWriter wrapping a local file, which was suspected to slow down restore due to disk I/O.

This PR attempts to move the I/O to a single bulk copy by creating the SstWriter in memory first.

What is the type of the changes?

  • Improvement (a change which is an improvement to an existing feature)

How is the PR tested?

  • Unit test
  • (To be done: tested against a 1 TB table)

Does this PR affect documentation (docs) or should it be mentioned in the release notes?

No.

Does this PR affect tidb-ansible?

No

Refer to a related PR or issue link (optional)

Benchmark result if necessary (optional)

Any examples? (optional)

+56 -62

0 comments

2 changed files

pr created time in 4 days

create branch kennytm/tikv

branch : sst-importer-write-to-memory

created branch time in 4 days

pull request comment pingcap/tidb

server: properly support status port over TLS (#14785)

😕

[2020-02-14T12:11:10.399Z] + GOPATH=/home/jenkins/agent/workspace/tidb_ghpr_build/go

[2020-02-14T12:11:10.399Z] + /home/jenkins/agent/workspace/tidb_ghpr_build/go/src/github.com/pingcap/tidb-build-plugin/cmd/pluginpkg/pluginpkg -pkg-dir /home/jenkins/agent/workspace/tidb_ghpr_build/go/src/github.com/pingcap/enterprise-plugin/audit -out-dir /home/jenkins/agent/workspace/tidb_ghpr_build/go/src/github.com/pingcap/enterprise-plugin/audit

[2020-02-14T12:11:10.960Z] # github.com/pingcap/enterprise-plugin/audit

[2020-02-14T12:11:10.960Z] ./audit.go:31:20: undefined: "github.com/pingcap/tidb/plugin".RejectReasonCtxValue

[2020-02-14T12:11:10.960Z] 2020/02/14 20:11:10 compile plugin source code failure, exit status 2

script returned exit code 1

BTW if we don't need to cherry-pick to release-4.0 (i.e. fast-forward to master is enough), please close this PR.

sre-bot

comment created time in 4 days

delete branch kennytm/tidb

delete branch : fix-tls-status-port

delete time in 4 days

created tag tikv/importer

tag v3.1.0-beta.2

tikv-importer is a front-end that helps ingest large numbers of KV pairs into a TiKV cluster

created time in 4 days

release tikv/importer

v3.1.0-beta.2

released time in 4 days

delete branch tikv/importer

delete branch : kennytm/set-version-to-3.1.0-beta.2

delete time in 4 days

push event tikv/importer

kennytm

commit sha da0a0018669171d5bcbbc8ae0409b8a55769ab4e

Cargo.toml: set version to v3.1.0-beta.2 and upgrade TiKV deps to v3.1.0-beta.1 (#41) * Cargo.toml: set ver = v3.1.0-beta.2 and up tikv deps to v3.1.0-beta.1 Signed-off-by: kennytm <kennytm@gmail.com> * stream: update code for pingcap/rust-rocksdb#412 Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in 4 days

PR merged tikv/importer

Cargo.toml: set version to v3.1.0-beta.2 and upgrade TiKV deps to v3.1.0-beta.1 status/LGT1


What have you changed? (mandatory)

Set version to v3.1.0-beta.2

What are the type of the changes? (mandatory)

  • Engineering (engineering change which doesn't change any feature or fix any issue)

How has this PR been tested? (mandatory)

N/A

Does this PR affect TiDB Lightning? (mandatory)

No

Refer to a related PR or issue link (optional)

Benchmark result if necessary (optional)

Add a few positive/negative examples (optional)

+84 -87

1 comment

3 changed files

kennytm

pr closed time in 4 days

issue comment rust-lang/rfcs

Make alloc-free string manipulation more ergonomic with String::mutate(&mut self, ...)

What about the less ambitious alternative, i.e. introducing the specific in-place modification functions?

impl String {
    pub fn trim_start_in_place(&mut self);
    pub fn trim_end_in_place(&mut self);
    pub fn trim_in_place(&mut self);
    pub fn trim_start_matches_in_place<'a>(&'a mut self, pat: impl Pattern<'a>);
    pub fn trim_end_matches_in_place<'a>(&'a mut self, pat: impl Pattern<'a>);
    pub fn trim_matches_in_place<'a>(&'a mut self, pat: impl Pattern<'a>);
}

@burdges

impl<T> Vec<T> {
    pub fn shift_left(&mut self, mid: usize);
}

This already exists: x.shift_left(mid) is the same as x.drain(..mid). (The same exists on String.)

mqudsi

comment created time in 4 days

Pull request review comment pingcap/docs-cn

*: replace loader by lightning

+---
+title: Back up and restore using mydumper/tidb lightning
+category: how-to
+aliases: ['/docs-cn/dev/how-to/maintain/backup-and-restore/mydumper-lightning']
+---
+
+# Backup and Restore
+
+This document describes in detail how to perform a full backup and restore of TiDB using `mydumper`/`tidb lightning`. For incremental backup and restore, use [TiDB Binlog](/dev/reference/tidb-binlog/overview.md).
+
+Here we assume the TiDB service information is as follows:
+
+|Name|Address|Port|User|Password|
+|----|-------|----|----|--------|
+|TiDB|127.0.0.1|4000|root|*|
+
+The following tools are used in this backup and restore process:
+
+- [Mydumper](/dev/reference/tools/mydumper.md) to export data from TiDB
+- [TiDB Lightning](/dev/reference/tools/tidb-lightning/overview.md) to import data into TiDB
+
+## Full backup and restore using `mydumper`/`tidb lightning`
+
+`mydumper` is a powerful data backup tool; see [`maxbube/mydumper`](https://github.com/maxbube/mydumper) for details.
+
+You can use [`mydumper`](/dev/reference/tools/mydumper.md) to export data from TiDB as a backup, and then use [TiDB Lightning](/dev/reference/tools/tidb-lightning/overview.md) to import it back into TiDB as a restore.
+
+> **Note:**
+>
+> The PingCAP engineering team has adapted `mydumper` specifically for TiDB, so it is recommended to use the [`mydumper`](/dev/reference/tools/mydumper.md) provided by PingCAP. Backing up and restoring with `mysqldump` takes a long time and is therefore not recommended either.
+
+### Best practices for full backup and restore with `mydumper`/`tidb lightning`
+
+To back up and restore data quickly (especially for databases with a huge amount of data), consider the following suggestions:
+
+* Keep the data files exported by `mydumper` as small as possible; the exported file size can be controlled with the `-F` parameter. If TiDB Lightning is later used to restore the backup files, it is recommended to set mydumper's `-F` parameter to `256` (in MB); if `loader` is used, set it to `64` (in MB).

Do we still need to talk about loader?

GregoryIan

comment created time in 4 days

Pull request review comment pingcap/docs-cn

*: replace loader by lightning

+---
+title: Back up and restore using mydumper/tidb lightning
+category: how-to
+aliases: ['/docs-cn/dev/how-to/maintain/backup-and-restore/mydumper-lightning']
+---
+
+# Backup and Restore
+
+This document describes in detail how to perform a full backup and restore of TiDB using `mydumper`/`tidb lightning`. For incremental backup and restore, use [TiDB Binlog](/dev/reference/tidb-binlog/overview.md).

Either tidb-lightning (the exe name) or "TiDB Lightning" (the product name)

GregoryIan

comment created time in 4 days

Pull request review comment pingcap/docs-cn

*: replace loader by lightning

+---
+title: Back up and restore using mydumper/tidb lightning
+category: how-to
+aliases: ['/docs-cn/dev/how-to/maintain/backup-and-restore/mydumper-lightning']
+---
+
+# Backup and Restore
+
+This document describes in detail how to perform a full backup and restore of TiDB using `mydumper`/`tidb lightning`. For incremental backup and restore, use [TiDB Binlog](/dev/reference/tidb-binlog/overview.md).
+
+Here we assume the TiDB service information is as follows:
+
+|Name|Address|Port|User|Password|
+|----|-------|----|----|--------|
+|TiDB|127.0.0.1|4000|root|*|

* = no password?

GregoryIan

comment created time in 4 days

Pull request review comment pingcap/docs-cn

*: replace loader by lightning

   + Operation and maintenance
     - [Common Ansible operations](/dev/how-to/maintain/ansible-operations.md)
     + Backup and restore
-      - [Back up and restore using Mydumper/Loader](/dev/how-to/maintain/backup-and-restore/mydumper-loader.md)
+      - [Back up and restore using Mydumper/TiDB-lightning](/dev/how-to/maintain/backup-and-restore/mydumper-lightning.md)

      - [Back up and restore using Mydumper/TiDB Lightning](/dev/how-to/maintain/backup-and-restore/mydumper-lightning.md)
GregoryIan

comment created time in 4 days

pull request comment tikv/importer

Cargo.toml: set version to v3.1.0-beta.2 and upgrade TiKV deps to v3.1.0-beta.1

PTAL @WangXiangUSTC

kennytm

comment created time in 5 days

push event tikv/importer

kennytm

commit sha d9b3c4b6b8338991a6837d537196396078e1747e

stream: update code for pingcap/rust-rocksdb#412 Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in 5 days

PR opened tikv/importer

Cargo.toml: set version to v3.1.0-beta.2 and upgrade TiKV deps to v3.1.0-beta.1 status/PTAL


What have you changed? (mandatory)

Set version to v3.1.0-beta.2

What are the type of the changes? (mandatory)

  • Engineering (engineering change which doesn't change any feature or fix any issue)

How has this PR been tested? (mandatory)

N/A

Does this PR affect TiDB Lightning? (mandatory)

No

Refer to a related PR or issue link (optional)

Benchmark result if necessary (optional)

Add a few positive/negative examples (optional)

+81 -81

0 comments

2 changed files

pr created time in 5 days

create branch tikv/importer

branch : kennytm/set-version-to-3.1.0-beta.2

created branch time in 5 days

pull request comment pingcap/tidb

server: properly support status port over TLS

PTAL @crazycs520 @lonng

kennytm

comment created time in 5 days

PR opened pingcap/tidb

server: properly support status port over TLS component/server needs-cherry-pick-4.0 status/PTAL type/bug-fix


What problem does this PR solve?

Fix #14784.

What is changed and how it works?

Previously CMux was placed on top of the gRPC / HTTPS listeners, so CMux was not able to inspect the decrypted bytes to determine which listener it should dispatch to. The result was that all traffic was sent to the gRPC listener, making all HTTP requests fail.

Now the TLS layer is extracted out, so the stack becomes TLS → CMux → (insecure gRPC / HTTP 1.1), allowing CMux to properly distinguish between HTTP and gRPC traffic.

(The use of CMux was introduced in #13693 which only exists in 4.0+)
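
A rough sketch of the described layering, using the soheilhy/cmux matchers (illustrative wiring under assumed names, not the actual patch):

package main

import (
	"crypto/tls"
	"net"
	"net/http"

	"github.com/soheilhy/cmux"
	"google.golang.org/grpc"
)

// serveSharedPort sketches the layering: terminate TLS first, then let
// cmux inspect the decrypted stream to split gRPC (HTTP/2 with the grpc
// content-type) from HTTP/1.1.
func serveSharedPort(raw net.Listener, tlsConf *tls.Config,
	grpcSrv *grpc.Server, statusSrv *http.Server) error {
	mux := cmux.New(tls.NewListener(raw, tlsConf))
	grpcL := mux.Match(cmux.HTTP2HeaderField("content-type", "application/grpc"))
	httpL := mux.Match(cmux.Any())
	go grpcSrv.Serve(grpcL)   // gRPC server created without its own TLS credentials
	go statusSrv.Serve(httpL) // status HTTP server
	return mux.Serve()
}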

Check List

Tests

  • Unit test
  • Manual test (add detailed scripts or steps below)
    • executing the script in #14784

Code changes

Side effects

Related changes

Release note

+66 -30

0 comments

4 changed files

pr created time in 5 days

create branch kennytm/tidb

branch : fix-tls-status-port

created branch time in 5 days

issue opened pingcap/tidb

Status port does not work when cluster-TLS (SSL) is enabled

Bug Report

Please answer these questions before submitting your issue. Thanks!

  1. What did you do?

Start TiDB with cluster TLS:

#!/bin/sh
set -eu
export MSYS2_ARG_CONV_EXCL=\*

# Generate certs

cat - > ipsan.cnf <<EOF
[dn]
CN = localhost
[req]
distinguished_name = dn
[EXT]
subjectAltName = @alt_names
keyUsage = digitalSignature
extendedKeyUsage = clientAuth,serverAuth
[alt_names]
DNS.1 = localhost
IP.1 = 127.0.0.1
EOF

openssl ecparam -out ca.key -name prime256v1 -genkey
openssl req -new -batch -sha256 -subj '/CN=localhost' -key ca.key -out ca.csr
openssl x509 -req -sha256 -days 2 -in ca.csr -signkey ca.key -out ca.pem 2> /dev/null

openssl ecparam -out cluster.key -name prime256v1 -genkey
openssl req -new -batch -sha256 -subj '/CN=localhost' -key cluster.key -out cluster.csr
openssl x509 -req -sha256 -days 1 -extensions EXT -extfile ipsan.cnf -in cluster.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out cluster.pem 2> /dev/null

# Start TiDB

cat - > tidb-config.toml <<EOF
host = "127.0.0.1"
port = 4000
[status]
status-host = "127.0.0.1"
status-port = 10080
[security]
cluster-ssl-ca = "ca.pem"
cluster-ssl-cert = "cluster.pem"
cluster-ssl-key = "cluster.key"
EOF

../bin/tidb-server --config tidb-config.toml


Then try to connect to the status port

curl --cacert ca.pem 'https://127.0.0.1:10080/status'
  2. What did you expect to see?

JSON output like

{"connections":0,"version":"5.7.25-TiDB-v4.0.0-beta-136-g8c804f40d","git_hash":"8c804f40dda5ef231a73f459ecd812fb135b9fba"}
  3. What did you see instead?

Connecting using HTTP/2 returned failure:

curl: (92) HTTP/2 stream 0 was not closed cleanly: INTERNAL_ERROR (err 2)

Furthermore, connecting using HTTP/1.1 does not succeed either, and the server is definitely serving HTTPS.

$ curl --http1.1 --cacert ca.pem 'https://127.0.0.1:10080/status'
curl: (1) Received HTTP/0.9 when not allowed

$ curl --http0.9 --cacert ca.pem 'https://127.0.0.1:10080/status'
curl: (92) HTTP/2 stream 0 was not closed cleanly: INTERNAL_ERROR (err 2)

$ curl 'http://127.0.0.1:10080/status'
Client sent an HTTP request to an HTTPS server.
curl: (56) Recv failure: Connection was reset

Note that the gRPC server on the same port works correctly via TLS.

  4. What version of TiDB are you using (tidb-server -V or run select tidb_version(); on TiDB)?

master version

created time in 5 days

delete branch pingcap/tidb-lightning

delete branch : kennytm/upgrade-deps-2020-02

delete time in 6 days

issue opened tikv/tikv

backup: upload to S3 may have been corrupted

Bug Report


What version of TiKV are you using?

release-3.1 or master

What operating system and CPU are you using?

--

Steps to reproduce

Use BR to backup a cluster to an S3 storage.

What did you expect?

The archive can be successfully restored.

What happened?

The archive is corrupted. Turns out all *.sst files written out are filled with zeroes.


Note: the actual steps causing all *.sst files to be zeroed are not yet clear. We haven't reproduced the issue yet. More details will come later.

created time in 6 days

push event tikv/importer

kennytm

commit sha 3868975fa9f9be729f1d4645a5e8c638aeddbf8e

tests/integrations: make the entire test run on TLS connection also added a test to ensure non-TLS-client <-> TLS-server fails. Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in 6 days

issue comment kennytm/shary

How to use in Linux

@vk0xOrg Hi. This program depends on the latest Rust stable release (1.41). Please install rustc using rustup instead of the one provided by the distro.

vk0xOrg

comment created time in 6 days

delete branch pingcap/dm

delete branch : xiang/fix_dm_syncer

delete time in 6 days

push event pingcap/dm

WangXiangUSTC

commit sha 340487f620babc502bc8cfa20727ff205eca6050

dm-syncer: minor fix on password && print parse error (#474) * minor fix * address comment * add arg for decrypt password

view details

push time in 6 days

PR merged pingcap/dm

dm-syncer: minor fix on password && print parse error priority/normal status/LGT2 type/bug-fix

What problem does this PR solve?

  1. the password was not decrypted, so connecting to the database would fail
  2. the logger is not initialized at that point, so we need to use fmt to print the parse error

What is changed and how it works?

Fix these two problems. A sketch of the second fix follows below.
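
For illustration only, a minimal standalone sketch of the second fix; parseArgs here is a hypothetical stand-in for commonConfig.parse:

package main

import (
	"flag"
	"fmt"
	"os"

	"github.com/pingcap/errors"
)

// parseArgs is a hypothetical stand-in for commonConfig.parse.
func parseArgs(args []string) error {
	fs := flag.NewFlagSet("dm-syncer", flag.ContinueOnError)
	return fs.Parse(args)
}

func main() {
	err := parseArgs(os.Args[1:])
	switch errors.Cause(err) {
	case nil:
	case flag.ErrHelp:
		os.Exit(0)
	default:
		// The logger is not initialized at this point, so use fmt.
		fmt.Printf("parse cmd flags err: %s\n", err)
		os.Exit(2)
	}
}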

Check List

Tests

  • Manual test
+26 -16

4 comments

7 changed files

WangXiangUSTC

pr closed time in 6 days

Pull request review commentpingcap/parser

parser: add WEIGHT_STRING() function

 func (n *FuncCallExpr) Restore(ctx *format.RestoreCtx) error {
 				return errors.Annotatef(err, "An error occurred while restore FuncCallExpr.Args[0]")
 			}
 		}
+	case WeightString:
+		if err := n.Args[0].Restore(ctx); err != nil {
+			return errors.Annotatef(err, "An error occurred while restore FuncCallExpr.(WEIGHT_STRING).Args[0]")
+		}
+		if len(n.Args) == 3 {
+			ctx.WritePlain(" ")
+			ctx.WriteKeyWord("AS")
+			ctx.WritePlain(" ")
ctx.WriteKeyWord(" AS ")
bb7133

comment created time in 6 days

issue openedpingcap/br

Create a tool to generate archive for large-scale testing

Feature Request

Describe your feature request related problem:

We do not have a simple tool to generate large-scale example archives. For large-scale tests, we need to use dbgen to produce a SQL dump and then use TiDB Lightning to import it into the cluster. This is very time-consuming: for a 10T-scale test we need almost 2 days for this preparation step.

Describe the feature you'd like:

We should be able to directly generate the backup archive (create SSTs directly and populate the corresponding backupmeta).

Either we create a dedicated tool (focusing on a few selected schemas, e.g. sysbench or TPC-C), or extend dbgen to create SSTs (hard, since dbgen is schema-less and won't generate indices).

Describe alternatives you've considered:


Teachability, Documentation, Adoption, Migration Strategy:


created time in 7 days

pull request commentpingcap/br

backup: add raw backup command

[2020-02-12T05:46:28.121Z] GC safepoint 0 exceed TS 0
[2020-02-12T05:46:28.121Z] github.com/pingcap/br/pkg/backup.CheckGCSafepoint
[2020-02-12T05:46:28.121Z] 	/home/jenkins/agent/workspace/br_ghpr_unit_and_integration_test/go/src/github.com/pingcap/br/pkg/backup/safe_point.go:32
[2020-02-12T05:46:28.121Z] github.com/pingcap/br/pkg/backup.(*Client).BackupRanges
[2020-02-12T05:46:28.121Z] 	/home/jenkins/agent/workspace/br_ghpr_unit_and_integration_test/go/src/github.com/pingcap/br/pkg/backup/client.go:278
[2020-02-12T05:46:28.121Z] github.com/pingcap/br/pkg/task.RunBackupRaw
[2020-02-12T05:46:28.121Z] 	/home/jenkins/agent/workspace/br_ghpr_unit_and_integration_test/go/src/github.com/pingcap/br/pkg/task/backup_raw.go:117
[2020-02-12T05:46:28.121Z] github.com/pingcap/br/cmd.runBackupRawCommand
[2020-02-12T05:46:28.121Z] 	/home/jenkins/agent/workspace/br_ghpr_unit_and_integration_test/go/src/github.com/pingcap/br/cmd/backup.go:26
[2020-02-12T05:46:28.121Z] github.com/pingcap/br/cmd.newRawBackupCommand.func1
[2020-02-12T05:46:28.121Z] 	/home/jenkins/agent/workspace/br_ghpr_unit_and_integration_test/go/src/github.com/pingcap/br/cmd/backup.go:106
3pointer

comment created time in 7 days

push eventpingcap/parser

crazycs

commit sha 0829643f461ced7f80f9160824630d14d33ed957

model: add partition replica available info to support partition table in tiflash (#742) Signed-off-by: crazycs <crazycs520@gmail.com>

view details

push time in 7 days

PR merged pingcap/parser

model: add partition replica available info to support partition table in tiflash. status/LGT2

Signed-off-by: crazycs <crazycs520@gmail.com>


What problem does this PR solve?

related TiDB PR: https://github.com/pingcap/tidb/pull/14735

What is changed and how it works?

Check List

Tests

  • No code
+14 -3

1 comment

1 changed file

crazycs520

pr closed time in 7 days

pull request commenttikv/importer

*: support TLS

PTAL @overvenus @3pointer cc @DanielZhangQD

kennytm

comment created time in 7 days

pull request commentpingcap/docs-cn

tidb-lightning: document backend and that system DBs are filtered

Most of these features do not exist in 2.1.

anotherrachel

comment created time in 7 days

Pull request review commentpingcap/dm

dm-syncer: minor fix on password && print parse error

 func main() {
 	// 1. init conf
 	commonConfig := newCommonConfig()
 	conf, err := commonConfig.parse(os.Args[1:])
-	switch errors.Cause(err) {
-	case nil:
-	case flag.ErrHelp:
-		os.Exit(0)
-	default:
-		log.L().Error("parse cmd flags err " + err.Error())
-		os.Exit(2)
+	if err != nil {
+		switch errors.Cause(err) {
+		case nil:

No need to keep the case nil here since err != nil... Why add the if err != nil anyway? errors.Cause(nil) == nil.
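
To illustrate the point, a standalone snippet using pingcap/errors:

package main

import (
	"fmt"

	"github.com/pingcap/errors"
)

func main() {
	// errors.Cause passes nil through unchanged...
	fmt.Println(errors.Cause(nil) == nil) // true
	// ...and unwraps annotated errors to the original one.
	base := errors.New("boom")
	fmt.Println(errors.Cause(errors.Annotate(base, "context")) == base) // true
}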

WangXiangUSTC

comment created time in 7 days

Pull request review commentpingcap/br

backup: add raw backup command

+package task
+
+import (
+	"bytes"
+	"context"
+
+	"github.com/pingcap/errors"
+	kvproto "github.com/pingcap/kvproto/pkg/backup"
+	"github.com/spf13/cobra"
+	"github.com/spf13/pflag"
+
+	"github.com/pingcap/br/pkg/backup"
+	"github.com/pingcap/br/pkg/storage"
+	"github.com/pingcap/br/pkg/summary"
+	"github.com/pingcap/br/pkg/utils"
+)
+
+// BackupRawConfig is the configuration specific for backup tasks.
+type BackupRawConfig struct {
+	Config
+
+	StartKey []byte
+	EndKey   []byte
+	CF       string
+}

@MyonKeminta the JSON and TOML tags are for forward compatibility in case we do want to support a config file.
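
For illustration, with hypothetical tag names (not taken from the actual PR), the idea is that field tags let the same struct round-trip through a TOML or JSON config file later:

package task

// Config is the shared task configuration (stub for illustration).
type Config struct{}

// BackupRawConfig with hypothetical serialization tags.
type BackupRawConfig struct {
	Config

	StartKey []byte `json:"start-key" toml:"start-key"`
	EndKey   []byte `json:"end-key" toml:"end-key"`
	CF       string `json:"cf" toml:"cf"`
}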

3pointer

comment created time in 7 days

Pull request review commenttikv/tikv

Makefile: add a doc test rule and fix a failed doc test case

 test:
 	cd tests && cargo test --no-default-features --features "${ENABLE_FEATURES}" ${EXTRA_CARGO_ARGS} -- --nocapture
 
 # This is used for CI test
-ci_test:
+ci_test: doc_test
 	cargo test --no-default-features --features "${ENABLE_FEATURES}" --all --exclude tests --all-targets --no-run --message-format=json
 	cd tests && cargo test --no-default-features --features "${ENABLE_FEATURES}" --no-run --message-format=json
 
+doc_test:
+	cargo test --no-default-features --features "${ENABLE_FEATURES}" --all --exclude tests --doc

@breeswish Perhaps you could abuse rust-lang/rust#64245 to save the generated exe. Note that one exe is generated for each doc test, so if there are 100 doc tests there will be 100 exes.

mahjonp

comment created time in 7 days

Pull request review commenttikv/tikv

Makefile: add a doc test rule and fix a failed doc test case

 //!   trait IOLimiter { }
 //!   ```
 //!
-//!   ```

@mahjonp While IOLimiter has been removed, there should still be other existing Xxxx/XxxxExt trait pairs.

mahjonp

comment created time in 7 days

Pull request review commentpingcap/br

restore: enhance error handling

 func (importer *FileImporter) Import(file *backup.File, rewriteRules *RewriteRul
 		log.Debug("scan regions", zap.Stringer("file", file), zap.Int("count", len(regionInfos)))
 		// Try to download and ingest the file in every region
 		for _, regionInfo := range regionInfos {
-			var downloadMeta *import_sstpb.SSTMeta
 			info := regionInfo
 			// Try to download file.
-			err = withRetry(func() error {
-				var err2 error
-				var isEmpty bool
-				downloadMeta, isEmpty, err2 = importer.downloadSST(info, file, rewriteRules)
-				if err2 != nil {
-					if err != errRewriteRuleNotFound {
-						log.Warn("download file failed",
-							zap.Stringer("file", file),
-							zap.Stringer("region", info.Region),
-							zap.Binary("startKey", startKey),
-							zap.Binary("endKey", endKey),
-							zap.Error(err2),
-						)
-					}
-					return err2
-				}
-				if isEmpty {
-					log.Info(
-						"file don't have any key in this region, skip it",
-						zap.Stringer("file", file),
-						zap.Stringer("region", info.Region),
-					)
-					return errRangeIsEmpty
-				}
-				return nil
-			}, func(e error) bool {
-				// Scan regions may return some regions which cannot match any rewrite rule,
-				// like [t{tableID}, t{tableID}_r), those regions should be skipped
-				return e != errRewriteRuleNotFound && e != errRangeIsEmpty
-			}, downloadSSTRetryTimes, downloadSSTWaitInterval, downloadSSTMaxWaitInterval)
-			if err != nil {
-				if err == errRewriteRuleNotFound || err == errRangeIsEmpty {
+			var downloadMeta *import_sstpb.SSTMeta
+			err1 = utils.WithRetry(importer.ctx, func() error {
+				var e error
+				downloadMeta, e = importer.downloadSST(info, file, rewriteRules)
+				return e
+			}, newDownloadSSTBackoffer())
+			if err1 != nil {
+				if err1 == errRewriteRuleNotFound || err1 == errRangeIsEmpty {
 					// Skip this region
 					continue
 				}
-				return err
+				log.Error("download file failed",
+					zap.Stringer("file", file),
+					zap.Stringer("region", info.Region),
+					zap.Binary("startKey", startKey),
+					zap.Binary("endKey", endKey),
+					zap.Error(err1))
+				return err1
+			}
+			err1 = importer.ingestSST(downloadMeta, info)
+			// If error is `NotLeader`, update the region info and retry
+			for err1 == errNotLeader {
			for errors.Cause(err1) == errNotLeader {
5kbpers

comment created time in 7 days

Pull request review commentpingcap/br

restore: enhance error handling

 func (importer *FileImporter) Import(file *backup.File, rewriteRules *RewriteRul
 		log.Debug("scan regions", zap.Stringer("file", file), zap.Int("count", len(regionInfos)))
 		// Try to download and ingest the file in every region
 		for _, regionInfo := range regionInfos {
-			var downloadMeta *import_sstpb.SSTMeta
 			info := regionInfo
 			// Try to download file.
-			err = withRetry(func() error {
-				var err2 error
-				var isEmpty bool
-				downloadMeta, isEmpty, err2 = importer.downloadSST(info, file, rewriteRules)
-				if err2 != nil {
-					if err != errRewriteRuleNotFound {
-						log.Warn("download file failed",
-							zap.Stringer("file", file),
-							zap.Stringer("region", info.Region),
-							zap.Binary("startKey", startKey),
-							zap.Binary("endKey", endKey),
-							zap.Error(err2),
-						)
-					}
-					return err2
-				}
-				if isEmpty {
-					log.Info(
-						"file don't have any key in this region, skip it",
-						zap.Stringer("file", file),
-						zap.Stringer("region", info.Region),
-					)
-					return errRangeIsEmpty
-				}
-				return nil
-			}, func(e error) bool {
-				// Scan regions may return some regions which cannot match any rewrite rule,
-				// like [t{tableID}, t{tableID}_r), those regions should be skipped
-				return e != errRewriteRuleNotFound && e != errRangeIsEmpty
-			}, downloadSSTRetryTimes, downloadSSTWaitInterval, downloadSSTMaxWaitInterval)
-			if err != nil {
-				if err == errRewriteRuleNotFound || err == errRangeIsEmpty {
+			var downloadMeta *import_sstpb.SSTMeta
+			err1 = utils.WithRetry(importer.ctx, func() error {
+				var e error
+				downloadMeta, e = importer.downloadSST(info, file, rewriteRules)
+				return e
+			}, newDownloadSSTBackoffer())
+			if err1 != nil {
+				if err1 == errRewriteRuleNotFound || err1 == errRangeIsEmpty {
 					// Skip this region
 					continue
 				}
-				return err
+				log.Error("download file failed",
+					zap.Stringer("file", file),
+					zap.Stringer("region", info.Region),
+					zap.Binary("startKey", startKey),
+					zap.Binary("endKey", endKey),
+					zap.Error(err1))
+				return err1
+			}
switch errors.Cause(err1) {
case nil:
        // proceed to ingest
case errRewriteRuleNotFound, errRangeIsEmpty:
        // skip this region
        continue
default:
        log.Error(...)
        return err1
}
5kbpers

comment created time in 7 days
