Keith Chambers (keithchambers) · Principal PM Lead, Microsoft · Seattle

keithchambers/cloud-patterns-00-bake-vm-image 1

Bake VM images for reliable cloud native deployments.

keithchambers/el7-packer 1

Build minimal Enterprise Linux 7 image with Packer

keithchambers/amazon-ecs-agent 0

Amazon EC2 Container Service Agent

keithchambers/centos7-autoinstall-iso 0

Script to create automatically installing CentOS 7 ISO

keithchambers/consul-package 0

RPM packaging for consul.

keithchambers/cp-docker-images 0

Docker images for Confluent Platform.

keithchambers/dcos-commons 0

Simplifying stateful services

keithchambers/DistCPPlus 0

An improvement on the Hadoop distcp tool

keithchambers/docker 0

Docker - the open-source application container engine

pull request comment apache/flink

[FLINK-20209][web] Add missing checkpoint configuration to Flink UI

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit aeeb9de83a7afde1a3a1cde06187307f2cc1c58e (Fri Dec 04 02:40:05 UTC 2020)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process. The bot tracks review progress through labels, which are applied in the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands

The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier
coolderli

comment created time in a few seconds

pull request comment apache/spark

[SPARK-33142][SQL] Store SQL text for SQL temp view

Test build #132178 has started for PR 30567 at commit db9f0ba.

linhongliu-db

comment created time in a few seconds

pull request comment apache/spark

[MINOR][ML] Improve LogisticRegression test error tolerance

Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/132169/

WeichenXu123

comment created time in a minute

push event pytorch/pytorch

pritam

commit sha 2166595b20fd60794289abfbafe896a1a9eff45e

Use group.WORLD appropriately in process group initialization.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48767

As part of investigating https://github.com/pytorch/pytorch/issues/48464, I realized some weird inconsistency in how we use `_default_pg` and `group.WORLD`. `group.WORLD` apparently was an `object()` and never changed despite `_default_pg` changing. In this sense, `group.WORLD` was being used as a constant to refer to the default pg, but wasn't of type PG at all. In fact, the passed-in group is also compared via `==` to `group.WORLD` in many places, and it just worked since the default argument was `group.WORLD`.

To clean this up, I got rid of `_default_pg` completely and instead used `group.WORLD` as the default pg throughout the codebase. This also fixes the documentation issues mentioned in https://github.com/pytorch/pytorch/issues/48464.

Closes: https://github.com/pytorch/pytorch/issues/48464

ghstack-source-id: 117834116
Differential Revision: [D25292893](https://our.internmc.facebook.com/intern/diff/D25292893/)
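For readers outside PyTorch: the fragile pattern being removed is a bare sentinel default that only works through identity comparison. A minimal Go sketch of the anti-pattern and the fix, with hypothetical names (not the PyTorch API):

```go
package main

import "fmt"

// ProcessGroup stands in for a real process-group handle (hypothetical).
type ProcessGroup struct{ name string }

// Before: group.WORLD as a typeless sentinel. Call sites must translate
// it to the real default group, and only identity comparison works.
var groupWORLDSentinel = new(int) // deliberately not a *ProcessGroup

var defaultPG = &ProcessGroup{name: "default"}

func resolveBefore(g interface{}) *ProcessGroup {
	if g == groupWORLDSentinel { // fragile: relies on identity
		return defaultPG
	}
	return g.(*ProcessGroup)
}

// After: group.WORLD *is* the default process group, so callers receive
// a usable value and no translation step is needed.
var groupWORLD = defaultPG

func main() {
	fmt.Println(resolveBefore(groupWORLDSentinel).name) // default
	fmt.Println(groupWORLD.name)                        // default
}
```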

view details

push time in a minute

push event pytorch/pytorch

pritam

commit sha 50884ed07033d8a01d67c8a33a515f7531f31a18

Update on "Use group.WORLD appropriately in process group initialization." As part of investigating https://github.com/pytorch/pytorch/issues/48464, I realized some weird inconsistency in how we use `_default_pg` and `group.WORLD`. `group.WORLD` apparently was an `object()` and never changed despite `_default_pg` changing. In this sense, `group.WORLD` was being used a constant to refer to the default pg, but wasn't of type PG at all. In fact the passed in group is also compared via `==` to `group.WORLD` in many places, and it just worked since the default argument was `group.WORLD`. To clean this up, I got rid of `_default_pg` completely and instead used `group.WORLD` as the default pg throughout the codebase. This also fixes the documentation issues mentioned in https://github.com/pytorch/pytorch/issues/48464. #Closes: https://github.com/pytorch/pytorch/issues/48464 Differential Revision: [D25292893](https://our.internmc.facebook.com/intern/diff/D25292893/) [ghstack-poisoned]

view details

push time in a minute

issue comment cockroachdb/cockroach

jobs: refactor job storage

We'll likely need to fix https://github.com/cockroachdb/cockroach/issues/57531 before we start adding lots of new tables for job state.

dt

comment created time in 2 minutes

pull request comment apache/spark

[MINOR][ML] Improve LogisticRegression test error tolerance

Test build #132169 has finished for PR 30587 at commit 6632ac4.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.
WeichenXu123

comment created time in 2 minutes

issue comment cockroachdb/cockroach

sql/catalog: allow creating system tables with dynamic IDs

Hi @ajwerner, please add a C-ategory label to your issue. Check out the label system docs.

🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is otan.

ajwerner

comment created time in 3 minutes

issue opened cockroachdb/cockroach

sql/catalog: allow creating system tables with dynamic IDs

Today, system tables are created with hard-coded IDs. This is problematic because we only reserved IDs up to 50 and we're coming up on that limit; as of writing, we're at 39. The fixed range also tends to discourage creating new system tables, which has proven to be a mistake: it forces us to shove more data into existing tables rather than designing subsystems with the full power of the relational model.

Proposed Solution

Only a small number of system tables required for bootstrapping the cluster actually need a fixed ID, and we can assume we already have all of those. Other system tables should be interacted with just like any other table.

A bit of a snag here has been that we haven't had great APIs around descriptor retrieval and management. Hopefully, upcoming work will let us abstract descriptor retrieval and management into low-dependency interface packages and gain wide adoption throughout the codebase.
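A minimal Go sketch of the proposed split, with hypothetical names and IDs (not CockroachDB's actual values): only bootstrap-critical tables keep fixed IDs, and everything else draws from the dynamic descriptor ID allocator like a user table would:

```go
package main

import "fmt"

// Illustrative only: names and IDs here are hypothetical. Bootstrap-critical
// system tables keep fixed IDs; everything else is allocated dynamically,
// exactly like a user table.
var bootstrapIDs = map[string]uint32{
	"system.descriptor": 3,
	"system.namespace":  2,
}

type idAllocator struct{ next uint32 }

// alloc stands in for drawing the next value from the descriptor ID sequence.
func (a *idAllocator) alloc() uint32 {
	id := a.next
	a.next++
	return id
}

func systemTableID(name string, a *idAllocator) uint32 {
	if id, ok := bootstrapIDs[name]; ok {
		return id // fixed: required before the cluster can serve SQL
	}
	return a.alloc() // dynamic: no reserved-range pressure
}

func main() {
	a := &idAllocator{next: 51} // past the old reserved range of 50
	fmt.Println(systemTableID("system.namespace", a)) // 2
	fmt.Println(systemTableID("system.new_jobs", a))  // 51
}
```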

created time in 3 minutes

PR opened apache/flink

[FLINK-20209][web] Add missing checkpoint configuration to Flink UI


What is the purpose of the change

  • Add Prefer Checkpoint For Recovery and Tolerable Failed Checkpoints to checkpoint configuration in Flink UI.

Brief change log

  • Adds the preferCheckpointForRecovery and tolerableFailed config to the REST endpoint and the web UI.

Verifying this change

  • This change is a trivial rework / code cleanup without any test coverage.

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (yes / no) No
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no) No
  • The serializers: (yes / no / don't know) No
  • The runtime per-record code paths (performance sensitive): (yes / no / don't know)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't know) No
  • The S3 file system connector: (yes / no / don't know) No

Documentation

  • Does this pull request introduce a new feature? (yes / no) No
  • If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
+107 -6

0 comments

7 changed files

pr created time in 3 minutes

push event pcingola/BigDataScript

Pablo Cingolani

commit sha 5a9357d313738f33f733d124da732ad16a03a877

Project updated

view details

push time in 4 minutes

pull request comment apache/spark

[SPARK-33142][SQL] Store SQL text for SQL temp view

Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/36769/

linhongliu-db

comment created time in 4 minutes

pull request comment apache/spark

[SPARK-33142][SQL] Store SQL text for SQL temp view

Kubernetes integration test status success URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/36769/

linhongliu-db

comment created time in 4 minutes

PR opened real-logic/aeron

Error Stack Experiment

More for discussion than immediate merge. This introduces a different approach to handling errors and capturing context as we traverse the stack. The idea is that when an error occurs, we set the error as usual, with an error code and a formatted message. As we return back up through the stack, when a call further down appears to have caused the error, we append additional context to the current error stack. It adds two macros:

  • AERON_SET_ERROR(error_code, format, ...)
  • AERON_APPEND_ERROR(format, ...)

AERON_SET_ERROR should be used at the edge of the Aeron driver (or client), when we have tried to do something and it has failed, e.g. a system call or some internal validation. At this point we set the error code and capture any immediately relevant context. This call also resets the error stack, discarding any previously captured messages.

AERON_APPEND_ERROR is used when another Aeron function has returned -1, in which case we just append some local context to the error stack and return -1 again. This allows additional context to be included in the error message without having to pass that information up or down the stack and without breaking encapsulation for the purposes of error reporting.
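The macros themselves are C, but the same set-then-append discipline can be sketched with Go's error wrapping; this is an analogue for illustration, not the Aeron implementation:

```go
package main

import (
	"fmt"
	"net"
)

// Analogue of AERON_SET_ERROR: the failure site records the root error
// together with its immediate context (the address we tried to bind).
func bindTransport(addr string) error {
	l, err := net.Listen("tcp", addr)
	if err != nil {
		return fmt.Errorf("unicast bind(%s): %w", addr, err)
	}
	return l.Close()
}

// Analogue of AERON_APPEND_ERROR: each frame appends its own context as
// the error returns up the stack, instead of passing details downward.
func createReceiveDestination(uri, addr string) error {
	if err := bindTransport(addr); err != nil {
		return fmt.Errorf("uri = %s: %w", uri, err)
	}
	return nil
}

func main() {
	l, err := net.Listen("tcp", "127.0.0.1:0") // occupy a free port
	if err != nil {
		panic(err)
	}
	defer l.Close()
	addr := l.Addr().String()
	// Binding the same port again fails, and the printed message carries
	// the whole context chain, much like the driver output quoted below.
	fmt.Println(createReceiveDestination("aeron:udp?endpoint="+addr, addr))
}
```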

I've demonstrated how this would work for initialising the UDP channel transport. On failure, the error message reported back to the client ends up being:

io.aeron.exceptions.RegistrationException: ERROR - (98) Address already in use
[aeron_udp_channel_transport_init, aeron_udp_channel_transport.c:85] unicast bind(127.0.0.1:24325)
[aeron_receive_destination_create, aeron_receive_destination.c:55] uri = aeron:udp?endpoint=localhost:24325
[aeron_driver_conductor_get_or_add_receive_channel_endpoint, aeron_driver_conductor.c:1644] correlation_id = 3

Similarly the distinct error log ends up containing:

1 observations from 2020-12-04 15:31:41.855+1300 to 2020-12-04 15:31:41.855+1300 for:
 0: (98) Address already in use
[aeron_udp_channel_transport_init, aeron_udp_channel_transport.c:85] unicast bind(127.0.0.1:24325)
[aeron_receive_destination_create, aeron_receive_destination.c:55] uri = aeron:udp?endpoint=localhost:24325
[aeron_driver_conductor_get_or_add_receive_channel_endpoint, aeron_driver_conductor.c:1644] correlation_id = 2

I've limited the depth of the stack to 16. If it overflows, then we track the lowest 15 elements of the error stack and the highest one. When dumping out the stack we would potentially see something similar to:

[TestBody, aeron_error_test.cpp:70] this is the root error
[TestBody, aeron_error_test.cpp:73] this is a nested error: 0
[TestBody, aeron_error_test.cpp:73] this is a nested error: 1
[TestBody, aeron_error_test.cpp:73] this is a nested error: 2
[TestBody, aeron_error_test.cpp:73] this is a nested error: 3
[TestBody, aeron_error_test.cpp:73] this is a nested error: 4
[TestBody, aeron_error_test.cpp:73] this is a nested error: 5
[TestBody, aeron_error_test.cpp:73] this is a nested error: 6
[TestBody, aeron_error_test.cpp:73] this is a nested error: 7
[TestBody, aeron_error_test.cpp:73] this is a nested error: 8
[TestBody, aeron_error_test.cpp:73] this is a nested error: 9
[TestBody, aeron_error_test.cpp:73] this is a nested error: 10
[TestBody, aeron_error_test.cpp:73] this is a nested error: 11
[TestBody, aeron_error_test.cpp:73] this is a nested error: 12
[TestBody, aeron_error_test.cpp:73] this is a nested error: 13
(11 lines omitted...)
[TestBody, aeron_error_test.cpp:73] this is a nested error: 25

So the error message would inform the user that some of the stack elements had to be omitted. I think 16 should be plenty; our call stacks are pretty shallow.

+236 -26

0 comments

9 changed files

pr created time in 4 minutes

push event pcingola/BigDataScript

Pablo Cingolani

commit sha a35fac7850cb76d4f858bd940822435e1f7f93cd

Project updated

view details

push time in 4 minutes

pull request comment jupyterlab/jupyterlab

Make css dependency graph of js modules

I opened #9423 about the issue, and fixed the underlying issue in #9427.

jasongrout

comment created time in 5 minutes

push event pingcap/tidb

JmPotato

commit sha 43cccbb72fff3bbc4e086811cb7e2fc04786b444

*: dispatch the local and global transactions (#21353) Signed-off-by: JmPotato <ghzpotato@gmail.com>

view details

push time in 5 minutes

PR merged pingcap/tidb

*: dispatch the local and global transactions (labels: sig/DDL, sig/execution, status/LGT2, status/can-merge)

Signed-off-by: JmPotato ghzpotato@gmail.com

What problem does this PR solve?

Part of #20448, close #20853.

What is changed and how it works?

Because all user SQLs come from the user session, and the corresponding TSO fetching happens in the PrepareTSFuture, we check in that function whether a SQL is an internal SQL, and we fetch a Local TSO if the session variable @@txn_scope is not "global".

As for other SQLs such as DDL and meta info writes, they don't go through the PrepareTSFuture to get a TSO, so the transaction scope for any other SQLs will be global by default.

Also, to make sure the startTS and commitTS are in the same transaction scope, I added a txnScope field to the tikvTxn.
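A minimal Go sketch of this dispatch rule, with hypothetical names rather than TiDB's actual types:

```go
package main

import "fmt"

// Hypothetical sketch: user SQL follows the session's @@txn_scope, while
// internal SQL and anything that bypasses PrepareTSFuture is forced to
// the global scope.
type session struct {
	txnScope string // value of @@txn_scope: "global" or "local"
	internal bool   // internal SQL runs in system sessions
}

// tsoScope decides which TSO to fetch for the next transaction.
func tsoScope(s session) string {
	if s.internal || s.txnScope == "global" || s.txnScope == "" {
		return "global"
	}
	return "local" // pinned to the session's DC
}

func main() {
	fmt.Println(tsoScope(session{txnScope: "local"}))                 // local
	fmt.Println(tsoScope(session{txnScope: "local", internal: true})) // global
	fmt.Println(tsoScope(session{txnScope: "global"}))                // global
}
```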

Check List

Tests

  • Unit test
  • Integration test

Release note

  • No release note.
+216 -52

7 comments

19 changed files

JmPotato

pr closed time in 5 minutes

issue closed pingcap/tidb

Start global transactions for some special SQLs

Development Task

Some special SQLs visit metadata or they are executed in background sessions, so they should always begin as a global transaction. This is a part of the global/local transaction roadmap #20448.

Rationale

The special SQLs mentioned above include:

  • Statements that write metadata, e.g. DDL, select nextval(seq), insert with an auto-increment column. Obtaining the next value from an auto-increment column or sequence may trigger an extra transaction, which writes the metadata to update the next start value. These statements create new KV transactions to write metadata, which is global and cannot be bound to any DC, so we need to guarantee that the new transactions are global.
  • Statements (except for DMLs) that write system tables, e.g. analyze tables, grant privileges, set global variables, create bindings. These statements create new transactions in background system sessions to write system tables; these are a.k.a. internal SQLs. We need to guarantee that the new transactions are global.
  • DDLs that write user tables, e.g. create/drop indexes, add columns, recover/cleanup indexes. Creating indexes and adding columns do the backfill jobs in background worker goroutines. Recover indexes and cleanup indexes do the backfill jobs in a user session. They all create new KV transactions to write user tables. We need to guarantee that all the new transactions are global.
  • DMLs that write system tables, e.g. update mysql.tidb directly. These statements won't create new transactions, but they will fail as described in #20827. It's acceptable.

To summarize, we need to consider 3 scenarios:

  • Internal SQLs. They always run in system sessions.
  • SQLs that write metadata. They may run in user sessions.
  • DDLs that write user tables. They may run in user sessions.

Implementations

Internal SQLs can be recognized by SessionVars.InRestrictedSQL or session.isInternal(). Internal statements are executed through session.ExecuteInternal() or execRestrictedSQL().

Transactions that write metadata are executed through Meta.txn API. We need to guarantee that the kv.Transaction passed to NewMeta is started as a global transaction.

DDLs that write user tables can be found in RecoverIndexExec, CleanupIndexExec, addIndexWorker, cleanUpIndexWorker, updateColumnWorker, etc.
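A small Go sketch of the guarantee described for the Meta.txn path; the names (newMeta, txn) are hypothetical stand-ins, not TiDB's API:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical stand-ins for kv.Transaction and meta.NewMeta.
type txn struct{ scope string }

var errNotGlobal = errors.New("meta writes require a global transaction")

// newMeta refuses transactions that are not global, so metadata writes
// can never be pinned to a single DC by accident.
func newMeta(t *txn) (*txn, error) {
	if t.scope != "global" {
		return nil, errNotGlobal
	}
	return t, nil
}

func main() {
	if _, err := newMeta(&txn{scope: "local"}); err != nil {
		fmt.Println(err) // meta writes require a global transaction
	}
	if m, err := newMeta(&txn{scope: "global"}); err == nil {
		fmt.Println("ok, scope =", m.scope)
	}
}
```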

closed time in 5 minutes

djshow832

pull request comment iterative/dvc

add targets support to dvc diff

@pmrowla thanks for another round of suggestions and feedback - I think I've addressed everything in the last round.

I've opened a PR for the documentation updates as well: https://github.com/iterative/dvc.org/pull/2002.

sandeepmistry

comment created time in 6 minutes

pull request comment apache/spark

[SPARK-33655][SQL] Improve performance of processing FETCH_PRIOR

@juliuszsompolski I created the PR.

cc : @HyukjinKwon , @dongjoon-hyun , @cloud-fan

Dooyoung-Hwang

comment created time in 6 minutes

pull request comment pingcap/tidb

ddl: check add-column error when it is in txn insert

/run-all-tests

AilinKid

comment created time in 7 minutes

pull request comment jupyterlab/jupyterlab

Use a more explicit styleModule key for js css imports

I think this is ready for review/merge. I'm happy to make a release after this is merged, which would replace the broken 3.0rc11.

jasongrout

comment created time in 7 minutes

issue opened cockroachdb/cockroach

sentry: hlc.go:303: log.Fatal: wall time 1607049184587657090 is not allowed to be greater than upper bound of 1607049184466848560. -- *errutil.leafError: log.Fatal: wall time 1607049184587657090 is not allowed to be greater than upper bound of 1607049184466848560. (1) hlc.go:303: *withstack.withStack (top exception) (check the extra data payloads)

This issue was autofiled by Sentry. It represents a crash or reported error on a live cluster with telemetry enabled.

Sentry link: https://sentry.io/organizations/cockroach-labs/issues/2071503121/?referrer=webhooks_plugin

Panic message:

hlc.go:303: log.Fatal: wall time 1607049184587657090 is not allowed to be greater than upper bound of 1607049184466848560. -- *errutil.leafError: log.Fatal: wall time 1607049184587657090 is not allowed to be greater than upper bound of 1607049184466848560. (1) hlc.go:303: *withstack.withStack (top exception) (check the extra data payloads)

Stacktrace:

pkg/util/hlc/hlc.go in pkg/util/hlc.(*Clock).enforceWallTimeWithinBoundLocked at line 303
pkg/util/hlc/hlc.go in pkg/util/hlc.(*Clock).Update at line 350
pkg/kv/kvclient/kvcoord/dist_sender.go in pkg/kv/kvclient/kvcoord.(*DistSender).sendToReplicas at line 1957
pkg/kv/kvclient/kvcoord/dist_sender.go in pkg/kv/kvclient/kvcoord.(*DistSender).sendPartialBatch at line 1495
pkg/kv/kvclient/kvcoord/dist_sender.go in pkg/kv/kvclient/kvcoord.(*DistSender).sendPartialBatchAsync.func1 at line 1366
pkg/util/stop/stopper.go in pkg/util/stop.(*Stopper).RunLimitedAsyncTask.func2 at line 406
/usr/local/go/src/runtime/asm_amd64.s in runtime.goexit at line 1357
Tag Value
Cockroach Release v20.2.0
Cockroach SHA: 150c5918fb6e28e0ea5bfa4e8e94088986fbbf98
Platform linux amd64
Distribution CCL
Environment v20.2.0
Command server
Go Version ``
# of CPUs
# of Goroutines

created time in 7 minutes

issue opened cockroachdb/cockroach

sentry: hlc.go:266: log.Fatal: detected forward time jump of 8.333909 seconds is not allowed with tolerance of 0.250000 seconds -- *errutil.leafError: log.Fatal: detected forward time jump of 8.333909 seconds is not allowed with tolerance of 0.250000 seconds (1) hlc.go:266: *withstack.withStack (top exception) (check the extra data payloads)

This issue was autofiled by Sentry. It represents a crash or reported error on a live cluster with telemetry enabled.

Sentry link: https://sentry.io/organizations/cockroach-labs/issues/2071502841/?referrer=webhooks_plugin

Panic message:

hlc.go:266: log.Fatal: detected forward time jump of 8.333909 seconds is not allowed with tolerance of 0.250000 seconds -- *errutil.leafError: log.Fatal: detected forward time jump of 8.333909 seconds is not allowed with tolerance of 0.250000 seconds (1) hlc.go:266: *withstack.withStack (top exception) (check the extra data payloads)

Stacktrace:

pkg/util/hlc/hlc.go in pkg/util/hlc.(*Clock).checkPhysicalClock at line 266
pkg/util/hlc/hlc.go in pkg/util/hlc.(*Clock).getPhysicalClockAndCheck at line 245
pkg/util/hlc/hlc.go in pkg/util/hlc.(*Clock).Now at line 282
pkg/roachpb/batch.go in pkg/roachpb.(*BatchRequest).SetActiveTimestamp at line 50
pkg/kv/kvserver/store_send.go in pkg/kv/kvserver.(*Store).Send at line 77
pkg/kv/kvserver/stores.go in pkg/kv/kvserver.(*Stores).Send at line 177
pkg/server/node.go in pkg/server.(*Node).batchInternal.func1 at line 884
pkg/util/stop/stopper.go in pkg/util/stop.(*Stopper).RunTaskWithErr at line 326
pkg/server/node.go in pkg/server.(*Node).batchInternal at line 872
pkg/server/node.go in pkg/server.(*Node).Batch at line 910
pkg/rpc/context.go in pkg/rpc.internalClientAdapter.Batch at line 498
pkg/kv/kvclient/kvcoord/transport.go in pkg/kv/kvclient/kvcoord.(*grpcTransport).sendBatch at line 157
pkg/kv/kvclient/kvcoord/transport.go in pkg/kv/kvclient/kvcoord.(*grpcTransport).SendNext at line 139
pkg/kv/kvclient/kvcoord/dist_sender.go in pkg/kv/kvclient/kvcoord.(*DistSender).sendToReplicas at line 1870
pkg/kv/kvclient/kvcoord/dist_sender.go in pkg/kv/kvclient/kvcoord.(*DistSender).sendPartialBatch at line 1495
pkg/kv/kvclient/kvcoord/dist_sender.go in pkg/kv/kvclient/kvcoord.(*DistSender).sendPartialBatchAsync.func1 at line 1366
pkg/util/stop/stopper.go in pkg/util/stop.(*Stopper).RunLimitedAsyncTask.func2 at line 406
/usr/local/go/src/runtime/asm_amd64.s in runtime.goexit at line 1357
Tag Value
Cockroach Release v20.2.0
Cockroach SHA: 150c5918fb6e28e0ea5bfa4e8e94088986fbbf98
Platform linux amd64
Distribution CCL
Environment v20.2.0
Command server
Go Version ``
# of CPUs
# of Goroutines

created time in 7 minutes

issue opened cockroachdb/cockroach

sentry: hlc.go:266: log.Fatal: detected forward time jump of 1.002040 seconds is not allowed with tolerance of 0.250000 seconds -- *errutil.leafError: log.Fatal: detected forward time jump of 1.002040 seconds is not allowed with tolerance of 0.250000 seconds (1) hlc.go:266: *withstack.withStack (top exception) (check the extra data payloads)

This issue was autofiled by Sentry. It represents a crash or reported error on a live cluster with telemetry enabled.

Sentry link: https://sentry.io/organizations/cockroach-labs/issues/2071502842/?referrer=webhooks_plugin

Panic message:

hlc.go:266: log.Fatal: detected forward time jump of 1.002040 seconds is not allowed with tolerance of 0.250000 seconds -- *errutil.leafError: log.Fatal: detected forward time jump of 1.002040 seconds is not allowed with tolerance of 0.250000 seconds (1) hlc.go:266: *withstack.withStack (top exception) (check the extra data payloads)

Stacktrace:

pkg/util/hlc/hlc.go in pkg/util/hlc.(*Clock).checkPhysicalClock at line 266
pkg/util/hlc/hlc.go in pkg/util/hlc.(*Clock).getPhysicalClockAndCheck at line 245
pkg/util/hlc/hlc.go in pkg/util/hlc.(*Clock).Now at line 282
pkg/kv/kvserver/store_send.go in pkg/kv/kvserver.(*Store).Send.func1 at line 138
pkg/kv/kvserver/store_send.go in pkg/kv/kvserver.(*Store).Send at line 196
pkg/kv/kvserver/stores.go in pkg/kv/kvserver.(*Stores).Send at line 177
pkg/server/node.go in pkg/server.(*Node).batchInternal.func1 at line 884
pkg/util/stop/stopper.go in pkg/util/stop.(*Stopper).RunTaskWithErr at line 326
pkg/server/node.go in pkg/server.(*Node).batchInternal at line 872
pkg/server/node.go in pkg/server.(*Node).Batch at line 910
pkg/roachpb/api.pb.go in pkg/roachpb._Internal_Batch_Handler.func1 at line 10684
vendor/github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc/server.go in github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1 at line 44
vendor/google.golang.org/grpc/server.go in google.golang.org/grpc.getChainUnaryHandler.func1 at line 921
pkg/rpc/context.go in pkg/rpc.NewServer.func1 at line 199
vendor/google.golang.org/grpc/server.go in google.golang.org/grpc.getChainUnaryHandler.func1 at line 921
pkg/rpc/auth.go in pkg/rpc.kvAuth.unaryInterceptor at line 60
vendor/google.golang.org/grpc/server.go in google.golang.org/grpc.chainUnaryServerInterceptors.func1 at line 907
pkg/roachpb/api.pb.go in pkg/roachpb._Internal_Batch_Handler at line 10686
vendor/google.golang.org/grpc/server.go in google.golang.org/grpc.(*Server).processUnaryRPC at line 1082
vendor/google.golang.org/grpc/server.go in google.golang.org/grpc.(*Server).handleStream at line 1405
vendor/google.golang.org/grpc/server.go in google.golang.org/grpc.(*Server).serveStreams.func1.1 at line 746
/usr/local/go/src/runtime/asm_amd64.s in runtime.goexit at line 1357
Tag Value
Cockroach Release v20.2.0
Cockroach SHA: 150c5918fb6e28e0ea5bfa4e8e94088986fbbf98
Platform linux amd64
Distribution CCL
Environment v20.2.0
Command server
Go Version ``
# of CPUs
# of Goroutines

created time in 7 minutes
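For context on the three Sentry reports above: the HLC fatals when the physical clock jumps forward by more than the configured tolerance, or past a wall-time upper bound. A simplified Go sketch of that kind of check, not the CockroachDB code:

```go
package main

import (
	"fmt"
	"time"
)

// Simplified version of the check behind these reports: compare each
// physical clock reading against the previous one and fail hard when
// the forward jump exceeds the allowed tolerance.
const maxForwardJump = 250 * time.Millisecond

type clock struct{ lastPhysical time.Time }

func (c *clock) now() (time.Time, error) {
	t := time.Now()
	if !c.lastPhysical.IsZero() {
		if jump := t.Sub(c.lastPhysical); jump > maxForwardJump {
			return time.Time{}, fmt.Errorf(
				"detected forward time jump of %.6f seconds is not allowed with tolerance of %.6f seconds",
				jump.Seconds(), maxForwardJump.Seconds())
		}
	}
	c.lastPhysical = t
	return t, nil
}

func main() {
	c := &clock{}
	t1, _ := c.now()
	c.lastPhysical = t1.Add(-time.Second) // simulate a ~1s forward jump
	if _, err := c.now(); err != nil {
		fmt.Println(err)
	}
}
```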

push event apache/incubator-superset

erik_ritter

commit sha 23cf62586555e2f215acde884117defeb9efee2c

fix: Remove expensive logs table migration

view details

push time in 7 minutes

Pull request review comment apache/kafka

KAFKA-10787: Introduce an import order in Java sources

 */
 package org.apache.kafka.clients;

-import java.util.HashSet;
-import java.util.Set;
-
-import java.util.stream.Collectors;
 import org.apache.kafka.common.errors.AuthenticationException;
 import org.apache.kafka.common.utils.ExponentialBackoff;
 import org.apache.kafka.common.utils.LogContext;
+
 import org.slf4j.Logger;

@chia7712 Oh, the description I had at first on 2nd April is outdated; after the discussion, we concluded that the following three-group ordering would be better:

  • kafka, org.apache.kafka
  • com, net, org
  • java, javax
dongjinleekr

comment created time in 7 minutes

create branch pytorch/pytorch

branch : malfet/cp-46710

created branch time in 8 minutes
