If you are wondering where the data on this site comes from, please visit https://api.github.com/users/nhoughto/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

nhoughto/ffmpeg-linux-static 1

A repo to wrap https://johnvansickle.com/ffmpeg/ builds and package them as zips instead of tar.xz so they are easier to use in Gradle

dearnavneet/IL-ISSUE-TRACKER-REPO 0

Issue Tracker for the Integration Layer Issues

nhoughto/camel 0

Mirror of Apache Camel

nhoughto/coveralls-jacoco-gradle-plugin 0

Coveralls JaCoCo Gradle plugin

nhoughto/dd-trace-java 0

Datadog APM client for Java

nhoughto/enhancements 0

Features tracking repo for Kubernetes releases

nhoughto/fake-gcs-server 0

Google Cloud Storage emulator & testing library.

nhoughto/gradle-gcs-build-cache 0

A Gradle build cache that uses Google Cloud Storage to store build artifacts

nhoughto/gradle-node-plugin 0

Gradle plugin for integrating NodeJS in your build. :rocket:

issue closed getsandbox/feedback

UKEXIMCRYPTO

The United Khulmi Export Import Private Limited (Ukexim.Pvt.Ltd) is a company of an International Websites Affiliate Marketing since October 2013 and I am the Developer of Bitcoin, Blockchain Technology and Cryptocurrency, Paypal Sandbox, Github, Hyperwallet, Braintree Sandbox, Privateplacement.com, Marketbeat.com, Investing.com under Fusion Ltd. AWS, FBS, Clickfunnel, Cloudflares, Salesforce, Atlassian, GoogleLLC, Twitter, Linkin.com, Linkedin, Accointing.com, in India.

closed time in 22 days

mongoloidkhulmikuki366385

issue comment node-gradle/gradle-node-plugin

Support (or don't preclude) the Gradle Worker API

Yep, that's what I'm doing. I hit a wall with NodeExecConfig and closed classes though, so I'll have another look when they get opened 👍🏼 On 30 Aug 2021, 10:41 PM +1000, Alex Nordlund wrote:

NodeExecConfiguration doesn't necessarily need Project though :-) In your case I think you might want to have Gradle inject the services that are in ProjectApiHelper instead of depending on our internal helper; the link provided shows how to get FileSystemOperations, but you could also use ProjectApiHelper as a reference.

nhoughto

comment created time in 22 days

issue comment node-gradle/gradle-node-plugin

Support (or don't preclude) the Gradle Worker API

So the current plan is basically to reimplement YarnTask as a WorkAction with WorkParameters; to do that you have to work within extra constraints:

https://docs.gradle.org/current/userguide/custom_tasks.html#worker_api

Basically you can only deal with managed objects, so that Gradle can reason about all the dependencies etc. No more just passing arbitrary stuff around.

It's a bit tricky to POC with closed classes, but my guess is that getting hold of the NodeExecConfiguration and configuring it won't be possible, as it requires the Project, and accessing Project is a no-no (I'm pretty sure). They really want you to describe all inputs to the task in the work parameters, and those basically need to be serialisable, so no Project, NodeExtension, etc.

That's why the ProjectApiHelper abstraction is great: it can be satisfied by things available in the Worker API.

Open classes will definitely help 👍🏼 Thanks. On 30 Aug 2021, 8:50 PM +1000, Alex Nordlund wrote:

Opening up the classes makes sense, but I'm curious about what ProjectApiHelper would help you solve. The way we use it is to get compatibility with Gradle 5 and 6+ at the same time; it's pretty specifically tailored to just help us call the right thing depending on the Gradle version. If you're looking at implementing something, I'm guessing the abstraction is less useful and you probably just want to use things like ObjectFactory, ExecOperations and FileSystemOperations directly. The closed classes are going to get opened, but I haven't had enough time to get working on it yet. I'm curious, though, why NodeExtension relying on Project is a blocker. Do you intend to use it as a general data class?

nhoughto

comment created time in 23 days

issue opened node-gradle/gradle-node-plugin

Support (or don't preclude) the Gradle Worker API

One inherent limitation of Gradle projects is that only a single task can run at any given time; the Worker API (https://docs.gradle.org/current/userguide/worker_api.html#converting_to_the_worker_api) allows many jobs to be run within a single task, though using the Worker API requires specifically coding for it.

The current 3.x, by accident (I assume?), has a number of the right abstractions to make using the Worker API possible, but, similar to https://github.com/node-gradle/gradle-node-plugin/issues/190, having classes not be open makes extending the various innards of the library hard or impossible. For example, the ProjectApiHelper abstraction solves a large part of the problems implementors normally have with the Worker API, but closed classes and NodeExtension relying on Project are two other blockers.

Since the use of the Worker API is pretty specific (for example, running N Jest test suites in parallel) and the intent of this library is quite broad (run npm/yarn stuff), I don't expect you would do the implementation to support this; maybe just make it more possible for this library to be extended to support it? Open to PRs?

created time in 23 days

pull request comment tehlers/gradle-gcs-build-cache

Use ADC in place of Service Account if no creds specified

Yeah, I just forked it and built it, same code as per the PR, and just reference a local JAR instead of a Maven dependency. Not great, but functional.

nhoughto

comment created time in a month

pull request comment tehlers/gradle-gcs-build-cache

Use ADC in place of Service Account if no creds specified

Nope, no update. I guess it's abandoned / low priority?

nhoughto

comment created time in a month

issue comment getsandbox/worker-cli

Worker-cli is running on 2 ports

I'm confused, do you want it to run on 1 port or 2? If you override the default port with the desired value, what do you need the custom port for?

lqnhat97

comment created time in 2 months

issue closed getsandbox/worker-cli

Do you still maintain this repository and update it to latest version used in Sandbox product?

Hello, thank you for the impressive work, and sorry for my rude question. I would like to know if the repository is still maintained, because I would like to build another project on top of this worker-cli, so knowing the status of this project is important to me.

closed time in 2 months

cavoirom

issue comment getsandbox/worker-cli

Do you still maintain this repository and update it to latest version used in Sandbox product?

It is maintained, though this repository isn't as tightly related to the actually running product as it used to be. Changes can happen and be deployed there without necessarily being reflected here, which unfortunately leads to drift.

cavoirom

comment created time in 2 months

issue comment getsandbox/worker-cli

Worker-cli is running on 2 ports

Seems simple enough. Could a workaround be java -Dapp.worker.port=<customPort> -jar <worker>.jar without the --port?

lqnhat97

comment created time in 2 months

issue comment gradle/gradle

Incorrect cache behavior when file is a symlink

Interesting, we use yarn install; maybe yarn and npm are different?

ketan

comment created time in 2 months

issue comment gradle/gradle

Incorrect cache behavior when file is a symlink

Just doing this: tar -cf ${tarredYarnDependencies} node_modules

ketan

comment created time in 2 months
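As a side note on the symlink thread above: tar stores a symlink as a link entry by default rather than following it, so an archive of node_modules records the link itself, not the target's bytes, which is exactly what matters for cache-key comparisons. A minimal illustration using Python's tarfile (same default behaviour as the tar command quoted; the file names here are invented for the demo):

```python
import os
import tarfile
import tempfile

# Build a tiny tree containing a symlink, tar it, and confirm the
# archive records the symlink as a link, not as the target's content.
with tempfile.TemporaryDirectory() as tmp:
    os.makedirs(os.path.join(tmp, "node_modules"))
    with open(os.path.join(tmp, "node_modules", "real.js"), "w") as f:
        f.write("module.exports = 1;\n")
    os.symlink("real.js", os.path.join(tmp, "node_modules", "link.js"))

    archive = os.path.join(tmp, "deps.tar")
    with tarfile.open(archive, "w") as tar:
        # dereference=False is the default: symlinks stay symlinks.
        tar.add(os.path.join(tmp, "node_modules"), arcname="node_modules")

    with tarfile.open(archive) as tar:
        member = tar.getmember("node_modules/link.js")
        print(member.issym(), member.linkname)  # → True real.js
```

Because only the link path is archived, two trees whose symlinks point at different content can still produce byte-identical archives, which is one way a symlink can confuse content-based caching.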

created tag bsycorp/kees

tag 0.1.32

created time in 2 months

push event bsycorp/kees

Nick Houghton

commit sha 98f860026fbda2d8e64c55bd8258f9525e1b0ab4

Fix concurrent RSA secret generation

view details

push time in 2 months

issue closed DataDog/dd-trace-java

Noop CI Info throws NPE when finding git directory

Testing out the CI Visibility product, and having a problem locally (when not in CI) trying to get the JUnit5 integration going. There seems to be a provision for not running in CI via NoopCIInfo, but the use of that class fails with an NPE during initialisation of the TracingListener for JUnit5. Stacktrace:

 java.lang.ExceptionInInitializerError
        at datadog.trace.instrumentation.junit5.TracingListener.executionStarted(TracingListener.java:33)
        at org.junit.platform.launcher.core.TestExecutionListenerRegistry$CompositeTestExecutionListener.lambda$executionStarted$6(TestExecutionListenerRegistry.java:99)
        at org.junit.platform.launcher.core.TestExecutionListenerRegistry.lambda$notifyEach$1(TestExecutionListenerRegistry.java:67)
        at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
        at org.junit.platform.launcher.core.TestExecutionListenerRegistry.notifyEach(TestExecutionListenerRegistry.java:65)
        at org.junit.platform.launcher.core.TestExecutionListenerRegistry.access$200(TestExecutionListenerRegistry.java:32)
        at org.junit.platform.launcher.core.TestExecutionListenerRegistry$CompositeTestExecutionListener.executionStarted(TestExecutionListenerRegistry.java:99)
        at org.junit.platform.launcher.core.ExecutionListenerAdapter.executionStarted(ExecutionListenerAdapter.java:46)
        at org.junit.platform.launcher.core.DelegatingEngineExecutionListener.executionStarted(DelegatingEngineExecutionListener.java:41)
        at org.junit.platform.launcher.core.OutcomeDelayingEngineExecutionListener.executionStarted(OutcomeDelayingEngineExecutionListener.java:53)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:123)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
        at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:185)
        at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.executeNonConcurrentTasks(ForkJoinPoolHierarchicalTestExecutorService.java:155)
        at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.invokeAll(ForkJoinPoolHierarchicalTestExecutorService.java:135)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
        at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
        at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
        at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
        at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:185)
        at java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
        at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:295)
        at java.base/java.util.concurrent.ForkJoinTask.doJoin(ForkJoinTask.java:401)
        at java.base/java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:726)
        at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.joinConcurrentTasksInReverseOrderToEnableWorkStealing(ForkJoinPoolHierarchicalTestExecutorService.java:162)
        at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.invokeAll(ForkJoinPoolHierarchicalTestExecutorService.java:136)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
        at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
        at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
        at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
        at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:185)
        at java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
        at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:295)
        at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1016)
        at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1665)
        at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1598)
        at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
    Caused by: java.lang.NullPointerException
        at java.base/java.util.Objects.requireNonNull(Objects.java:208)
        at java.base/sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:260)
        at java.base/java.nio.file.Path.of(Path.java:147)
        at java.base/java.nio.file.Paths.get(Paths.java:69)
        at datadog.trace.bootstrap.instrumentation.ci.CIProviderInfo.<init>(CIProviderInfo.java:35)
        at datadog.trace.bootstrap.instrumentation.ci.NoopCIInfo.<init>(NoopCIInfo.java:9)
        at datadog.trace.bootstrap.instrumentation.ci.CIProviderInfo.selectCI(CIProviderInfo.java:105)
        at datadog.trace.bootstrap.instrumentation.decorator.TestDecorator.<init>(TestDecorator.java:32)
        at datadog.trace.instrumentation.junit5.JUnit5Decorator.<init>(JUnit5Decorator.java:10)
        at datadog.trace.instrumentation.junit5.JUnit5Decorator.<clinit>(JUnit5Decorator.java:12)
        ... 45 more

closed time in 2 months

nhoughto

issue comment DataDog/dd-trace-java

Noop CI Info throws NPE when finding git directory

Thanks!

nhoughto

comment created time in 2 months

issue opened micronaut-projects/micronaut-data

Position on extensions to data-processor via ServiceLoaders

We are looking at building on micronaut-data and extending the AOT codegen behaviours to reduce a bunch of our boilerplate around custom pagination, ordering, filtering, etc. in our project. We expect to do this by tweaking the compile-time AOT data-processor via ServiceLoaders.

It seems like we should be able to do this with the extension points that already exist; this is more a question about the project's position on this kind of thing. Is there a plan to support this kind of extension in the future via a more out-of-the-box approach?

created time in 2 months

issue comment Backblaze/terraform-provider-b2

Support destroying non-empty buckets

Excellent, thanks! I imagine 'don't fail if the bucket is already destroyed' is easier than cleaning and removing a bucket. 👍

Wrayos

comment created time in 2 months

issue comment Backblaze/terraform-provider-b2

Support destroying non-empty buckets

But our terraform definition, which it is important to keep correct, has N b2_bucket resources, one for each bucket. That definition is used in all our environments to provision the environment before it is used. I don't want a 'production terraform definition' and an 'other environments terraform definition'; I have one definition and do everything as much like production as possible to ensure consistency, which means every environment gets N buckets, and that's how I want it.

If I hit the 100-bucket limit regularly I will be requesting the limit be increased =) This is definitely the right approach for us, so supporting it would be great.

Wrayos

comment created time in 2 months

issue comment Backblaze/terraform-provider-b2

Support destroying non-empty buckets

Because we use Terraform (not surprising 😬) to spin up ephemeral CI and review-app environments to conduct automated and exploratory testing, and the bucket is named after the environment. The environment is generated and ephemeral, so to provision the resources of these ephemeral environments we need ephemeral buckets, which means deleting buckets when the environment is removed. We can't just not delete them, as we would hit the 100-bucket maximum (which has happened a few times).

We could clean and reuse buckets, but they would have the wrong names, and the complexity of injecting this state into Terraform during provisioning is painful, so we just keep Terraform happy and let it provision/destroy the buckets. But at the moment it can't delete the bucket, because of this issue =)

Wrayos

comment created time in 2 months

issue comment Backblaze/terraform-provider-b2

Support destroying non-empty buckets

We delete buckets constantly, it's a regular hourly occurrence, and this annoys us every day.

Wrayos

comment created time in 2 months

issue comment Backblaze/terraform-provider-b2

Support destroying non-empty buckets

It's actually surprisingly difficult to delete a bucket (which I guess is by design?), so it would be wonderful if it became a vendor problem and you solved it for me =)

You've got to solve for:

  • All the objects, obviously, paginating the list-objects call (max 1000 objects per call)
  • Versioned objects
  • Multipart uploads
  • Any of the three above that were created while you were deleting them, since there is no way to block writes to a bucket
Wrayos

comment created time in 2 months
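The checklist in the comment above can be sketched roughly as a drain-then-delete loop. This is an illustration in Python against an S3-style client object (method names follow the S3-compatible API; the client is passed in and hypothetical, so treat it as a sketch of the loop, not a drop-in implementation):

```python
def empty_and_delete_bucket(s3, bucket):
    """Drain a bucket, then delete it. Loops until a full pass deletes
    nothing, because new writes can land mid-delete (there is no way to
    block writes to a bucket)."""
    while True:
        deleted = 0

        # All object versions: covers plain objects, versioned objects and
        # delete markers. The list call pages (max 1000 keys per call), so
        # re-listing after each delete pass walks the whole keyspace.
        resp = s3.list_object_versions(Bucket=bucket)
        targets = [
            {"Key": v["Key"], "VersionId": v["VersionId"]}
            for v in resp.get("Versions", []) + resp.get("DeleteMarkers", [])
        ]
        if targets:
            s3.delete_objects(Bucket=bucket, Delete={"Objects": targets})
            deleted += len(targets)

        # In-flight multipart uploads, which the object listing won't show.
        for up in s3.list_multipart_uploads(Bucket=bucket).get("Uploads", []):
            s3.abort_multipart_upload(
                Bucket=bucket, Key=up["Key"], UploadId=up["UploadId"]
            )
            deleted += 1

        if deleted == 0:
            break  # a clean pass: nothing left and nothing new arrived

    s3.delete_bucket(Bucket=bucket)
```

The outer while loop is what handles the fourth bullet: objects written during the delete show up on the next listing pass, and only a pass that finds nothing at all lets the bucket be deleted.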

issue opened Backblaze/terraform-provider-b2

Don't fail deleting an already deleted bucket

Currently, when terraform destroy removes a terraform-managed bucket, the provider fails with an error if the bucket has already been deleted.

The error is: Error: No such bucket: 629955074482c...

Since we are trying to delete the bucket and the bucket is already deleted, a more user-friendly behaviour would be to just complete successfully; this is the default behaviour for many providers, including the AWS S3 bucket resource. At the moment the workaround is to run terraform state rm on the resource manually to fix the problem, which is obviously not ideal.

The reason the bucket is deleted outside of Terraform is related to https://github.com/Backblaze/terraform-provider-b2/issues/22, so a solution to either would work =)

created time in 2 months
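The behaviour requested above amounts to a guard around the delete call: if the resource is already gone, the desired end state holds, so report success. A rough Python illustration with a hypothetical client object (the real provider is written in Go, and the exact error string may differ):

```python
def destroy_bucket(client, bucket_id):
    """Idempotent destroy: succeeding when the bucket is already gone
    matches the behaviour of e.g. the AWS S3 bucket resource."""
    try:
        client.delete_bucket(bucket_id)
    except Exception as e:
        # "No such bucket" means it was deleted out-of-band; the desired
        # end state (no bucket) holds, so treat this as success.
        if "No such bucket" in str(e):
            return
        raise  # any other failure (permissions, network, ...) still surfaces
```

The key design point is that only the "already deleted" error is swallowed; every other failure still propagates, so the provider doesn't silently mask real problems.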

issue opened DataDog/dd-trace-java

dd-trace-ot doesn't depend on dogstatsd from 0.83, throws ClassNotFound

It appears that from version 0.83 onwards the dependencies for dd-trace-ot have been tweaked, with dogstatsd-client being removed; this is obviously required for the Tracer to function. I expect that the majority of users are using the Tracer via automatic instrumentation with the java-agent and thus aren't using dd-trace-ot directly; the java-agent obviously bundles everything appropriately.

Version 0.82 and earlier had correct dependencies, allowing use of the Tracer without a java agent, which is our setup; it would be good if the dependencies could be corrected so this can continue going forward. Otherwise we will have to specify the dependencies manually and risk diverging from what the package actually expects.

created time in 2 months

issue comment DataDog/dd-trace-java

Noop CI Info throws NPE when finding git directory

Yep, Java 16. On 21 Jul 2021, 4:34 PM +1000, Lev Priima wrote:

Are you using Java 16 locally? Before 16, Paths.get(null, "something") returned the invalid path null/something, but they made it throw an exception as specified: https://bugs.openjdk.java.net/browse/JDK-8254876

nhoughto

comment created time in 2 months

issue comment DataDog/dd-trace-java

Agent doesn't start with read-only filesystem

Way late on the update, sorry about that! But yes, version 0.83.2 does start with a read-only container. I'm sure it started working earlier than this, but I just tested the latest one, and success!

Are the limitations of a read-only agent understood? How do I know what isn't going to work?

nhoughto

comment created time in 2 months

issue opened DataDog/dd-trace-java

Noop CI Info throws NPE when finding git directory

Testing out the CI Visibility product, and having a problem locally (when not in CI) trying to get the JUnit5 integration going. There seems to be a provision for not running in CI via NoopCIInfo, but the use of that class fails with an NPE during initialisation of the TracingListener for JUnit5. Stacktrace:

 java.lang.ExceptionInInitializerError
        at datadog.trace.instrumentation.junit5.TracingListener.executionStarted(TracingListener.java:33)
        at org.junit.platform.launcher.core.TestExecutionListenerRegistry$CompositeTestExecutionListener.lambda$executionStarted$6(TestExecutionListenerRegistry.java:99)
        at org.junit.platform.launcher.core.TestExecutionListenerRegistry.lambda$notifyEach$1(TestExecutionListenerRegistry.java:67)
        at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
        at org.junit.platform.launcher.core.TestExecutionListenerRegistry.notifyEach(TestExecutionListenerRegistry.java:65)
        at org.junit.platform.launcher.core.TestExecutionListenerRegistry.access$200(TestExecutionListenerRegistry.java:32)
        at org.junit.platform.launcher.core.TestExecutionListenerRegistry$CompositeTestExecutionListener.executionStarted(TestExecutionListenerRegistry.java:99)
        at org.junit.platform.launcher.core.ExecutionListenerAdapter.executionStarted(ExecutionListenerAdapter.java:46)
        at org.junit.platform.launcher.core.DelegatingEngineExecutionListener.executionStarted(DelegatingEngineExecutionListener.java:41)
        at org.junit.platform.launcher.core.OutcomeDelayingEngineExecutionListener.executionStarted(OutcomeDelayingEngineExecutionListener.java:53)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:123)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
        at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:185)
        at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.executeNonConcurrentTasks(ForkJoinPoolHierarchicalTestExecutorService.java:155)
        at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.invokeAll(ForkJoinPoolHierarchicalTestExecutorService.java:135)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
        at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
        at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
        at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
        at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:185)
        at java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
        at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:295)
        at java.base/java.util.concurrent.ForkJoinTask.doJoin(ForkJoinTask.java:401)
        at java.base/java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:726)
        at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.joinConcurrentTasksInReverseOrderToEnableWorkStealing(ForkJoinPoolHierarchicalTestExecutorService.java:162)
        at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.invokeAll(ForkJoinPoolHierarchicalTestExecutorService.java:136)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
        at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
        at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
        at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
        at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
        at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:185)
        at java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
        at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:295)
        at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1016)
        at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1665)
        at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1598)
        at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
    Caused by: java.lang.NullPointerException
        at java.base/java.util.Objects.requireNonNull(Objects.java:208)
        at java.base/sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:260)
        at java.base/java.nio.file.Path.of(Path.java:147)
        at java.base/java.nio.file.Paths.get(Paths.java:69)
        at datadog.trace.bootstrap.instrumentation.ci.CIProviderInfo.<init>(CIProviderInfo.java:35)
        at datadog.trace.bootstrap.instrumentation.ci.NoopCIInfo.<init>(NoopCIInfo.java:9)
        at datadog.trace.bootstrap.instrumentation.ci.CIProviderInfo.selectCI(CIProviderInfo.java:105)
        at datadog.trace.bootstrap.instrumentation.decorator.TestDecorator.<init>(TestDecorator.java:32)
        at datadog.trace.instrumentation.junit5.JUnit5Decorator.<init>(JUnit5Decorator.java:10)
        at datadog.trace.instrumentation.junit5.JUnit5Decorator.<clinit>(JUnit5Decorator.java:12)
        ... 45 more

created time in 2 months

started zalando/opentracing-toolbox

started time in 2 months

issue opened DataDog/dd-trace-js

Cypress tracing support

I am playing around with the Datadog CI Visibility feature, and have successfully traced Jest tests and some other frameworks; the only outstanding framework, which I have to upload manually via JUnit reports, is Cypress. So it would be good to get Cypress tracing supported 👍

created time in 2 months