
ttddyy/datasource-proxy 356

Provide listener framework for JDBC interactions and query executions via proxy.

r2dbc/r2dbc-proxy 40

R2DBC Proxying Framework

ttddyy/evernote-rest-webapp 26

Stateless MicroService Web Application which provides Restful APIs for Evernote.

ttddyy/datasource-proxy-r2dbc 23

proxy library for R2DBC-SPI

ttddyy/datasource-proxy-examples 19

examples for how to use datasource-proxy

ttddyy/datasource-assert 10

Provides assertion APIs for query executions with assertEquals and assertThat (AssertJ and Hamcrest)

ttddyy/demo 3

demo code

ttddyy/r2dbc-proxy-examples 3

Samples for r2dbc-proxy

ttddyy/junit5-extension-wait 2

JUnit5 Jupiter extension to delay finishing the test method

kmjung/xenon-workshop-samples 1

This repo contains sample code for Xenon coding workshops.

issue opened spring-projects/spring-framework

ExecutorConfigurationSupport API refinement to control internal executor shutdown

Currently, ExecutorConfigurationSupport#shutdown() encapsulates multiple scenarios for shutting down the internal ExecutorService.

The single method shutdown() performs shutdown()/shutdownNow() (non-blocking) and awaitTermination() (blocking) based on its properties.

I am writing graceful shutdown logic for task executors/schedulers. The logic for graceful shutdown is to retrieve all task executors/schedulers and apply:

  • Call shutdown() to stop accepting any more tasks
  • Wait for currently running tasks for the duration of the grace period

With the currently available API, I need to do the following:

Instant deadline = start.plus(gracefulShutdownTimeout);

// stop accepting any more tasks while keeping active ones running
for (ExecutorConfigurationSupport executorConfigurationSupport : this.executorConfigurationSupports) {
	executorConfigurationSupport.setWaitForTasksToCompleteOnShutdown(true);
	executorConfigurationSupport.shutdown(); // non-blocking
}

// Previously, shutdown() was called on the executors; so, no new tasks are scheduled.
// Now, call "awaitTermination()" to wait for current tasks to finish while
// the container shuts down in parallel.
for (ExecutorConfigurationSupport executorConfigurationSupport : this.executorConfigurationSupports) {
	int awaitTerminationSeconds = Math.toIntExact(Duration.between(Instant.now(), deadline).getSeconds());
	executorConfigurationSupport.setAwaitTerminationSeconds(awaitTerminationSeconds);
	executorConfigurationSupport.shutdown();  // blocking
}

Since this calls shutdown() twice with different parameters in order to achieve shutdown() and awaitTermination() on the underlying executor, it is not very intuitive. It also requires knowing the details of what ExecutorConfigurationSupport#shutdown() does.

Another workaround is to retrieve internal ExecutorService and call shutdown() and awaitTermination().

List<ExecutorService> executorServices = new ArrayList<>();

for (ExecutorConfigurationSupport executorConfigurationSupport : this.executorConfigurationSupports) {
	if (executorConfigurationSupport instanceof ThreadPoolTaskExecutor) {
		executorServices.add(((ThreadPoolTaskExecutor)executorConfigurationSupport).getThreadPoolExecutor());
	}
	else if (executorConfigurationSupport instanceof ThreadPoolTaskScheduler) {
		executorServices.add(((ThreadPoolTaskScheduler)executorConfigurationSupport).getScheduledExecutor());
	}
}

for(ExecutorService executorService : executorServices) {
	executorService.shutdown();
}

for(ExecutorService executorService : executorServices) {
	executorService.awaitTermination(...);
}

I think it would be nice to have some API refinement on task executor/scheduler to easily control the underlying ExecutorService.

A simple solution is to add a getter to ExecutorConfigurationSupport that exposes the internal ExecutorService. This way, in addition to the existing shutdown(), if the user needs finer control over shutdown, the getter can expose the ExecutorService. Another way is to provide blocking (awaitTermination) and non-blocking (shutdown/shutdownNow) methods on ExecutorConfigurationSupport instead of, or in addition to, the current shutdown() method.
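As a plain-Java illustration of the desired shutdown flow (shut everything down first, then await against a shared deadline), here is a sketch using raw java.util.concurrent types rather than the Spring classes; none of these names are part of any proposed API:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GracefulShutdown {

    // Phase 1: stop accepting new tasks on every executor (non-blocking).
    // Phase 2: wait for in-flight tasks against one shared deadline (blocking).
    static boolean shutdownAll(List<ExecutorService> executors, Duration gracePeriod)
            throws InterruptedException {
        Instant deadline = Instant.now().plus(gracePeriod);
        for (ExecutorService executor : executors) {
            executor.shutdown();  // non-blocking
        }
        boolean allTerminated = true;
        for (ExecutorService executor : executors) {
            long remainingMillis = Math.max(0, Duration.between(Instant.now(), deadline).toMillis());
            allTerminated &= executor.awaitTermination(remainingMillis, TimeUnit.MILLISECONDS);
        }
        return allTerminated;
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> { /* short task */ });
        boolean done = shutdownAll(List.of(pool), Duration.ofSeconds(5));
        System.out.println(done);
    }
}
```

The deadline is computed once, so later executors only get whatever time is left after earlier ones finish awaiting.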

created time in 13 days

PR opened spring-projects/spring-framework

ExecutorConfigurationSupport to take Duration for await termination period

I noticed ExecutorConfigurationSupport (ThreadPoolTask[Executor|Scheduler]) only takes seconds for await termination.

I'm writing graceful shutdown logic for a k8s environment, and awaiting by whole seconds is rather coarse granularity for controlling the shutdown/await. Graceful shutdown gets triggered by the liveness probe, and the probe interval is fairly short.

This PR changes the minimum unit to milliseconds and adds a method that takes a Duration to specify the await termination period.
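For illustration, awaiting by Duration instead of whole seconds boils down to the following plain-Java sketch (independent of the actual PR changes; it uses only java.util.concurrent):

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AwaitByDuration {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.shutdown();
        // Awaiting by Duration keeps sub-second precision (e.g. 250ms),
        // which a seconds-only property cannot express.
        Duration timeout = Duration.ofMillis(250);
        boolean terminated = executor.awaitTermination(timeout.toMillis(), TimeUnit.MILLISECONDS);
        System.out.println(terminated);
    }
}
```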

+17 -6

0 comment

1 changed file

pr created time in 13 days

create branch ttddyy/spring-framework

branch : await-duration

created branch time in 13 days

issue comment spring-projects/spring-boot

Investigate liveness and readiness support for Kubernetes

Hi @wilkinsona

The point at which an application becomes ready to serve traffic is not necessarily tied to the ApplicationContext lifecycle. If an application is well behaved and fits into the spring lifecycle, application developers would put initialization logic into a [Command|Application]Runner; however, that is not something we can enforce. Also, receiving ApplicationReadyEvent doesn't always mean the application is ready to serve traffic; it rather means it is ready to perform application logic. The ready-to-serve flag might depend on external resources. An application may connect to a cache cluster upon ApplicationReadyEvent, form the cluster, and warm up the local cache, and only then become ready to serve traffic. So, we think it is the application's responsibility to decide when to flip the ready flag.

From a spring-boot perspective, I think it is OK to set ready=true at ApplicationReadyEvent by default. But there needs to be a way to disable the default ready event and let users flip the ready flag manually, so that the application can determine the timing.

snicoll

comment created time in 18 days

issue opened spring-io/initializr

Include Dockerfile

The recent spring blog post "Creating Docker images with Spring Boot 2.3.0.M1" advocates using an exploded layout and layering when creating a container image.

I checked our applications. Some have applied a custom script to build layered images, but some are just using java -jar.

It would be nice for Initializr to include such a Dockerfile when the user chooses to create a container image with a Dockerfile.

I see such a request was declined before (#712), but is that still the case now?

If it were available on start.spring.io, the exploded layout with layering would become common practice for spring-boot applications.

created time in 18 days

issue comment spring-projects/spring-boot

Investigate liveness and readiness support for Kubernetes

Hi,

I just want to put some input for how we implemented our readiness and liveness. It's pretty much similar to what @bclozel commented above.

Readiness:

We have a ReadinessEndpoint class that simply keeps a boolean value and also takes HealthIndicator beans for readiness. (Built on spring-boot 2.1, so not using indicator groups yet.)

This class is an application context event listener that receives our ApplicationReadinessEvent. The initial value for this readiness bean is NOT_READY because we do NOT want traffic until the application is ready to serve. Once the application starts up and has bootstrapped the necessary things, the user fires ApplicationReadinessEvent with value=READY, and readiness then starts returning READY. The application needs to decide when to issue the readiness event (value=READY) because being ready to serve traffic is up to the application.

The other place that issues a readiness event is our graceful shutdown logic. When graceful shutdown is initiated (e.g. by receiving ContextClosedEvent), the first thing we do is fire ApplicationReadinessEvent with value=NOT_READY. This stops any more requests from arriving while shutdown is in progress.

Liveness:

Similar to the readiness class, we have a simple boolean to indicate LIVE/NOT_LIVE. The differences from readiness are that the initial value is set to LIVE, and no event needs to be issued to change the liveness status right after the application is bootstrapped.

There are also considerations around the initial delay and frequency (period) of the readiness/liveness probes in the k8s config. Once the initial delay has passed, we check readiness more often than liveness. This frequency (period) may also affect the duration of graceful shutdown.

Currently, our readiness and liveness are implemented as actuator Endpoints, but it does not have to be this way. As long as there is a boolean value that keeps the state and receives application context events, it can be a service bean. Then, if actuator is available, it can be put into a HealthIndicator (HealthContributor) and become part of the respective readiness/liveness health groups.
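The description above reduces to two small state holders, sketched here in plain Java (class and method names are invented for illustration; the real implementation plugs into Spring's event listener and actuator infrastructure):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical stand-ins for the readiness/liveness state holders described above.
class ReadinessState {
    // Starts NOT_READY: no traffic until the application flips it.
    private final AtomicBoolean ready = new AtomicBoolean(false);

    void onReadinessEvent(boolean value) { this.ready.set(value); }
    boolean isReady() { return this.ready.get(); }
}

class LivenessState {
    // Starts LIVE: no event needed right after bootstrap.
    private final AtomicBoolean live = new AtomicBoolean(true);

    void onLivenessEvent(boolean value) { this.live.set(value); }
    boolean isLive() { return this.live.get(); }
}

public class ProbeStatesDemo {
    public static void main(String[] args) {
        ReadinessState readiness = new ReadinessState();
        LivenessState liveness = new LivenessState();
        System.out.println(readiness.isReady() + " " + liveness.isLive());
        readiness.onReadinessEvent(true);   // application decides it is ready
        System.out.println(readiness.isReady());
        readiness.onReadinessEvent(false);  // graceful shutdown initiated
        System.out.println(readiness.isReady());
    }
}
```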

snicoll

comment created time in 18 days

pull request comment spring-projects/spring-boot

Add DeferredImportSelector that runs before/after auto configuration

@philwebb Thanks for the feedback.

The first approach is a bit cumbersome for us, since it creates such configuration extension points for each feature and requires considering rather many permutations of possibilities.

So, I'll probably go with the second suggestion since we need @Conditional and opt-in capability. Putting our configurations in auto-configuration semantics is better for taking advantage of the auto-configuration infrastructure. I'll probably create a custom annotation, @Allow... or @Use..., to enable a feature, which puts an entry into an in-memory map or a bean, and a custom @Condition on our configurations that checks the map to decide whether to activate the @Configuration. Basically: auto-configurations disabled by default and enabled by annotation.
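A minimal plain-Java sketch of the in-memory registry idea described above (the class and method names are hypothetical; the real version would be consulted from a Spring @Condition and populated by annotation processing):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Plain-Java stand-in for the "in-memory map + custom @Condition" idea.
class FeatureRegistry {
    private static final Set<String> ENABLED = ConcurrentHashMap.newKeySet();

    // Would be driven by a hypothetical @AllowFeature("...") annotation.
    static void enable(String feature) { ENABLED.add(feature); }

    // Would be called by the custom Condition: configurations are off by default.
    static boolean matches(String feature) { return ENABLED.contains(feature); }
}

public class FeatureRegistryDemo {
    public static void main(String[] args) {
        System.out.println(FeatureRegistry.matches("tracing")); // disabled by default
        FeatureRegistry.enable("tracing");                      // opt-in via annotation
        System.out.println(FeatureRegistry.matches("tracing"));
    }
}
```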

I think writing a library on top of spring-boot is one way of consuming spring-boot that lets micro-service applications in an organization streamline features and adopt the laid-out infrastructure. Keeping this aspect in mind while evolving spring-boot would be appreciated.

Thanks,

ttddyy

comment created time in 23 days

issue comment spring-projects/spring-framework

PostProcessorRegistrationDelegate makes needed sorting impossible

Hi @rubasace

As a workaround in your case, you can create a custom BeanPostProcessor that simply inherits/composes the one from open-tracing, with ordering in place.

For example, assuming that what you are using is TracingWebClientBeanPostProcessor

public class MyTracingWebClientBeanPostProcessor extends TracingWebClientBeanPostProcessor implements Ordered {
  ...
}

If you are using spring-boot auto-configuration, to exclude the BPP bean coming from auto-configuration, either exclude the WebClientTracingAutoConfiguration itself, or, if you really want to disable only the TracingWebClientBeanPostProcessor bean registered by auto-configuration, you can remove the bean definition for the TracingWebClientBeanPostProcessor.

e.g.:

public class UnregisterBeanDefinitionBeanFactoryPostProcessor implements BeanFactoryPostProcessor, Ordered {

	// bean names to unregister (filled in here as an example)
	private final List<String> beanNames = List.of("tracingWebClientBeanPostProcessor");

	@Override
	public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {
		BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;
		for (String beanName : this.beanNames) {
			if (beanName.equals("tracingWebClientBeanPostProcessor")) {
				registry.removeBeanDefinition(beanName);
			}
		}
	}

	@Override
	public int getOrder() {
		// run early so the definition is removed before bean creation
		return Ordered.HIGHEST_PRECEDENCE;
	}
}

This way, even though auto-configuration registers the BPP from open-tracing, it is removed from the beanFactory and the one you specified is used. (https://github.com/spring-projects/spring-boot/issues/18228)

I have a Java library that uses open-tracing, and we have a case where we need to replace the one defined in the open-tracing library; hope this helps with your workaround.

rubasace

comment created time in 25 days

issue comment r2dbc/r2dbc-proxy

Issue when use the proxy on a ConnectionPool

Thanks @mp911de

I have also updated r2dbc-proxy to avoid the NPE in the cancellation scenario.

Using r2dbc-proxy, r2dbc-postgresql=0.8.1.BUILD-SNAPSHOT, and r2dbc-pool=0.8.0, I don't see failures in my quick local test.

@deblockt Can you try with r2dbc-proxy and r2dbc-postgresql=0.8.1.BUILD-SNAPSHOT (or 0.9.0.BUILD-SNAPSHOT), or just wait a bit until 0.8.1 is released and try that?

deblockt

comment created time in a month

issue comment spring-projects/spring-security-oauth2-boot

Release for spring-boot 2.2.4

Thanks!!

ttddyy

comment created time in a month

pull request comment spring-projects/spring-security

Authorization Response should also match on query parameters

@jgrandja Hi, the PR has been updated. Can you take a look? Thanks.

ttddyy

comment created time in a month

push event r2dbc/r2dbc-proxy

Tadaya Tsuyukubo

commit sha 50abfa47c391331749abe6dcc7e9f818013b91e5

Avoid NPE when publisher operation is cancelled Avoid NPE when stopwatch is not started due to operation cancellation. [resolves #56]

view details

Tadaya Tsuyukubo

commit sha 2c60858e8123ba018d545a32508424bf9f184cd4

Update CHANGELOG

view details

push time in a month

push event r2dbc/r2dbc-proxy

Tadaya Tsuyukubo

commit sha 0eac30dc93ad2d675ad289c0e95f788719785cb3

Avoid NPE when publisher operation is cancelled Avoid NPE when stopwatch is not started due to operation cancellation. [resolves #56] (cherry picked from commit 50abfa47c391331749abe6dcc7e9f818013b91e5)

view details

Tadaya Tsuyukubo

commit sha af6e8ec6c70d6a132d2a474ba82fc8c8de755ead

Update CHANGELOG (cherry picked from commit 2c60858e8123ba018d545a32508424bf9f184cd4)

view details

push time in a month

issue opened spring-projects/spring-security-oauth2-boot

Release for spring-boot 2.2.4

Please release one for spring-boot 2.2.4.

Thanks,

created time in a month

issue comment ttddyy/datasource-proxy

Use unique connectionIds in DefaultConnectionIdManager

@gavlyukovskiy Released datasource-proxy 1.6, which includes GlobalConnectionIdManager. It is available in Maven Central.

gavlyukovskiy

comment created time in a month

issue opened spring-projects/spring-framework

Out of the box MDC support in WebClient

When using WebClient in a servlet environment, MDC is one of the pain points that requires boilerplate code. Since RestTemplate is in maintenance mode, more and more apps will choose WebClient. It would be very helpful if spring provided out-of-the-box MDC support for WebClient. Migrating from RestTemplate to WebClient would then be a smooth ride.

Since spring already detects the underlying logging framework, this could be implemented in a way that is agnostic to the actual logging framework.

What needs to be implemented:

  • Pass MDC values from the main thread to the reactor thread. In a servlet environment, MDC context values need to be propagated from the servlet thread to the reactor thread. To implement this, an ExchangeFilterFunction, or some sort of hook, should apply as the first action of the response operators.

  • Pass MDC values around via the subscriber context within reactor schedulers/operators. My suggestion to reactor-addons: https://github.com/reactor/reactor-addons/issues/219

If such boilerplate code were provided by spring, then spring-boot could auto-configure the MDC support.
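The first bullet, propagating MDC values from the servlet thread to the reactor thread, boils down to capturing the MDC map on the caller and restoring it on the worker. A minimal plain-Java sketch, with a ThreadLocal map standing in for the real MDC class:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MdcHandoffDemo {
    // Stand-in for org.slf4j.MDC: a per-thread map of diagnostic values.
    static final ThreadLocal<Map<String, String>> MDC =
            ThreadLocal.withInitial(HashMap::new);

    // Wrap a task so it runs with the submitting thread's MDC snapshot.
    static Runnable withMdc(Runnable task) {
        Map<String, String> snapshot = new HashMap<>(MDC.get()); // capture on caller
        return () -> {
            MDC.set(snapshot);          // restore on the worker ("reactor") thread
            try {
                task.run();
            } finally {
                MDC.remove();           // avoid leaking into pooled threads
            }
        };
    }

    public static void main(String[] args) throws Exception {
        MDC.get().put("traceId", "abc123");          // set on the "servlet" thread
        ExecutorService worker = Executors.newSingleThreadExecutor();
        worker.submit(withMdc(() ->
                System.out.println(MDC.get().get("traceId")))).get();
        worker.shutdown();
    }
}
```

An ExchangeFilterFunction (or a Reactor hook) would do this same capture/restore dance around the WebClient response pipeline.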

Relates to https://github.com/reactor/reactor-core/issues/1985

created time in a month

push event ttddyy/datasource-proxy

travis-ci

commit sha b39ae12d38631612f8139fcf18123a6d4ff0a89c

Add new user-guide and javadoc for version=1.6 by travis build 387

view details

push time in a month

push event ttddyy/datasource-proxy

Tadaya Tsuyukubo

commit sha a759bda4709618a734860fe058c51a6599e99da8

[maven-release-plugin] prepare for next development iteration

view details

push time in a month

created tag ttddyy/datasource-proxy

tag datasource-proxy-1.6

Provide listener framework for JDBC interactions and query executions via proxy.

created time in a month

push event ttddyy/datasource-proxy

Tadaya Tsuyukubo

commit sha e59b4d95d7c0ac3bcbd5e9cf54c40e74db958199

[maven-release-plugin] prepare release datasource-proxy-1.6

view details

push time in a month

push event ttddyy/datasource-proxy

travis-ci

commit sha 98d6ad0eaba8559b71fc997309c7e04df2e6173e

Latest documentation on successful travis build 385 auto-pushed to gh-pages

view details

push time in a month

push event ttddyy/datasource-proxy

Tadaya Tsuyukubo

commit sha 04b2a2b9dab6935fa54a2627481370aea38b0a23

Fix DataSourceQueryCountListenerTest

view details

push time in a month

push event ttddyy/datasource-proxy

Tadaya Tsuyukubo

commit sha bba29a8ac93ca302608bdb8301ced36dceed2c3b

Switch oraclejdk8 to openjdk8 in travis (cherry picked from commit e1bea8fcb20c5fb0197a8b722671dec54e6a092f)

view details

push time in a month

push event ttddyy/datasource-proxy

Tadaya Tsuyukubo

commit sha e1bea8fcb20c5fb0197a8b722671dec54e6a092f

Switch oraclejdk8 to openjdk8 in travis

view details

push time in a month

push event ttddyy/datasource-proxy

Tadaya Tsuyukubo

commit sha 4c6e17d2354cb680b2dd773b969421e1a95fbf36

Add GlobalConnectionIdManager Fixes #64

view details

push time in a month

issue closed ttddyy/datasource-proxy

Use unique connectionIds in DefaultConnectionIdManager

Hi,

I'm working on issue https://github.com/gavlyukovskiy/spring-boot-data-source-decorator/issues/41, where two different DataSource instances have generated the same connectionId. My first thought was to make the connectionId include some per-data-source unique string like dataSourceName, but it would be useful in general to generate JVM-unique values.

Also, I found that r2dbc-proxy has switched to using a unique id - https://github.com/r2dbc/r2dbc-proxy/blob/master/src/main/java/io/r2dbc/proxy/callback/DefaultConnectionIdManager.java - so why not make the same change here? :)

closed time in a month

gavlyukovskiy

push event ttddyy/datasource-proxy

Tadaya Tsuyukubo

commit sha cd885be724c70f7400fab8233df80dc89e5b4831

Add GlobalConnectionIdManager Fixes #64 (cherry picked from commit 4c6e17d2354cb680b2dd773b969421e1a95fbf36)

view details

Tadaya Tsuyukubo

commit sha eef4f7a5bf619de8d4accf6ce4b7a05ca0e9d297

Upgrade JUnit5 from 5.3.2 to 5.6.0

view details

Tadaya Tsuyukubo

commit sha 48563946ebf72d30890af795995957724b19d103

Upgrade TestContainers from 1.10.4 to 1.12.3

view details

Tadaya Tsuyukubo

commit sha 4b854ed53319a080d9b6fc92d14db89857177c67

Upgrade maven wrapper to 0.5.6

view details

Tadaya Tsuyukubo

commit sha a9c493eea303df487c8025a2046e6e60f9d9e011

Update GlobalConnectionIdManagerTest to use JUnit5

view details

Tadaya Tsuyukubo

commit sha c9bef47330717368f2e7b4191da504b290501e74

Use testcontainers BOM

view details

push time in a month

issue opened reactor/reactor-addons

Add-on to support MDC

Motivation

When an application reaches a certain size, MDC becomes a requirement from an operational perspective. I think it is an area lacking support right now. People usually search for how to achieve MDC and copy-paste the custom code into their applications. Instead of proliferating such code, it would be very beneficial to have official MDC support.

Desired solution

Here is implementation ideas for MDC support module:

  • Logging framework agnostic. Spring has logic to auto-detect major logging frameworks. A similar approach would be nice here, to make the MDC support agnostic to the user's choice of logging framework.

  • Feature registration. The MDC propagation logic can be registered via a hook, or even applied by bytecode manipulation, when the feature is enabled.
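A sketch of the feature-registration shape: one hook point that, once registered, decorates every scheduled task without callers changing. Reactor exposes a similar mechanism via Schedulers.onScheduleHook; the names below are purely illustrative:

```java
import java.util.function.UnaryOperator;

public class ScheduleHookDemo {
    // A single pluggable hook; identity by default (no decoration).
    static volatile UnaryOperator<Runnable> onScheduleHook = UnaryOperator.identity();

    // "Feature registration": install the decorator once, globally.
    static void registerHook(UnaryOperator<Runnable> hook) { onScheduleHook = hook; }

    // Every scheduled task passes through the hook transparently.
    static void runOnNewThread(Runnable task) throws InterruptedException {
        Thread t = new Thread(onScheduleHook.apply(task));
        t.start();
        t.join();
    }

    public static void main(String[] args) throws InterruptedException {
        registerHook(task -> () -> {
            System.out.println("decorated"); // e.g. restore MDC here
            task.run();
        });
        runOnNewThread(() -> System.out.println("task"));
    }
}
```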

Additional context

Relates: https://github.com/reactor/reactor-core/issues/1985

created time in a month

issue comment ttddyy/datasource-proxy

Use unique connectionIds in DefaultConnectionIdManager

Thanks for the tip about openIds. Yeah, having it in an instance field makes sense. I think I'll put GlobalConnectionIdManager into datasource-proxy. Let me try to do it next week and cut a release.

gavlyukovskiy

comment created time in a month

delete branch r2dbc/r2dbc-proxy

delete branch : 55-query-success

delete time in a month

push event r2dbc/r2dbc-proxy

Tadaya Tsuyukubo

commit sha cc9b402f614475a4599b22960e4d85a17e239e10

Make query execution success when at least one element is emitted Update "QueryExecutionInfo#isSuccess" to consider success when not only completion of source publisher, but also at least one element is emitted. This is because downstream consumer might cancel the publisher after they have received sufficient data. [resolves #55] (cherry picked from commit 991364c3624abc7397f78e42085bb18fbddcd66f)

view details

Tadaya Tsuyukubo

commit sha f7e032f89479fe47372940fe8bee51aa96046715

Add more @Nullable annotation Add more @Nullable annotation where it applies. (cherry picked from commit d61f2a48e4f43b263583e183328b053d6aa60eb5)

view details

Tadaya Tsuyukubo

commit sha 9382e37ac9bc333e78b4854841c72709b7ba2da6

Update CHANGELOG (cherry picked from commit e7efcb3ffe5d0c31926a0a44739d3f2581bb778f)

view details

push time in a month

issue closed r2dbc/r2dbc-proxy

"QueryExecutionInfo#isSuccess" is reported as failure with "ReactiveCrudRepository#save"

From gitter discussion: https://gitter.im/R2DBC/r2dbc?at=5e1c726865540a529a0c9366

I have reproduced the issue using H2 with ReactiveCrudRepository#save

@GetMapping("/insert")
//    @Transactional
public Mono<City> insert() {
  City city = new City("name", "country");
  return this.repository.save(city)
    .doFinally(signal -> {
      System.out.println("CONTROLLER signal=" + signal);
    });
}

The methods are called in the following sequence:

[ 33] [before-method] io.r2dbc.spi.ConnectionFactory#create
[ 34] [after-method] io.r2dbc.spi.ConnectionFactory#create
[ 35] [before-method] io.r2dbc.spi.Connection#createStatement
[ 36] [after-method] io.r2dbc.spi.Connection#createStatement
[ 37] [before-method] io.r2dbc.spi.Statement#bind
[ 38] [after-method] io.r2dbc.spi.Statement#bind
[ 39] [before-method] io.r2dbc.spi.Statement#bind
[ 40] [after-method] io.r2dbc.spi.Statement#bind
[ 41] [before-method] io.r2dbc.spi.Statement#returnGeneratedValues
[ 42] [after-method] io.r2dbc.spi.Statement#returnGeneratedValues
[ 43] [before-query] Query:["INSERT INTO city (name, country) VALUES ($1, $2)"]
[ 44] [before-method] io.r2dbc.spi.Statement#execute
[ 45] [before-method] io.r2dbc.spi.Result#map
[ 46] [before-method] io.r2dbc.spi.Connection#close
[ 47] [after-method] io.r2dbc.spi.Connection#close
CONTROLLER signal=onComplete
[ 48] [after-method] io.r2dbc.spi.Result#map
[ 49] [after-method] io.r2dbc.spi.Statement#execute
[ 50] [after-query] Query:["INSERT INTO city (name, country) VALUES ($1, $2)"]

(34), (48), and (49) receive the cancel signal.

According to @mp911de

save(…) calls either insert or update methods and takes the first() element of the result See https://github.com/spring-projects/spring-data-r2dbc/blob/master/src/main/java/org/springframework/data/r2dbc/repository/support/SimpleR2dbcRepository.java#L74-L97

cc/ @aravindtga @squiry

closed time in a month

ttddyy

push event r2dbc/r2dbc-proxy

Tadaya Tsuyukubo

commit sha 991364c3624abc7397f78e42085bb18fbddcd66f

Make query execution success when at least one element is emitted Update "QueryExecutionInfo#isSuccess" to consider success when not only completion of source publisher, but also at least one element is emitted. This is because downstream consumer might cancel the publisher after they have received sufficient data. [resolves #55]

view details

Tadaya Tsuyukubo

commit sha d61f2a48e4f43b263583e183328b053d6aa60eb5

Add more @Nullable annotation Add more @Nullable annotation where it applies.

view details

Tadaya Tsuyukubo

commit sha e7efcb3ffe5d0c31926a0a44739d3f2581bb778f

Update CHANGELOG

view details

push time in a month

issue comment r2dbc/r2dbc-proxy

Issue when use the proxy on a ConnectionPool

I have created a small project to try to repro. At some point, I saw a portal "<somenumber>" does not exist error. I thought I had reproduced it, but now I am getting a different one instead.

I put my sample project here. https://github.com/ttddyy/r2dbc-issues/tree/master/proxy-54-proxy_on_pool

I used my local postgres(9.6.15) on osx.

My error only seems to happen when r2dbc-postgresql, r2dbc-pool, and r2dbc-proxy are all three combined. I didn't observe the error when only pool or proxy is applied, or when neither is. I also didn't observe it with r2dbc-h2 combined with pool and proxy.

Issue-1

Env: r2dbc-pool & r2dbc-proxy = 0.8.0, r2dbc-postgresql = 0.8.0.

I saw IndefiniteStatementCache throw a ConcurrentModificationException at this line.

stack trace

So, I updated the HashMap to a ConcurrentHashMap and applied the change to the current 0.8.x (0.8.1.BUILD-SNAPSHOT) and master.

Issue-2

Env: r2dbc-pool & r2dbc-proxy = 0.8.0, r2dbc-postgresql = 0.8.x or master (0.9.0.BUILD-SNAPSHOT) with the ConcurrentHashMap change.

Now, I see this warning:

2020-01-17 12:48:17.071  WARN 22239 --- [actor-tcp-nio-7] i.r.p.client.ReactorNettyClient          : Notice: SEVERITY_LOCALIZED=WARNING, SEVERITY_NON_LOCALIZED=WARNING, CODE=25P01, MESSAGE=there is no transaction in progress, FILE=xact.c, LINE=3623, ROUTINE=EndTransactionBlock

Then, sometimes I see this error:

java.lang.IllegalArgumentException: rowDescription must not be null
	at io.r2dbc.postgresql.util.Assert.requireNonNull(Assert.java:71) ~[r2dbc-postgresql-0.9.0.BUILD-SNAPSHOT.jar:0.9.0.BUILD-SNAPSHOT]
	at io.r2dbc.postgresql.PostgresqlRow.toRow(PostgresqlRow.java:114) ~[r2dbc-postgresql-0.9.0.BUILD-SNAPSHOT.jar:0.9.0.BUILD-SNAPSHOT]
	at io.r2dbc.postgresql.PostgresqlResult.lambda$map$1(PostgresqlResult.java:96) ~[r2dbc-postgresql-0.9.0.BUILD-SNAPSHOT.jar:0.9.0.BUILD-SNAPSHOT]
	at reactor.core.publisher.FluxHandle$HandleSubscriber.onNext(FluxHandle.java:96) ~[reactor-core-3.3.2.RELEASE.jar:3.3.2.RELEASE]
	at reactor.core.publisher.FluxHandle$HandleConditionalSubscriber.onNext(FluxHandle.java:319) ~[reactor-core-3.3.2.RELEASE.jar:3.3.2.RELEASE]
	at reactor.core.publisher.FluxTakeUntil$TakeUntilPredicateSubscriber.onNext(FluxTakeUntil.java:77) ~[reactor-core-3.3.2.RELEASE.jar:3.3.2.RELEASE]
	at reactor.core.publisher.FluxWindowPredicate$WindowFlux.drainRegular(FluxWindowPredicate.java:650) ~[reactor-core-3.3.2.RELEASE.jar:3.3.2.RELEASE]
	at reactor.core.publisher.FluxWindowPredicate$WindowFlux.drain(FluxWindowPredicate.java:728) ~[reactor-core-3.3.2.RELEASE.jar:3.3.2.RELEASE]
	at reactor.core.publisher.FluxWindowPredicate$WindowFlux.onNext(FluxWindowPredicate.java:770) ~[reactor-core-3.3.2.RELEASE.jar:3.3.2.RELEASE]
	at reactor.core.publisher.FluxWindowPredicate$WindowPredicateMain.onNext(FluxWindowPredicate.java:249) ~[reactor-core-3.3.2.RELEASE.jar:3.3.2.RELEASE]
	at reactor.core.publisher.FluxFilter$FilterSubscriber.onNext(FluxFilter.java:107) ~[reactor-core-3.3.2.RELEASE.jar:3.3.2.RELEASE]
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:242) ~[reactor-core-3.3.2.RELEASE.jar:3.3.2.RELEASE]
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:242) ~[reactor-core-3.3.2.RELEASE.jar:3.3.2.RELEASE]
	at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:203) ~[reactor-core-3.3.2.RELEASE.jar:3.3.2.RELEASE]
	at reactor.core.publisher.FluxWindowPredicate$WindowFlux.drainRegular(FluxWindowPredicate.java:650) ~[reactor-core-3.3.2.RELEASE.jar:3.3.2.RELEASE]
	at reactor.core.publisher.FluxWindowPredicate$WindowFlux.drain(FluxWindowPredicate.java:728) ~[reactor-core-3.3.2.RELEASE.jar:3.3.2.RELEASE]
	at reactor.core.publisher.FluxWindowPredicate$WindowFlux.onNext(FluxWindowPredicate.java:770) ~[reactor-core-3.3.2.RELEASE.jar:3.3.2.RELEASE]
	at reactor.core.publisher.FluxWindowPredicate$WindowPredicateMain.onNext(FluxWindowPredicate.java:249) ~[reactor-core-3.3.2.RELEASE.jar:3.3.2.RELEASE]
	at reactor.core.publisher.FluxPeek$PeekSubscriber.onNext(FluxPeek.java:192) ~[reactor-core-3.3.2.RELEASE.jar:3.3.2.RELEASE]
	at reactor.core.publisher.FluxHandle$HandleSubscriber.onNext(FluxHandle.java:112) ~[reactor-core-3.3.2.RELEASE.jar:3.3.2.RELEASE]
	at reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:213) ~[reactor-core-3.3.2.RELEASE.jar:3.3.2.RELEASE]
	at reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:213) ~[reactor-core-3.3.2.RELEASE.jar:3.3.2.RELEASE]
	at reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:218) ~[reactor-netty-0.9.3.RELEASE.jar:0.9.3.RELEASE]
	at reactor.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:351) ~[reactor-netty-0.9.3.RELEASE.jar:0.9.3.RELEASE]
	at reactor.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:348) ~[reactor-netty-0.9.3.RELEASE.jar:0.9.3.RELEASE]
	at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:90) ~[reactor-netty-0.9.3.RELEASE.jar:0.9.3.RELEASE]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:355) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:321) ~[netty-codec-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:308) ~[netty-codec-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:422) ~[netty-codec-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[netty-codec-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:355) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
	at java.base/java.lang.Thread.run(Thread.java:835) ~[na:na]
deblockt

comment created time in a month

push event ttddyy/r2dbc-issues

Tadaya Tsuyukubo

commit sha 4865b7f03f0d427f0463706d100db7ef1df53b18

Add project

view details

push time in a month

push event ttddyy/r2dbc-issues

Tadaya Tsuyukubo

commit sha fb937e1f5e40c56530e7aafa60479b8a8160f5be

Add project

view details

push time in a month

create branch ttddyy/r2dbc-issues

branch : master

created branch time in a month

created repository ttddyy/r2dbc-issues

R2DBC issues

created time in a month

issue comment r2dbc/r2dbc-proxy

"QueryExecutionInfo#isSuccess" is reported as failure with "ReactiveCrudRepository#save"

@gregturn @mp911de I have created a fix for this issue on 55-query-success branch.

What is the logistics for merging the fix to 0.8.1 release? I see there is 0.8.x branch. So, I assume push the commit to 0.8.x branch and merge it back to master for 0.9.0 future release?

ttddyy

comment created time in a month

create branch r2dbc/r2dbc-proxy

branch : 55-query-success

created branch time in a month

issue comment r2dbc/r2dbc-proxy

"QueryExecutionInfo#isSuccess" is reported as failure with "ReactiveCrudRepository#save"

ok. @mp911de Thanks for the explanation.

I'll update the condition for query execution success to be either completion or at least one item emitted. This should handle the cancellation triggered by the next() use case.

ttddyy

comment created time in a month


issue comment ttddyy/datasource-proxy

Use unique connectionIds in DefaultConnectionIdManager

Hi @gavlyukovskiy,

For r2dbc-proxy, it is still unique per datasource since idCount is an instance variable. But I understand the need for a globally unique ID (simply make the id-counter static).

I can add such an implementation in addition to the default one.

For now, you can specify a custom connection ID manager like the following on the builder.

import java.sql.Connection;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.atomic.AtomicLong;

/**
 * {@link ConnectionIdManager} implementation that emits globally unique (per JVM) connection IDs (sequential numbers).
 *
 * @author Tadaya Tsuyukubo
 */
public class GlobalConnectionIdManager implements ConnectionIdManager {

    private static final AtomicLong ID_COUNTER = new AtomicLong(0);

    private static final Set<String> OPEN_IDS = Collections.synchronizedSet(new HashSet<String>());

    @Override
    public String getId(Connection connection) {
        String id = String.valueOf(ID_COUNTER.incrementAndGet());
        OPEN_IDS.add(id);
        return id;
    }

    @Override
    public void addClosedId(String closedId) {
        OPEN_IDS.remove(closedId);
    }

    @Override
    public Set<String> getOpenConnectionIds() {
        return Collections.unmodifiableSet(OPEN_IDS);
    }

}
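The pattern itself (a static counter for JVM-wide uniqueness plus a synchronized set for open IDs) can be exercised outside datasource-proxy. This is a standalone sketch with hypothetical names, not the library's API:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical standalone sketch of the globally-unique-ID pattern above,
// stripped of the datasource-proxy ConnectionIdManager interface.
public class GlobalIdSketch {

    // static counter => IDs are unique across all instances in the JVM
    private static final AtomicLong ID_COUNTER = new AtomicLong(0);

    private static final Set<String> OPEN_IDS =
            Collections.synchronizedSet(new HashSet<>());

    public static String open() {
        String id = String.valueOf(ID_COUNTER.incrementAndGet());
        OPEN_IDS.add(id);
        return id;
    }

    public static void close(String id) {
        OPEN_IDS.remove(id);
    }

    public static Set<String> openIds() {
        return Collections.unmodifiableSet(OPEN_IDS);
    }

    public static void main(String[] args) {
        String a = open();
        String b = open();
        close(a);
        System.out.println(openIds()); // only the still-open ID remains
    }
}
```

Because the counter is `static`, two separate manager instances still hand out non-overlapping IDs, which is the whole point of the "global" variant.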

I'll find time to polish this and add it.

gavlyukovskiy

comment created time in a month

issue comment r2dbc/r2dbc-proxy

"QueryExecutionInfo#isSuccess" is reported as failure with "ReactiveCrudRepository#save"

The SimpleR2dbcRepository#save(), in turn, calls DefaultFetchSpec#first() which performs all().next().

The Flux#next() does cancel the original source.

[Flux#next marble diagram, from the Flux#next javadoc]

I think that's why Statement#execute received the cancel signal.

Currently, r2dbc-proxy marks a query as successfully executed when onComplete is called on the publisher returned by Statement#execute. This cancel signal prevents onComplete from being called, so the insert ends up being considered failed (QueryExecutionInfo#success is not set to true).

@mp911de Is this canceling the ideal behavior on the spring-data-r2dbc side? How about using one() instead of first() in SimpleR2dbcRepository#save()? I tweaked the code locally, and using one() properly completes the Statement#execute publisher.

ttddyy

comment created time in a month

issue opened r2dbc/r2dbc-proxy

"QueryExecutionInfo#isSuccess" is reported as failure with "ReactiveCrudRepository#save"

From gitter discussion: https://gitter.im/R2DBC/r2dbc?at=5e1c726865540a529a0c9366

I have reproduced the issue using H2 with ReactiveCrudRepository#save

@GetMapping("/insert")
// @Transactional
public Mono<City> insert() {
  City city = new City("name", "country");
  return this.repository.save(city)
      .doFinally(signal -> {
        System.out.println("CONTROLLER signal=" + signal);
      });
}

The methods are called in the following sequence:

[ 33] [before-method] io.r2dbc.spi.ConnectionFactory#create
[ 34] [after-method] io.r2dbc.spi.ConnectionFactory#create
[ 35] [before-method] io.r2dbc.spi.Connection#createStatement
[ 36] [after-method] io.r2dbc.spi.Connection#createStatement
[ 37] [before-method] io.r2dbc.spi.Statement#bind
[ 38] [after-method] io.r2dbc.spi.Statement#bind
[ 39] [before-method] io.r2dbc.spi.Statement#bind
[ 40] [after-method] io.r2dbc.spi.Statement#bind
[ 41] [before-method] io.r2dbc.spi.Statement#returnGeneratedValues
[ 42] [after-method] io.r2dbc.spi.Statement#returnGeneratedValues
[ 43] [before-query] Query:["INSERT INTO city (name, country) VALUES ($1, $2)"]
[ 44] [before-method] io.r2dbc.spi.Statement#execute
[ 45] [before-method] io.r2dbc.spi.Result#map
[ 46] [before-method] io.r2dbc.spi.Connection#close
[ 47] [after-method] io.r2dbc.spi.Connection#close
CONTROLLER signal=onComplete
[ 48] [after-method] io.r2dbc.spi.Result#map
[ 49] [after-method] io.r2dbc.spi.Statement#execute
[ 50] [after-query] Query:["INSERT INTO city (name, country) VALUES ($1, $2)"]

Entries (34), 48, and 49 receive the cancel signal.

According to @mp911de

save(…) calls either the insert or update method and takes the first() element of the result. See https://github.com/spring-projects/spring-data-r2dbc/blob/master/src/main/java/org/springframework/data/r2dbc/repository/support/SimpleR2dbcRepository.java#L74-L97

cc/ @aravindtga

created time in a month

PR opened spring-projects-experimental/spring-boot-r2dbc

Enforce CGLIB-less configuration

Enforce "proxyBeanMethods = false" on all @Configuration classes.

+10 -10

0 comment

6 changed files

pr created time in a month

create branch ttddyy/spring-boot-r2dbc

branch : cglib-less-config

created branch time in a month

issue comment spring-io/spring-javaformat

When using Gradle, upgrading from 0.0.15 to 0.0.17 results in unexpected formatting changes.

I started applying the spring-javaformat plugin to our library yesterday and noticed the @param behavior too. It would be appreciated if you could decide whether to fix this or keep the new behavior going forward, so that I can decide whether to use 0.0.17 or 0.0.15 for our code base. I'd like our code style to stay consistent with spring-boot, since my library is built on top of spring-boot; code-style consistency is nice to have.

wilkinsona

comment created time in 2 months

issue comment spring-projects/spring-boot

Add list of effective PropertySources in EnvironmentEndpoint

Hi @snicoll

let's say:

application.properties has defined:

my.prop= dev
foo.prop = FOO
bar.prop = BAR

application-prod.properties has defined:

my.prop = prod
foo.prop = FOO Override

In application,

  • @Value("my.prop") resolves to prod
  • @Value("foo.prop") resolves to FOO Override
  • @Value("bar.prop") resolves to BAR

The current /actuator/env contains a detailed list of properties, but it is too detailed when you need a quick glance at the effective properties.

For example, it contains my.prop with value dev, indicating it comes from application.properties, and prod from application-prod.properties. Without knowing the relationship or ordering of application.properties and application-prod.properties, the reader cannot be sure which value (dev vs prod) is ultimately used by the application.

/actuator/env/{id} requires knowing in advance which {id} (property key) to look at. While troubleshooting, having a list of all properties with their effective values is a much handier starting point.

So, a list of effective property key-values gives a better understanding of which property values the application uses.

Something like this:

"effectiveProperties": {
  "my.prop" : "prod",
  "foo.prop" : "FOO Override",
  "bar.prop" : "BAR"
}

May add source info:

"effectiveProperties": {
  "my.prop" : {
      "value": "prod",
      "origin": "class path resource [application-prod.properties]:1:1"
    },
  ...
}
ttddyy

comment created time in 2 months

issue opened spring-projects/spring-boot

Add list of final(resolved) PropertySources in EnvironmentEndpoint

Currently, EnvironmentEndpoint shows where all PropertySources come from, including duplicated keys (overrides) when they come from different sources (e.g. one from application.yaml and the same key from application-<profile>.yaml).

Usually, while troubleshooting or debugging, what people want to find out is the final value resolved by the Environment (the actual value resolved by @Value("x.y.z"), etc.). The current /env can serve the purpose, but one may need to jump through several hoops to get the correct answer when there are overrides.

Having the final property key-values as part of EnvironmentEndpoint would give a clear answer as to which key-values the application uses.

Such an override-aware property map can be obtained like this:

Map<String, Object> map = new HashMap<>();
for (PropertySource<?> propertySource : ((ConfigurableEnvironment) this.environment).getPropertySources()) {
    if (propertySource instanceof EnumerablePropertySource) {
        for (String key : ((EnumerablePropertySource<?>) propertySource).getPropertyNames()) {
            map.putIfAbsent(key, propertySource.getProperty(key));
        }
    }
}
// may sort the map with key...
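The putIfAbsent loop above has first-wins semantics across the ordered sources. That behavior can be simulated standalone with plain maps standing in for the Spring PropertySource list (class and method names here are made up for illustration):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Standalone simulation of the first-wins resolution above, with plain maps
// standing in for the ordered PropertySource list (Spring classes omitted).
public class EffectivePropsSketch {

    static Map<String, Object> resolve(List<Map<String, Object>> orderedSources) {
        Map<String, Object> effective = new HashMap<>();
        for (Map<String, Object> source : orderedSources) {
            for (Map.Entry<String, Object> e : source.entrySet()) {
                // earlier (higher-precedence) sources win, like putIfAbsent above
                effective.putIfAbsent(e.getKey(), e.getValue());
            }
        }
        return effective;
    }

    public static void main(String[] args) {
        // the profile-specific source has higher precedence, so it comes first
        Map<String, Object> prod = new LinkedHashMap<>();
        prod.put("my.prop", "prod");
        prod.put("foo.prop", "FOO Override");

        Map<String, Object> base = new LinkedHashMap<>();
        base.put("my.prop", "dev");
        base.put("foo.prop", "FOO");
        base.put("bar.prop", "BAR");

        System.out.println(resolve(List.of(prod, base)));
    }
}
```

Run against the application.properties / application-prod.properties example from the earlier comment, this resolves my.prop to prod, foo.prop to FOO Override, and bar.prop to BAR.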

created time in 2 months

issue comment ttddyy/datasource-proxy

Related to my earlier query about proxy being able to implement other interfaces

One question is if we unwrap the proxy to get the target datasource will logging still work? because we will be using target object further to create connection, execute query etc.

No, the original datasource is not a proxy, so logging would not work. Also, the retrieved Connection/Statement/etc. will not be proxied since they come from the original datasource. You need to use the proxy datasource in order to get proxy connections, statements, etc.

We also have several other cases like OracleCallableStatement/OraclePreparedStatement interfaces etc which when casted to will throws similar errors. Is there a way to get target statement object with ttddyy impl?

All proxied objects implement the ProxyJdbcObject interface, which has a method to return the corresponding original object.
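The unwrap pattern — a proxy that additionally implements an interface exposing its target — can be sketched with plain JDK dynamic proxies. The interface and method names below are hypothetical, not the actual ProxyJdbcObject API:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Hypothetical sketch of the "proxy exposes its target" pattern; the real
// ProxyJdbcObject interface in datasource-proxy follows the same idea.
public class UnwrapSketch {

    interface HasTarget {
        Object getTarget();
    }

    interface Service {
        String greet();
    }

    static Service wrap(Service target) {
        InvocationHandler handler = (proxy, method, args) -> {
            if ("getTarget".equals(method.getName())) {
                return target; // unwrap back to the original object
            }
            return method.invoke(target, args); // delegate everything else
        };
        return (Service) Proxy.newProxyInstance(
                Service.class.getClassLoader(),
                new Class<?>[] { Service.class, HasTarget.class },
                handler);
    }

    public static void main(String[] args) {
        Service original = () -> "hello";
        Service proxied = wrap(original);
        System.out.println(proxied.greet());               // delegated call
        Object target = ((HasTarget) proxied).getTarget(); // unwrap
        System.out.println(target == original);            // same instance
    }
}
```

Casting the proxy to the unwrap interface and calling its accessor is exactly how a caller would recover the vendor-specific statement underneath a logging proxy.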

laksharm-gs

comment created time in 2 months

PR opened spring-projects/spring-framework

Consider order on DeferredImportSelector when processing DeferredImportSelector.Group

Hi,

When I was writing custom DeferredImportSelector (https://github.com/spring-projects/spring-boot/pull/19400), I found an issue for handling DeferredImportSelector.Group with @Order on DeferredImportSelector.

Here is an example to describe the issue:

Let's say I have 3 DeferredImportSelectors with orders 10, 20, 30. The ones with order 10 and 30 return the same import-selector group (GroupA), and the one with order 20 returns a different group (GroupB).

@Order(10)
static class DeferredImportSelectorA implements DeferredImportSelector {
  @Override
  public String[] selectImports(AnnotationMetadata importingClassMetadata) {
    return new String[]{MyConfigA.class.getName()};
  }

  @Override
  public Class<? extends Group> getImportGroup() {
    return GroupA.class;
  }
}

@Order(20)
static class DeferredImportSelectorB implements DeferredImportSelector {
  @Override
  public String[] selectImports(AnnotationMetadata importingClassMetadata) {
    return new String[]{MyConfigB.class.getName()};
  }

  @Override
  public Class<? extends Group> getImportGroup() {
    return GroupB.class;
  }
}

@Order(30)
static class DeferredImportSelectorC implements DeferredImportSelector {
  @Override
  public String[] selectImports(AnnotationMetadata importingClassMetadata) {
    return new String[]{MyConfigC.class.getName()};
  }

  @Override
  public Class<? extends Group> getImportGroup() {
    return GroupA.class;  // <== same group with selector-A
  }
}

It might be arguable whether the same import-selector group should be used by differently ordered deferred import selectors, but it is currently possible to write it that way.

@Configuration(proxyBeanMethods = false)
@Import({DeferredImportSelectorA.class, DeferredImportSelectorB.class, DeferredImportSelectorC.class})
static class ImportConfig {
}

When ConfigurationClassParser parses this ImportConfig, I think the expected order of the returned ConfigurationClasses follows @Order:

  • ImportConfig
  • MyConfigA (from import-selector-A with order-10)
  • MyConfigB (from import-selector-B with order-20)
  • MyConfigC (from import-selector-C with order-30)

However, currently it returns this order:

  • ImportConfig
  • MyConfigA (from import-selector-A with order-10)
  • MyConfigC (from import-selector-C with order-30) <===
  • MyConfigB (from import-selector-B with order-20)

This is because, when the deferred import selectors are sorted, they are ordered selector-A, selector-B, selector-C based on @Order, which is correct. However, when the Group is processed, since selector-A and selector-C use the same GroupA, processing selector-A's group also processes selector-C's imports (the group class is used as the map key here).

This would be a problem if, for example, selector-B is spring-boot's auto-configuration and selector-A and selector-C are meant to be applied before/after auto-configurations.

To fix this issue, my patch adds a DeferredImportSelectorGroupingKey for the LinkedHashMap that handles groupings. The key object also takes into account the order specified on the import selector. This way, even when the same import-selector group is specified on import selectors with different orders, they are considered different groups, and the one with the higher order priority is processed first. Of course, selectors with the same order and the same group are still treated as one group.
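The grouping-key idea — keying the map by (group, order) instead of group alone — can be sketched standalone. All names here are hypothetical stand-ins, not the actual patch:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

// Sketch of the fix's core idea: a composite (group, order) key means two
// selectors sharing a Group class but carrying different @Order values land
// in different buckets, preserving the sorted processing order.
public class GroupingKeySketch {

    static final class Key {
        final String group;
        final int order;
        Key(String group, int order) { this.group = group; this.order = order; }
        @Override public boolean equals(Object o) {
            return o instanceof Key && ((Key) o).group.equals(group) && ((Key) o).order == order;
        }
        @Override public int hashCode() { return Objects.hash(group, order); }
    }

    // selectors are already sorted by order; group them while preserving order
    static Map<Key, List<String>> group(List<String[]> selectors) {
        Map<Key, List<String>> groupings = new LinkedHashMap<>();
        for (String[] s : selectors) { // s = {selectorName, groupName, order}
            Key key = new Key(s[1], Integer.parseInt(s[2]));
            groupings.computeIfAbsent(key, k -> new ArrayList<>()).add(s[0]);
        }
        return groupings;
    }

    public static void main(String[] args) {
        Map<Key, List<String>> g = group(List.of(
                new String[] {"selectorA", "GroupA", "10"},
                new String[] {"selectorB", "GroupB", "20"},
                new String[] {"selectorC", "GroupA", "30"}));
        // three distinct buckets, processed A, then B, then C — not A+C then B
        g.forEach((k, v) -> System.out.println(k.group + "@" + k.order + " -> " + v));
    }
}
```

With a group-only key, the first and third selectors would collapse into one bucket and selector-C's imports would ride along with selector-A's, which is exactly the mis-ordering described above.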

+524 -9

0 comment

3 changed files

pr created time in 2 months

push event ttddyy/spring-framework

Rossen Stoyanchev

commit sha 0eacb443b01833eb1b34006d74c2ee6da04af403

Reuse InputStream in ResourceRegionHttpMessageConverter

The converter now tries to keep reading from the same InputStream, which should be possible with ordered and non-overlapping regions. When necessary the InputStream is re-opened.

Closes gh-24214

view details

Rossen Stoyanchev

commit sha 44da77513444f8388397f93d057ad1b6187516d3

CorsInterceptor skips async dispatch

Closes gh-24223

view details

Rossen Stoyanchev

commit sha 41f40c6c229d3b4f768718f1ec229d8f0ad76d76

Escape quotes in filename

Closes gh-24220

view details

Rossen Stoyanchev

commit sha 15321a31633c7a9d0f838783640e7148472e4d9a

Fix checkstyle violations

view details

Tadaya Tsuyukubo

commit sha 7603d32a6afb0ea5e52ed0d2876d6ddfc0deb602

Consider order on DeferredImportSelector when processing DeferredImportSelector.Group

When ConfigurationClassParser handles DeferredImportSelector, it was sorted by order on selector. However, when the parser processes DeferredImportSelector.Group, it does not consider the order on selector. Therefore, if there are two different orders on selector and using same group, the lower priority selector also gets processed when higher priority selector is processed.

In this commit, when processing selector group, also takes into account the order on selector. Therefore, when two selectors have different order with same group, they are treated as different groups. When two selectors have same order and same group, they are treated in same group.

view details

push time in 2 months

create branch ttddyy/spring-framework

branch : deferred-import-group

created branch time in 2 months

push event ttddyy/spring-boot

Tadaya Tsuyukubo

commit sha 32762be84c9ba23e7eb3f9061ae6091ac189de44

Add DeferredImportSelector that runs before/after auto configuration

Add "Import[Before|After]AutoConfigurationDeferredImportSelector" and "Import[Before|After]AutoConfiguration" annotations. These deferred import selectors make sure specified configurations run before/after auto configurations.

view details

push time in 2 months

PR opened spring-projects/spring-boot

Add DeferredImportSelector that runs before/after auto configuration

Hi,

I wrote custom DeferredImportSelectors and corresponding annotations that run before/after auto configurations for my library.

  • Import[Before|After]AutoConfigurationDeferredImportSelector: DeferredImportSelector implementation that runs before/after auto-configuration
  • @Import[Before|After]AutoConfiguration: uses the above deferred selector to specify which configurations run before/after auto-configuration

The background is that we have a shared library on top of spring-boot that provides shared configurations. Applications pick some of the shared configurations to enable features. Since those shared configurations are not auto-configurations (because of our preference to explicitly enable each feature), I had some problems with configuration ordering, especially with @ConditionalOn[Missing]Bean. By using these DeferredImportSelectors, I have better control: my library's configurations run after the user's configurations, and then before/after auto-configurations.

Sample usage:

In library:

// Make this configuration run after user config but before autoconfig
@Configuration(proxyBeanMethods = false)
class MyFeatureConfig {
  // @Bean to enable some feature
}

// annotation to enable my feature(config)
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@ImportBeforeAutoConfiguration(MyFeatureConfig.class)  // <=== meta annotate
@interface EnableMyFeature {
}

In application:

@EnableMyFeature
@SpringBootApplication
class Application {
}

I am going to apply these import selectors to our library. At the same time, I think this would also be beneficial to have in spring-boot itself, hence this PR.

Thanks,

Relates to: #18228, #19343

+826 -0

0 comment

7 changed files

pr created time in 2 months

push event ttddyy/spring-boot

Tadaya Tsuyukubo

commit sha 9ec96d6fa947e65bfb6b7784666457d09adb30e8

Add DeferredImportSelector that runs before/after auto configuration

Add "Import[Before|After]AutoConfigurationDeferredImportSelector" and "Import[Before|After]AutoConfiguration" annotations. These deferred import selectors make sure specified configurations run before/after auto configurations.

view details

push time in 2 months

push event ttddyy/spring-boot

Tadaya Tsuyukubo

commit sha b2d842dd8d26237d9da5e2cee523d840fad16242

Add DeferredImportSelector that runs before/after auto configuration

Add "Import[Before|After]AutoConfigurationDeferredImportSelector" and "Import[Before|After]AutoConfiguration" annotations. These deferred import selectors make sure specified configurations run before/after auto configurations.

view details

push time in 2 months

push event ttddyy/spring-boot

Aaron Klish

commit sha 8b149dcea92bd2a38da912269c0e443d9ad1f572

Add Elide as 3rd party Spring Boot Starter

See gh-19397

view details

Stephane Nicoll

commit sha b515d6ba9ab2e05916ebeec325b9729f87a1ac95

Merge pull request #19397 from aklish

* pr/19397: Add Elide as 3rd party Spring Boot Starter

Closes gh-19397

view details

Stephane Nicoll

commit sha dabb9b89c7dc541dd64deeaee7e778c3777cbd66

Merge branch '2.2.x'

view details

Tadaya Tsuyukubo

commit sha 7ecbbf4e96746f89e8d58d232961cf319447b07d

Add DeferredImportSelector that runs before/after auto configuration

Add "Import[Before|After]AutoConfigurationDeferredImportSelector" and "Import[Before|After]AutoConfiguration" annotations. These deferred import selectors make sure specified configurations run before/after auto configurations.

view details

push time in 2 months

push event ttddyy/spring-boot

Tadaya Tsuyukubo

commit sha 69c7d5c134424912065f1acbaebde9442cf0e96e

Add DeferredImportSelector that runs before/after auto configuration

Add "Import[Before|After]AutoConfigurationDeferredImportSelector" and "Import[Before|After]AutoConfiguration" annotations. These deferred import selectors make sure specified configurations run before/after auto configurations.

view details

push time in 2 months

push event ttddyy/spring-boot

Tadaya Tsuyukubo

commit sha 9fe3ffe5f126a35ad9f2c51574ad1945684c03d6

Add DeferredImportSelector that runs before/after auto configuration

Add "Import[Before|After]AutoConfigurationDeferredImportSelector" and "Import[Before|After]AutoConfiguration" annotations. These deferred import selectors make sure specified configurations run before/after auto configurations.

view details

push time in 2 months

push event ttddyy/spring-boot

Tadaya Tsuyukubo

commit sha 24174e23641e1c1dedff1c22c3b6c84e41cb0b8f

Add DeferredImportSelector that runs before/after auto configuration

Add "Import[Before|After]AutoConfigurationDeferredImportSelector" and "Import[Before|After]AutoConfiguration" annotations. These deferred import selectors make sure specified configurations run before/after auto configurations.

view details

push time in 2 months

create branch ttddyy/spring-boot

branch : selector-before-after-autoconfig

created branch time in 2 months

issue comment spring-projects/spring-boot

Allow specifying beanname on @EnableConfigurationProperties

ok, thanks @philwebb

I think it would be useful to have some documentation about this as a guideline or best practice, so that I can point other developers to it.

Something like "Referencing @ConfigurationProperties as a bean in SpEL"

  • use @Component or @Bean to give simple beanname
  • or write a custom ImportBeanDefinitionRegistrar to register such beans
  • alternatively, find programmatic way of filling such information
  • etc.
ttddyy

comment created time in 2 months

issue comment spring-projects/spring-boot

Allow specifying beanname on @EnableConfigurationProperties

Hi, @bclozel

Just for FYI, my usage is to reference to our own properties. (not 3rd party)

Something like this:

my:
  schedule:
    delay: 30s
@Scheduled(
      initialDelayString = "#{@myProperties.getSchedule().getDelay().toMillis() + " +
         "T(java.util.concurrent.ThreadLocalRandom).current().nextInt(3*60*1000)}")

Here, getDelay() returns a Duration, and toMillis() converts it to milliseconds.

ttddyy

comment created time in 2 months

issue opened spring-projects/spring-boot

Allow specifying beanname on @EnableConfigurationProperties

When @EnableConfigurationProperties is used, the corresponding ConfigurationProperties class is registered as a bean with the name <prefix>-<fqcn>. This bean name is generated by ConfigurationPropertiesBeanRegistrar#getName.

This auto-generated name is very inconvenient when the registered bean needs to be referenced elsewhere, such as in SpEL.

The current workaround for specifying a bean name is to use @ConfigurationProperties together with @Bean instead of @EnableConfigurationProperties.

@Bean 
public MyProperties myProperties() {
  return new MyProperties();
}

@ConfigurationProperties("my")
public class MyProperties {
}

If it is declared as a bean, ConfigurationPropertiesBindingPostProcessor performs the binding, and the configuration properties bean exists under the bean name from @Bean (myProperties in this example).

I would like @EnableConfigurationProperties to have the capability to specify the bean name (or maybe an alias to <prefix>-<fqcn>) for bound ConfigurationProperties beans.

For example:

@EnableConfigurationProperties(
                 value = {FooProperties.class, BarProperties.class},
                 beanNames = {"fooProps", "barProps"})

If this sounds ok, then I'll proceed to create a PR.

Thanks,

created time in 2 months

issue comment reactor/reactor-core

Guidance on logging with MDC

Hi @membersound

Thanks for checking the post.

There are a couple of things. First, I had a typo in my blog post: for mdcFilter, it should be doOnNext instead of doOnRequest. The intention is to perform the MDC-set operation (mdcFilter) immediately after the WebClient exchange has happened, on the reactor thread. (Sorry for the confusion; I fixed this on my blog post.)

	public static ExchangeFilterFunction mdcFilter = (request, next) -> {
		// here runs on main(request's) thread
		Map<String, String> map = MDC.getCopyOfContextMap();
		return next.exchange(request)
				.doOnNext(value -> {       //   <======= HERE
					// here runs on reactor's thread
					if (map != null) {
						MDC.setContextMap(map);
					}
				});
	};

Another thing is the order of applying filters to the WebClient. Since the MDC-set operation (mdcFilter) needs to happen BEFORE the logResponse filter, mdcFilter needs to be added AFTER the logResponse filter.

WebClient webClient = WebClient.builder()
        .filter(logResponse)
        .filter(logRequest)
        .filter(mdcFilter)
        .build();
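The hand-off in mdcFilter boils down to copying the thread-local map on the caller thread and restoring it on the reactor (worker) thread. A plain-JDK sketch of that pattern, with a simple ThreadLocal standing in for MDC (all names hypothetical):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the copy-then-restore pattern that mdcFilter uses, with a plain
// ThreadLocal standing in for SLF4J's MDC.
public class ContextHandoffSketch {

    static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    // runs a task on another thread and returns the context value it observed
    static String handOff(String traceId) {
        CONTEXT.get().put("traceId", traceId);

        // capture on the caller thread (analogous to MDC.getCopyOfContextMap())
        Map<String, String> captured = new HashMap<>(CONTEXT.get());

        ExecutorService worker = Executors.newSingleThreadExecutor();
        try {
            return worker.submit(() -> {
                // restore on the worker thread (analogous to MDC.setContextMap(map))
                CONTEXT.set(new HashMap<>(captured));
                return CONTEXT.get().get("traceId");
            }).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            worker.shutdown();
        }
    }

    public static void main(String[] args) {
        // without the restore step, the worker thread would see no traceId at all
        System.out.println(handOff("abc-123"));
    }
}
```

Skipping the restore step is exactly why log statements in downstream filters lose their MDC entries: ThreadLocal state never crosses the thread boundary on its own.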

Please try with these two changes.

rstoyanchev

comment created time in 2 months

push event ttddyy/ttddyy.github.io

Tadaya Tsuyukubo

commit sha 850d40c83f8c9df15e4fabd63ed2d7dec403c6e1

Fix typo

view details

push time in 2 months

issue comment spring-projects/spring-boot

ApplicationContextRunner evaluates @Conditional on AutoConfiguration too early

W.r.t. Conditional evaluation on ApplicationContextRunner, another workaround I found is to implement ConfigurationCondition, returning ConfigurationPhase.REGISTER_BEAN, in my condition class. This makes the conditional evaluation happen only at bean registration time, not at import parsing. Since AutoConfigurations has a low priority order, this in a way guarantees my auto-config is processed in a deferred fashion (after the user configuration is parsed).

For the configuration classes in my library, I also tried to create a custom deferred import selector that runs in the same AutoConfigurationGroup in order to be processed with auto-configuration semantics (to use @AutoConfigure[Before|After]). However, ImportAutoConfigurationImportSelector, AutoConfigurationImportSelector, and AutoConfigurationGroup are pretty much tied to auto-configuration classes, so it is hard to reuse them.

Now I am starting to think that using @AutoConfigure[Before|After] in my library may be too much. Instead, I can create a custom deferred import selector with lower/higher order priority than AutoConfigurationImportSelector in order to run before/after auto-configurations. This way, my library's configurations will at least run after the user application's configurations (so @ConditionalOn[Missing]Bean works), and I still have control to run either before or after spring-boot auto-configurations.

ttddyy

comment created time in 2 months

PR opened spring-projects/spring-boot

Fix typo on ConditionMessage


+1 -1

0 comment

1 changed file

pr created time in 2 months

create branch ttddyy/spring-boot

branch : typo

created branch time in 2 months

issue comment spring-projects/spring-boot

ApplicationContextRunner evaluates @Conditional on AutoConfiguration too early

Hi @wilkinsona,

The reason I make my configuration an auto-configuration is mainly ordering, as well as processing it alongside other auto-configurations.

I am writing a common library that is used by several applications to integrate with our infrastructure, so I need @ConditionalOnMissingBean as well as @AutoConfigure[Before|After] in my configuration classes. To use them properly, the configuration needs to be an auto-configuration. We also require explicitness for enabling such a configuration/feature; hence we write an @Enable... annotation. Each application can then choose which features (configurations) to use.

Aside from my usage of conditionals on auto-configurations, I think it is important to align the behavior between AutoConfigurationImportSelector and ApplicationContextRunner. Especially since the context runner is used in tests, the different behavior makes things difficult for developers.

ttddyy

comment created time in 2 months

issue opened spring-projects/spring-boot

ApplicationContextRunner evaluates @Conditional on AutoConfiguration too early

Hi there,

I faced another behavioral difference between AutoConfigurationImportSelector and ApplicationContextRunner.

When @Conditional exists on an auto-configuration class, with ApplicationContextRunner#withConfiguration using AutoConfigurations.of, the evaluation of the Conditional happens early (at import time, not while processing auto-configurations).

Here is the use case and pseudo code:

I am trying to control an auto-configuration (enable/disable) based on an annotation on the user config.

// User Configuration
@Configuration
@EnableX  // this enables MyAutoConfiguration
public static class MyUserConfig {
}

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Import(MyImportSelector.class)
public @interface EnableX {
}

public static class MyImportSelector implements ImportSelector {
  public static boolean enabled;  // flag to enable/disable ConditionalOnX on MyAutoConfiguration

  @Override
  public String[] selectImports(AnnotationMetadata metadata) {
    enabled = true;
    return new String[0];  // return empty
  }
}
// Auto Configuration
@Configuration
@ConditionalOnX
static class MyAutoConfiguration {
  @Bean
  public String foo() {
    return "FOO";
  }
}

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Conditional(OnXCondition.class)
public @interface ConditionalOnX {
}

public static class OnXCondition extends SpringBootCondition {
  @Override
  public ConditionOutcome getMatchOutcome(ConditionContext context, AnnotatedTypeMetadata metadata) {
    return MyImportSelector.enabled ? ConditionOutcome.match("enabled") : ConditionOutcome.noMatch("disabled");
  }
}
@Test
void contextRunnerWithConditionOnAutoConfiguration() {
  new ApplicationContextRunner()
      .withInitializer(new ConditionEvaluationReportLoggingListener(LogLevel.INFO))
      .withConfiguration(AutoConfigurations.of(MyAutoConfiguration.class))
      .withUserConfiguration(MyUserConfig.class)
      .run(context -> {
        assertThat(context)
            .hasNotFailed()
            .hasBean("foo")
        ;
      });
}

What I am trying here: based on the static boolean variable MyImportSelector.enabled, activated via the @EnableX annotation on the user's config, decide whether to apply MyAutoConfiguration, which is controlled by the @ConditionalOnX annotation.

In the normal case, since AutoConfigurationImportSelector defers the import of auto-configurations, the evaluation of @Conditional happens at deferred import time. Thus, the OnXCondition evaluation is guaranteed to be performed after normal configuration processing (MyUserConfig/@EnableX).

Therefore, triggered by @EnableX, it evaluates MyImportSelector first (where the boolean flag is set to true), and then OnXCondition can read the updated value while processing the auto-configuration classes.

On the other hand, with ApplicationContextRunner, it processes OnXCondition and then MyImportSelector. This is because AbstractApplicationContextRunner#configureContext simply passes all configurations to context.register. The processing order of auto-configurations is guaranteed by AutoConfigurations#getOrder, but since it registers all configurations together, the evaluation of @Conditional happens at the beginning (not deferred).

I have added some hacky change here: https://github.com/ttddyy/spring-boot/tree/context-runner-autoconfiguration commit: https://github.com/ttddyy/spring-boot/commit/fa16383b316dc7cf49cd7bf6499678f6e91f0925

With this change, the evaluation of auto-configuration classes is deferred, as is the evaluation of @Conditional on them. The one missing part is identifying auto-configurations among the configurations in AbstractApplicationContextRunner, since the spring-boot-test module doesn't have a dependency on the spring-boot-autoconfigure module where the AutoConfigurations class is defined.

Relates to #17963

created time in 2 months

create branch ttddyy/spring-boot

branch : context-runner-autoconfiguration

created branch time in 2 months

push event ttddyy/spring-security

Adrian Pena

commit sha ca8877c8c591c05388cc512749a912382382c877

Updates javadoc for InitializeUserDetailsBeanManagerConfigurer

view details

Josh Cummings

commit sha 22ae3eb76598b972c00234f6b80d838034895f61

Polish Error-handling Tests Tests should assert the error message content that Spring Security controls. Fixes gh-7647

view details

Paul Pazderski

commit sha 0d35194b47d1ed0e2558e106aa8658a6aeea17a5

Add sessionFixation Javadoc

view details

Pim Moerenhout

commit sha cd0bec48deb4e171648b00bcf0cebf14d0b15fdb

Fix typo in log message.

view details

Eleftheria Stein

commit sha 8a95e5798dee7e2bf98c09f93e9bc8a96f704121

Update @MessageMapping to match input/output cardinality

view details

Josh Cummings

commit sha 7cbd1665a6ba5df01ab676db7d945b6599c2000a

Isolate Jwt Test Support Isolating Jwt test support inside JwtRequestPostProcessor and JwtMutator. Fixes gh-7641

view details

Rob Winch

commit sha b3d177fc7e8d3c0f94ccca48834e9462589e2142

Extract HTTPS Documentation Fixes gh-7626

view details

杨博 (Yang Bo)

commit sha ea148d5feed22264179c1dd2ff1b5adb2135e461

Avoid toString in favor of getName for extract sid There are some more sophisticated implementations of `getName` in `AbstractAuthenticationToken` and other `Authentication` classes.

view details

Eleftheria Stein

commit sha c5b36664ce52d98144a0459c0c2833145d52cfa7

Polish PrincipalSid Remove reduntant UserDetails check and add tests

view details

Rob Winch

commit sha af47e730a03780bfa531b96da65913ec39dd3d20

Only Hello Spring Security Boot For those getting started, we really need to send the message of using Spring Boot. Fixes gh-7627

view details

ryenus

commit sha 42ab6736e18104f83d99507659ed00e21fb1e286

typo fix: consecutive-word duplications (#7673) * fix typo: require require * more typo fix: consecutive-word duplications Following previously finding, I then used `rg` to find other similar typos, with false positives manually excluded, using the following command: rg -t asciidoc -Pp '\b(\w+)\s+\1\b'

view details

Josh Cummings

commit sha 4954a229d6a2f6f82b1d08be015a747a78f382e5

Polish oauth2Login Sample Test Issue: gh-7618

view details

Josh Cummings

commit sha c76775159c2f95f745146f129196d2084980a4c9

Add OidcIdToken.Builder Fixes gh-7592

view details

Josh Cummings

commit sha 6ff71d811308533dae358740f50598c624420c7a

Add OidcUserInfo.Builder Fixes gh-7593

view details

Josh Cummings

commit sha b35e18ff3104703deee6b8ca6eccce9025a123b2

Add oidcLogin MockMvc Test Support Fixes gh-7618

view details

David Eisner

commit sha 56f524259588e225368327a4ee19eaaf2416ba99

Fix minor typo.

view details

Filip Hrisafov

commit sha 796859333fbcb1b013529b3a7db81ee570be4b1b

Log full failed authentication exception in BasicAuthenticationFilter

view details

Rob Winch

commit sha e5932131a94298aff26598a1e28c71c9a89b14cc

Next Development Version

view details

Rob Winch

commit sha a7871cfce4b9aceea772f9f626172b63c5233f35

Next Development Version

view details

Rob Winch

commit sha 17449cbf602df9445e15ff8b518d834ec735a3c1

Fix next development version

view details

push time in 2 months

pull request comment spring-projects/spring-security

Authorization Response should also match on query parameters

I removed the redundant validation in #7708 to help with this PR. Thanks; I will work on this ticket early next week, rebasing onto the latest.

ttddyy

comment created time in 3 months

push event ttddyy/ttddyy.github.io

Tadaya Tsuyukubo

commit sha f1b1588a2f51ee0d279dd7c5b195c33599316ca2

Update about MDC.clear

view details

push time in 3 months

issue comment reactor/reactor-core

Guidance on logging with MDC

Hi,

I had a use case for MDC with WebClient and summarized how to do it in my blog post, MDC with WebClient in WebMVC.

It includes implementations based on Schedulers.onScheduleHook and Schedulers.addExecutorServiceDecorator, as well as Hooks.onEachOperator.

I only considered the WebMVC environment (I haven't thought about a pure reactive environment), but it should still be helpful to have this info, since using WebClient brings people to Reactor in a servlet environment.
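As a rough sketch of the onScheduleHook approach (using a plain ThreadLocal in place of SLF4J's MDC; class and value names here are illustrative), the decorator captures the context at scheduling time and restores it around the task's execution on the worker thread:

```java
import java.util.concurrent.atomic.AtomicReference;

public class MdcHookDemo {
    // Stand-in for the MDC: a per-thread context value.
    static final ThreadLocal<String> context = new ThreadLocal<>();

    // Stand-in for the hook registered via Schedulers.onScheduleHook("mdc", MdcHookDemo::decorate).
    static Runnable decorate(Runnable task) {
        String captured = context.get(); // captured on the scheduling thread
        return () -> {
            String previous = context.get();
            context.set(captured);       // restore on the executing thread
            try {
                task.run();
            } finally {
                context.set(previous);   // avoid leaking into unrelated tasks
            }
        };
    }

    // The decorated task sees the value from scheduling time,
    // even after the scheduling thread's context has been cleared.
    public static String demo() {
        context.set("request-42");
        AtomicReference<String> seen = new AtomicReference<>();
        Runnable task = decorate(() -> seen.set(context.get()));
        context.remove(); // simulate the scheduling thread moving on

        Thread worker = new Thread(task);
        worker.start();
        try {
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen.get();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // request-42
    }
}
```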

rstoyanchev

comment created time in 3 months

push event ttddyy/ttddyy.github.io

Tadaya Tsuyukubo

commit sha 8b080cdd5ca838d55a7ea81d40b5e2b272b96320

Update about MDC.clear

view details

push time in 3 months

push event ttddyy/ttddyy.github.io

Tadaya Tsuyukubo

commit sha 6ea958c92bc7f306a2b416ff11885ff05782a11b

Update about.md

view details

push time in 3 months

push event ttddyy/ttddyy.github.io

Tadaya Tsuyukubo

commit sha 2131c2a09217fb172e0c2968134d9018376fc605

Use plain jekyll

view details

push time in 3 months

push event ttddyy/ttddyy.github.io

Tadaya Tsuyukubo

commit sha f3663ae297717f8999a8b5f2f92f8ec97b293c5a

Update webclient adding Schedulers.addExecutorServiceDecorator impls

view details

push time in 3 months

push event ttddyy/ttddyy.github.io

Tadaya Tsuyukubo

commit sha 8a4c0cb0807fde50b3be452c7550cfe49262a7ca

Update webclient adding Schedulers.addExecutorServiceDecorator impls

view details

push time in 3 months

push event ttddyy/ttddyy.github.io

Italo Lelis

commit sha 824a230910312f662d21cab795a25184b0cdae24

Permalink on 404.md

view details

Erin Grand

commit sha 12cb8a2e97c3b63c4bc92d2a1ab050b35bf946b7

Merge pull request #993 from italohdc/master Permalink on 404.md

view details

Tadaya Tsuyukubo

commit sha f3db84814b1e4ce3383e225a7995069ddbcb4680

Update _config.yml

view details

Tadaya Tsuyukubo

commit sha 7e261cdcb564b5f0a36d144b17815a400b993af2

add logo image

view details

Tadaya Tsuyukubo

commit sha 6d56889f5d07f5625d01434eb79babb8ccdee0ca

update config

view details

Tadaya Tsuyukubo

commit sha 03436464018d7dd51c8086a8288b36228c1e544c

update config

view details

Tadaya Tsuyukubo

commit sha 6ce578ac1900aa452b4205e6e733e38570bd41ac

add about

view details

Tadaya Tsuyukubo

commit sha d4c25ca2192f8da6da923befd0616627c4ce8de1

remove hello world

view details

Tadaya Tsuyukubo

commit sha 1649285cff390eb876316bc132331a3da8004c0c

first post

view details

Tadaya Tsuyukubo

commit sha f30dd1e951380cd9abad642c5f540a00476510af

update about

view details

Tadaya Tsuyukubo

commit sha b799b4f1adc5cf62493ce01f80f398c37cac2367

add analytics

view details

Tadaya Tsuyukubo

commit sha 5208dadcb56479e056bb6511d7294698bf31eb4a

add my another old blog

view details

Tadaya Tsuyukubo

commit sha 76da79cb933327aefbd47eb74e4c93419de678d6

add post: Generate JUnit XML report from JUnitCore

view details

Tadaya Tsuyukubo

commit sha e6cb7bb0d104401de4b8e8d5fed6b43f4671b3c8

quick hack for backtick code highlight

view details

Tadaya Tsuyukubo

commit sha 2021ffbbbb860a0b2653d18a532b16eb14ee384c

post: datasource-proxy 1.4: focusing on JDBC test

view details

Tadaya Tsuyukubo

commit sha 7088cf4a84496e86ffe22ea4e4f41f74c67bb23c

post: Hamcrest Sugar Generation no longer available

view details

Tadaya Tsuyukubo

commit sha affee4fde89b68a462a0ff24f28ccc4e077efed1

post: Programmatically run JUnit in Parallel

view details

Tadaya Tsuyukubo

commit sha e2cebfb5693d68f78b7dd0432a6c5c06ec618bbb

post: Executable Jar for Integration Test

view details

Tadaya Tsuyukubo

commit sha 206cc0eeb66d1e3cc5bed9f8318771ed3bd9f9e1

Add datasource-assert project

view details

Tadaya Tsuyukubo

commit sha ac729806859b7c69ac41c5ebbe1262709039699f

post: MDC with WebClient in WebMVC

view details

push time in 3 months

push event ttddyy/ttddyy.github.io

Tadaya Tsuyukubo

commit sha 835b3a3921451c18825139741ca2d475cdb232dd

Update about

view details

push time in 3 months

push event ttddyy/ttddyy.github.io

Tadaya Tsuyukubo

commit sha a49400ee6dbc11130b12011b52cd9b8319310d7e

post: MDC with WebClient in WebMVC

view details

push time in 3 months

PR opened opentracing-contrib/java-spring-cloud

CustomAsyncConfigurerAutoConfiguration creates TracedAsyncConfigurer with null tracer

Hi,

Currently, CustomAsyncConfigurerAutoConfiguration is broken due to the change introduced by https://github.com/opentracing-contrib/java-spring-cloud/pull/251. The change seems incomplete: it ends up creating a TracedAsyncConfigurer whose tracer is null.

Since CustomAsyncConfigurerAutoConfiguration is a @Configuration, a BeanPostProcessor, and PriorityOrdered, it is handled at a very early stage of the Spring context creation, and the @Autowired Tracer field is still null. (Autowiring doesn't happen: the class is a @Configuration enhanced by CGLIB, and autowiring by the post processor does not seem to take place.) In the end, when postProcessAfterInitialization is called, it creates a TracedAsyncConfigurer with null as the tracer argument, so the created TracedAsyncConfigurer throws an NPE for async task executions.

To fix the issue, instead of autowiring the Tracer bean, look it up from the BeanFactory; this effectively makes the Tracer lookup lazy, deferred to postProcessAfterInitialization time. I also extended an existing test to verify that the tracer is set.
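The lazy-lookup fix can be illustrated without Spring (a minimal sketch; the map-backed "bean factory" and the "tracer" bean name are illustrative): a dependency captured eagerly at construction time stays null when the bean is registered later, while a deferred lookup sees it.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class LazyLookupDemo {
    // Tiny stand-in for a BeanFactory: beans may be registered after consumers are constructed.
    static final Map<String, Object> beanFactory = new HashMap<>();

    // True iff eager capture misses the bean but the lazy lookup finds it.
    public static boolean demo() {
        beanFactory.clear();

        // The "post processor" is constructed before the Tracer bean exists.
        Object eager = beanFactory.get("tracer");                 // captured now: null
        Supplier<Object> lazy = () -> beanFactory.get("tracer");  // deferred lookup

        beanFactory.put("tracer", "the-tracer");                  // Tracer registered later

        return eager == null && "the-tracer".equals(lazy.get());
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true
    }
}
```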

+35 -9

0 comment

3 changed files

pr created time in 3 months

push event ttddyy/java-spring-cloud

Tadaya Tsuyukubo

commit sha 12c73e968fc4e3ee616a13aa4f1c48b090fedff3

Lazily lookup Tracer in CustomAsyncConfigurerAutoConfiguration Instead of using @Autowired, lazily lookup Tracer bean from BeanFactory.

view details

push time in 3 months

create branch ttddyy/java-spring-cloud

branch : lazy-lookup-tracer

created branch time in 3 months

fork ttddyy/java-spring-cloud

Distributed tracing for Spring Boot, Cloud and other Spring projects

fork in 3 months

issue comment opentracing-contrib/java-spring-web

Clarify project status

I think titling the section "Use opentracing-spring-cloud instead" gives the impression that the title alone is the whole message; people may read it as simply "use opentracing-spring-cloud instead of java-spring-web".

I think it is better to state the difference between the two projects and give users the choice: you may choose opentracing-spring-cloud if it is more suitable for your use case.

Maybe make the title something like:

  • java-spring-web or opentracing-spring-cloud
  • comparison to opentracing-spring-cloud
  • java-spring-web vs opentracing-spring-cloud

The first sentence in the section:

As it was mentioned above this library traces only inbound/outbound HTTP requests.

Please put a comma between "above" and "this library"; I think that would put more emphasis on the "above" section: "As it was mentioned above, this library ..."

ttddyy

comment created time in 3 months

pull request comment opentracing-contrib/java-spring-web

Bump to OpenTracing 0.33

@geoand @pavolloffay Thanks!!

geoand

comment created time in 3 months

pull request comment opentracing-contrib/java-spring-web

Bump to OpenTracing 0.33

Hi, I am waiting for this version bump to happen as well.

What is the plan here? Are you going to release a new version soon? When would that be?

We need this change; if it doesn't happen, we will have to re-evaluate our upgrade to OpenTracing API v0.33.

So I would appreciate it if the merge and release could happen soon.

Thanks,

geoand

comment created time in 3 months

issue opened opentracing-contrib/java-spring-web

Clarify project status

On the README, it says

"Use opentracing-spring-cloud instead"

What does it really mean?

Is it indicating that this project is inactive, so we should stop using java-spring-web and move to opentracing-spring-cloud?

It is very confusing.

I would appreciate clarification of the intention there.

created time in 3 months

issue closed ttddyy/datasource-proxy

While creating datasource proxy how to set other interfaces the proxy must implement

For example: if we want the proxy object implement oracle.ucp.jdbc.PoolDataSource, I do not see a way to pass in additional interfaces to implement so that we can cast to PoolDataSource instead of sql DataSource.

closed time in 3 months

laksharm-gs

issue comment ttddyy/datasource-proxy

While creating datasource proxy how to set other interfaces the proxy must implement

Usually, you wrap the target datasource, so you can wrap the PoolDataSource instance with datasource-proxy.

Allowing a custom interface to be specified is an interesting idea, but there are problems such as clashing method names. In any case, you can unwrap the proxy to get the actual target datasource, so I don't think you need to cast to the target datasource type.

Currently, ProxyDataSource is a concrete class whereas the other JDBC objects are proxies; this is a discrepancy in the current implementation. In the 2.x implementation, the DataSource also becomes a proxy, though that usually doesn't matter to users.
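The unwrap route can be sketched with a plain JDK dynamic proxy (a minimal stand-in, not datasource-proxy's actual implementation; StubPoolDataSource is a hypothetical vendor datasource): delegating unwrap()/isWrapperFor() to the target lets callers recover the concrete DataSource without casting the proxy itself.

```java
import java.io.PrintWriter;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.logging.Logger;
import javax.sql.DataSource;

public class UnwrapDemo {
    // Minimal stand-in for a vendor pool datasource (e.g. something like PoolDataSource).
    static class StubPoolDataSource implements DataSource {
        @Override public Connection getConnection() { return null; }
        @Override public Connection getConnection(String user, String pass) { return null; }
        @Override public PrintWriter getLogWriter() { return null; }
        @Override public void setLogWriter(PrintWriter out) { }
        @Override public void setLoginTimeout(int seconds) { }
        @Override public int getLoginTimeout() { return 0; }
        @Override public Logger getParentLogger() { return null; }
        @Override public <T> T unwrap(Class<T> iface) throws SQLException {
            if (iface.isInstance(this)) return iface.cast(this);
            throw new SQLException("not a wrapper for " + iface);
        }
        @Override public boolean isWrapperFor(Class<?> iface) { return iface.isInstance(this); }
    }

    // Wrap a target DataSource in a dynamic proxy; unwrap() keeps working via delegation.
    static DataSource proxyFor(DataSource target) {
        return (DataSource) Proxy.newProxyInstance(
                DataSource.class.getClassLoader(),
                new Class<?>[] {DataSource.class},
                (proxy, method, args) -> {
                    // A real proxy would notify listeners before/after the call here.
                    return method.invoke(target, args);
                });
    }

    // True iff unwrapping the proxy recovers the original target instance.
    public static boolean demo() {
        DataSource target = new StubPoolDataSource();
        DataSource proxied = proxyFor(target);
        try {
            return proxied.unwrap(StubPoolDataSource.class) == target;
        } catch (SQLException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true
    }
}
```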

laksharm-gs

comment created time in 3 months

issue comment ttddyy/datasource-proxy

Throwing SQLException in QueryExecutionListener

Hm, what is the use case for throwing such an exception, or for invoking the statement, in a query listener? If you perform some operations on the statement in a listener, I think it's better to handle exceptions there; otherwise, it may be confusing whether an exception came from the original invocation or from the listener.

whiskeysierra

comment created time in 3 months
