Mark Paluch mp911de Pivotal Software, Inc. Weinheim, Germany http://www.paluch.biz Finest hand-crafted software. Spring Data Project Lead @pivotal, Lettuce Redis Driver Lead. Open source and computers.

lettuce-io/lettuce-core 3103

Advanced Java Redis client for thread-safe sync, async, and reactive usage. Supports Cluster, Sentinel, Pipelining, and codecs.

mp911de/CleanArchitecture 182

CleanArchitecture Example

mp911de/atsoundtrack 25

IntelliJ IDEA Plugin providing @soundtrack

mp911de/iot-distancemeter 19

Transmit sonic sensor data using RaspberryPi/Logstash/MQTT/Python

mp911de/configurator-maven-plugin 3

Home of the configurator-maven-plugin.

mp911de/akka-actor-statistics 2

Pulls statistics (mailbox sizes/processing times) from Akka Actors

mp911de/CCD 2

Clean Code Examples

mp911de/central-logging-tracking-example 2

Example code for tracking requests in a distributed environment

issue comment r2dbc/r2dbc-mssql

Query String bigger than 4000 characters result in java.lang.UnsupportedOperationException

For now, we have snapshots only. I expect a service release at the end of March.

gslulu

comment created time in a day

issue comment r2dbc/r2dbc-postgresql

ResourceLeakDetector reporting "LEAK: ByteBuf.release() was not called before it's garbage-collected"

We fixed the infinite loop issue with #242. I wouldn't be surprised if there's another issue that causes the infinite loop as we entirely rewrote command queueing between 0.8.0 and 0.8.1.

cambierr

comment created time in a day

issue opened spring-projects/spring-data-r2dbc

Add builder for ConnectionFactoryInitializer

ConnectionFactoryInitializer follows pretty much the pattern of DataSourceInitializer in the way the class is built. It would make sense to add a streamlined builder to improve the configuration experience.

Right now, the initializer would be used as follows:

@Bean
public ConnectionFactoryInitializer initializer(ConnectionFactory connectionFactory) {
	ConnectionFactoryInitializer initializer = new ConnectionFactoryInitializer();
	initializer.setConnectionFactory(connectionFactory);
	initializer.setDatabasePopulator(new ResourceDatabasePopulator(new ClassPathResource("my-schema.sql"), new ClassPathResource("my-data.sql")));

	return initializer;
}

We should allow a streamlined construction that does not rely on setters but guides the user through construction, and that also helps with scanning resources through a ResourceLoader for script discovery using Ant patterns.
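The proposed builder could look roughly like the sketch below. All names here (builder(), withScript(), the stand-in types) are hypothetical illustrations of the requested API shape, not the actual Spring Data R2DBC classes:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

// Hypothetical sketch of a guided builder replacing the setter-based
// configuration shown above. Object stands in for io.r2dbc.spi.ConnectionFactory.
class ConnectionFactoryInitializerBuilder {

    private Object connectionFactory;
    private final List<String> scripts = new ArrayList<>();

    static ConnectionFactoryInitializerBuilder builder(Object connectionFactory) {
        ConnectionFactoryInitializerBuilder b = new ConnectionFactoryInitializerBuilder();
        b.connectionFactory = Objects.requireNonNull(connectionFactory, "ConnectionFactory must not be null");
        return b;
    }

    // Fluent steps guide construction instead of relying on setters.
    ConnectionFactoryInitializerBuilder withScript(String location) {
        scripts.add(location);
        return this;
    }

    // Sketch: the real build() would return the configured initializer.
    List<String> build() {
        return List.copyOf(scripts);
    }
}
```

Compared to the setter style above, the fluent chain makes the required ConnectionFactory explicit at the entry point and keeps script registration in one expression.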

created time in a day

push event lettuce-io/lettuce-core

Mark Paluch

commit sha e4ec4f30d82ad28b60773cb9039e9f4032eed3f9

Refactor thrown exceptions during dispatch to command completion failures #617 Exceptions that were used to signal a full queue or prohibited commands are no longer thrown but instead complete the command exceptionally. This approach is friendlier to users of asynchronous/reactive APIs as the command outcome aligns with the programming model for error handling.

view details

push time in a day

issue closed lettuce-io/lettuce-core

Use channel thread to enqueue commands

Using the channel thread to write the command makes a lot of sense because we could probably get rid of some synchronization code and subsequent channel I/O does not require further context switching. It would also unblock writes because we don't require synchronization to perform the actual channel write.

closed time in a day

mp911de

issue comment lettuce-io/lettuce-core

Use channel thread to enqueue commands

We refactored how exception signals are propagated into RedisCommand objects if a command is not eligible for being written to Redis. Previously, we relied on thrown exceptions. As of Lettuce 6, we propagate the failure via completeExceptionally(…), so calls to the async/reactive APIs no longer need a separate approach to handle exceptions.
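The error-propagation model described above can be illustrated with plain CompletableFuture (a minimal sketch, not Lettuce internals; the "queue full" condition and message are stand-ins):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

// Sketch: instead of throwing from the dispatch call, the failure is
// propagated into the command's future via completeExceptionally(...),
// so async callers observe it through the usual CompletionStage error channel.
class DispatchSketch {

    static CompletableFuture<String> dispatch(boolean queueFull) {
        CompletableFuture<String> command = new CompletableFuture<>();
        if (queueFull) {
            // Previously this condition would throw from the caller's thread;
            // now the command itself carries the failure.
            command.completeExceptionally(new IllegalStateException("Request queue size exceeded"));
            return command;
        }
        command.complete("OK");
        return command;
    }
}
```

The caller handles both the success and the rejection through the same onError/exceptionally path, which is exactly why this aligns better with reactive error handling.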

mp911de

comment created time in a day

issue closed lettuce-io/lettuce-core

Refactor script content argument types to String and byte[] instead of V (value type)

Bug Report

Script load should take in a String, not an argument of type V.

Current Behavior

For the sync, async, and reactive API command interfaces, the scriptLoad method takes a script of type V instead of a String. As Redis can only EVAL a Lua script that is a String, it does not make sense to use a generic.

Input Code

https://github.com/lettuce-io/lettuce-core/blob/c98108dd7fbe9bee8702e468efff4b085a976374/src/main/java/io/lettuce/core/api/sync/RedisScriptingCommands.java#L109

should be changed to String scriptLoad(String script);

closed time in a day

danielsomekh

push event lettuce-io/lettuce-core

Mark Paluch

commit sha 578a16b30b37cb88b156ae9f9d1e5a66a72aa1d2

Refactor script content argument types to String and byte[] instead of V (value type) #1010 Scripting commands now come with two overloads that accept scripts in their String and binary representations. Previously, Lettuce accepted a mixed set of types, predominantly Strings, which were converted using the platform-default encoding. The script encoding is configurable via ClientOptions.

view details

push time in a day

issue closed lettuce-io/lettuce-core

Support JUSTID flag of XCLAIM command

Feature Request

The JUSTID flag allows obtaining the message id from the PEL without increasing the retry counter.

Is your feature request related to a problem? Please describe

Cannot use XCLAIM with JUSTID.

Describe the solution you'd like

RedisFuture<List<String>> xClaimJustId(K key, Consumer<K> consumer, XClaimArgs args, String... messageIds);

Flux<String> xClaimJustId(K key, Consumer<K> consumer, XClaimArgs args, String... messageIds);

Describe alternatives you've considered

Add justid to XClaimArgs and allow using it with the existing methods so that it returns StreamMessage with an empty body.

closed time in a day

christophstrobl

issue comment lettuce-io/lettuce-core

Support JUSTID flag of XCLAIM command

That's fixed now. Calling XCLAIM with JUSTID returns StreamMessage objects whose body is null.

christophstrobl

comment created time in a day

push event lettuce-io/lettuce-core

Mark Paluch

commit sha 02330b2673fb0d223a73d9cb7baccafd590f3ca3

Support JUSTID flag of XCLAIM command #1233 We now support XClaimArgs.justid() that requests just the message id without returning the body.

view details

push time in 2 days

push event lettuce-io/lettuce-core

Mark Paluch

commit sha 527f0a6ecb73e6a6910b264c758cf1695f691150

Support JUSTID flag of XCLAIM command #1233 We now support XClaimArgs.justid() that requests just the message id without returning the body.

view details

push time in 2 days

issue closed lettuce-io/lettuce-core

Add support for KEEPTTL with SET

See antirez/redis#6679
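The semantics of KEEPTTL (see antirez/redis#6679) can be modeled in plain Java: a regular SET discards any TTL on the key, while SET with KEEPTTL overwrites the value but retains the existing TTL. This is an illustrative sketch, not Lettuce or Redis code:

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java model of KEEPTTL semantics for illustration only.
class KeepTtlSketch {

    final Map<String, String> values = new HashMap<>();
    final Map<String, Long> ttlMillis = new HashMap<>();

    void set(String key, String value, boolean keepTtl) {
        values.put(key, value);
        if (!keepTtl) {
            ttlMillis.remove(key); // default SET behavior: TTL is cleared
        }
    }

    void expire(String key, long millis) {
        ttlMillis.put(key, millis);
    }
}
```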

closed time in 2 days

mp911de

push event lettuce-io/lettuce-core

Mark Paluch

commit sha 905fac455ecae90109e0c32cfed5d4bd0df3f8f2

Add support for KEEPTTL with SET #1234

view details

push time in 2 days

push event lettuce-io/lettuce-core

Mark Paluch

commit sha c1a2d551ac2d3df1e58c6f1a86c5d3d7eddefe65

Add support for KEEPTTL with SET #1234

view details

push time in 2 days

issue comment lettuce-io/lettuce-core

Contradictory documentation for createBoundedObjectPool

The documentation isn't contradictory; rather, it suggests the method of release. Wrapped connections can be released either by calling close() on the connection or by calling release() on the pool. The only thing that does not work is calling close() on a non-wrapped connection, as that would close the connection instead of releasing it.

Happy to review a pull request to improve the docs if you have suggestions.
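The wrapped-connection behavior described above follows the usual proxy pattern: close() on the wrapper returns the connection to the pool, while close() on the raw connection actually closes it. A minimal plain-Java model (hypothetical types, not the lettuce-core pooling API):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal model of the wrapped-connection release pattern.
class PooledConnectionSketch {

    static class Connection implements AutoCloseable {
        boolean closed;
        public void close() { closed = true; } // really closes
    }

    static class Wrapped implements AutoCloseable {
        final Connection delegate;
        final Deque<Connection> pool;

        Wrapped(Connection delegate, Deque<Connection> pool) {
            this.delegate = delegate;
            this.pool = pool;
        }

        // close() on the wrapper releases to the pool; the raw connection stays open.
        public void close() { pool.push(delegate); }
    }
}
```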

mvh77

comment created time in 2 days

issue closed lettuce-io/lettuce-core

RedisURI class does not parse password when using redis-sentinel

Bug Report

If our Redis Sentinel setup has a password configured on the sentinel instances, providing a connection string of the form 'redis-sentinel://password@127.0.0.1:26379/0#mymaster' causes it to report NOAUTH Authentication required. If we remove the password from the sentinel instances, it works as expected; that is, it directs us to the Redis master instance.
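For context, the connection string above is a standard hierarchical URI: the password travels in the userInfo component and the master id in the fragment, as java.net.URI parsing shows. Whether the driver applies that password when authenticating against the sentinel nodes is the actual question in this issue:

```java
import java.net.URI;

// Demonstrates where the password and master id live in a
// redis-sentinel connection string, using only java.net.URI.
class SentinelUriSketch {

    static String passwordOf(String connectionString) {
        return URI.create(connectionString).getUserInfo();
    }

    static String masterIdOf(String connectionString) {
        return URI.create(connectionString).getFragment();
    }
}
```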

Current Behavior

13:10:52.374 [main] ERROR io.micronaut.runtime.Micronaut - Error starting Micronaut server: Error instantiating bean of type [io.lettuce.core.api.StatefulRedisConnection]: null
io.micronaut.context.exceptions.BeanInstantiationException: Error instantiating bean of type [io.lettuce.core.api.StatefulRedisConnection]: null
        at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1626)
        at io.micronaut.context.DefaultBeanContext.createAndRegisterSingleton(DefaultBeanContext.java:2307)
        at io.micronaut.context.DefaultBeanContext.getBeanForDefinition(DefaultBeanContext.java:1989)
        at io.micronaut.context.DefaultBeanContext.getBeanInternal(DefaultBeanContext.java:1963)
        at io.micronaut.context.DefaultBeanContext.findBean(DefaultBeanContext.java:1102)
        at io.micronaut.context.DefaultBeanContext.findBean(DefaultBeanContext.java:615)
        at io.micronaut.context.BeanLocator.findBean(BeanLocator.java:135)
        at io.micronaut.configuration.lettuce.RedisConnectionUtil.lambda$findRedisConnection$5(RedisConnectionUtil.java:56)
        at java.base/java.util.Optional.orElseGet(Optional.java:369)
        at io.micronaut.configuration.lettuce.RedisConnectionUtil.findRedisConnection(RedisConnectionUtil.java:56)
        at io.micronaut.configuration.lettuce.session.RedisSessionStore.findRedisConnection(RedisSessionStore.java:415)
        at io.micronaut.configuration.lettuce.session.RedisSessionStore.<init>(RedisSessionStore.java:129)
        at io.micronaut.configuration.lettuce.session.$RedisSessionStoreDefinition.build(Unknown Source)
        at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1598)
        at io.micronaut.context.DefaultBeanContext.createAndRegisterSingleton(DefaultBeanContext.java:2307)
        at io.micronaut.context.DefaultBeanContext.getBeanForDefinition(DefaultBeanContext.java:1989)
        at io.micronaut.context.DefaultBeanContext.getBeanInternal(DefaultBeanContext.java:1963)
        at io.micronaut.context.DefaultBeanContext.getBean(DefaultBeanContext.java:1082)
        at io.micronaut.context.AbstractBeanDefinition.getBeanForConstructorArgument(AbstractBeanDefinition.java:1013)
        at io.micronaut.session.binder.$SessionArgumentBinderDefinition.build(Unknown Source)
        at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1598)
        at io.micronaut.context.DefaultBeanContext.addCandidateToList(DefaultBeanContext.java:2630)
        at io.micronaut.context.DefaultBeanContext.getBeansOfTypeInternal(DefaultBeanContext.java:2552)
        at io.micronaut.context.DefaultBeanContext.getBeansOfType(DefaultBeanContext.java:911)
        at io.micronaut.context.AbstractBeanDefinition.lambda$getBeansOfTypeForConstructorArgument$10(AbstractBeanDefinition.java:1120)
        at io.micronaut.context.AbstractBeanDefinition.resolveBeanWithGenericsFromConstructorArgument(AbstractBeanDefinition.java:1758)
        at io.micronaut.context.AbstractBeanDefinition.getBeansOfTypeForConstructorArgument(AbstractBeanDefinition.java:1115)
        at io.micronaut.context.AbstractBeanDefinition.getBeanForConstructorArgument(AbstractBeanDefinition.java:990)
        at io.micronaut.http.bind.$DefaultRequestBinderRegistryDefinition.build(Unknown Source)
        at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1598)
        at io.micronaut.context.DefaultBeanContext.createAndRegisterSingleton(DefaultBeanContext.java:2307)
        at io.micronaut.context.DefaultBeanContext.getBeanForDefinition(DefaultBeanContext.java:1989)
        at io.micronaut.context.DefaultBeanContext.getBeanInternal(DefaultBeanContext.java:1963)
        at io.micronaut.context.DefaultBeanContext.getBean(DefaultBeanContext.java:1082)
        at io.micronaut.context.AbstractBeanDefinition.getBeanForConstructorArgument(AbstractBeanDefinition.java:1013)
        at io.micronaut.http.server.netty.$NettyRequestArgumentSatisfierDefinition.build(Unknown Source)
        at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1598)
        at io.micronaut.context.DefaultBeanContext.createAndRegisterSingleton(DefaultBeanContext.java:2307)
        at io.micronaut.context.DefaultBeanContext.getBeanForDefinition(DefaultBeanContext.java:1989)
        at io.micronaut.context.DefaultBeanContext.getBeanInternal(DefaultBeanContext.java:1963)
        at io.micronaut.context.DefaultBeanContext.getBean(DefaultBeanContext.java:1082)
        at io.micronaut.context.AbstractBeanDefinition.getBeanForConstructorArgument(AbstractBeanDefinition.java:1013)
        at io.micronaut.http.server.netty.$NettyHttpServerDefinition.build(Unknown Source)
        at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1598)
        at io.micronaut.context.DefaultBeanContext.createAndRegisterSingleton(DefaultBeanContext.java:2307)
        at io.micronaut.context.DefaultBeanContext.getBeanForDefinition(DefaultBeanContext.java:1989)
        at io.micronaut.context.DefaultBeanContext.getBeanInternal(DefaultBeanContext.java:1963)
        at io.micronaut.context.DefaultBeanContext.findBean(DefaultBeanContext.java:1102)
        at io.micronaut.context.DefaultBeanContext.findBean(DefaultBeanContext.java:615)
        at io.micronaut.context.BeanLocator.findBean(BeanLocator.java:135)
        at io.micronaut.runtime.Micronaut.start(Micronaut.java:71)
        at io.micronaut.runtime.Micronaut.run(Micronaut.java:307)
        at io.micronaut.runtime.Micronaut.run(Micronaut.java:293)
        at redis.world.Application.main(Application.java:8)
Caused by: io.lettuce.core.RedisConnectionException: null
        at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:75)
        at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:56)
        at io.lettuce.core.AbstractRedisClient.getConnection(AbstractRedisClient.java:235)
        at io.lettuce.core.RedisClient.connect(RedisClient.java:204)
        at io.lettuce.core.RedisClient.connect(RedisClient.java:189)
        at io.micronaut.configuration.lettuce.AbstractRedisClientFactory.redisConnection(AbstractRedisClientFactory.java:51)
        at io.micronaut.configuration.lettuce.DefaultRedisClientFactory.redisConnection(DefaultRedisClientFactory.java:52)
        at io.micronaut.configuration.lettuce.$DefaultRedisClientFactory$RedisConnectionDefinition.build(Unknown Source)
        at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1598)
        ... 53 common frames omitted
Caused by: io.lettuce.core.RedisCommandExecutionException: NOAUTH Authentication required.
        at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:135)
        at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:108)
        at io.lettuce.core.RedisPublisher$SubscriptionCommand.complete(RedisPublisher.java:751)
        at io.lettuce.core.protocol.CommandHandler.complete(CommandHandler.java:646)
        at io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:604)
        at io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:556)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:635)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:552)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514)
        at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1050)
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.base/java.lang.Thread.run(Thread.java:834)

Environment

  • Lettuce version(s): 5
  • Redis version: 5.0.7

closed time in 2 days

kyrogue

issue comment lettuce-io/lettuce-core

RedisURI class does not parse password when using redis-sentinel

The docs are updated now to reflect the actual behavior.

kyrogue

comment created time in 2 days

push event lettuce-io/lettuce-core

Mark Paluch

commit sha 286a0d9a800ee5cfdc9677f0cf30b214857d085d

Improve RedisURI documentation for sentinel authentication #1232

view details

push time in 2 days

push event lettuce-io/lettuce-core

Mark Paluch

commit sha 6970214e4126287c0a447b2ccc00f1692978cd6e

Improve RedisURI documentation for sentinel authentication #1232

view details

push time in 2 days


issue closed lettuce-io/lettuce-core

Add CLIENT ID command

Obtain the client id.

closed time in 2 days

mp911de

PR closed lettuce-io/lettuce-core

Add support for client id command

<!-- Thank you for proposing a pull request. This template will guide you through the essential steps necessary for a pull request. --> Make sure that:

  • [x] You have read the contribution guidelines.
  • [ ] You have created a feature request first to discuss your contribution intent. Please reference the feature request ticket number in the pull request.
  • [ ] You use the code formatters provided here and have them applied to your changes. Don’t submit any formatting related changes.
  • [x] You submit test cases (unit or integration tests) that back your changes.

<!-- Great! Live long and prosper. -->

+62 -0

4 comments

10 changed files

dengliming

pr closed time in 2 days

pull request comment lettuce-io/lettuce-core

Add support for client id command

Thank you for your contribution. That's merged, polished, and backported now.

dengliming

comment created time in 2 days

push event lettuce-io/lettuce-core

dengliming

commit sha 1e25abf8ed296eadc1e7e656e275c67fe76c7bb5

Add support for client id command #1197 Original pull request: #1230.

view details

Mark Paluch

commit sha 4a760fde5ccb89fc6cd5b2e0c29aacae9fd78c3b

Polishing #1197 Tweak documentation. Add since tags. Original pull request: #1230.

view details

push time in 2 days

push event lettuce-io/lettuce-core

dengliming

commit sha 8a10a96b7618b632ded687d7265d426692847d22

Add support for client id command #1197 Original pull request: #1230.

view details

Mark Paluch

commit sha 7ed615ff0767d517645f0a91d4b9bb0523305b65

Polishing #1197 Tweak documentation. Add since tags. Original pull request: #1230.

view details

push time in 2 days

push event r2dbc/r2dbc-mssql

Mark Paluch

commit sha 3dfc76ea40bf4d05cd0f9042c54fbde37cfcbc42

Polishing

view details

push time in 2 days

push event r2dbc/r2dbc-mssql

Mark Paluch

commit sha 4b5d0d7d9f197b9bb4677517873948b6b974b6b0

Polishing

view details

push time in 2 days

push event r2dbc/r2dbc-mssql

Mark Paluch

commit sha b9cb334fd148f796fa9d01ba0942684e1ee1e9ed

Propagate errors in direct query RPC flow [resolves #141]

view details

Mark Paluch

commit sha 78120826acb68a4c9fbcd04460d7739074061617

Polishing Use specific exception for TDS protocol errors. [#141]

view details

Mark Paluch

commit sha 232629c10bbdcbbd5f3598c9262bdc4379692101

Send large query strings using PLP-encoded strings We now use PLP-encoded strings to send large SQL queries when using RPC flows. Previously, we rejected large strings that exceeded 4000 chars. [resolves #140]

view details

push time in 2 days

issue closed r2dbc/r2dbc-mssql

Query String bigger than 4000 characters result in java.lang.UnsupportedOperationException

Bug Report

Sample code used to produce an error:

        Mono<MssqlConnection> connectionMono = dbConnectionService.getConnectionFactory().create();
        final Mono<MyRecord> myRecordMono = connectionMono.flatMap(mssqlConnection ->
                mssqlConnection.beginTransaction().then(
                        buildStatement(mssqlConnection, parm1, param2).execute().
                                flatMap(mssqlResult -> mssqlResult.map((row, rowMetadata) ->
                                        buildMyRecord(row))).switchIfEmpty(Mono.just(new MyRecord())).single()).
                        doOnError(throwable -> mssqlConnection.rollbackTransaction()).
                        doOnSuccess(r -> mssqlConnection.commitTransaction()));

A query string bigger than 4000 characters results in

java.lang.UnsupportedOperationException 
java.lang.UnsupportedOperationException: Use ClobCodec
	at io.r2dbc.mssql.codec.CharacterEncoder.encodeBigVarchar(CharacterEncoder.java:97) ~[r2dbc-mssql-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at io.r2dbc.mssql.codec.RpcEncoding.encodeString(RpcEncoding.java:57) ~[r2dbc-mssql-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at io.r2dbc.mssql.message.token.RpcRequest$RpcString.encode(RpcRequest.java:669) ~[r2dbc-mssql-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at io.r2dbc.mssql.message.token.RpcRequest.lambda$encode$6(RpcRequest.java:200) ~[r2dbc-mssql-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at reactor.core.publisher.FluxDefer.subscribe(FluxDefer.java:46) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.Flux.subscribe(Flux.java:8143) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.netty.channel.MonoSendMany.subscribe(MonoSendMany.java:82) ~[reactor-netty-0.9.2.RELEASE.jar:0.9.2.RELEASE]
	at reactor.core.publisher.Mono.subscribe(Mono.java:4105) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.netty.NettyOutbound.subscribe(NettyOutbound.java:329) ~[reactor-netty-0.9.2.RELEASE.jar:0.9.2.RELEASE]
	at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.drain(FluxConcatMap.java:441) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.onNext(FluxConcatMap.java:243) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.EmitterProcessor.drain(EmitterProcessor.java:426) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.EmitterProcessor.onNext(EmitterProcessor.java:268) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.FluxCreate$IgnoreSink.next(FluxCreate.java:618) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.FluxCreate$SerializedSink.next(FluxCreate.java:153) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at io.r2dbc.mssql.client.ReactorNettyClient.lambda$null$11(ReactorNettyClient.java:525) ~[r2dbc-mssql-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:160) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1630) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.MonoSupplier.subscribe(MonoSupplier.java:61) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.Flux.subscribe(Flux.java:8143) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.Flux.subscribeWith(Flux.java:8307) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.Flux.subscribe(Flux.java:8114) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.Flux.subscribe(Flux.java:8041) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at io.r2dbc.mssql.client.ReactorNettyClient.lambda$null$13(ReactorNettyClient.java:518) ~[r2dbc-mssql-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at reactor.core.publisher.FluxPeek$PeekSubscriber.onSubscribe(FluxPeek.java:154) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.EmitterProcessor.subscribe(EmitterProcessor.java:169) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.Flux.subscribe(Flux.java:8143) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onNext(MonoFlatMapMany.java:188) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.MonoCreate$DefaultMonoSink.success(MonoCreate.java:156) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at io.r2dbc.mssql.client.ReactorNettyClient$ExchangeRequest$1.onSuccess(ReactorNettyClient.java:701) ~[r2dbc-mssql-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at io.r2dbc.mssql.client.ReactorNettyClient$RequestQueue.run(ReactorNettyClient.java:614) ~[r2dbc-mssql-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at reactor.core.publisher.FluxPeekFuseable$PeekConditionalSubscriber.onComplete(FluxPeekFuseable.java:936) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.FluxHandle$HandleConditionalSubscriber.onNext(FluxHandle.java:335) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:242) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.FluxPeek$PeekSubscriber.onNext(FluxPeek.java:192) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.EmitterProcessor.drain(EmitterProcessor.java:426) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.core.publisher.EmitterProcessor.onNext(EmitterProcessor.java:268) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at io.r2dbc.mssql.client.ReactorNettyClient$1.next(ReactorNettyClient.java:231) ~[r2dbc-mssql-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at io.r2dbc.mssql.client.ReactorNettyClient$1.next(ReactorNettyClient.java:191) ~[r2dbc-mssql-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at io.r2dbc.mssql.message.token.Tabular$TabularDecoder.decode(Tabular.java:425) ~[r2dbc-mssql-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at io.r2dbc.mssql.client.ConnectionState$4$1.decode(ConnectionState.java:206) ~[r2dbc-mssql-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at io.r2dbc.mssql.client.StreamDecoder.withState(StreamDecoder.java:137) ~[r2dbc-mssql-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at io.r2dbc.mssql.client.StreamDecoder.decode(StreamDecoder.java:109) ~[r2dbc-mssql-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at io.r2dbc.mssql.client.ReactorNettyClient.lambda$new$6(ReactorNettyClient.java:241) ~[r2dbc-mssql-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at reactor.core.publisher.FluxPeek$PeekSubscriber.onNext(FluxPeek.java:177) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:218) ~[reactor-netty-0.9.2.RELEASE.jar:0.9.2.RELEASE]
	at reactor.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:351) ~[reactor-netty-0.9.2.RELEASE.jar:0.9.2.RELEASE]
	at reactor.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:348) ~[reactor-netty-0.9.2.RELEASE.jar:0.9.2.RELEASE]
	at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:89) ~[reactor-netty-0.9.2.RELEASE.jar:0.9.2.RELEASE]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
	at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:93) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
	at io.r2dbc.mssql.client.ssl.TdsSslHandler.channelRead(TdsSslHandler.java:402) ~[r2dbc-mssql-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:635) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:552) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1050) ~[netty-common-4.1.43.Final.jar:4.1.43.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.43.Final.jar:4.1.43.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.43.Final.jar:4.1.43.Final]
	at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]

Versions

  • Driver: io.r2dbc:r2dbc-mssql:0.8.1.RELEASE / 0.8.0.RELEASE
  • Database: SQL Server
  • Java: 11
  • OS: Windows

Current Behavior

<!-- A clear and concise description of the behavior.-->

<details> <summary>Stack trace</summary>

// your stack trace here

</details>

Table schema

<!--- Provide the table schema. -->

<details> <summary>Input Code</summary>

-- your SQL here;

</details>

Steps to reproduce

<!-- Java/Kotlin/Scala/Groovy/… or Repo link to a Minimal, Reproducible Example if applicable. Preferably raw R2DBC code without any 3rd-party dependencies. -->

<details> <summary>Input Code</summary>

// your code here;

</details>

Expected behavior/code

<!-- A clear and concise description of what you expected to happen (or code). -->

Possible Solution

<!-- Only if you have suggestions on a fix for the bug -->

Additional context

<!-- Add any other context about the problem here. Do not add code as screenshots. -->

closed time in 2 days

gslulu

issue closed r2dbc/r2dbc-mssql

Protocol errors get swallowed in RPC message flow for direct queries

When a query yields a protocol error, then the error is swallowed.

closed time in 2 days

mp911de

push event r2dbc/r2dbc-mssql

Mark Paluch

commit sha 24cf1c4c38772f819056d879a20a4b838bc49f6e

Propagate errors in direct query RPC flow [resolves #141]

view details

Mark Paluch

commit sha f8d96b3e9c2cc6a6d737c7a03f2cf667b401b25e

Polishing Use specific exception for TDS protocol errors. [#141]

view details

Mark Paluch

commit sha 402c94db0c9ace5493a417eb0b96faf1dcfcc350

Send large query strings using PLP-encoded strings. We now use PLP-encoded strings to send large SQL queries when using RPC flows. Previously, we rejected large strings that exceeded 4000 chars. [resolves #140]

view details

push time in 2 days

issue opened r2dbc/r2dbc-mssql

Protocol errors get swallowed in RPC message flow for direct queries

When a query yields a protocol error, then the error is swallowed.

created time in 2 days

issue comment r2dbc/r2dbc-mssql

Query String bigger than 4000 characters result in java.lang.UnsupportedOperationException

Thanks for trying this out. The JDBC driver sends large queries as chunked values (PLP). We can do the same for R2DBC MSSQL.

gslulu

comment created time in 2 days
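Conceptually, PLP sends a large value as a sequence of chunks instead of one fixed-length string. A self-contained sketch of the chunking idea (the chunk size and framing here are illustrative, not the actual TDS wire format, which length-prefixes each chunk and terminates the stream with a zero-length marker):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class PlpChunking {

    // Split a large payload into fixed-size chunks, as PLP streams do
    // conceptually. The real TDS encoding length-prefixes each chunk and
    // ends the stream with a terminator; that framing is omitted here.
    static List<byte[]> chunk(String query, int chunkSize) {
        byte[] bytes = query.getBytes(StandardCharsets.UTF_16LE);
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < bytes.length; offset += chunkSize) {
            int length = Math.min(chunkSize, bytes.length - offset);
            byte[] chunk = new byte[length];
            System.arraycopy(bytes, offset, chunk, 0, length);
            chunks.add(chunk);
        }
        return chunks;
    }

    public static void main(String[] args) {
        // A query beyond the 4000-char NVARCHAR limit gets split into chunks.
        String largeQuery = "SELECT '" + "x".repeat(5000) + "'";
        List<byte[]> chunks = chunk(largeQuery, 4000);
        System.out.println(chunks.size());
    }
}
```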

started rusher/mariadb-connector-r2dbc

started time in 2 days

PR closed mp911de/CleanArchitecture

Add license scan report and status

Your FOSSA integration was successful! Attached in this PR is a badge and license report to track scan status in your README.

Below are docs for integrating FOSSA license checks into your CI:

+5 -0

1 comment

1 changed file

fossabot

pr closed time in 2 days

issue opened lettuce-io/lettuce-core

Add support for KEEPTTL with SET

See antirez/redis#6679

created time in 2 days

issue comment r2dbc/r2dbc-mssql

Query String bigger than 4000 characters result in java.lang.UnsupportedOperationException

Have you checked what the Microsoft JDBC driver does and how it behaves in such a case? It would make sense to align with mssql-jdbc in that regard.

gslulu

comment created time in 2 days

started hcorona/diversity-inclusion

started time in 2 days

issue comment spring-projects/spring-data-r2dbc

TransientDataAccessResourceException(Failed to update table) issue

That's not entirely correct. If your entity implements Persistable, then save(…) will use the outcome of isNew() to determine whether to issue an INSERT or UPDATE.

Gompangs

comment created time in 2 days
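A self-contained sketch of the Persistable contract described above (the interface is re-declared locally to avoid the Spring dependency; the real one is org.springframework.data.domain.Persistable):

```java
// Local stand-in for Spring Data's org.springframework.data.domain.Persistable,
// re-declared only so the example runs without Spring on the classpath.
interface Persistable<ID> {
    ID getId();
    boolean isNew();
}

public class PersistableExample {

    // An entity with an externally assigned id: without Persistable,
    // save(…) would see a non-null id and issue an UPDATE. Implementing
    // isNew() lets the entity itself decide.
    static class Person implements Persistable<String> {
        private final String id;
        private final boolean isNew;

        Person(String id, boolean isNew) {
            this.id = id;
            this.isNew = isNew;
        }

        @Override public String getId() { return id; }
        @Override public boolean isNew() { return isNew; }
    }

    // Mirrors the decision save(…) makes based on the outcome of isNew().
    static String saveAction(Persistable<?> entity) {
        return entity.isNew() ? "INSERT" : "UPDATE";
    }

    public static void main(String[] args) {
        System.out.println(saveAction(new Person("walter", true)));
        System.out.println(saveAction(new Person("walter", false)));
    }
}
```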

issue comment r2dbc/r2dbc-mssql

Query String bigger than 4000 characters result in java.lang.UnsupportedOperationException

Indeed, that's how I understood the issue. By default, queries are sent via RPC (sp_prepexec, sp_cursoropen, …). The actual query is sent as well as a parameter.

gslulu

comment created time in 2 days

issue comment r2dbc/r2dbc-mssql

Query String bigger than 4000 characters result in java.lang.UnsupportedOperationException

We need to check how the JDBC driver behaves in that case. IIRC, the RPC query parameter for the query must be a (N)VARCHAR type.

gslulu

comment created time in 2 days

PR opened spring-projects/spring-data-jdbc

DATAJDBC-491 - Correctly combine schema- and table name

We now correctly combine schema and table name using separate SqlIdentifier objects instead of leaving the schema name as part of the table name. Previously, the concatenated name failed to resolve with quoted identifiers.


Related ticket: DATAJDBC-491.

+71 -29

0 comment

9 changed files

pr created time in 3 days

create branch spring-projects/spring-data-jdbc

branch : issue/DATAJDBC-491

created branch time in 3 days

issue comment spring-projects/spring-boot

Add support for R2DBC

spring-projects/spring-data-r2dbc#311 and https://jira.spring.io/browse/DATAJDBC-492 are fixed now.

jabrena

comment created time in 3 days

issue closed spring-projects/spring-data-r2dbc

AbstractR2dbcConfiguration should use R2dbcMappingContext instead of RelationalMappingContext

To avoid non-unique bean lookup failures, we should require R2dbcMappingContext instead of RelationalMappingContext in AbstractR2dbcConfiguration.

closed time in 3 days

mp911de

push event spring-projects/spring-data-r2dbc

Mark Paluch

commit sha 7ed68ee532f2e3b634725272ad51c52d91e9a538

#311 - Use R2dbcMappingContext in AbstractR2dbcConfiguration instead of RelationalMappingContext. We now use a more concrete type in the configuration to avoid clashes when Spring Data JDBC is on the class path.

view details

push time in 3 days

issue closed spring-projects/spring-data-r2dbc

@Query definitions with SpEL expressions

Something like that https://spring.io/blog/2014/07/15/spel-support-in-spring-data-jpa-query-definitions:

@Query("select u from User u where u.firstname = :#{#customer.firstname}")
List<User> findUsersByCustomersFirstname(@Param("customer") Customer customer);

I thought this feature is common to all spring data projects, but it seems to be specific to jpa.

closed time in 3 days

a8t3r

push event spring-projects/spring-data-r2dbc

Mark Paluch

commit sha 6027e29305fa409b1352e2de88a5ab06991e3dfd

#164 - Support @Query definitions with SpEL expressions. We now support SpEL expressions in string-based queries to bind parameters for more dynamic queries. SpEL expressions are enclosed in :#{…} and rendered as synthetic named parameters so their values are substituted with bound parameters to avoid SQL injection attack vectors. interface PersonRepository extends Repository<Person, String> { @Query("SELECT * FROM person WHERE lastname = :#{'hello'}") Mono<Person> findHello(); @Query("SELECT * FROM person WHERE lastname = :#{[0]} and firstname = :firstname") Mono<Person> findByLastnameAndFirstname(@Param("value") String value, @Param("firstname") String firstname); @Query("SELECT * FROM person WHERE lastname = :#{#person.name}") Mono<Person> findByExample(@Param("person") Person person); }

view details

push time in 3 days

push event spring-projects/spring-data-jdbc

Mark Paluch

commit sha 95428b7022f7c32c94582a0f9f8938db0b66df6a

DATAJDBC-492 - Use JdbcMappingContext in AbstractJdbcConfiguration instead of RelationalMappingContext. We now use a more concrete type in the configuration to avoid clashes when Spring Data R2DBC is on the class path.

view details

push time in 3 days

issue opened spring-projects/spring-data-r2dbc

AbstractR2dbcConfiguration should use R2dbcMappingContext instead of RelationalMappingContext

To avoid non-unique bean lookup failures, we should require R2dbcMappingContext instead of RelationalMappingContext in AbstractR2dbcConfiguration.

created time in 3 days

issue comment lettuce-io/lettuce-core

RedisURI class does not parse password when using redis-sentinel

userinfo is the part after :// and before the host name (password@, :password, username:password@).

Regarding the master Id, both approaches, the fragment and sentinelMasterId, are picked up. In some setups, the fragment was stripped from the URL, so we had to fall back to the query string.

kyrogue

comment created time in 3 days

issue comment spring-projects/spring-data-r2dbc

Multiple DBConfigs with custom conversions called only once

Thanks a lot for the detail. Parallelism wasn’t obvious from the previous comment. Glad it worked out for you.

hfhbd

comment created time in 3 days

issue closed spring-projects/spring-data-r2dbc

NamedParameterUtils renders collection-parameters without reusing existing bind markers

Consider the following query for Postgres:

SELECT * FROM person where name IN (:ids) or lastname IN (:ids)

The expanded form should render into (assuming three items in the bound parameter):

SELECT * FROM person where name IN ($0, $1, $2) or lastname IN ($0, $1, $2)

instead, it renders:

SELECT * FROM person where name IN ($0, $1, $2) or lastname IN ($3, $4, $5)

Original report: r2dbc/r2dbc-postgresql#252

closed time in 3 days

mp911de

push event spring-projects/spring-data-r2dbc

Mark Paluch

commit sha 714dec3c94eb7d3afdb14740ca280dd9f45eaaf3

#310 - Fix bind marker reuse when expanding collection arguments using named parameters. We now correctly reuse bind markers for named parameter substitution when using collection arguments to create dynamic argument lists. Previously, we allocated a new bind marker for each item in the collection which left the parameters intended for reuse unbound.

view details

push time in 3 days

push event spring-projects/spring-data-r2dbc

Mark Paluch

commit sha 7e817acca6449e724a803603ac6d8267f26027ac

#310 - Fix bind marker reuse when expanding collection arguments using named parameters. We now correctly reuse bind markers for named parameter substitution when using collection arguments to create dynamic argument lists. Previously, we allocated a new bind marker for each item in the collection which left the parameters intended for reuse unbound.

view details

push time in 3 days

issue comment r2dbc/r2dbc-postgresql

multiple usage of same collection parameter in extended query

I filed spring-projects/spring-data-r2dbc#310 to address the issue in Spring Data R2DBC.

xqfgbc

comment created time in 3 days

issue opened spring-projects/spring-data-r2dbc

NamedParameterUtils renders collection-parameters without reusing existing bind markers

Consider the following query for Postgres:

SELECT * FROM person where name IN (:ids) or lastname IN (:ids)

The expanded form should render into (assuming three items in the bound parameter):

SELECT * FROM person where name IN ($0, $1, $2) or lastname IN ($0, $1, $2)

instead, it renders:

SELECT * FROM person where name IN ($0, $1, $2) or lastname IN ($3, $4, $5)

Original report: r2dbc/r2dbc-postgresql#252

created time in 3 days
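The fix amounts to remembering the bind markers allocated the first time a named parameter is expanded and replaying them on later occurrences. A simplified, self-contained sketch of that idea (not the actual NamedParameterUtils implementation; marker numbering follows the $0-based examples above):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringJoiner;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BindMarkerReuse {

    // Expand :name placeholders into $n markers, reusing the markers
    // allocated on the first occurrence of each parameter name instead of
    // allocating fresh (and unbound) markers for repeats.
    static String expand(String sql, Map<String, Integer> parameterSizes) {
        Pattern placeholder = Pattern.compile(":(\\w+)");
        Map<String, String> allocated = new HashMap<>();
        int[] next = {0};

        Matcher matcher = placeholder.matcher(sql);
        StringBuilder result = new StringBuilder();
        while (matcher.find()) {
            String markers = allocated.computeIfAbsent(matcher.group(1), name -> {
                StringJoiner joiner = new StringJoiner(", ");
                for (int i = 0; i < parameterSizes.get(name); i++) {
                    joiner.add("$" + next[0]++);
                }
                return joiner.toString();
            });
            matcher.appendReplacement(result, Matcher.quoteReplacement(markers));
        }
        matcher.appendTail(result);
        return result.toString();
    }

    public static void main(String[] args) {
        String sql = "SELECT * FROM person where name IN (:ids) or lastname IN (:ids)";
        System.out.println(expand(sql, Map.of("ids", 3)));
    }
}
```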

issue closed r2dbc/r2dbc-postgresql

multiple usage of same collection parameter in extended query

I want to execute a native query like the one below:

select user_name from user where user_id in(:userIds) and age > 35
union
select user_name from user where user_id in(:userIds) and age < 20

but it throws an error like this: java.lang.IllegalStateException: Bound parameter count does not match parameters in SQL statement

closed time in 3 days

xqfgbc

issue comment r2dbc/r2dbc-postgresql

multiple usage of same collection parameter in extended query

Indeed, Spring Data R2DBC renders a SQL statement that does not match the bindings:

select user_name from user where user_id in($1, $2, $3) and age > 35
union
select user_name from user where user_id in($4, $5, $6) and age < 20

while binding only positions $1, $2 and $3.

xqfgbc

comment created time in 3 days

issue comment lettuce-io/lettuce-core

RedisURI class does not parse password when using redis-sentinel

The URI format specifies [ userinfo "@" ] host [ ":" port ] as authority. password@host1:26379,password@host2 would help to specify a password per Sentinel but no longer the data node password.

kyrogue

comment created time in 4 days
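The authority grammar above is exactly what java.net.URI implements; a quick self-contained check showing that a single userinfo component precedes the first host, which is why one URI cannot carry a password per Sentinel plus a data-node password:

```java
import java.net.URI;

public class RedisUriAuthority {

    public static void main(String[] args) {
        // [ userinfo "@" ] host [ ":" port ] — one userinfo per authority.
        URI uri = URI.create("redis-sentinel://password@localhost:26379");
        System.out.println(uri.getUserInfo());
        System.out.println(uri.getHost());
        System.out.println(uri.getPort());
    }
}
```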

Pull request review comment spring-projects/spring-data-mongodb

DATAMONGO-2478 - Fix NPE in Query.of when given a proxied source.

 public boolean isSorted() {
 			}
 		};
-		target.criteria.putAll(source.criteria);
-		target.skip = source.skip;
-		target.limit = source.limit;
-		target.sort = Sort.unsorted().and(source.sort);
-		target.hint = source.hint;
-		target.collation = source.collation;
-		target.restrictedTypes.addAll(source.restrictedTypes);
+		Query theQuery = ProxyUtils.unwrapTargetSource(source);
+
+		target.skip = theQuery.skip;

How about using accessor methods instead of field access? That way, we can entirely omit any kind of proxy handling.

christophstrobl

comment created time in 4 days

issue comment lettuce-io/lettuce-core

RedisURI class does not parse password when using redis-sentinel

The password configured via a String-based RedisURI is used for master/replica authentication only. Sentinel authentication is a relatively new feature and there's typically no correlation between Sentinel- and data node password. This is why we decided to not automatically apply the password to Sentinel nodes as it would break a lot of existing applications.

Please provide a RedisURI object for configuration like the one below:

RedisURI redisURI = RedisURI.create("redis-sentinel://password@localhost:26379/0#mymaster");
redisURI.getSentinels().forEach(it -> it.setPassword("my-sentinel-password"));

Since the URI format is not flexible enough to represent multiple passwords, especially in the light that Redis 6 is going to ship username and password support, we cannot really do anything useful here. Adding query parameters (sentinelUsername=…&sentinelPassword=…) do not seem a good fit.

kyrogue

comment created time in 4 days

issue closed mp911de/logstash-gelf

Can't load log handler

Hello everyone,

I would like to set up this plugin inside a Jenkins container on Java 11. It was working with Jenkins on Java 8, but after changing to Java 11 I got this message:

Can't load log handler "biz.paluch.logging.gelf.jul.GelfLogHandler"
java.lang.ClassNotFoundException: biz.paluch.logging.gelf.jul.GelfLogHandler
java.lang.ClassNotFoundException: biz.paluch.logging.gelf.jul.GelfLogHandler
	at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
	at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
	at java.logging/java.util.logging.LogManager.createLoggerHandlers(LogManager.java:1000)
	at java.logging/java.util.logging.LogManager$4.run(LogManager.java:970)
	at java.logging/java.util.logging.LogManager$4.run(LogManager.java:966)
	at java.base/java.security.AccessController.doPrivileged(Native Method)
	at java.logging/java.util.logging.LogManager.loadLoggerHandlers(LogManager.java:966)
	at java.logging/java.util.logging.LogManager.initializeGlobalHandlers(LogManager.java:2417)
	at java.logging/java.util.logging.LogManager$RootLogger.accessCheckedHandlers(LogManager.java:2511)
	at java.logging/java.util.logging.Logger.getHandlers(Logger.java:2089)
	at winstone.Launcher.main(Launcher.java:335)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at Main._main(Main.java:375)
	at Main.main(Main.java:151)

I built the newest version in a JDK 11 environment but it doesn't help.

Any idea?

Best regards,

closed time in 4 days

jakubwlodarczyk94

issue comment mp911de/logstash-gelf

Can't load log handler

This choice depends on how you intend to configure your application. We cannot help with that kind of questions.

Closing this ticket as the original issue isn't related to logstash-gelf.

jakubwlodarczyk94

comment created time in 4 days

issue closed spring-projects/spring-data-r2dbc

Multiple DBConfigs with custom conversions called only once

Hey,

I have 3 databases, each having its own DBConfig class, extending the AbstractR2dbcConfiguration.

From the docs: To add custom conversions override the getCustomConversions().

Unfortunately, there is only one global R2dbcCustomConversions Bean instead of one for each configuration. This results in only one call of the overridden getCustomConversions() function.

Example:

@Configuration
@EnableR2dbcRepositories(ADBConfig.basePackage, databaseClientRef = ADBConfig.clientName)
class ADBConfig(private val properties: AProperties): AbstractR2dbcConfiguration() {

   companion object {
       const val basePackage = "org.example.a"
       const val clientName = "aDatabaseClient"
       const val factory = "aFactory"
   }

   @Bean(factory)
   override fun connectionFactory(): ConnectionFactory {
       return H2ConnectionFactory(
           H2ConnectionConfiguration.builder()
               .url(properties.datasource.jdbcURL)
               .username(properties.datasource.userName)
               .password(properties.datasource.password)
               .build()
       )
   }

   override fun customConverters() = listOf(AWritingConverter(), AReadConverter())

   @Bean(clientName)
   fun databaseClient(@Qualifier(factory) connectionFactory: ConnectionFactory): DatabaseClient = DatabaseClient.create(connectionFactory)
}

Alternative:

Don't override the getCustomConversions function, but instead the Bean directly with a given name.

@Bean(conversions)
override fun r2dbcCustomConversions() = R2dbcCustomConversions(storeConversions, listOf())

The @Bean R2dbcCustomConversions now has to be resolved via @Qualifier in AbstractR2dbcConfiguration. The qualifier comes from a new attribute named R2dbcCustomConversionsRef in @EnableR2dbcRepositories.

@Configuration
@EnableR2dbcRepositories(ADBConfig.basePackage, databaseClientRef = ADBConfig.clientName, R2dbcCustomConversionsRef = "aCustomConversions")
class ADBConfig(private val properties: AProperties): AbstractR2dbcConfiguration() {

closed time in 4 days

hfhbd

issue comment spring-projects/spring-data-r2dbc

Multiple DBConfigs with custom conversions called only once

There are multiple approaches. Each R2dbcTransactionManager is associated with a ConnectionFactory. If all your databases are of the same kind and you could reuse the same DatabaseClient, then AbstractRoutingConnectionFactory would be the way to go where you can route database calls based on contextual information.

If you're required to maintain multiple ConnectionFactory instances, then you also need to associate each of them with a R2dbcTransactionManager and TransactionalOperator. This pattern pretty much corresponds with what Spring provides for JDBC DataSource. You could also use an annotation-based programming model via @Transactional("txManager1")/@Transactional("txManager2") and so on.

Please note that .as(transactionalOperator::transactional) associates the transactional context with all upstream publishers. So in your case, findByUnique(…) and save(…) share the same transaction.

Please note that we use GitHub issues for feature requests and bug reports. We prefer Stack Overflow and Gitter for questions respective discussions.

As this ticket is neither a bug nor a feature request that we're going to implement, we're closing this ticket.

hfhbd

comment created time in 4 days
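As an analogy for the AbstractRoutingConnectionFactory approach, a self-contained sketch of routing by lookup key (plain Java stand-ins instead of ConnectionFactory targets; in the real class the key comes from determineCurrentLookupKey(), typically fed by the Reactor subscriber context):

```java
import java.util.Map;
import java.util.function.Supplier;

public class RoutingSketch {

    // Mimics AbstractRoutingConnectionFactory: hold one target per lookup
    // key and resolve the target per call, so one logical factory can
    // front several databases of the same kind.
    static class RoutingFactory {
        private final Map<String, Supplier<String>> targets;

        RoutingFactory(Map<String, Supplier<String>> targets) {
            this.targets = targets;
        }

        String create(String lookupKey) {
            Supplier<String> target = targets.get(lookupKey);
            if (target == null) {
                throw new IllegalStateException("No target for key: " + lookupKey);
            }
            return target.get();
        }
    }

    public static void main(String[] args) {
        RoutingFactory factory = new RoutingFactory(Map.of(
                "tenant-a", () -> "connection to database A",
                "tenant-b", () -> "connection to database B"));
        System.out.println(factory.create("tenant-a"));
        System.out.println(factory.create("tenant-b"));
    }
}
```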

push event lettuce-io/lettuce-core

Mark Paluch

commit sha 265f9ecb41599fc28008d641ccba75957dc47f23

Update project documentation Add code of conduct. Improve contribution guide.

view details

push time in 4 days

push event lettuce-io/lettuce-core

Mark Paluch

commit sha 254e0460b619c497025998b437d3da8933a29b51

Update project documentation Add code of conduct. Improve contribution guide.

view details

push time in 4 days

push event lettuce-io/lettuce-core

Mark Paluch

commit sha df2777a3793e3d4b4550477d523baf888085d55a

Update project documentation Add code of conduct. Improve contribution guide.

view details

push time in 4 days

PR opened spring-projects/spring-data-cassandra

DATACASS-84 - Use NamingStrategy for table and column name derivation

We now use a configurable NamingStrategy to configure how table, user-defined type and column names are derived if the name is not explicitly configured. The default naming strategy uses the type/property name. Naming strategies allow customization with a transformation function (all lower-case/upper case, prepending/appending and more) and strategies can be provided by a custom implementation.


Related ticket: DATACASS-84.

+635 -86

0 comment

15 changed files

pr created time in 4 days

create branch spring-projects/spring-data-cassandra

branch : issue/DATACASS-84

created branch time in 4 days

issue comment lettuce-io/lettuce-core

Randomly getting "RedisCommandTimeoutException: Command timed out"

Thanks for sharing that you feel offended by my six-month-old comment @bilalcaliskan. Mixing two issues that seem related in their effect but are different in their cause isn't helpful when trying to understand what is going on in this ticket.

MatanRubin

comment created time in 4 days

issue comment r2dbc/r2dbc-postgresql

multiple usage of same collection parameter in extended query

The Postgres driver uses $1, $2, $n notation for parameters. This is likely a Spring Data issue. Can you attach the actual SQL query or run the native query using plain R2DBC API?

xqfgbc

comment created time in 4 days

issue comment mp911de/logstash-gelf

Can't load log handler

Likely you need to add the appender and its dependencies to the boot classloader, as JUL logging uses that one instead of the application class loader.

jakubwlodarczyk94

comment created time in 4 days

issue closed spring-projects/spring-data-r2dbc

Why R2dbcEntityTemplate extends BeanFactoryAware

Is setBeanFactory() an API required by users?

closed time in 4 days

huodon

issue comment spring-projects/spring-data-r2dbc

Why R2dbcEntityTemplate extends BeanFactoryAware

Right now, the entity template uses the context for projections (interface/DTO projections) for SpEL evaluation. In a future version we’re going to add entity callbacks and lifecycle events that require an event publisher and the context for further bean lookup.

Regarding your second question, it depends on whether you’re interested in these features.

huodon

comment created time in 4 days

PR opened spring-projects/spring-data-r2dbc

#189 - Accept StatementFilterFunction in DatabaseClient

We now accept StatementFilterFunction and ExecuteFunction via DatabaseClient to filter Statement execution. StatementFilterFunctions can be used to pre-process the statement or post-process Result objects.

databaseClient.execute(…)
		.filter((s, next) -> next.execute(s.returnGeneratedValues("my_id")))
		.filter((s, next) -> next.execute(s.fetchSize(25)))

databaseClient.execute(…)
		.filter(s -> s.returnGeneratedValues("my_id"))
		.filter(s -> s.fetchSize(25))

Related ticket: #189.

+438 -44

0 comment

10 changed files

pr created time in 4 days

push event spring-projects/spring-data-r2dbc

Mark Paluch

commit sha 876a49e2b6714a974158a702a4ce1956bc458d1b

#189 - Accept StatementFilterFunction in DatabaseClient. We now accept StatementFilterFunction and ExecuteFunction via DatabaseClient to filter Statement execution. StatementFilterFunctions can be used to pre-process the statement or post-process Result objects. databaseClient.execute(…) .filter((s, next) -> next.execute(s.returnGeneratedValues("my_id"))) .filter((s, next) -> next.execute(s.fetchSize(25))) databaseClient.execute(…) .filter(s -> s.returnGeneratedValues("my_id")) .filter(s -> s.fetchSize(25))

view details

push time in 4 days

create branch spring-projects/spring-data-r2dbc

branch : issue/gh-189

created branch time in 4 days

issue comment spring-projects/spring-data-r2dbc

execute(...) should be extended with returning generated keys

After investigating on this topic, we do not have proper use-cases for the related ticket #46. Therefore, we're going to introduce a slim version of filter functions for Statement objects. The simple (UnaryOperator) code would look like:

DatabaseClient databaseClient = …;
databaseClient.execute("SELECT")
		.filter(s -> s.returnGeneratedValues("foo"))

the bit more extended approach with a StatementFilterFunction would accept also an ExecuteFunction:

DatabaseClient databaseClient = …;
databaseClient.execute("SELECT")
		.filter((s, next) -> next.execute(s.returnGeneratedValues("foo")))
SzalkaiGabor

comment created time in 4 days

issue comment spring-projects/spring-data-r2dbc

Selecting @EnableR2dbcRepositories with repositoryFactoryBeanClass always need to select basePackages or basePackageClasses

But for scanning, could the base class be obtained from the getRepositoryBaseClass method of the RepositoryFactoryBeanSupport implementation? Then the basePackages variable would not be needed.

Not sure what you mean. basePackages is used during the factory setup phase to identify repository interface candidates. Once bean definitions are registered, basePackages is no longer used.

LaoTsing

comment created time in 5 days

issue closed spring-projects/spring-data-r2dbc

column name escaping not working with default `findAll` method

Given a model class containing:

    @Column(value = "end")
    private long end;

the default findAll method doesn't work since it doesn't escape the column whose name is end.

If escaping the name in the column like so:

    @Column(value = "\"end\"")
    private long end;

Then the mapper can't convert the rows to entities anymore.

For now, the only obvious fix seems to redefine the findAll method in the repository interface with something like @query("select * from myEntity")

closed time in 5 days

cambierr

issue comment spring-projects/spring-data-r2dbc

column name escaping not working with default `findAll` method

We introduced escaping/quoting support with #291. Quotation is disabled by default. You can enable it via R2dbcMappingContext.setForceQuote(…) for all entities.

Note that quotation can come with case-sensitivity on certain databases.

cambierr

comment created time in 5 days

issue comment spring-projects/spring-data-r2dbc

Selecting @EnableR2dbcRepositories with repositoryFactoryBeanClass always need to select basePackages or basePackageClasses

Have you tried annotation aliasing? You can create your own annotation and use @AliasFor to forward properties. Example:

@EnableR2dbcRepositories(repositoryFactoryBeanClass = CustomRepositoryFactoryBean.class)
public @interface EnableMyR2dbcLibrary {

	@AliasFor(annotation = EnableR2dbcRepositories.class, value = "basePackages")
	String[] basePackages() default {};

	@AliasFor(annotation = EnableR2dbcRepositories.class, value = "basePackageClasses")
	Class<?>[] basePackageClasses() default {};
}

Adding another scanning mechanism would pollute the public API while serving only a single use case.

LaoTsing

comment created time in 5 days

issue comment spring-projects/spring-data-r2dbc

Combined AND and OR predicate in Criteria Builder

The issue is going to be addressed with #289 / #307.

LaoTsing

comment created time in 5 days

PR opened spring-projects/spring-data-r2dbc

Add support for Criteria composition

We now support composition of Criteria objects to create a Criteria from one or more top-level criteria and to compose nested AND/OR Criteria objects:

Criteria.where("name").is("Foo")).and(
        Criteria.where("name").is("Bar").or("age").lessThan(49).or(
                Criteria.where("name").not("Bar").and("age").greaterThan(49))

Related ticket: #289. Depends on spring-projects/spring-data-jdbc#193

+420 -37

0 comment

7 changed files

pr created time in 5 days

create branch spring-projects/spring-data-r2dbc

branch : issue/gh-289

created branch time in 5 days

PR opened spring-projects/spring-data-jdbc

DATAJDBC-490 - Support condition grouping

We now support condition groups (WHERE (a = b OR b = c) AND (e = f)) with the SQL AST via Condition.group().

We should make sure that Condition.group() is a sufficiently expressive method name. Maybe asGroup() or any other variant is more appropriate?


Related ticket: DATAJDBC-490.

+175 -17

0 comment

9 changed files

pr created time in 5 days

push event spring-projects/spring-data-jdbc

Mark Paluch

commit sha 4e713d70db529589af5d518d9fbaa025f4662d9e

DATAJDBC-490 - Prepare issue branch.

view details

Mark Paluch

commit sha aa1e571166f0eeea43ca52b917f2409f92e79a4d

DATAJDBC-490 - Support condition grouping. We now support condition groups (WHERE (a = b OR b = c) AND (e = f)) with the SQL AST via Condition.group().

view details

push time in 5 days

create branch spring-projects/spring-data-jdbc

branch : issue/DATAJDBC-490

created branch time in 5 days

issue closed spring-projects/spring-data-r2dbc

Add Kotlin extensions for R2dbcEntityTemplate

We should provide Kotlin extensions for the Fluent API leveraging reified generics and support for Coroutines.

Depends on #287 and #220.

closed time in 5 days

mp911de

push event spring-projects/spring-data-r2dbc

Mark Paluch

commit sha 5a5a5fec9eac8e3cfc42419b6946c47a8573d465

#290 - Add Kotlin extensions for fluent R2dbcEntityTemplate API.

view details

push time in 5 days

issue closed spring-projects/spring-data-r2dbc

Extend unit tests for QueryMapper

See #220:

Since this method is quite complex I'd love to have some unit tests. Cases that should be covered IMHO:

entity =/!= null column w/wo alias demonstrating what the column-> field+table -> column roundtrip does. SimpleFunction w/wo recursion Aliased or not Error case.

closed time in 5 days

mp911de

push event spring-projects/spring-data-r2dbc

Mark Paluch

commit sha 1e64c6461ad0a4e1ba3e59df8696141cf11a7e59

#300 - Extend unit tests for QueryMapper.

view details

push time in 5 days

issue closed spring-projects/spring-data-r2dbc

Documentation request: How to use Reading / Writing converter w/ List<T>

I'm attempting to store a collection of objects in a single text (JSON) column in MySQL. The input is List<T> and I have reading and writing converters that would take said List<T> and save it as a simple string in the database.

Here's my DTO that I want effectively embedded and stored in the images column.

@Builder
@Data
public final class ImageDTO {

    @Getter
    @JsonProperty
    private String url;

    @Getter
    @JsonProperty
    private Integer height;

    @Getter
    @JsonProperty
    private Integer width;
}

Here's my writing converter ... which is outputting a nested value of SettableValue<SettableValue<String>> for some weird reason.

@WritingConverter
public class ListOfImagesWritingConverter implements Converter<List<ImageDTO>, OutboundRow> {
    public final OutboundRow convert(final List<ImageDTO> images) {
        final OutboundRow row = new OutboundRow();
        final ObjectMapper mapper = new ObjectMapper();
        try {
            final String serializedImages = mapper.writeValueAsString(images);
            row.put("images", SettableValue.from(serializedImages));
        } catch (final JsonProcessingException e) {
            e.printStackTrace();
            row.put("images", SettableValue.from("[]"));
        }
        return row;
    }
}

If I compare this Writing Converter's output against another field like a string ... I see a SettableValue which contains a nested SettableValue in there which is causing the following exception:

There was an unexpected error (type=Internal Server Error, status=500).
Cannot encode value of type 'class org.springframework.data.r2dbc.mapping.OutboundRow'
java.lang.IllegalArgumentException: Cannot encode value of type 'class org.springframework.data.r2dbc.mapping.OutboundRow'
	at dev.miku.r2dbc.mysql.codec.DefaultCodecs.encode(DefaultCodecs.java:182)
	Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException: Error has been observed at the following site(s):

Visually as I'm trying to step through the debugger I'm seeing this chunk that is definitely different than the output of a String converted value.

(debugger screenshot omitted)

Hopefully this is a simple mistake on my end with the writing converter ... but it may be a bug. Any help would be appreciated! Thanks :-)

closed time in 5 days

jackdpeterson

push event spring-projects/spring-data-r2dbc

Mark Paluch

commit sha 47374f690e879de2086004e56f2eb8e8685e7640

#298 - Refine documentation regarding Converter behavior for collections.

view details

push time in 5 days

push event spring-projects/spring-data-r2dbc

Mark Paluch

commit sha bfff1f4aaad49ea6c4d8d8e4b4b2f80ba3bf5ab3

#298 - Refine documentation regarding Converter behavior for collections.

view details

push time in 5 days

issue comment spring-projects/spring-data-r2dbc

Documentation request: How to use Reading / Writing converter w/ List<T>

Spring Data unwraps collections into single elements retaining the nature of a Collection type. That being said, it's not possible to convert List<Something> into SomethingElse. Rather, your source type should be a complex type instead:

class Images { 
  List<Image> images;
}

Then, you can apply a converter Converter<Images, String> and vice versa.

I'm going to update the docs.

jackdpeterson

comment created time in 5 days
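A minimal, self-contained sketch of the wrapper pattern described above (Spring's Converter interface is re-declared locally and a naive join stands in for Jackson-based JSON serialization, so the example runs without any dependencies; a real application would use a @WritingConverter with an ObjectMapper):

```java
import java.util.List;
import java.util.stream.Collectors;

public class WrapperConverterExample {

    // Local stand-in for org.springframework.core.convert.converter.Converter,
    // declared here only to keep the sketch self-contained.
    interface Converter<S, T> {
        T convert(S source);
    }

    static class Image {
        final String url;
        Image(String url) { this.url = url; }
    }

    // Wrapping the collection in a dedicated type lets Spring Data pick a
    // Converter for the whole value instead of unwrapping the List into
    // per-element conversions.
    static class Images {
        final List<Image> images;
        Images(List<Image> images) { this.images = images; }
    }

    // A naive join stands in for JSON serialization via Jackson.
    static class ImagesWritingConverter implements Converter<Images, String> {
        @Override
        public String convert(Images source) {
            return source.images.stream()
                    .map(image -> "\"" + image.url + "\"")
                    .collect(Collectors.joining(", ", "[", "]"));
        }
    }

    public static void main(String[] args) {
        Images images = new Images(List.of(new Image("a.png"), new Image("b.png")));
        System.out.println(new ImagesWritingConverter().convert(images));
    }
}
```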

issue closed spring-projects/spring-data-r2dbc

Doc: Explicit Converters have to use Boxed Primitives

Hey,

If you use Kotlin and explicit converters, you have to use the boxed Objects for Java Primitives. Otherwise the following code does not work:

@ReadingConverter
class TestReadConverter : Converter<Row, Test> {
    override fun convert(source: Row) = Test(
        name = source.get("name", String::class.java)!!,
        id = source.get("id", Long::class.java) // change to Long::class.javaObjectType
    )
}

Maybe the documentation should include this.

closed time in 5 days

hfhbd