
artembilan/bookmarks 1

code to accompany a talk on Microservices

artembilan/disruptor 1

High Performance Inter-Thread Messaging Library

artembilan/AcceptOnceFileFilter-Test 0

Code to test question at http://stackoverflow.com/questions/39604652/acceptoncefilefilter-keeps-other-filters-from-working-in-a-compositefilelistfilt

artembilan/aggregator 0

The Spring Cloud Stream Aggregator Application Starter

artembilan/azure-spring-boot 0

Spring Boot Starters for Azure services

Pull request review comment: spring-projects/spring-kafka

Add RecordMetadata to ProducerListener.onError

 default void onSuccess(ProducerRecord<K, V> producerRecord, RecordMetadata recor
 	 * Invoked after an attempt to send a message has failed.
 	 * @param producerRecord the failed record
 	 * @param exception the exception thrown
+	 * @deprecated in favor of {@link #onError(ProducerRecord, RecordMetadata, Exception)}.
 	 */
+	@Deprecated
 	default void onError(ProducerRecord<K, V> producerRecord, Exception exception) {
 	}
+
+	/**
+	 * Invoked after an attempt to send a message has failed.
+	 * @param producerRecord the failed record
+	 * @param recordMetadata The metadata for the record that was sent (i.e. the partition
+	 * and offset). If an error occurred, metadata will contain only valid topic and maybe
+	 * the partition. If the partition is not provided in the ProducerRecord and an error
+	 * occurs before partition is assigned, then the partition will be set to
+	 * RecordMetadata.UNKNOWN_PARTITION.
+	 * @param exception the exception thrown
+	 * @since 2.6.2
+	 */
+	@SuppressWarnings("deprecation")
+	default void onError(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata, Exception exception) {
	default void onError(ProducerRecord<K, V> producerRecord, @Nullable RecordMetadata recordMetadata, Exception exception) {

?

garyrussell

comment created time in 2 days


Pull request review comment: spring-projects/spring-kafka

Add RecordMetadata to ProducerListener.onError

 public void onError(ProducerRecord<K, V> record, Exception exception) {
 			}
 			logOutput.append(" to topic ").append(record.topic());
 			if (record.partition() != null) {
-				logOutput.append(" and partition ").append(record.partition());
+				logOutput.append(" and partition ").append(recordMetadata != null

So, @Nullable then on the method signature?

garyrussell

comment created time in 2 days


Pull request review comment: spring-cloud/stream-applications

[Do-Not-Merge-Yet]Rsocket consumer

+/*
+ * Copyright 2020-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.springframework.cloud.fn.consumer.rsocket;
+
+import java.util.function.Function;
+
+import reactor.core.publisher.Mono;
+
+import org.springframework.boot.context.properties.EnableConfigurationProperties;
+import org.springframework.context.annotation.Bean;
+import org.springframework.context.annotation.Configuration;
+import org.springframework.messaging.Message;
+import org.springframework.messaging.rsocket.RSocketRequester;
+
+@Configuration
+@EnableConfigurationProperties(RsocketConsumerProperties.class)
+public class RsocketConsumerConfiguration {
+
+	@Bean
+	public Function<Message<?>, Mono<Void>> rsocketConsumer(RSocketRequester.Builder builder,
+															RsocketConsumerProperties rsocketConsumerProperties) {
+		final Mono<RSocketRequester> rSocketRequester = builder.connectTcp(rsocketConsumerProperties.getHost(),
+				rsocketConsumerProperties.getPort());

If we stick with Spring Boot 2.3 for the time being, please consider adding a cache() operator to this Mono. Otherwise we are going to reconnect on every single message passed to the function.
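The point of cache() here is that the connection Mono resolves once and every later subscriber gets the memoized requester instead of triggering a fresh connect. Since the real code depends on Reactor and Spring, here is a plain-JDK analogy of that memoization (all names invented for illustration; Mono.cache() is the real operator being suggested):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class CacheDemo {

	public static final AtomicInteger connections = new AtomicInteger();

	// Stands in for builder.connectTcp(...): every evaluation opens a new "connection"
	static Supplier<String> connect() {
		return () -> "requester-" + connections.incrementAndGet();
	}

	// Stands in for Mono.cache(): the first result is memoized and replayed to later callers
	static Supplier<String> cached(Supplier<String> source) {
		return new Supplier<String>() {

			private String value;

			@Override
			public synchronized String get() {
				if (this.value == null) {
					this.value = source.get();
				}
				return this.value;
			}

		};
	}

	public static void main(String[] args) {
		Supplier<String> requester = cached(connect());
		for (int i = 0; i < 3; i++) {
			requester.get(); // three "messages" reuse the single cached connection
		}
		System.out.println(connections.get()); // prints 1; without cached(...) it would be 3
	}

}
```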

sobychacko

comment created time in 2 days


Pull request review comment: spring-cloud/stream-applications

[Do-Not-Merge-Yet]Rsocket consumer

+/*
+ * Copyright 2020-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.springframework.cloud.stream.app.rsocket.sink;
+
+import org.junit.jupiter.api.Test;
+import reactor.core.publisher.ReplayProcessor;
+import reactor.test.StepVerifier;
+
+import org.springframework.boot.WebApplicationType;
+import org.springframework.boot.autoconfigure.SpringBootApplication;
+import org.springframework.boot.builder.SpringApplicationBuilder;
+import org.springframework.cloud.fn.consumer.rsocket.RsocketConsumerConfiguration;
+import org.springframework.cloud.stream.binder.test.InputDestination;
+import org.springframework.cloud.stream.binder.test.TestChannelBinderConfiguration;
+import org.springframework.context.ConfigurableApplicationContext;
+import org.springframework.context.annotation.ComponentScan;
+import org.springframework.context.annotation.Import;
+import org.springframework.integration.support.MessageBuilder;
+import org.springframework.messaging.Message;
+import org.springframework.messaging.handler.annotation.MessageMapping;
+import org.springframework.stereotype.Controller;
+
+public class RSocketSinkTests {
+
+	@Test
+	public void testRsocketSink() throws Exception {
+		try (ConfigurableApplicationContext context = new SpringApplicationBuilder(

This test doesn't finish because you just send data into an InputDestination. There is a function subscribed to that destination, of the form Function<Message<?>, Mono<Void>>, and it looks like nothing subscribes to that Mono.

Probably it's time to talk to Oleg to see what we can do for this reactive "consumer" to make the subscription automatic.

In MongoDbConsumer David did a hack like:

@Bean
public Consumer<Message<?>> mongodbConsumer(Function<Message<?>, Mono<Void>> mongodbConsumerFunction) {
	return message -> mongodbConsumerFunction.apply(message).subscribe();
}

Honestly, I'm not a fan of this solution and would prefer something automatic on the framework side: some flatMap() or handle() should be done over there to honor back-pressure.

sobychacko

comment created time in 2 days

Pull request review comment: spring-cloud/stream-applications

[Do-Not-Merge-Yet]Rsocket consumer

+/*
+ * Copyright 2020-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.springframework.cloud.fn.consumer.rsocket;
+
+import org.springframework.boot.context.properties.ConfigurationProperties;
+
+@ConfigurationProperties("rsocket.consumer")
+public class RsocketConsumerProperties {
+
+	/**
+	 * RSocket host.
+	 */
+	private String host = "localhost";
+
+	/**
+	 * RSocket port.
+	 */
+	private int port = 7000;

We probably need to consider having a URI uri option as well, for a WebSocket-based transport for that RSocketRequester.
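For what it's worth, a sketch of how such an option might be wired (the uri property and the fallback logic are assumptions for illustration, not code from this PR; RSocketRequester.Builder does expose connectWebSocket(URI) in the Spring Framework 5.2 line used by Boot 2.3):

```java
// Hypothetical wiring: prefer an assumed 'uri' property (WebSocket transport),
// fall back to the existing TCP host/port pair when no URI is configured.
Mono<RSocketRequester> rSocketRequester =
		(rsocketConsumerProperties.getUri() != null)
				? builder.connectWebSocket(rsocketConsumerProperties.getUri())
				: builder.connectTcp(rsocketConsumerProperties.getHost(),
						rsocketConsumerProperties.getPort());
```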

sobychacko

comment created time in 2 days

Pull request review comment: spring-cloud/stream-applications

[Do-Not-Merge-Yet]Rsocket consumer

+/*
+ * Copyright 2020-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.springframework.cloud.fn.consumer.rsocket;
+
+import java.util.function.Function;
+
+import org.junit.jupiter.api.Test;
+import reactor.core.publisher.Mono;
+import reactor.core.publisher.ReplayProcessor;
+import reactor.test.StepVerifier;
+
+import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.boot.autoconfigure.SpringBootApplication;
+import org.springframework.boot.test.context.SpringBootTest;
+import org.springframework.context.annotation.ComponentScan;
+import org.springframework.messaging.Message;
+import org.springframework.messaging.handler.annotation.MessageMapping;
+import org.springframework.messaging.support.GenericMessage;
+import org.springframework.stereotype.Controller;
+
+@SpringBootTest(properties = {"spring.rsocket.server.port=7000", "rsocket.consumer.route=test-route"})
+public class RsocketConsumerTests {

What happened to @DirtiesContext? You start an RSocket server over here and you don't stop it. Moreover, it is slightly dangerous to stick with port 7000: that port could simply be busy on the CI server... Consider using 0 to let the OS select the port.

This is how to get the port for the client then:

RSocketServerBootstrap serverBootstrap = applicationContext.getBean(RSocketServerBootstrap.class);
RSocketServer server = (RSocketServer) ReflectionTestUtils.getField(serverBootstrap, "server");

server.address().getPort();

I mean, you probably need to think about dividing the server and client sides into separate application contexts.

You can probably use an ApplicationContextRunner for the client side, letting @SpringBootTest warm up the RSocket server for you.
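A rough sketch of that split, treating the wiring as an assumption rather than code from this PR: @SpringBootTest boots only the server side on a random port, and an ApplicationContextRunner builds a separate client context against the discovered port.

```java
@SpringBootTest(properties = "spring.rsocket.server.port=0")
class RsocketConsumerTests {

	@Test
	void rsocketConsumer(@Autowired ApplicationContext serverContext) {
		// Recover the OS-assigned port from Boot's server bootstrap bean
		RSocketServerBootstrap bootstrap = serverContext.getBean(RSocketServerBootstrap.class);
		RSocketServer server = (RSocketServer) ReflectionTestUtils.getField(bootstrap, "server");
		int port = server.address().getPort();

		// Client side lives in its own context, pointed at the random port;
		// additional auto-configurations may be needed for the requester builder
		new ApplicationContextRunner()
				.withUserConfiguration(RsocketConsumerConfiguration.class)
				.withPropertyValues("rsocket.consumer.port=" + port)
				.run(clientContext -> {
					// exercise the rsocketConsumer Function bean here
				});
	}

}
```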

sobychacko

comment created time in 2 days

Pull request review comment: spring-cloud/stream-applications

[Do-Not-Merge-Yet]Rsocket consumer

+{+  "data": {

I'm not sure what these two JSON files are about...

sobychacko

comment created time in 2 days

Pull request review comment: spring-cloud/stream-applications

[Do-Not-Merge-Yet]Rsocket consumer

+/*
+ * Copyright 2020-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.springframework.cloud.fn.consumer.rsocket;
+
+import java.util.function.Function;
+
+import reactor.core.publisher.Mono;
+
+import org.springframework.boot.context.properties.EnableConfigurationProperties;
+import org.springframework.context.annotation.Bean;
+import org.springframework.context.annotation.Configuration;
+import org.springframework.messaging.Message;
+import org.springframework.messaging.rsocket.RSocketRequester;
+
+@Configuration

proxyBeanMethods = false.

I don't see a reason to defer such a task to a separate issue: it's not clear who would do that, or when. But working out good habits for new modules is the way to go.
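For reference, the suggested change is a one-attribute tweak; with proxyBeanMethods = false Spring skips CGLIB-proxying the configuration class, which is safe as long as no @Bean method calls another @Bean method directly (a sketch against the class in this PR):

```java
@Configuration(proxyBeanMethods = false)
@EnableConfigurationProperties(RsocketConsumerProperties.class)
public class RsocketConsumerConfiguration {
	// bean definitions unchanged
}
```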

sobychacko

comment created time in 2 days

Pull request review comment: spring-cloud/stream-applications

[Do-Not-Merge-Yet]Rsocket consumer

+/*
+ * Copyright 2020-2020 the original author or authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.springframework.cloud.fn.consumer.rsocket;
+
+import java.util.function.Function;
+
+import reactor.core.publisher.Mono;
+
+import org.springframework.boot.context.properties.EnableConfigurationProperties;
+import org.springframework.context.annotation.Bean;
+import org.springframework.context.annotation.Configuration;
+import org.springframework.messaging.Message;
+import org.springframework.messaging.rsocket.RSocketRequester;
+
+@Configuration
+@EnableConfigurationProperties(RsocketConsumerProperties.class)
+public class RsocketConsumerConfiguration {
+
+	@Bean
+	public Function<Message<?>, Mono<Void>> rsocketConsumer(RSocketRequester.Builder builder,
+															RsocketConsumerProperties rsocketConsumerProperties) {
+		final Mono<RSocketRequester> rSocketRequester = builder.connectTcp(rsocketConsumerProperties.getHost(),

Huh? Don't we rely on Spring Boot 2.4 yet? This method is deprecated in Spring Framework 5.3.
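If/when the build moves to Spring Boot 2.4 (Spring Framework 5.3), the non-deprecated replacement returns the requester directly instead of a Mono; a hedged sketch:

```java
// Spring Framework 5.3 style: the connection is established lazily on first request
RSocketRequester rSocketRequester = builder.tcp(
		rsocketConsumerProperties.getHost(),
		rsocketConsumerProperties.getPort());
```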

sobychacko

comment created time in 2 days

Pull request review comment: spring-cloud/stream-applications

[Do-Not-Merge-Yet]Rsocket consumer

+<?xml version="1.0" encoding="UTF-8"?>+<project xmlns="http://maven.apache.org/POM/4.0.0"+		 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"+		 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">+	<modelVersion>4.0.0</modelVersion>+	<artifactId>rsocket-consumer</artifactId>+	<version>1.0.0-SNAPSHOT</version>+	<name>rsocket-consumer</name>++	<parent>+		<groupId>org.springframework.cloud.fn</groupId>+		<artifactId>spring-functions-parent</artifactId>+		<version>1.0.0-SNAPSHOT</version>+		<relativePath>../../spring-functions-parent</relativePath>+	</parent>++	<dependencies>+		<dependency>+			<groupId>org.springframework.boot</groupId>+			<artifactId>spring-boot-starter-rsocket</artifactId>+		</dependency>+		<dependency>+			<groupId>org.springframework.boot</groupId>+			<artifactId>spring-boot-starter-validation</artifactId>

I don't think we need this dependency, since we don't validate props. And we probably don't need to: empty and null will definitely fall back to the defaults. Even that route can be null...

sobychacko

comment created time in 2 days

Pull request review comment: spring-cloud/stream-applications

[Do-Not-Merge-Yet]Rsocket consumer

+# RSocket Consumer
+
+A consumer that allows you to communicate to an RSocket route using its fire and forget strategy of execution.
+The consumer uses the RSocket support from https://docs.spring.io/spring-integration/reference/html/rsocket.html[Spring Integration].

Well, that doesn't look like it's true. I see that RsocketConsumerConfiguration has nothing from Spring Integration. You probably need to reconsider these docs to refer to: https://docs.spring.io/spring-framework/docs/current/spring-framework-reference/web-reactive.html#rsocket-requester

sobychacko

comment created time in 2 days

Pull request review comment: spring-cloud/stream-applications

[Do-Not-Merge-Yet]Rsocket consumer

+# RSocket Consumer
+
+A consumer that allows you to communicate to an RSocket route using its fire and forget strategy of execution.
+The consumer uses the RSocket support from https://docs.spring.io/spring-integration/reference/html/rsocket.html[Spring Integration].
+
+## Beans for injection
+
+You can import `RSocketConsumerConfiguration` in the application and then inject the following bean.
+
+`Function<Message<?>, Mono<Void>> rsocketConsumer`
+
+You can use `rsocketConsumer` as a qualifier when injecting.
+
+## Configuration Options
+
+All configuration properties are prefixed with `rsocket.consumer`.
+
+For more information on the various options available, please see link:src/main/java/org/springframework/cloud/fn/consumer/rsocket/RsocketConsumerProperties.java[RsocketConsumerProperties] and

Sounds like this sentence is not finished.

sobychacko

comment created time in 2 days


push event: artembilan/spring-integration

Artem Bilan

commit sha 5b54ece0fe3e93a89011113e6ca10eb3cc17864a

* Use `Class.getSimpleName()` instead to make gateway names less verbose

view details

push time in 2 days

pull request comment: spring-projects/spring-integration

GH-3386: Add method signature for gateway proxy

Note: we may treat this as a breaking change, so after review a Migration Guide note should probably be added.

artembilan

comment created time in 2 days

PR opened spring-projects/spring-integration

GH-3386: Add method signature for gateway proxy

Fixes https://github.com/spring-projects/spring-integration/issues/3386

All the gateway proxy method invokers are supplied with the same bean name, inherited from the proxy.

  • Add a method signature for the proxy method bean to fine-grain the management of those beans in the logs, message history and metrics


+75 -62

0 comments

7 changed files

pr created time in 2 days

create branch: artembilan/spring-integration

branch: GH-3386

created branch time in 2 days

push event: spring-projects/spring-kafka

Gary Russell

commit sha be0cc7c0120a1f031668e24bf6fece14440dae9c

Add ListenerContainerNoLongerIdleEvent

view details

push time in 2 days

push event: spring-cloud/stream-applications

David Turanski

commit sha ddcc10363e708c93af010c1f3553ed7cca8a954d

List only enhancements

* Add metadata.store.type Property and idempotent SftpSupplier for list-only
* Implement list-only for S3 source and Optimize metadastore access
* Fixed build and READMEs
* Change to ConditionalOnProperty
* Change to ReactiveMessageProducer
* Update cdc-debezium-source/README.adoc
* Make all MetadataStoreProperties visible

view details

push time in 3 days

PR merged spring-cloud/stream-applications

List only enhancements

This includes:

  • Use metadata.store.type property instead of class detection for auto configuration in metadata-store-common
  • Implement list-only for S3. Returns S3ObjectSummary
  • SFTP and S3 use the metadata store to filter duplicates based on last-modified time.
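The duplicate-filtering idea in the last bullet can be sketched independently of S3: keep a last-modified timestamp per key in a metadata store and accept an entry only when the timestamp changed. A minimal pure-Java sketch (a HashMap standing in for the ConcurrentMetadataStore; names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class LastModifiedFilter {

	private final Map<String, String> metadataStore = new HashMap<>();

	/** Accepts the entry only if its last-modified time differs from the stored one. */
	public boolean accept(String key, long lastModified) {
		String stored = this.metadataStore.get(key);
		String current = String.valueOf(lastModified);
		if (current.equals(stored)) {
			return false; // same version already seen: duplicate
		}
		this.metadataStore.put(key, current); // remember the version we just accepted
		return true;
	}

	public static void main(String[] args) {
		LastModifiedFilter filter = new LastModifiedFilter();
		System.out.println(filter.accept("bucket/file.txt", 100)); // true: first sighting
		System.out.println(filter.accept("bucket/file.txt", 100)); // false: duplicate
		System.out.println(filter.accept("bucket/file.txt", 200)); // true: file changed
	}

}
```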
+657 -215

2 comments

27 changed files

dturanski

pr closed time in 3 days

push event: spring-cloud/stream-applications

Soby Chacko

commit sha 0378f7d3a0e50d6561804c85c06134ad6581e7ac

Migrating Mail Source

* Mail supplier for receiving email from a URL (IMAP/POP3)
* Generating mail source apps
* Migrating tests
* Addressing PR review comments
* Addressing PR review comments

view details

push time in 3 days

PR merged spring-cloud/stream-applications

Migrating Mail Source
  • Mail supplier for receiving email from a URL (IMAP/POP3)
  • Generating mail source apps
  • Migrating tests
+1134 -0

0 comments

19 changed files

sobychacko

pr closed time in 3 days

push event: spring-projects/spring-amqp

Gary Russell

commit sha 914c6f845da24a80f46b4755cc61221e725fcb70

GH-1429: Improve ConditionalRejectingErrorHandler

Resolves https://github.com/spring-projects/spring-amqp/issues/1249

Add protected getters for private fields; add `handleDiscarded()`.

view details

push time in 3 days

PR merged spring-projects/spring-amqp

GH-1429: Improve ConditionalRejectingErrorHandler

Resolves https://github.com/spring-projects/spring-amqp/issues/1249

Add protected getters for private fields; add handleDiscarded().

+40 -0

0 comments

1 changed file

garyrussell

pr closed time in 3 days

issue closed: spring-projects/spring-amqp

Make ConditionalRejectingErrorHandler Easier to Subclass/Customize

https://stackoverflow.com/questions/63943342/park-xml-message-in-invalid-format-to-amqp-parking-lot-queue/63943873#63943873

closed time in 3 days

garyrussell

Pull request review comment: spring-cloud/stream-applications

List only enhancements

  */
 public class ImageRecognitionProcessorTests {
 
-	@Test
-	@EnabledOnOs(OS.MAC)

Why are these image changes here? What is the reason to make them part of this PR? There is no commit message on the matter, so I can't understand the point of the change, and I don't know this code.

Thanks

dturanski

comment created time in 3 days

Pull request review comment: spring-cloud/stream-applications

List only enhancements

 import org.springframework.integration.aws.support.filters.S3PersistentAcceptOnceFileListFilter;
 import org.springframework.integration.aws.support.filters.S3RegexPatternFileListFilter;
 import org.springframework.integration.aws.support.filters.S3SimplePatternFileListFilter;
+import org.springframework.integration.core.GenericSelector;
 import org.springframework.integration.core.MessageSource;
 import org.springframework.integration.dsl.IntegrationFlows;
+import org.springframework.integration.endpoint.ReactiveMessageSourceProducer;
 import org.springframework.integration.file.filters.ChainFileListFilter;
 import org.springframework.integration.file.filters.FileListFilter;
-import org.springframework.integration.metadata.SimpleMetadataStore;
+import org.springframework.integration.metadata.ConcurrentMetadataStore;
 import org.springframework.integration.util.IntegrationReactiveUtils;
 import org.springframework.messaging.Message;
+import org.springframework.messaging.support.GenericMessage;
 import org.springframework.util.StringUtils;
 
 /**
  * @author Artem Bilan
+ * @author David Turanski
  */
 @Configuration
-@EnableConfigurationProperties({AwsS3SupplierProperties.class, FileConsumerProperties.class})
-public class AwsS3SupplierConfiguration {
+@EnableConfigurationProperties({ AwsS3SupplierProperties.class, FileConsumerProperties.class })
+public abstract class AwsS3SupplierConfiguration {
 
-	private final AwsS3SupplierProperties awsS3SupplierProperties;
-	private final FileConsumerProperties fileConsumerProperties;
-	private final AmazonS3 amazonS3;
-	private final ResourceIdResolver resourceIdResolver;
+	protected static final String METADATA_STORE_PREFIX = "s3-metadata-";
+
+	protected final AwsS3SupplierProperties awsS3SupplierProperties;
+
+	protected final FileConsumerProperties fileConsumerProperties;
+
+	protected final AmazonS3 amazonS3;
+
+	protected final ResourceIdResolver resourceIdResolver;
+
+	protected final ConcurrentMetadataStore metadataStore;
 
 	public AwsS3SupplierConfiguration(AwsS3SupplierProperties awsS3SupplierProperties,
-									FileConsumerProperties fileConsumerProperties,
-									AmazonS3 amazonS3,
-									ResourceIdResolver resourceIdResolver) {
+			FileConsumerProperties fileConsumerProperties,
+			AmazonS3 amazonS3,
+			ResourceIdResolver resourceIdResolver, ConcurrentMetadataStore metadataStore) {
 		this.awsS3SupplierProperties = awsS3SupplierProperties;
 		this.fileConsumerProperties = fileConsumerProperties;
 		this.amazonS3 = amazonS3;
 		this.resourceIdResolver = resourceIdResolver;
+		this.metadataStore = metadataStore;
 	}
 
-	@Bean
-	public S3InboundFileSynchronizer s3InboundFileSynchronizer() {
-		S3SessionFactory s3SessionFactory = new S3SessionFactory(this.amazonS3, this.resourceIdResolver);
-		S3InboundFileSynchronizer synchronizer = new S3InboundFileSynchronizer(s3SessionFactory);
-		synchronizer.setDeleteRemoteFiles(this.awsS3SupplierProperties.isDeleteRemoteFiles());
-		synchronizer.setPreserveTimestamp(this.awsS3SupplierProperties.isPreserveTimestamp());
-		String remoteDir = this.awsS3SupplierProperties.getRemoteDir();
-		synchronizer.setRemoteDirectory(remoteDir);
-		synchronizer.setRemoteFileSeparator(this.awsS3SupplierProperties.getRemoteFileSeparator());
-		synchronizer.setTemporaryFileSuffix(this.awsS3SupplierProperties.getTmpFileSuffix());
-
-		FileListFilter<S3ObjectSummary> filter = null;
-		if (StringUtils.hasText(this.awsS3SupplierProperties.getFilenamePattern())) {
-			filter = new S3SimplePatternFileListFilter(this.awsS3SupplierProperties.getFilenamePattern());
+	@Configuration
+	@ConditionalOnProperty(prefix = "s3.supplier", name = "list-only", havingValue = "false", matchIfMissing = true)
+	static class SynchronizingConfiguation extends AwsS3SupplierConfiguration {

Typo

dturanski

comment created time in 3 days

Pull request review comment: spring-cloud/stream-applications

List only enhancements

         <version>3.0.0-SNAPSHOT</version>
         <relativePath>../..</relativePath>
     </parent>
+    <properties>
+        <jmockit.version>1.49</jmockit.version>
+	</properties>
+
     <modelVersion>4.0.0</modelVersion>
     <artifactId>stream-applications-micrometer-common</artifactId>
 
+    <build>
+        <plugins>
+            <plugin>
+                <artifactId>maven-surefire-plugin</artifactId>
+                <configuration>
+                    <argLine>
+                        -javaagent:"${settings.localRepository}"/org/jmockit/jmockit/${jmockit.version}/jmockit-${jmockit.version}.jar

Why is this change here? Again, I don't understand this code: maybe you should ask someone else to review? Well, I'm OK to proceed, but I'd like to understand the changes, so next time I probably won't ask questions like this. Thanks

dturanski

comment created time in 3 days

Pull request review comment: spring-cloud/stream-applications

List only enhancements

 public class MetadataStoreAutoConfiguration {
 
 	@Bean
-	@ConditionalOnMissingBean
+	@ConditionalOnProperty(prefix = "metadata.store", name = "type", havingValue = "memory", matchIfMissing = true)
 	public ConcurrentMetadataStore simpleMetadataStore() {
 		return new SimpleMetadataStore();
 	}
 
-	@ConditionalOnClass(RedisMetadataStore.class)
-	@ConditionalOnBean(RedisTemplate.class)

Iron-clad argument. Accepting your solution on the matter.

👍

dturanski

comment created time in 3 days

Pull request review comment: spring-cloud/stream-applications

List only enhancements

 public void testSourceComposedWithSpelAndFilter() {
 			OutputDestination target = context.getBean(OutputDestination.class);
 			Message<byte[]> sourceMessage = target.receive(10000);
 			final String actual = new String(sourceMessage.getPayload());
-			System.out.println(actual);

Good catch! In Spring Integration we have a special Checkstyle rule to reject such code 😄:

<module name="Regexp">
	<property name="format" value="System.(out|err).print"/>
	<property name="illegalPattern" value="true"/>
	<property name="message" value="System.out or .err"/>
</module>
dturanski

comment created time in 3 days

Pull request review comment: spring-cloud/stream-applications

List only enhancements

 import org.springframework.integration.aws.support.filters.S3PersistentAcceptOnceFileListFilter;
 import org.springframework.integration.aws.support.filters.S3RegexPatternFileListFilter;
 import org.springframework.integration.aws.support.filters.S3SimplePatternFileListFilter;
+import org.springframework.integration.channel.QueueChannel;
+import org.springframework.integration.core.GenericSelector;
 import org.springframework.integration.core.MessageSource;
 import org.springframework.integration.dsl.IntegrationFlows;
+import org.springframework.integration.endpoint.MessageProducerSupport;
 import org.springframework.integration.file.filters.ChainFileListFilter;
 import org.springframework.integration.file.filters.FileListFilter;
-import org.springframework.integration.metadata.SimpleMetadataStore;
+import org.springframework.integration.metadata.ConcurrentMetadataStore;
 import org.springframework.integration.util.IntegrationReactiveUtils;
 import org.springframework.messaging.Message;
+import org.springframework.messaging.MessageHeaders;
+import org.springframework.messaging.PollableChannel;
+import org.springframework.messaging.support.MessageBuilder;
 import org.springframework.util.StringUtils;
 
 /**
  * @author Artem Bilan
+ * @author David Turanski
  */
 @Configuration
-@EnableConfigurationProperties({AwsS3SupplierProperties.class, FileConsumerProperties.class})
-public class AwsS3SupplierConfiguration {
+@EnableConfigurationProperties({ AwsS3SupplierProperties.class, FileConsumerProperties.class })
+@AutoConfigureAfter(MetadataStoreAutoConfiguration.class)
+public abstract class AwsS3SupplierConfiguration {
 
-	private final AwsS3SupplierProperties awsS3SupplierProperties;
-	private final FileConsumerProperties fileConsumerProperties;
-	private final AmazonS3 amazonS3;
-	private final ResourceIdResolver resourceIdResolver;
+	protected static final String METADATA_STORE_PREFIX = "s3-metadata-";
+
+	protected final AwsS3SupplierProperties awsS3SupplierProperties;
+
+	protected final FileConsumerProperties fileConsumerProperties;
+
+	protected final AmazonS3 amazonS3;
+
+	protected final ResourceIdResolver resourceIdResolver;
+
+	protected final ConcurrentMetadataStore metadataStore;
 
 	public AwsS3SupplierConfiguration(AwsS3SupplierProperties awsS3SupplierProperties,
-									FileConsumerProperties fileConsumerProperties,
-									AmazonS3 amazonS3,
-									ResourceIdResolver resourceIdResolver) {
+			FileConsumerProperties fileConsumerProperties,
+			AmazonS3 amazonS3,
+			ResourceIdResolver resourceIdResolver, ConcurrentMetadataStore metadataStore) {
 		this.awsS3SupplierProperties = awsS3SupplierProperties;
 		this.fileConsumerProperties = fileConsumerProperties;
 		this.amazonS3 = amazonS3;
 		this.resourceIdResolver = resourceIdResolver;
+		this.metadataStore = metadataStore;
 	}
 
 	@Bean
-	public S3InboundFileSynchronizer s3InboundFileSynchronizer() {
-		S3SessionFactory s3SessionFactory = new S3SessionFactory(this.amazonS3, this.resourceIdResolver);
-		S3InboundFileSynchronizer synchronizer = new S3InboundFileSynchronizer(s3SessionFactory);
-		synchronizer.setDeleteRemoteFiles(this.awsS3SupplierProperties.isDeleteRemoteFiles());
-		synchronizer.setPreserveTimestamp(this.awsS3SupplierProperties.isPreserveTimestamp());
-		String remoteDir = this.awsS3SupplierProperties.getRemoteDir();
-		synchronizer.setRemoteDirectory(remoteDir);
-		synchronizer.setRemoteFileSeparator(this.awsS3SupplierProperties.getRemoteFileSeparator());
-		synchronizer.setTemporaryFileSuffix(this.awsS3SupplierProperties.getTmpFileSuffix());
-
-		FileListFilter<S3ObjectSummary> filter = null;
-		if (StringUtils.hasText(this.awsS3SupplierProperties.getFilenamePattern())) {
-			filter = new S3SimplePatternFileListFilter(this.awsS3SupplierProperties.getFilenamePattern());
+	public Supplier<Flux<Message<?>>> s3Supplier(Publisher<Message<Object>> s3SupplierFlow) {
+		return () -> Flux.from(s3SupplierFlow);
+	}
+
+	@Configuration
+	@ConditionalOnExpression("environment['s3.supplier.list-only'] != 'true'")
+	static class SynchronizingConfiguation extends AwsS3SupplierConfiguration {
+
+		@Bean
+		public ChainFileListFilter<S3ObjectSummary> filter(AwsS3SupplierProperties awsS3SupplierProperties,
+				ConcurrentMetadataStore metadataStore) {
+			ChainFileListFilter<S3ObjectSummary> chainFilter = new ChainFileListFilter<>();
+			FileListFilter<S3ObjectSummary> filter = null;
+			if (StringUtils.hasText(this.awsS3SupplierProperties.getFilenamePattern())) {
+				chainFilter.addFilter(
+						new S3SimplePatternFileListFilter(this.awsS3SupplierProperties.getFilenamePattern()));
+			}
+			else if (this.awsS3SupplierProperties.getFilenameRegex() != null) {
+				chainFilter
+						.addFilter(new S3RegexPatternFileListFilter(this.awsS3SupplierProperties.getFilenameRegex()));
+			}
+
+			// chainFilter.addFilter(Arrays::asList);
+			chainFilter.addFilter(new S3PersistentAcceptOnceFileListFilter(metadataStore, METADATA_STORE_PREFIX));
+			return chainFilter;
 		}
-		else if (this.awsS3SupplierProperties.getFilenameRegex() != null) {
-			filter = new S3RegexPatternFileListFilter(this.awsS3SupplierProperties.getFilenameRegex());
+
+		SynchronizingConfiguation(AwsS3SupplierProperties awsS3SupplierProperties,
+				FileConsumerProperties fileConsumerProperties,
+				AmazonS3 amazonS3,
+				ResourceIdResolver resourceIdResolver,
+				ConcurrentMetadataStore concurrentMetadataStore) {
+			super(awsS3SupplierProperties, fileConsumerProperties, amazonS3, resourceIdResolver,
+					concurrentMetadataStore);
 		}
-		if (filter != null) {
-			synchronizer.setFilter(new ChainFileListFilter<>(Arrays.asList(filter,
-					new S3PersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "s3-metadata-"))));
+
+		@Bean
+		public Publisher<Message<Object>> s3SupplierFlow(MessageSource<?> s3MessageSource) {
+			return FileUtils.enhanceFlowForReadingMode(IntegrationFlows
+					.from(IntegrationReactiveUtils.messageSourceToFlux(s3MessageSource)), fileConsumerProperties)
+					.toReactivePublisher();
 		}
-		return synchronizer;
-	}
 
-	@Bean
-	public MessageSource<File> s3MessageSource() {
-		S3InboundFileSynchronizingMessageSource s3MessageSource =
-				new S3InboundFileSynchronizingMessageSource(s3InboundFileSynchronizer());
-		s3MessageSource.setLocalDirectory(this.awsS3SupplierProperties.getLocalDir());
-		s3MessageSource.setAutoCreateLocalDirectory(this.awsS3SupplierProperties.isAutoCreateLocalDir());
-		return s3MessageSource;
-	}
+		@Bean
+		public S3InboundFileSynchronizer s3InboundFileSynchronizer(ChainFileListFilter<S3ObjectSummary> filter) {
+			S3SessionFactory s3SessionFactory = new S3SessionFactory(this.amazonS3, this.resourceIdResolver);
+			S3InboundFileSynchronizer synchronizer = new S3InboundFileSynchronizer(s3SessionFactory);
+			synchronizer.setDeleteRemoteFiles(this.awsS3SupplierProperties.isDeleteRemoteFiles());
+			synchronizer.setPreserveTimestamp(this.awsS3SupplierProperties.isPreserveTimestamp());
+			String remoteDir = this.awsS3SupplierProperties.getRemoteDir();
+			synchronizer.setRemoteDirectory(remoteDir);
+			synchronizer.setRemoteFileSeparator(this.awsS3SupplierProperties.getRemoteFileSeparator());
+			synchronizer.setTemporaryFileSuffix(this.awsS3SupplierProperties.getTmpFileSuffix());
+			synchronizer.setFilter(filter);
 
-	@Bean
-	public Publisher<Message<Object>> s3SupplierFlow() {
-		return FileUtils.enhanceFlowForReadingMode(IntegrationFlows
-				.from(IntegrationReactiveUtils.messageSourceToFlux(s3MessageSource())), fileConsumerProperties)
-				.toReactivePublisher();
+			return synchronizer;
+		}
+
+		@Bean
+		public MessageSource<File> s3MessageSource(S3InboundFileSynchronizer s3InboundFileSynchronizer) {
+			S3InboundFileSynchronizingMessageSource s3MessageSource = new S3InboundFileSynchronizingMessageSource(
+					s3InboundFileSynchronizer);
+			s3MessageSource.setLocalDirectory(this.awsS3SupplierProperties.getLocalDir());
+			s3MessageSource.setAutoCreateLocalDirectory(this.awsS3SupplierProperties.isAutoCreateLocalDir());
+			return s3MessageSource;
+		}
 	}
 
-	@Bean
-	public Supplier<Flux<Message<?>>> s3Supplier() {
-		return () -> Flux.from(s3SupplierFlow());
+	@Configuration
+	@ConditionalOnExpression("environment['s3.supplier.list-only'] == 'true'")
+	static class ListOnlyConfiguration extends AwsS3SupplierConfiguration {
+		ListOnlyConfiguration(AwsS3SupplierProperties awsS3SupplierProperties,
+				FileConsumerProperties fileConsumerProperties,
+				AmazonS3 amazonS3,
+				ResourceIdResolver resourceIdResolver, ConcurrentMetadataStore metadataStore) {
+			super(awsS3SupplierProperties, fileConsumerProperties, amazonS3, resourceIdResolver, metadataStore);
+		}
+
+		@Bean
+		public Publisher<Message<Object>> s3SupplierFlow(MessageSource<?> s3MessageSource,
+				GenericSelector<S3ObjectSummary> listOnlyFilter) {
+			return IntegrationFlows
+					.from(IntegrationReactiveUtils.messageSourceToFlux(s3MessageSource))
+					.filter(listOnlyFilter)
+					.toReactivePublisher();
+		}
+
+		@Bean
+		PollableChannel listingChannel() {
+			return new QueueChannel();
+		}
+
+		@Bean
+		GenericSelector<S3ObjectSummary> listOnlyFilter() {
+			Predicate<S3ObjectSummary> predicate = s -> true;
+			if (StringUtils.hasText(this.awsS3SupplierProperties.getFilenamePattern())) {
+				Pattern pattern = Pattern.compile(this.awsS3SupplierProperties.getFilenamePattern());
+				predicate = (S3ObjectSummary summary) -> pattern.matcher(summary.getKey()).matches();
+			}
+			else if (this.awsS3SupplierProperties.getFilenameRegex() != null) {
+				predicate = (S3ObjectSummary summary) -> this.awsS3SupplierProperties.getFilenameRegex()
+						.matcher(summary.getKey()).matches();
+			}
+			predicate = predicate.and((S3ObjectSummary summary) -> {
+				final String key = METADATA_STORE_PREFIX + summary.getBucketName() + "-" + summary.getKey();
+				final String lastModified = String.valueOf(summary.getLastModified().getTime());
+				final String storedLastModified = this.metadataStore.get(key);
+				boolean result = !lastModified.equals(storedLastModified);
+				if (result) {
+					metadataStore.put(key, 
lastModified);+				}+				return result;+			});++			GenericSelector<S3ObjectSummary> selector = predicate::test;++			return selector;+		}++		@Bean+		public MessageSource<?> s3MessageSource(PollableChannel listingChannel,+				S3ListingMessageProducer s3ListingMessageProducer) {+			return () -> {+				s3ListingMessageProducer.getObjectMetadata();+				return (Message<Object>) listingChannel.receive();

I wonder if you are OK to revise the solution in favor of ReactiveMessageSourceProducer.

I mean you leave the solution for list only with that S3ListingMessageProducer, but for the other, MessageSource-based variants, use ReactiveMessageSourceProducer instead of that IntegrationReactiveUtils.messageSourceToFlux().

Well, in fact I see you really added an extra layer, converting MessageProducerSupport to MessageSource and then to the Flux in the flow. You definitely need just an IntegrationFlows.from(MessageProducerSupport) for this list variant.

dturanski

comment created time in 4 days
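A rough sketch of what that suggestion could look like, assuming the bean names from the diff (s3SupplierFlow, listOnlyFilter, S3ListingMessageProducer); this is an illustration, not the actual patch:

```java
// Hypothetical reworking of the list-only flow: the IntegrationFlow is built
// directly from the S3ListingMessageProducer (a MessageProducerSupport), so the
// QueueChannel-backed MessageSource bridge and the listingChannel bean go away.
@Bean
public Publisher<Message<Object>> s3SupplierFlow(S3ListingMessageProducer s3ListingMessageProducer,
		GenericSelector<S3ObjectSummary> listOnlyFilter) {
	return IntegrationFlows
			.from(s3ListingMessageProducer) // no blocking listingChannel.receive() call
			.filter(listOnlyFilter)
			.toReactivePublisher();
}
```

For the non-list variants, the same idea would mean wrapping the MessageSource in a ReactiveMessageSourceProducer rather than calling IntegrationReactiveUtils.messageSourceToFlux().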

Pull request review comment spring-cloud/stream-applications

List only enhancements

[Review context: AwsS3SupplierConfiguration.java, the same "List only enhancements" diff; the line under discussion is in the S3ListingMessageProducer inner class:]

+			objectListing.getObjectSummaries().forEach(summary -> {
+				sendMessage(MessageBuilder.createMessage(summary, new MessageHeaders(Collections.EMPTY_MAP)));

Why can't we use a new GenericMessage(summary) instead?

dturanski

comment created time in 4 days
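For context on the question above: the two constructions are equivalent here, since the one-argument GenericMessage constructor also produces a message whose headers hold only the generated id and timestamp. A sketch (summary stands in for the S3ObjectSummary from the diff):

```java
// What the diff does: explicit empty headers via MessageBuilder.
Message<S3ObjectSummary> verbose =
		MessageBuilder.createMessage(summary, new MessageHeaders(Collections.emptyMap()));

// The reviewer's suggestion: same result, more direct (and it avoids the raw
// Collections.EMPTY_MAP used in the diff).
Message<S3ObjectSummary> concise = new GenericMessage<>(summary);
```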

Pull request review comment spring-cloud/stream-applications

List only enhancements

[Review context: AwsS3SupplierConfiguration.java; the annotation under discussion:]

+	@Configuration
+	@ConditionalOnExpression("environment['s3.supplier.list-only'] != 'true'")

Why not havingValue="false", matchIfMissing = true?

dturanski

comment created time in 4 days
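The alternative being proposed would swap the SpEL expression for a typed property condition; a sketch of the negated variant for the synchronizing configuration (the attribute values are the obvious mapping, not taken from an actual commit):

```java
// Instead of @ConditionalOnExpression("environment['s3.supplier.list-only'] != 'true'"):
@Configuration
@ConditionalOnProperty(prefix = "s3.supplier", name = "list-only",
		havingValue = "false", matchIfMissing = true)
static class SynchronizingConfiguration extends AwsS3SupplierConfiguration {
	// ...
}
```

matchIfMissing = true keeps the synchronizing behavior as the default when the property is absent, matching the original != 'true' expression.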

Pull request review comment spring-cloud/stream-applications

List only enhancements

 public class MetadataStoreAutoConfiguration {

 	@Bean
-	@ConditionalOnMissingBean
+	@ConditionalOnProperty(prefix = "metadata.store", name = "type", havingValue = "memory", matchIfMissing = true)
 	public ConcurrentMetadataStore simpleMetadataStore() {
 		return new SimpleMetadataStore();
 	}

-	@ConditionalOnClass(RedisMetadataStore.class)
-	@ConditionalOnBean(RedisTemplate.class)

Not good: we may simply not have such a dependency on the classpath. In fact we really don't, because all the store implementations are optional dependencies...

dturanski

comment created time in 4 days

Pull request review comment spring-cloud/stream-applications

List only enhancements

 public class MetadataStoreAutoConfiguration {

 	@Bean
-	@ConditionalOnMissingBean

Not good: I may decide to provide my own store, or one that is not supported here, e.g. the Cassandra one.

dturanski

comment created time in 4 days
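A way to address both of the reviewer's concerns — keeping the user-override path while still honoring the metadata.store.type property — could combine the two conditions (a sketch, not the actual fix):

```java
@Bean
@ConditionalOnMissingBean(ConcurrentMetadataStore.class) // a user-provided store (e.g. Cassandra) wins
@ConditionalOnProperty(prefix = "metadata.store", name = "type",
		havingValue = "memory", matchIfMissing = true)
public ConcurrentMetadataStore simpleMetadataStore() {
	return new SimpleMetadataStore();
}
```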


pull request comment spring-cloud/stream-applications

List only enhancements

AmazonS3ListOnlyTests passes with an SI exception when shutting down

May I see that stack trace, please?

dturanski

comment created time in 4 days

Pull request review comment spring-cloud/stream-applications

Migrating Mail Source

[Review context: mail-source pom.xml; the artifact under discussion, inside the spring-cloud-dataflow-apps-generator-plugin configuration:]

+                                <dependency>
+                                    <groupId>org.springframework.cloud.stream.app</groupId>
+                                    <artifactId>stream-applications-composite-function-support</artifactId>

And why haven't you addressed this one?

sobychacko

comment created time in 4 days

Pull request review comment spring-cloud/stream-applications

Migrating Mail Source

+# Mail Supplier
+
+This module provides a File supplier that can be reused and composed in other applications.
+The `Supplier` uses the mail IMAP and POP3 support from Spring Integration.
+`mailSupplier` bean is implemented as a `java.util.function.Supplier`.

"The mailSupplier..." ?

sobychacko

comment created time in 4 days


Pull request review comment spring-cloud/stream-applications

Migrating Mail Source

 # JDBC Supplier

-This module provides a JDBC supplier that can be reused and composed in other applications.
+This module provides a Mail supplier that can be reused and composed in other applications.

Huh? This Mail wording is still in the JDBC readme...

sobychacko

comment created time in 4 days


Pull request review comment spring-cloud/stream-applications

Migrating Mail Source

                                             <groupId>org.springframework.boot</groupId>
                                             <artifactId>spring-boot-starter-security</artifactId>
                                         </dependency>
+                                        <dependency>
+                                            <groupId>com.jayway.jsonpath</groupId>
+                                            <artifactId>json-path</artifactId>

OK. But there is no justification for the change in the commit message, so we could be lost when we bump into this change in the future.

sobychacko

comment created time in 4 days


push event spring-projects/spring-integration-samples

Artem Bilan

commit sha ce4979701f94fd5a276f8c213b8b5df38e52abfe

Upgrade dependencies to latest milestones

view details

push time in 4 days

release spring-projects/spring-integration

v5.4.0-M3

released time in 4 days

Pull request review comment spring-cloud/stream-applications

Migrating Mail Source

+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <artifactId>mail-supplier</artifactId>
+    <version>1.0.0-SNAPSHOT</version>
+    <name>mail-supplier</name>
+    <description>mail supplier</description>

Probably needs to be capitalized...

sobychacko

comment created time in 4 days

Pull request review comment spring-cloud/stream-applications

Migrating Mail Source

+# Mail Supplier
+
+This module provides a File supplier that can be reused and composed in other applications.
+The `Supplier` uses the mail IMAP and POP3 support from Spring Integration.
+`MailSupplier` is implemented as a `java.util.function.Supplier`.

I don't think that we really have a MailSupplier class. Can we somehow change this text to not mislead end-users?

sobychacko

comment created time in 4 days

Pull request review comment spring-cloud/stream-applications

Migrating Mail Source

-# JDBC Supplier
+# Mail Supplier

Probably some mistake... because this is really in the jdbc-supplier.

sobychacko

comment created time in 4 days

Pull request review comment spring-cloud/stream-applications

Migrating Mail Source

                                             <groupId>org.springframework.boot</groupId>
                                             <artifactId>spring-boot-starter-security</artifactId>
                                         </dependency>
+                                        <dependency>
+                                            <groupId>com.jayway.jsonpath</groupId>
+                                            <artifactId>json-path</artifactId>

Why do we need this? Why haven't we needed it before? What really makes you bring it now?

sobychacko

comment created time in 4 days

Pull request review comment spring-cloud/stream-applications

Migrating Mail Source

[Review context: mail-source pom.xml; the dependency under discussion, listed inside the spring-cloud-dataflow-apps-generator-plugin configuration:]

+                            <dependencies>
+                                <dependency>
+                                    <groupId>org.springframework.cloud.fn</groupId>
+                                    <artifactId>mail-supplier</artifactId>
+                                </dependency>
+                                <dependency>
+                                    <groupId>org.springframework.cloud.stream.app</groupId>
+                                    <artifactId>stream-applications-composite-function-support</artifactId>

Why do we need this here? I see it is in a <scope>test</scope> for the project. Anyway: is there a reason that we need this dependency at all? Thanks

sobychacko

comment created time in 4 days

Pull request review comment spring-cloud/stream-applications

Migrating Mail Source

[Review context: MailSourceTests.java — a test that starts an embedded IMAP TestMailServer, runs the app with the test channel binder, and asserts on the received payload; the lines under discussion:]

+	@SpringBootApplication
+	@Import(MailSupplierConfiguration.class)

Shouldn't we make these auto-configurations? I recall that there was a discussion on the matter... Where did it go, please?

sobychacko

comment created time in 4 days

PullRequestReviewEvent

push event spring-projects/spring-integration

Artem Bilan

commit sha a36377a8d8d3886e9c111d411e6513e4e62f693d

Upgrade to SK-2.6.1


push time in 4 days

pull request comment spring-projects/spring-integration

GH-3376: Remove gauges on application ctx close

Back-ported to 5.3.x as https://github.com/spring-projects/spring-integration/commit/d0cab670ebc52d28abb9d8def8626e966af4ba55 after fixing some conflicts.

Back-ported to 5.2.x as https://github.com/spring-projects/spring-integration/commit/a2a8764481c5433ddbc35a839bd35a947edadc0f with more conflict resolutions.

artembilan

comment created time in 4 days

push event spring-projects/spring-integration

Artem Bilan

commit sha a2a8764481c5433ddbc35a839bd35a947edadc0f

GH-3376: Remove gauges on application ctx close (#3377)

* GH-3376: Remove gauges on application ctx close

Fixes https://github.com/spring-projects/spring-integration/issues/3376

The `MeterRegistry` may request meters on application shutdown.
The gauges for channels, handlers and message sources don't make sense at the moment
since all those beans are going to be destroyed.

* Remove gauges for channel, handler and message source numbers from the `IntegrationManagementConfigurer.destroy()`

**Cherry-pick to 5.3.x & 5.2.x**

* Add `MicrometerImportSelector` to conditionally load a `MicrometerMetricsCaptorConfiguration` when `MeterRegistry` is on class path.
* Make `MicrometerMetricsCaptorConfiguration.integrationMicrometerMetricsCaptor()` bean dependant on the `ObjectProvider<MeterRegistry>`
* Make `IntegrationManagementConfiguration.managementConfigurer()` dependant on the `ObjectProvider<MetricsCaptor>`.
  This way the `IntegrationManagementConfigurer` is destroyed before `MeterRegistry` when application context is closed
* Deprecate `MicrometerMetricsCaptor.loadCaptor()` in favor of `@Import(MicrometerImportSelector.class)`

* Add `MicrometerMetricsCaptorRegistrar` to register a `MICROMETER_CAPTOR_NAME` bean when `MeterRegistry` is on class path and no `MICROMETER_CAPTOR_NAME` bean yet.
* Make `IntegrationManagementConfiguration.managementConfigurer()` dependant on the `ObjectProvider<MetricsCaptor>`.
  This way the `IntegrationManagementConfigurer` is destroyed before `MeterRegistry` when application context is closed
* Deprecate `MicrometerMetricsCaptor.loadCaptor()` in favor of `@Import(MicrometerMetricsCaptorRegistrar.class)`
* Fix test to make a `MeterRegistry` bean as `static` since `@EnableIntegrationManagement` depends on this bean definition now

# Conflicts:
#	spring-integration-core/src/main/java/org/springframework/integration/config/EnableIntegrationManagement.java
#	spring-integration-core/src/main/java/org/springframework/integration/config/IntegrationManagementConfiguration.java
#	spring-integration-core/src/main/java/org/springframework/integration/config/IntegrationManagementConfigurer.java

* Fix some deprecation warnings

# Conflicts:
#	spring-integration-core/src/main/java/org/springframework/integration/config/IntegrationManagementConfigurer.java

push time in 4 days

push event spring-projects/spring-integration

Artem Bilan

commit sha d0cab670ebc52d28abb9d8def8626e966af4ba55

GH-3376: Remove gauges on application ctx close (#3377)

* GH-3376: Remove gauges on application ctx close

Fixes https://github.com/spring-projects/spring-integration/issues/3376

The `MeterRegistry` may request meters on application shutdown.
The gauges for channels, handlers and message sources don't make sense at the moment
since all those beans are going to be destroyed.

* Remove gauges for channel, handler and message source numbers from the `IntegrationManagementConfigurer.destroy()`

**Cherry-pick to 5.3.x & 5.2.x**

* Add `MicrometerImportSelector` to conditionally load a `MicrometerMetricsCaptorConfiguration` when `MeterRegistry` is on class path.
* Make `MicrometerMetricsCaptorConfiguration.integrationMicrometerMetricsCaptor()` bean dependant on the `ObjectProvider<MeterRegistry>`
* Make `IntegrationManagementConfiguration.managementConfigurer()` dependant on the `ObjectProvider<MetricsCaptor>`.
  This way the `IntegrationManagementConfigurer` is destroyed before `MeterRegistry` when application context is closed
* Deprecate `MicrometerMetricsCaptor.loadCaptor()` in favor of `@Import(MicrometerImportSelector.class)`

* Add `MicrometerMetricsCaptorRegistrar` to register a `MICROMETER_CAPTOR_NAME` bean when `MeterRegistry` is on class path and no `MICROMETER_CAPTOR_NAME` bean yet.
* Make `IntegrationManagementConfiguration.managementConfigurer()` dependant on the `ObjectProvider<MetricsCaptor>`.
  This way the `IntegrationManagementConfigurer` is destroyed before `MeterRegistry` when application context is closed
* Deprecate `MicrometerMetricsCaptor.loadCaptor()` in favor of `@Import(MicrometerMetricsCaptorRegistrar.class)`
* Fix test to make a `MeterRegistry` bean as `static` since `@EnableIntegrationManagement` depends on this bean definition now

# Conflicts:
#	spring-integration-core/src/main/java/org/springframework/integration/config/EnableIntegrationManagement.java
#	spring-integration-core/src/main/java/org/springframework/integration/config/IntegrationManagementConfiguration.java
#	spring-integration-core/src/main/java/org/springframework/integration/config/IntegrationManagementConfigurer.java

* Fix some deprecation warnings

push time in 4 days

push event spring-projects/spring-integration

Artem Bilan

commit sha be82e572f53e2499eb65f2c05b90d76a4ee683f4

Upgrade dependencies; fix deprecations

* Prepare for release

push time in 4 days

issue comment spring-cloud/spring-cloud-gcp

Spring Cloud Dataflow samples with Pub/Sub

No, I don't see that, because the Spring team just doesn't support that binder, and its release lifecycle might not be aligned with what we release in that stream-applications project.

I think @mminella's announcement is the same as what I'm telling you. So, if you would like to provide out-of-the-box stream applications bundled with the Pub/Sub binder, you should do that in this project or a new separate one.

And that's why I'm pointing to @sobychacko . He is the lead of the stream-applications project and he can advise how to proceed with the proper final artifact generation.

saturnism

comment created time in 4 days

PullRequestReviewEvent

pull request comment spring-projects/spring-integration

GH-3376: Remove gauges on application ctx close

OK. I have added a `MicrometerMetricsCaptorRegistrar implements ImportBeanDefinitionRegistrar` to satisfy the dependency tree.

Please take a look at how this is now, @wilkinsona .

Thank you!

artembilan

comment created time in 5 days

push event artembilan/spring-integration

guycall

commit sha 71a273eeced21bbebfb8c566faa019018ffbc49a

Fix naming in Inbound Kafka Gateway code sample


Artem Bilan

commit sha 6c2a4c97c5b84005ab63e6443566a23515305a37

More H2 for JDBC tests

* Upgrade to Spring Security 5.4.0

Artem Bilan

commit sha 47cae4670f73fb0842e262b00ce6f73d1c5181eb

GH-3370: Remove synchronized from RemoteFileUtils (#3380)

* GH-3370: Remove synchronized from RemoteFileUtils

Fixes https://github.com/spring-projects/spring-integration/issues/3370

The `synchronized` on the `RemoteFileUtils.makeDirectories()` makes an application too slow,
especially when we deal with different paths in different sessions

* Remove the `synchronized` from that method and rework `SftpSession.mkdir()` to return `false`
  when "A file cannot be created if it already exists" exception is thrown from the server.
  Essentially make an `exists()` call to be sure that an exception is really related to
  "file-already-exists" answer from the server

**Cherry-pick to 5.3.x, 5.2.x & 4.3.x**

* Re-throw an exception in the `SftpSession.mkdir()` when error code is not `4` or remote dir does not exist
* Check `session.mkdir()` result in the `RemoteFileUtils` to throw an `IOException` when `false`
* Fix mock test to return `true` for `mkdir` instead of `null`

Artem Bilan

commit sha 383af8ceb981c0324ec9cb13906edef1080f3729

GH-3374: Fix scan for BF propagation (#3378)

* GH-3374: Fix scan for BF propagation

Fixes https://github.com/spring-projects/spring-integration/issues/3374

An internal `ClassPathScanningCandidateComponentProvider` instance in the
`IntegrationComponentScanRegistrar` does not propagate a provided `registry`.

* Implement `getRegistry()` on the internal `ClassPathScanningCandidateComponentProvider`
  to propagate the `BeanDefinitionRegistry` provided into the `registerBeanDefinitions()`
* Add `@Conditional` on some scanned `@MessagingGateway` in the `EnableIntegrationTests`

**Cherry-pick to 5.3.x & 5.2.x**

* Remove unused import
* Restore `unused` warning on the unused registry arg

Artem Bilan

commit sha 481d5eb2a56abe92466c784fb24809bbfedc0a9f

Fix new Sonar smells

* Use `tryEmitNext` on Reactor `Sink` since `emitNext` is deprecated
* Add `MessageDeliveryException` emission when `send()` returns `false` in the `FluxMessageChannel` for a `subscribeTo`-provided `Publisher`

Artem Bilan

commit sha 09dec2eab5216586f13cdcf45308238e5072a931

Handle new Reactor Emission FAIL_NON_SERIALIZED

* Rework WebFlux test to JUnit 5

Artem Bilan

commit sha 9aa9707f3798f0f0da4f5fefd4b16e31f54bf5a1

GH-3366: Return null from HTTP handleNoMatch

Fixes: https://github.com/spring-projects/spring-integration/issues/3366

When the same path is mapped for an integration HTTP endpoint and an MVC method mapping,
but with different other mapping options (e.g. method), and one of them fails to match,
there is no way to try another `RequestMapping` from the `DispatcherServlet` because
`RequestMappingHandlerMapping.handleNoMatch()` throws an exception when there is no match
instead of returning `null` according to the chain-of-responsibility logic in the `DispatcherServlet`

* Rework `IntegrationRequestMappingHandlerMapping.handleNoMatch()` to catch all the super's exceptions
  and return `null` to the `DispatcherServlet` to let it try another `RequestMapping` from the configuration
* Change the order for `IntegrationRequestMappingHandlerMapping` to `-1` to let it be tried first,
  before the regular MVC `RequestMappingHandlerMapping`
* Add a test-case to ensure that mixing Integration HTTP and MVC for the same path works as expected
  without failing on the first try

Gary Russell

commit sha dbb2f9cecb305be1c2c3dbe08c187108f5446fc9

Revert "GH-3366: Return null from HTTP handleNoMatch"

This reverts commit 9aa9707f3798f0f0da4f5fefd4b16e31f54bf5a1.

See https://github.com/spring-projects/spring-framework/issues/25636#issuecomment-691269516

Artem Bilan

commit sha c36314d6eca1b218de7d4f03abf5edf23de4745b

GH-3366: Document HTTP request mapping limitation (#3382)

* GH-3366: Document HTTP request mapping limitation

Fixes https://github.com/spring-projects/spring-integration/issues/3366

The same path cannot be mapped both Spring Integration and MVC ways

* Doc Polishing

Co-authored-by: Gary Russell <grussell@pivotal.io>

Artem Bilan

commit sha 5bac9bf66ec256dd387e10ea78516c2efc2e9eb1

Fix some tests race conditions

* Fix unused import in the `IntegrationRequestMappingHandlerMapping`
* Fix deprecations from Reactor
* Fix race condition in the `AbstractCorrelatingMessageHandlerTests`: the discard message is sent much earlier than the group is removed from the store. Iterate the group count call until it passes or a 10 second timeout elapses
* Remove the list size assert in the `FtpServerOutboundTests`: looks like it is not updated properly even if we have the expected content in the collection
* Increase the timeout to assert remote files removal in the `FtpRemoteFileTemplateTests`

Artem Bilan

commit sha 19b59bbde87068e8a377fc95f588c33b779497ad

Upgrade to Reactor 2020.0.0-RC1

* Handle `Emission.FAIL_ZERO_SUBSCRIBER` in the `FluxMessageChannel` and `IntegrationReactiveUtils`

Artem Bilan

commit sha ee7ebf3530874a6b88e5092b4fd1d26c0e3f71e0

GH-3373: Support IPV6 in AbstractInboundFileSynch

Fixes https://github.com/spring-projects/spring-integration/issues/3373

The `AbstractInboundFileSynchronizer` doesn't consider that `hostPort` from `Session` could be in an IPv6 syntax

* Parse the `hostPort` from `Session` in a manner that only the last `:` is treated as a port delimiter

**Cherry-pick to 5.3.x & 5.2.x**

Artem Bilan

commit sha 7f330850d1b884a4c6b9462345cbefa076b6aa23

GH-3372: Expose (S)FTP remoteComparator for DSL

Fixes https://github.com/spring-projects/spring-integration/issues/3372

**Cherry-pick to 5.3.x & 5.2.x**

Artem Bilan

commit sha 7a97eb6e1c1d78d74a594317f3546899220108b1

GH-3336: Change MongoDb Store sequence to long (#3385)

* GH-3336: Change MongoDb Store sequence to long

Fixes https://github.com/spring-projects/spring-integration/issues/3336

Turns out there are some scenarios where too many messages are transferred through the message store,
so `int` for sequence is not enough as a type

* Change sequence to `long` to widen a sequence lifespan
* Change MongoDb store to deal with `Number.longValue()` instead of casting, which doesn't work from `Integer` to `Long`.
  This way we can keep an old sequence document with an `int` type for value
* Documents with new `long` type for their sequence field are OK.
  The `NumberToNumberConverter` has an effect converting `int` to `long` properly.

Artem Bilan

commit sha ffeb33c1b7e6cdae6d124bb1561931dd545467ee

GH-3376: Remove gauges on application ctx close

Fixes https://github.com/spring-projects/spring-integration/issues/3376

The `MeterRegistry` may request meters on application shutdown.
The gauges for channels, handlers and message sources don't make sense at the moment
since all those beans are going to be destroyed.

* Remove gauges for channel, handler and message source numbers from the `IntegrationManagementConfigurer.destroy()`

**Cherry-pick to 5.3.x & 5.2.x**

Artem Bilan

commit sha 0ddfc86c13c26dc011d8091c22e5c3b890039228

* Add `MicrometerImportSelector` to conditionally load a `MicrometerMetricsCaptorConfiguration` when `MeterRegistry` is on class path.
* Make `MicrometerMetricsCaptorConfiguration.integrationMicrometerMetricsCaptor()` bean dependant on the `ObjectProvider<MeterRegistry>`
* Make `IntegrationManagementConfiguration.managementConfigurer()` dependant on the `ObjectProvider<MetricsCaptor>`.
  This way the `IntegrationManagementConfigurer` is destroyed before `MeterRegistry` when application context is closed
* Deprecate `MicrometerMetricsCaptor.loadCaptor()` in favor of `@Import(MicrometerImportSelector.class)`

Artem Bilan

commit sha 639136e2abcbd3fe0465a8c4beac335bdef42c7c

* Add `MicrometerMetricsCaptorRegistrar` to register a `MICROMETER_CAPTOR_NAME` bean when `MeterRegistry` is on class path and no `MICROMETER_CAPTOR_NAME` bean yet.
* Make `IntegrationManagementConfiguration.managementConfigurer()` dependant on the `ObjectProvider<MetricsCaptor>`.
  This way the `IntegrationManagementConfigurer` is destroyed before `MeterRegistry` when application context is closed
* Deprecate `MicrometerMetricsCaptor.loadCaptor()` in favor of `@Import(MicrometerMetricsCaptorRegistrar.class)`
* Fix test to make a `MeterRegistry` bean as `static` since `@EnableIntegrationManagement` depends on this bean definition now

push time in 5 days

pull request comment spring-projects/spring-kafka

GH-1587: Don't Correct TX Offsets After Seek(s)

... and cherry-picked to 2.5.x

garyrussell

comment created time in 5 days

push event spring-projects/spring-kafka

Gary Russell

commit sha ad246753bd1c5d09f9205076a0887213d0f2c8fc

GH-1587: Don't Correct TX Offsets After Seek(s)

Resolves https://github.com/spring-projects/spring-kafka/issues/1587

We should not advance the consumer partition if Seek operations have been performed;
skip fixing the offsets if that condition is detected.

Capture the positions after the poll and check during the fix operation.

This is difficult to write a unit test for; tested with a Boot application,
observing the correct behavior via DEBUG logs.

**cherry-pick to 2.5.x**

(cherry picked from commit bd911ad99cffd212224c3401127b8bc349ea3670)

push time in 5 days

push event spring-projects/spring-kafka

Gary Russell

commit sha bd911ad99cffd212224c3401127b8bc349ea3670

GH-1587: Don't Correct TX Offsets After Seek(s)

Resolves https://github.com/spring-projects/spring-kafka/issues/1587

We should not advance the consumer partition if Seek operations have been performed;
skip fixing the offsets if that condition is detected.

Capture the positions after the poll and check during the fix operation.

This is difficult to write a unit test for; tested with a Boot application,
observing the correct behavior via DEBUG logs.

**cherry-pick to 2.5.x**

push time in 5 days

PR merged spring-projects/spring-kafka

GH-1587: Don't Correct TX Offsets After Seek(s)

Resolves https://github.com/spring-projects/spring-kafka/issues/1587

We should not advance the consumer partition if Seek operations have been performed; skip fixing the offsets if that condition is detected.

Capture the positions after the poll and check during the fix operation.

This is difficult to write a unit test for; tested with a Boot application, observing the correct behavior via DEBUG logs.

cherry-pick to 2.5.x

+18 -0

0 comments

1 changed file

garyrussell

pr closed time in 5 days

GollumEvent

pull request comment spring-projects/spring-kafka

GH-1587: Option to Correct Transactional Offsets

... and cherry-picked to 2.5.x

garyrussell

comment created time in 5 days

push event spring-projects/spring-kafka

Gary Russell

commit sha 80aa8fde7180032e187422f55fffe0bf16ada429

GH-1587: Option to Correct Transactional Offsets

Resolves https://github.com/spring-projects/spring-kafka/issues/1587

See javadoc for `ConsumerProperties.setFixTxOffsets()` for more information.

**cherry-pick to 2.5.x**

* Add `this.` to `logger` to honor Checkstyle

push time in 5 days

push event spring-projects/spring-kafka

Gary Russell

commit sha cee9bed89d6ae718d8fceb965cf5a33e220470d4

GH-1587: Option to Correct Transactional Offsets

Resolves https://github.com/spring-projects/spring-kafka/issues/1587

See javadoc for `ConsumerProperties.setFixTxOffsets()` for more information.

**cherry-pick to 2.5.x**

* Add `this.` to `logger` to honor Checkstyle

push time in 5 days

PR merged spring-projects/spring-kafka

GH-1587: Option to Correct Transactional Offsets

Resolves https://github.com/spring-projects/spring-kafka/issues/1587

See javadoc for ConsumerProperties.setFixTxOffsets() for more information.

cherry-pick to 2.5.x

+109 -5

0 comments

4 changed files

garyrussell

pr closed time in 5 days

issue closed spring-projects/spring-kafka

Kafka Consumer lag not zero when input topic transactional

**Affects Version(s):** 2.5.5.RELEASE

**Description:**

When using ConcurrentMessageListenerContainer, with batch processing and "read.committed" consumers, reading off of a transacted topic, we see that the consumer lag is never 0. It is off by 1 offset for each partition (even after processing all messages).

Based on a similar jira on Kafka Streams: https://issues.apache.org/jira/browse/KAFKA-6607, we see that Matthias, recommends the following: "Note that all applications using a plain consumer may face the same issue if they use KafkaConsumer#commitSync(Map<TopicPartition, OffsetAndMetadata> offsets): to address the issue, the correct pattern is to either commit "nextRecord.offset()" (if the next record is available already, ie, was returned by poll(), or use consumer.position() that takes the commit marker into account and would "step over it")."

However, in the spring-kafka source, the KafkaMessageListenerContainer.ackImmediate() method uses new OffsetAndMetadata(record.offset() + 1) when committing offsets.
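The off-by-one described above can be modeled without a broker. The following plain-Java sketch works under the assumption (taken from the quoted Kafka discussion) that a transactional producer writes one commit marker after the last data record, so the log end offset sits one past `lastRecordOffset + 1`; the helper names and the concrete offsets are illustrative, not taken from the Kafka API:

```java
public class TxLagDemo {

    // What committing record.offset() + 1 yields: the committed offset
    // lands ON the transaction commit marker, leaving a lag of 1.
    static long commitByRecordOffset(long lastRecordOffset) {
        return lastRecordOffset + 1;
    }

    // What committing the consumer position would yield: after the poll,
    // the position has already stepped over the commit marker.
    static long commitByPosition(long lastRecordOffset) {
        return lastRecordOffset + 2;
    }

    public static void main(String[] args) {
        long lastRecordOffset = 41;  // last data record of the transaction
        long logEndOffset = 43;      // record 41, commit marker 42, end offset 43

        System.out.println("lag committing offset+1:   "
                + (logEndOffset - commitByRecordOffset(lastRecordOffset)));  // 1
        System.out.println("lag committing position(): "
                + (logEndOffset - commitByPosition(lastRecordOffset)));      // 0
    }
}
```

This is exactly the gap the `setFixTxOffsets()` option below closes: committing the post-poll position instead of `record.offset() + 1` makes the reported lag reach 0.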

closed time in 5 days

ekanthi

push event garyrussell/spring-kafka

Artem Bilan

commit sha 282cfc6296b55aef7ce20ab9a30c8d9c7655ff4d

Add `this.` to `logger` to honor Checkstyle


push time in 5 days

Pull request review comment spring-projects/spring-kafka

GH-1587: Option to Correct Transactional Offsets

		protected void pollAndInvoke() {
			}
		}

		@SuppressWarnings("rawtypes")
		private void fixTxOffsetsIfNeeded() {
			if (this.fixTxOffsets) {
				try {
					Map<TopicPartition, OffsetAndMetadata> toFix = new HashMap<>();
					this.lastCommits.forEach((tp, oamd) -> {
						long position = this.consumer.position(tp);
						if (position > oamd.offset()) {
							toFix.put(tp, new OffsetAndMetadata(position));
						}
					});
					if (toFix.size() > 0) {
						logger.debug(() -> "Fixing TX offsets: " + toFix);
						this.logger.debug(() -> "Fixing TX offsets: " + toFix);
garyrussell

comment created time in 5 days

PullRequestReviewEvent

pull request comment spring-projects/spring-integration

GH-3336: Change MongoDb Store sequence to long

Pushed a fix to keep compatibility with the previous MongoDb collection state.

The Migration Guide note still has to be done.

artembilan

comment created time in 5 days

push event artembilan/spring-integration

Artem Bilan

commit sha b06c678b7fd26bdbae29d8b58cda18f6692b2c8c

* Change MongoDb store to deal with `Number.longValue()` instead of casting, which doesn't work from `Integer` to `Long`.
  This way we can keep an old sequence document with an `int` type for value
* Documents with new `long` type for their sequence field are OK.
  The `NumberToNumberConverter` has an effect converting `int` to `long` properly.

push time in 5 days

pull request comment spring-projects/spring-integration

GH-3336: Change MongoDb Store sequence to long

Sure! I'll prepare a Migration Guide note. Probably the best way to go is to remove the existing messagesSequence document and let the framework recreate it with the long type already.

I don't think there is a problem with casting existing int values for allocated sequences in message documents to long...
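The compatibility point can be sketched in plain Java (the helper name `readSequence` is hypothetical; the real store code differs): a direct `(Long)` cast throws for a stored `Integer`, while `Number.longValue()` reads both the old `int`-typed and the new `long`-typed sequence values:

```java
public class SequenceValueDemo {

    // Reading the stored sequence via Number.longValue() works whether the
    // MongoDb document holds the old Integer value or the new Long one.
    static long readSequence(Object stored) {
        return ((Number) stored).longValue();
    }

    public static void main(String[] args) {
        Object oldDocValue = Integer.valueOf(123);          // pre-upgrade document
        Object newDocValue = Long.valueOf(5_000_000_000L);  // post-upgrade document

        // A direct cast would fail for the old value:
        // long broken = (Long) oldDocValue;  // ClassCastException

        System.out.println(readSequence(oldDocValue));
        System.out.println(readSequence(newDocValue));
    }
}
```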

artembilan

comment created time in 6 days

PR opened spring-projects/spring-integration

GH-3336: Change MongoDb Store sequence to long

Fixes https://github.com/spring-projects/spring-integration/issues/3336

Turns out there are some scenarios where too many messages are transferred through the message store, so an int is not enough as the type for the sequence

  • Change sequence to long to widen a sequence lifespan
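The motivation for the wider type can be shown with a few lines of plain Java (a minimal sketch, not code from the PR): an `int` sequence wraps to a negative number once it passes `Integer.MAX_VALUE`, while a `long` keeps counting:

```java
public class SequenceOverflowDemo {

    // Next sequence value with the old int type: wraps past Integer.MAX_VALUE
    static int nextInt(int seq) {
        return seq + 1;
    }

    // Next sequence value with the new long type: keeps counting
    static long nextLong(long seq) {
        return seq + 1;
    }

    public static void main(String[] args) {
        System.out.println(nextInt(Integer.MAX_VALUE));   // -2147483648 (wrapped)
        System.out.println(nextLong(Integer.MAX_VALUE));  // 2147483648
    }
}
```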


+33 -24

0 comments

4 changed files

pr created time in 6 days

create branch artembilan/spring-integration

branch : GH-3336

created branch time in 6 days

PR opened spring-projects/spring-integration

GH-3372: Expose (S)FTP remoteComparator for DSL

Fixes https://github.com/spring-projects/spring-integration/issues/3372

Cherry-pick to 5.3.x & 5.2.x


+18 -3

0 comments

2 changed files

pr created time in 6 days

create branch artembilan/spring-integration

branch : GH-3372

created branch time in 6 days

PR opened spring-projects/spring-integration

GH-3373: Support IPV6 in AbstractInboundFileSynch

Fixes https://github.com/spring-projects/spring-integration/issues/3373

The AbstractInboundFileSynchronizer doesn't consider that hostPort from Session could be in an IPv6 syntax

  • Parse the hostPort from Session in a manner that only the last : is treated as a port delimiter
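The last-colon rule can be illustrated with a small standalone sketch (`HostPortParser` is a hypothetical helper, not the framework code): splitting on the first `:` would break an IPv6 literal, which itself contains colons, while splitting on the last `:` yields the host and port correctly for both address families:

```java
public class HostPortParser {

    // Treat only the LAST ':' as the host/port delimiter, so IPv6
    // literals (which contain ':' themselves) survive the split.
    static String host(String hostPort) {
        return hostPort.substring(0, hostPort.lastIndexOf(':'));
    }

    static int port(String hostPort) {
        return Integer.parseInt(hostPort.substring(hostPort.lastIndexOf(':') + 1));
    }

    public static void main(String[] args) {
        String ipv6 = "2001:db8::7:22"; // IPv6 address followed by an SFTP port
        System.out.println(host(ipv6) + " / " + port(ipv6));  // 2001:db8::7 / 22

        String ipv4 = "192.168.0.1:21"; // the IPv4 case still works the same way
        System.out.println(host(ipv4) + " / " + port(ipv4));  // 192.168.0.1 / 21
    }
}
```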

Cherry-pick to 5.3.x & 5.2.x


+21 -5

0 comments

2 changed files

pr created time in 6 days

create branch artembilan/spring-integration

branch : GH-3373

created branch time in 6 days

issue closed spring-projects/spring-integration

Issues in Spring Integration mail idle receiver

Framework version

  • Spring boot 2.2.6.RELEASE
  • Spring Integration Mail

Bug description: I'm using an IMAP server and the idle configuration. I wrote the following code:

@Autowired
private IntegrationFlowContext flowContext;
IntegrationFlow flow = null;
String userFlag = confMailIn.getHost() + "_idle_adapter";
ImapIdleChannelAdapterSpec imapIdleChannelAdapterSpec = Mail.imapIdleAdapter(connectionUrl.toString())
        .javaMailProperties(javaMailProperties)
        .shouldDeleteMessages(deleteMessages)
        .shouldMarkMessagesAsRead(markMessagesRead)
        .autoStartup(true)
        .autoCloseFolder(false)
        .userFlag(userFlag)
        .id(userFlag)
        //.searchTermStrategy(this::notSeenTerm)
        .selector(selectFunction);

if (confMailIn.isRichiedeAutenticazione()) {
    imapIdleChannelAdapterSpec = imapIdleChannelAdapterSpec.javaMailAuthenticator(new CasellaPostaleAuthenticator(cpd.getIndirizzoMail(), cpd.getUsername(), cpd.getPassword()));
}
flow = IntegrationFlows
        .from(imapIdleChannelAdapterSpec)
        .handle(message ->{
            // Take the closeable of the message and populate the list of closeables to close
            Closeable closeable = StaticMessageHeaderAccessor.getCloseableResource(message);
            if( !closeables.containsKey(cpd.getIndirizzoMail()) ) {
                closeables.put(cpd.getIndirizzoMail(), closeable);
            }
            publishMailEvent(message);
        })
        .get();
flowContext.registration(flow).id(flowId).register();

I set auto close folder to false because, when it is true, I can't handle the mail message: I receive a FolderClosedException. So I collect all the Closeable objects and I close them when the Spring context is closed (in the best scenario.... never :) )

So far so good... I register the flow and it starts working. But I noticed that after some time it stops receiving mail messages. I need to restart my service and once again it works for some time.

When it works I see these logs:

2020-04-20 16:32:57,427 25199841 [scheduling-1] DEBUG o.s.i.mail.ImapIdleChannelAdapter - waiting for mail 
2020-04-20 16:32:57,460 25199874 [scheduling-1] INFO  o.s.i.mail.ImapMailReceiver - attempting to receive mail from folder [INBOX] 
2020-04-20 16:32:57,460 25199874 [scheduling-1] DEBUG o.s.i.mail.ImapMailReceiver - This email server does not support RECENT flag, but it does support USER flags which will be used to prevent duplicates during email fetch. This receiver instance uses flag: imapmail.libero.it_idle_adapter 
2020-04-20 16:32:57,476 25199890 [scheduling-1] DEBUG o.s.i.mail.ImapMailReceiver - found 0 new messages 
2020-04-20 16:32:57,476 25199890 [scheduling-1] DEBUG o.s.i.mail.ImapMailReceiver - Received 0 messages 
2020-04-20 16:32:57,476 25199890 [scheduling-1] DEBUG o.s.i.mail.ImapIdleChannelAdapter - received 0 mail messages 
2020-04-20 16:32:57,476 25199890 [scheduling-1] DEBUG o.s.i.mail.ImapIdleChannelAdapter - Task completed successfully. Re-scheduling it again right away.

Everything works pretty well if I set auto close folder to true in my code, like this:

@Autowired
private IntegrationFlowContext flowContext;
IntegrationFlow flow = null;
String userFlag = confMailIn.getHost() + "_idle_adapter";
ImapIdleChannelAdapterSpec imapIdleChannelAdapterSpec = Mail.imapIdleAdapter(connectionUrl.toString())
        .javaMailProperties(javaMailProperties)
        .shouldDeleteMessages(deleteMessages)
        .shouldMarkMessagesAsRead(markMessagesRead)
        .autoStartup(true)
        .autoCloseFolder(true)
        .userFlag(userFlag)
        .id(userFlag)
        //.searchTermStrategy(this::notSeenTerm)
        .selector(selectFunction);

if (confMailIn.isRichiedeAutenticazione()) {
    imapIdleChannelAdapterSpec = imapIdleChannelAdapterSpec.javaMailAuthenticator(new CasellaPostaleAuthenticator(cpd.getIndirizzoMail(), cpd.getUsername(), cpd.getPassword()));
}
flow = IntegrationFlows
        .from(imapIdleChannelAdapterSpec)
        .handle(message ->{
System.out.println("Mail received");
        })
        .get();
flowContext.registration(flow).id(flowId).register();

closed time in 6 days

angeloimm

issue comment spring-projects/spring-integration

Issues in Spring Integration mail idle receiver

Closing as Works as Designed and as an issue in the end-user configuration. Plus, there has been no feedback for a while.

Feel free to reopen if you have some further justification about a bug in the framework.

Thanks for understanding!

angeloimm

comment created time in 6 days

pull request comment spring-projects/spring-integration

Make JsonPropertyAccessor returned type directly a JsonNode or value

@pilak ,

any update on the matter, please?

We are very close to the 5.4 release this October-November, so it would be great to have some solution merged, or the change rejected altogether.

Thanks for understanding!

pilak

comment created time in 6 days

pull request comment spring-projects/spring-integration

GH-3376: Remove gauges on application ctx close

@wilkinsona ,

any further feedback, please?

We probably need to see how this can make it into the upcoming release this Wednesday.

Thanks

artembilan

comment created time in 6 days

push event spring-projects/spring-integration

Artem Bilan

commit sha 19b59bbde87068e8a377fc95f588c33b779497ad

Upgrade to Reactor 2020.0.0-RC1

* Handle `Emission.FAIL_ZERO_SUBSCRIBER` in the `FluxMessageChannel` and `IntegrationReactiveUtils`

push time in 6 days

issue comment spring-cloud/spring-cloud-gcp

Spring Cloud Dataflow samples with Pub/Sub

No, it's not. That project is obsolete and replaced with this: https://github.com/spring-cloud/stream-applications

See more info in Blog Post series : https://spring.io/blog/2020/07/13/introducing-java-functions-for-spring-cloud-stream-applications-part-0

So, what you are asking almost does not make sense, since that project is not about "samples". It represents some function implementations and generated Spring Cloud Stream applications with the Kafka and Rabbit binders bundled.

We probably can't provide Pub/Sub binder-based generation because we don't manage this GCP project, but you definitely can do something as a part of your project to provide generated artifacts with the Pub/Sub binder, based on our out-of-the-box functions.

WDYT @sobychacko ?

saturnism

comment created time in 6 days
