Stephane Maldini smaldini @netflix San Francisco, CA

mstein/elasticsearch-grails-plugin 63

ElasticSearch grails plugin

netifi/spring-demo 21

Demo application for Netifi Proteus and RSocket. The guideline is available here ->

rstoyanchev/s2gx2015-intro-to-reactive-programming 11

Demos for "Intro to Reactive Programming" talk

rstoyanchev/s2gx2015-reactive-web-apps 10

Demos for "Reactive Web Apps" at SpringOne2GX 2015

rstoyanchev/context-holder 5

Test project for experimenting with request context feature for WebFlux

sdeleuze/rxweb 4

Reactor + Netty based micro web framework prototype

pledbrook/grailsTodos 3

Sample using Grails, Cloud Foundry, RabbitMQ, Backbone.js, CoffeeScript, and the 3 new Events Bus plugins (platform-core, events-si and events-push)

pull request comment reactor/reactor-core

fix #1053 MonoProcessor rework

Good points @aneveu

smaldini

comment created time in 9 hours

push event smaldini/reactor-core

Stephane Maldini

commit sha 3e74e376e574aa5bb0c298619f825b073339afc1

Test fixes and doc tweak

view details

push time in 5 days

push event smaldini/reactor-core

Stephane Maldini

commit sha 34b2cc1140e13eec3c6aa7630a52e206b616bae7

Test fixes and doc tweak

view details

push time in 5 days

push event smaldini/reactor-core

Stephane Maldini

commit sha 3543e8878618d0aed594ace28fae7e82ae14746b

Split MonoProcessor to void and next implementations add consistent Mono.shareNext() and Mono.share()

view details

push time in 6 days

pull request comment reactor/reactor-core

#1053 MonoProcessor rework

This PR depends on #2218

smaldini

comment created time in 6 days

PR opened reactor/reactor-core

#1053 MonoProcessor rework

This PR introduces NextProcessor and VoidProcessor. A possible LatestProcessor could be added too. It also introduces Flux.shareNext and Mono.share, in addition to publishNext, for consistency. A follow-up would be to add a ConnectableMono.

Now if we add a latest processor, how do we represent it in the new Sinks? Right now the Sinks API is expressed around cardinality; should Sinks.singleOrEmpty() become Sinks.singleOrEmpty().first() and .last() to differentiate the first and last signals? Or would this fit better under Sinks.many().latest()?
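The cardinality-first shape being debated can be illustrated with a toy model. None of this is Reactor API: the class and method names below are hypothetical, and CompletableFuture merely stands in for a replay-last-terminal-signal sink that accepts at most one value.

```java
import java.util.concurrent.CompletableFuture;

// Toy sketch of a cardinality-first Sinks entry point (illustrative only).
public class SinksSketch {

    // "zero": can only terminate (complete or error), like a Mono<Void> promise.
    public static CompletableFuture<Void> zero() {
        return new CompletableFuture<>();
    }

    // "one": at most one value, replayed to observers that arrive late.
    public static <T> CompletableFuture<T> one() {
        return new CompletableFuture<>();
    }

    public static void main(String[] args) {
        CompletableFuture<String> first = one();
        first.complete("a");
        first.complete("b"); // ignored: the first signal wins, a first()-style sink
        System.out.println(first.join()); // a
    }
}
```

A `latest()` variant would differ exactly in that second `complete` call: the last signal before termination would win, which is why the comment asks whether it belongs under `Sinks.many().latest()` or as a sibling of `singleOrEmpty()`.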

+7043 -5355

0 comment

113 changed files

pr created time in 6 days

create branch smaldini/reactor-core

branch : monoProcessorRework

created branch time in 6 days

pull request comment reactor/reactor-core

Suggesting some changes on the Processor API update

Better late than never, as they say. So in this push:

  • Split the PR (the other one, on MonoProcessor, to come)
  • Reorganized using Sinks.empty, singleOrEmpty and many
  • Added the Emission enum
  • Changed the EmitterProcessor behavior to not lock
smaldini

comment created time in 6 days

push event smaldini/reactor-core

Stephane Maldini

commit sha 8b1fef2f844ae02c53c5603de3300de1bd86445d

Reorgnize Sinks into many, singleOrEmpty and empty

view details

push time in 6 days

push event smaldini/reactor-core

robotmrv

commit sha f91f944d465168f1f05ea2e493d494b6b214b5e6

Lazily instantiate exception in Mono#repeatWhenEmpty (#2221)

view details

Nicolas Gavalda

commit sha d2ff8aef4c55307d9fdd91033a7c961d953029ae

Fix small error in TupleUtils asciidoc (#2201)

view details

Eric Bottard

commit sha af42fda075e6df155ad4bf8196a9e68db61520dc

Merge #2201 into 3.4.0-M2

view details

Simon Baslé

commit sha f6a2edf6a1eac0ef53375702200c62408b54aa65

fix #2220 Log context access in FINE(ST) level and with correct prefix

view details

Sergei Egorov

commit sha 0f4a1f00835b59060561f1845590f5e8a3c99d1b

Rework linking between check, japicmp and downloadBaseline tasks (#2226) This turns the dependency between `downloadBaseline` and `japicmp` tasks around, removing the need for `doDownloadBaseline`. `japicmp` is still suppressed by offline mode, or by entering `SKIP` as the `baselineVersion`. This should also fix corruption errors when comparing the current jar with the baseline jar.

view details

Simon Baslé

commit sha 817a790f02816d0ad4179d3517d2b632b6cdac9a

Merge #2226 into 3.3.8.RELEASE

view details

Simon Baslé

commit sha bde717342a15e2923f2fbe464da87a45b6192664

Merge #2226 into 3.4.0-M2

view details

Eric Bottard

commit sha 9dde683ca84349d2a499396e79e02ecf3c7fd12f

Fix 2199: Fix typo in documentation

view details

Eric Bottard

commit sha 70e8ce01d203c0b69afd2a051741907ad435da20

Merge #2199 into 3.4.0-M2

view details

Denny Abraham Cheriyan

commit sha c3e3d5f7fbdb1a6c62103937c2f4ec231c851308

[doc] Fix Scheduler Javadoc

view details

Eric Bottard

commit sha b8fbd7f8a5a428e69e51aa325a6210c1d74b960f

Merge #2190 into 3.4.0-M2

view details

Eric Bottard

commit sha 85da74bc1f16b732ed9aa4a6b8b53e56970cf8b7

Remove compilation warnings related to jsr305

view details

Yuri Schimke

commit sha 17050aa6360735b539fc14f3595acaf394149917

[doc] Document Android 21 desugaring options (#2232)

view details

Eric Bottard

commit sha 472a55e5da510be602512109b315d52d2f3c333b

Fix #2229: remove Schedulers.elastic() from tests

view details

Eric Bottard

commit sha afaac3e075911629a48805787e5378ac94ab0b64

More post cfacc22 (Schedulers.elastic() removal) cleanup.

view details

Eric Bottard

commit sha 2b4082d8b1ee7116ade817a51ebe3c6225f1ae5e

fix #1734 Use nanoseconds instead of milliseconds wherever possible

view details

Sergei Egorov

commit sha c4480e9ca75407c56601b7e4cf736badc39bd7a9

Allow "0" prefetch value in `concatMap` (#2202) `concatMap` can be very helpful in combination with `window(n)` and similar operators. But the current implementation enforces the mandatory prefetch, making it request another window without waiting for the completion of the inner `Publisher`. Given `concatMap`'s nature, this change makes it accept `0` prefetch value, so that it will request an item on a first request, and next request will be on inner `Publisher`'s completion.

view details

Eric Bottard

commit sha ce915285ae7ddbaa7bdd2f5676f36771ffe2f0ae

fix #2237 remove some compilation warnings

view details

Neveu Audrey

commit sha 741f0eb72d806dcba589b6636f6bbd333643dcfa

fix #2058 identify operators with scheduler through new scannable property (#2123) fix #2058 identify operators which can operate on a different thread with a new Scannable Attribute called Run_Style which might either be UNKNOW (weakest level of guarantee), ASYNC or SYNC (strongest level of guarantee)

view details

Gyuil Han

commit sha 951cb92a9a0bbb61bd78041fdcd99e9b2033350e

[chore] Fix space indentation in Context5 Reviewed-in: #2228

view details

push time in 6 days

issue comment reactor/reactor-netty

HTTP client request timeout

I would rather call it a responseTimeout. The Java HTTP client has it on the request builder, but its documentation mentions it is a response timeout. "Request timeout" in our HTTP client might be open to different readings: write timeout or read timeout, IMO. Good report @philsttr though; we are also starting work around WebClient here and came across the same issue.
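For reference, the JDK HttpClient mentioned in the comment does expose the timeout on the request builder, and its Javadoc frames it as a bound on waiting for the response; the connect timeout is a separate setting on the client builder. A minimal illustration (no network call is made, the request is only built):

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

public class TimeoutSketch {

    public static HttpRequest build() {
        // HttpRequest.Builder#timeout bounds the wait for the *response*;
        // the connect timeout lives on HttpClient.Builder#connectTimeout.
        return HttpRequest.newBuilder(URI.create("https://example.org/"))
                          .timeout(Duration.ofSeconds(2))
                          .GET()
                          .build();
    }

    public static void main(String[] args) {
        System.out.println(build().timeout().orElseThrow());
    }
}
```

This separation (connect vs. response) is the ambiguity the comment is pointing at when a single "request timeout" name is used.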

philsttr

comment created time in 6 days

issue comment reactor/reactor-core

Publisher-initiated Context

Defer operators might be a big limitation. Again using WebClient as an example, a defer is the first thing done in the default ExchangeFunction. It is impossible in this situation to pass a context downstream before subscribe time:

Mono.defer(() -> someApi.monoCall().publisherContext(ctx -> ctx.put(xxx, yyy)))
    //...
    .subscriberContext(ctx -> ctx.put(xxx, ctx.get(xxx) + 1))
    .subscribe()

In that sample the publisher context is only materialized on subscribe anyway. I don't know how to get around this one without AOP, blocking, or anything else within what the JVM can express...
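Why the context only materializes at subscribe time can be shown with a toy model in plain functions (these are not Reactor types): the source is a function of the subscription-time context, and downstream writes merely wrap that function; nothing is visible upstream until someone finally "subscribes" by applying it.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class ContextSketch {

    // A "deferred publisher" modeled as a function of the subscription-time context.
    public static Function<Map<String, String>, String> deferred() {
        return ctx -> "user=" + ctx.getOrDefault("user", "anonymous");
    }

    // A downstream context write wraps the source; nothing executes yet.
    public static Function<Map<String, String>, String> contextWrite(
            Function<Map<String, String>, String> source, String key, String value) {
        return ctx -> {
            Map<String, String> merged = new HashMap<>(ctx);
            merged.put(key, value);
            return source.apply(merged);
        };
    }

    public static void main(String[] args) {
        Function<Map<String, String>, String> chain =
                contextWrite(deferred(), "user", "alice");
        // Only here, at "subscribe", is the context assembled and visible upstream.
        System.out.println(chain.apply(new HashMap<>())); // user=alice
    }
}
```

The model shows context flowing bottom-up at subscribe time, which is exactly why a publisher cannot push a context downstream before anyone subscribes.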

simonbasle

comment created time in 9 days

issue comment reactor/reactor-core

Publisher-initiated Context

@bsideup that's assuming you control the consumer side, but if you provide an API returning Flux/Mono you don't (e.g. WebClient). WebClient is an interesting one because it is typically a use case where we want to capture a ThreadLocal when the user subscribes and pass that into the WebClient filter chain for tracing, security, and other support.

simonbasle

comment created time in 10 days

pull request comment reactor/reactor-core

Suggesting some changes on the Processor API update

Hey @rstoyanchev and @simonbasle, I'll get back to this over the weekend unless I can push some updates today. Locally I stopped at having to separate the work on MonoProcessor supporting Mono<Void> into a different PR. I agree with the suggestion you made, @rstoyanchev, in terms of organizing first by cardinality. In fact I wondered about: Sinks.zero(), Sinks.one(), Sinks.many()

The reason is to de-emphasize the item (Sinks.one could mean "at most one", where item() feels like you need to produce one item). Variants include Sinks.empty(), Sinks.singleOrEmpty() and Sinks.many(), but we can adapt naming at will once the concept is in. So what's left is to polish with @simonbasle's comments and move the rest to a further iteration (scannable and connectable additions).

smaldini

comment created time in 17 days

issue opened ulisesbocchio/jasypt-spring-boot

Should configurationProperties be wrapped/proxied at all?

Hey, with Spring Boot 2.3 there is a new caching property behavior used by configurationProperties (ConfigurationPropertySourcesPropertySource). Unfortunately the encryptable AOP is applied to it, as checked via (https://github.com/ulisesbocchio/jasypt-spring-boot/blob/3715d61d9a887ab471eeb52434cd13fcffec50f2/jasypt-spring-boot/src/main/java/com/ulisesbocchio/jasyptspringboot/EncryptablePropertySourceConverter.java#L124).

I was wondering if we need to AOP-wrap this composite configurationProperties at all, since it delegates to the individual property sources?

created time in a month

Pull request review comment reactor/reactor-core

Suggesting some changes on the Processor API update

+/*
+ * Copyright (c) 2011-2017 Pivotal Software Inc, All Rights Reserved.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *       https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package reactor.core.publisher;
+
+import org.reactivestreams.Subscription;
+import reactor.core.CoreSubscriber;
+import reactor.core.Scannable;
+import reactor.util.annotation.Nullable;
+import reactor.util.context.Context;
+
+import java.util.Objects;
+import java.util.stream.Stream;
+
+/**
+ * @author Stephane Maldini
+ */
+final class DelegateSinkFluxProcessor<IN> extends FluxProcessor<IN, IN> {
+
+	final Flux<IN> downstream;
+	final Sink<IN> upstream;
+
+	DelegateSinkFluxProcessor(Flux<IN> downstream,
+							  Sink<IN> upstream) {
+		this.downstream = Objects.requireNonNull(downstream, "Downstream must not be null");
+		this.upstream = Objects.requireNonNull(upstream, "Upstream must not be null");
+	}
+
+	@Override
+	public Context currentContext() {
+		if(upstream instanceof CoreSubscriber){

The same existed in FluxProcessor, but yeah, upstream is the receiving side and downstream the producing side here. I can change to a less ambiguous term though.

smaldini

comment created time in a month

Pull request review comment reactor/reactor-core

Suggesting some changes on the Processor API update

 		@Override
 		public Stream<? extends Scannable> inners() {
 			return Stream.concat(
-					lefts.values().stream(),
+					lefts.values().stream().map(Scannable.class::cast),

good catch

smaldini

comment created time in a month

Pull request review comment reactor/reactor-core

Suggesting some changes on the Processor API update

 private Sinks() { }
 	 */
 	@SuppressWarnings("deprecation")
 	public static <T> Sink<T> multicast() {
-		return new ProcessorSink<>(EmitterProcessor.create(Queues.SMALL_BUFFER_SIZE));
+		EmitterProcessor<T> processor = EmitterProcessor.create(Queues.SMALL_BUFFER_SIZE);
+		return SinksHelper.toSink(processor, processor);
 	}

 	/**
 	 * A {@link Sink} with the following characteristics:
 	 * <ul>
 	 *     <li>Multicast</li>
-	 *     <li>Backpressure : this sink honors downstream demand of individual subscribers.</li>
+	 *     <li>Backpressure : this sink is not able to honor downstream demand and will emit `onError` if there is a mismatch.</li>
 	 *     <li>Replaying: No replay. Only forwards to a {@link Subscriber} the elements that have been
-	 *     pushed to the sink AFTER this subscriber was subscribed.</li>
-	 *     <li>Without {@link Subscriber}: Discarding. Pushing elements while there are no {@link Subscriber}
-	 *     registered will simply discard these elements instead of "warming up" the sink.</li>
+	 *     pushed to the sink AFTER this subscriber was subscribed. To the exception of the first
+	 *     subscribe.</li>
+	 * </ul>
+	 */
+	@SuppressWarnings("deprecation")
+	public static <T> Sink<T> multicastNoBackpressure() {
+		DirectProcessor<T> processor = DirectProcessor.create();
+		return SinksHelper.toSink(processor, processor);
+	}
+
+	/**
+	 * A {@link Sink} with the following characteristics:
+	 * <ul>
+	 *     <li>Multicast</li>
+	 *     <li>Backpressure : this sink does not need any demand since it can only signal error or completion</li>
+	 *     <li>Replaying: Replay the terminal signal (error or complete).</li>
 	 * </ul>
-	 * <p>
-	 * <img class="marble" src="doc-files/marbles/sinkNoWarmup.svg" alt="">
 	 */
 	@SuppressWarnings("deprecation")
-	public static <T> Sink<T> multicastNoWarmup() {
-		return new ProcessorSink<>(ReplayProcessor.create(0));
+	public static Sink<Void> coordinator() {

In a separate PR we will have VoidProcessor for this

smaldini

comment created time in a month

pull request comment reactor/reactor-core

Suggesting some changes on the Processor API update

Besides rebasing, the last touches on this PR:

  • If the guided Sinks API works for everyone, should we extend it to promise, adding its doc too:
Sink<Void> coordinator = Sinks.promise().coordinator(); // I have implemented a `VoidProcessor extends MonoProcessor` locally

Sink<String> nextPromise = Sinks.promise().next(); // the current MonoProcessor, locally moved under NextProcessor

Sink<String> latestPromise = Sinks.promise().latest(); // a new processor I wanted to suggest that would emit the latest value only, useful for one-off observers

  • Adding some common state observers instead of relying on Scannable, which is not a guaranteed state query: hasSubscribers, isTerminated(), hasError() / getError()
  • For this PR or later: changing the EmitterProcessor lock.parkNanos to onError. I don't think we should implement any sort of backpressure management on Sink; I also don't know about adding offer() or similar: Flux.create can already do that job in a more appropriate way using onRequest.

In a later iteration I'd like to see a toConnectableMono and toConnectableFlux on the sink, while deprecating autoCancel on EmitterProcessor. There should be a common way to address disposal across all sinks that is similar to Flux/Mono: refCount, connect, etc. By default all Sinks except EmitterProcessor are similar to share().

smaldini

comment created time in a month

push event smaldini/reactor-core

Stephane Maldini

commit sha 5981100d27d02470de0b807433e63042353be2d0

Add Sinks builders and address some PR feedback - Reuse onBackpressureXxx naming conventions - deprecate FluxProcessor#switchOnNext - add FluxProcessor#isIdentityProcessor - add FluxProcessor#fromSink

view details

push time in a month

push event smaldini/reactor-core

Stephane Maldini

commit sha cce9eda630e0fe43179e5f2c15ac37fb119f792b

Add Sinks builders and address some PR feedback - Reuse onBackpressureXxx naming conventions - deprecate FluxProcessor#switchOnNext - add FluxProcessor#isIdentityProcessor - add FluxProcessor#fromSink

view details

push time in a month

issue comment spring-projects/spring-framework

MappingJackson2HttpMessageConverter might be too specific in its Charset support

@poutsma good job! We have confirmed it is fixed; sorry for the noise :) Talk soon!

smaldini

comment created time in a month

Pull request review comment reactor/reactor-core

Suggesting some changes on the Processor API update

 	void success(@Nullable T value);

 	/**
-	 * Terminate with the given value without requiring {@link #complete()} to be explicitly called.
+	 * @see #emitError(Throwable)
+	 */
+	void error(Throwable e);
+
+	/**
+	 * Terminate with the given value without requiring {@link #emitComplete()} to be explicitly called.

Agree - being consistent here with the hook and with processor/sink. That might be a separate PR, to improve documentation on dropping behaviors.

smaldini

comment created time in a month

Pull request review comment reactor/reactor-core

Suggesting some changes on the Processor API update

  *
  * @param <T> the value type emitted
  */
-public interface MonoSink<T> extends ScalarSink<T> {
+public interface MonoSink<T> extends Sink<T> {

If we go that route we can always bring back the Emission enum with various helpers: isOk, isTerminated, isCancelled, isDisposed, isBackpressured

smaldini

comment created time in a month

Pull request review comment reactor/reactor-core

Suggesting some changes on the Processor API update

  */
 public interface FluxSink<T> extends Sink<T> {

-	@Override
+	/**
+	 * Emit a non-null element, generating an {@link Subscriber#onNext(Object) onNext} signal.

I'd tend to agree: you still get some chaining value out of next, but error and complete are just there to align semantically with next. To be reviewed in a subsequent PR, IMO.

smaldini

comment created time in a month

Pull request review comment reactor/reactor-core

Suggesting some changes on the Processor API update

 		@Override
 		public Stream<? extends Scannable> inners() {
 			return Stream.concat(
-					lefts.values().stream(),
+					lefts.values().stream().map(Scannable.class::cast),

I'm not too sure. On a tangent, inners should have its own attribute (and delegate to it on the Scannable contract). Will update in another PR if we want.

smaldini

comment created time in a month

Pull request review comment reactor/reactor-core

Suggesting some changes on the Processor API update

 public Object scanUnsafe(Attr key) {

 		volatile boolean done;

-		SerializedSink(BaseSink<T> sink) {
+		SerializedSink(SINK sink) {
 			this.sink = sink;
 			this.mpscQueue = Queues.<T>unboundedMultiproducer().get();
 		}

 		@Override
-		public Context currentContext() {
-			return sink.currentContext();
+		public final Flux<T> toFlux() {
+			return sink.toFlux();
 		}

 		@Override
-		public FluxSink<T> next(T t) {
+		public final Mono<T> toMono() {
+			return sink.toMono();
+		}
+
+		@Override
+		public final boolean emitComplete() {
+			if (done) {
+				return false;
+			}
+			done = true;
+			drain();
+			return true;
+		}
+
+		abstract Context currentContext();
+
+		@Override
+		public final boolean emitError(Throwable t) {
+			Objects.requireNonNull(t, "t is null in sink.error(t)");
+			if (done) {
+				Operators.onOperatorError(t, currentContext());
+				return false;
+			}
+			if (Exceptions.addThrowable(ERROR, this, t)) {
+				done = true;
+				drain();
+				return true;
+			}
+
+			Context ctx = currentContext();
+			Operators.onDiscardQueueWithClear(mpscQueue, ctx, null);
+			Operators.onOperatorError(t, ctx);
+			return false;
+		}
+
+		@Override
+		public final boolean emitNext(T t) {
 			Objects.requireNonNull(t, "t is null in sink.next(t)");
-			if (sink.isTerminated() || done) {
-				Operators.onNextDropped(t, sink.currentContext());
-				return this;
+			if (done) {
+				Operators.onNextDropped(t, currentContext());
+				return false;
 			}
 			if (WIP.get(this) == 0 && WIP.compareAndSet(this, 0, 1)) {
 				try {
-					sink.next(t);
+					sink.emitNext(t);
 				}
 				catch (Throwable ex) {
-					Operators.onOperatorError(sink, ex, t, sink.currentContext());
+					Operators.onOperatorError(null, ex, t, currentContext());
+					emitError(ex);
+					return false;
 				}
 				if (WIP.decrementAndGet(this) == 0) {
-					return this;
+					return true;
 				}
 			}
 			else {
 				this.mpscQueue.offer(t);
 				if (WIP.getAndIncrement(this) != 0) {
-					return this;
+					return true;
 				}
 			}
 			drainLoop();
-			return this;
-	}
-
-		@Override
-		public void error(Throwable t) {
-			Objects.requireNonNull(t, "t is null in sink.error(t)");
-			if (sink.isTerminated() || done) {
-				Operators.onOperatorError(t, sink.currentContext());
-				return;
-			}
-			if (Exceptions.addThrowable(ERROR, this, t)) {
-				done = true;
-				drain();
-			}
-			else {
-				Context ctx = sink.currentContext();
-				Operators.onDiscardQueueWithClear(mpscQueue, ctx, null);
-				Operators.onOperatorError(t, ctx);
-			}
-		}
-
-		@Override
-		public void complete() {
-			if (sink.isTerminated() || done) {
-				return;
-			}
-			done = true;
-			drain();
+			return true;
 		}

 		//impl note: don't use sink.isTerminated() in the drain loop,
 		//it needs to separately check its own `done` status before calling the base sink
 		//complete()/error() methods (which do flip the isTerminated), otherwise it could
 		//bypass the terminate handler (in buffer and latest variants notably).
-		void drain() {
+		final void drain() {
 			if (WIP.getAndIncrement(this) == 0) {
 				drainLoop();
 			}
 		}

-		void drainLoop() {
-			Context ctx = sink.currentContext();
-			BaseSink<T> e = sink;
+		final void drainLoop() {
+			Sink<T> e = sink;
 			Queue<T> q = mpscQueue;
 			for (; ; ) {

 				for (; ; ) {
-					if (e.isCancelled()) {

Double-checked, but unfortunately there is no easy alternative for now and I don't want to scope-creep into changing this here. So, reverted.
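The serialization technique the SerializedSink relies on is the classic work-in-progress (WIP) counter plus MPSC queue: any producer may enqueue, but only the thread that wins the 0-to-1 transition on the counter runs the drain loop. A self-contained sketch of just that pattern, with stdlib primitives (the names here are illustrative, not Reactor internals):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class DrainSketch {
    final Queue<Integer> queue = new ConcurrentLinkedQueue<>();
    final AtomicInteger wip = new AtomicInteger();
    final List<Integer> delivered = new ArrayList<>();

    // Many producers may call this concurrently; only the thread that moves
    // wip from 0 to 1 enters drainLoop, so delivery is serialized.
    public void emit(int value) {
        queue.offer(value);
        if (wip.getAndIncrement() == 0) {
            drainLoop();
        }
    }

    void drainLoop() {
        int missed = 1;
        for (;;) {
            Integer v;
            while ((v = queue.poll()) != null) {
                delivered.add(v); // single-threaded downstream delivery
            }
            // account for emissions that raced in while we were draining
            missed = wip.addAndGet(-missed);
            if (missed == 0) {
                return;
            }
        }
    }

    public static void main(String[] args) {
        DrainSketch s = new DrainSketch();
        s.emit(1);
        s.emit(2);
        System.out.println(s.delivered); // [1, 2]
    }
}
```

The "impl note" in the diff is about exactly this loop: the drain must track its own terminal state rather than re-reading the wrapped sink's, or a racing terminate could bypass the handler.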

smaldini

comment created time in a month

Pull request review comment reactor/reactor-core

Suggesting some changes on the Processor API update

 public void subscribe(CoreSubscriber<? super T> actual) {
 		}
 	}

+	@Override
+	public boolean emitComplete() {
+		if (done) {
+			return false;
+		}
+		done = true;
+		drain();
+		return true;
+	}
+
+	@Override
+	public boolean emitError(Throwable t) {
+		Objects.requireNonNull(t, "onError");
+		if (done) {
+			Operators.onErrorDroppedMulticast(t);
+			return false;
+		}
+		if (Exceptions.addThrowable(ERROR, this, t)) {
+			done = true;
+			drain();
+			return true;
+		}
+		else {
+			Operators.onErrorDroppedMulticast(t);
+			return false;
+		}
+	}
+
+	@Override
+	public boolean emitNext(T t) {
+		if (done) {
+			Operators.onNextDropped(t, currentContext());
+			return false;
+		}
+
+		if (sourceMode == Fuseable.ASYNC) {
+			drain();
+			return true;
+		}
+
+		Objects.requireNonNull(t, "onNext");
+
+		Queue<T> q = queue;
+
+		if (q == null) {
+			if (Operators.setOnce(S, this, Operators.emptySubscription())) {
+				q = Queues.<T>get(prefetch).get();
+				queue = q;
+			}
+			else {
+				for (; ; ) {
+					if (isDisposed()) {
+						return false;
+					}
+					q = queue;
+					if (q != null) {
+						break;
+					}
+				}
+			}
+		}
+
+		while (!q.offer(t)) {
+			LockSupport.parkNanos(10);

I think we should just not support it at all; this is a mistake, and it feels like a documentable change for 3.4.
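The alternative being argued for, failing fast on a full queue instead of parking the producer, can be sketched with a plain bounded queue (a stdlib sketch, not the actual EmitterProcessor code):

```java
import java.util.concurrent.ArrayBlockingQueue;

public class OfferSketch {
    // Capacity 2 stands in for the sink's bounded internal buffer.
    final ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(2);

    // Returns false (an overflow signal for the caller to turn into onError)
    // instead of blocking/parking the producing thread.
    public boolean emitNext(String value) {
        return queue.offer(value);
    }

    public static void main(String[] args) {
        OfferSketch s = new OfferSketch();
        System.out.println(s.emitNext("a")); // true
        System.out.println(s.emitNext("b")); // true
        System.out.println(s.emitNext("c")); // false: overflow, no parking
    }
}
```

The `parkNanos` loop in the diff spins the producer until space frees up; the boolean-returning shape keeps the producer non-blocking and pushes the overflow decision to the caller, which is consistent with the `emitXxx` acknowledgement design discussed elsewhere in this PR.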

smaldini

comment created time in a month

issue comment spring-projects/spring-framework

Collect metrics during application context startup

Awesome, we are watching that feature closely as we have an internal implementation capturing similar metrics. A couple of points:

  • Will we be able to wire the ContextEventFactory via spring.factories?
  • Will this provide an in-memory store out of the box that we can use at the end of startup to replay and forward metrics? Sending during startup can skew the data by adding delays, but also might not be possible until we have the infrastructure in place, e.g. a Kafka client or similar.
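The record-then-replay idea from the second point can be sketched in a few lines; all names here are hypothetical, the point is only the separation between cheap in-memory recording during startup and forwarding afterwards:

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class StartupMetricsSketch {

    static final class StartupStep {
        final String name;
        final Duration took;
        StartupStep(String name, Duration took) { this.name = name; this.took = took; }
    }

    final List<StartupStep> buffer = new ArrayList<>();

    // During startup: record only, no I/O, so measurements are not skewed.
    public void record(String name, Duration took) {
        buffer.add(new StartupStep(name, took));
    }

    // After startup: replay the buffered steps to any forwarder
    // (e.g. a Kafka producer once that infrastructure is up).
    public void replay(Consumer<StartupStep> forwarder) {
        buffer.forEach(forwarder);
    }

    public static void main(String[] args) {
        StartupMetricsSketch m = new StartupMetricsSketch();
        m.record("context-refresh", Duration.ofMillis(120));
        m.replay(step -> System.out.println(step.name + " took " + step.took.toMillis() + "ms"));
    }
}
```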
bclozel

comment created time in a month

issue opened spring-projects/spring-framework

MappingJackson2HttpMessageConverter might be too specific in its Charset support

With https://github.com/spring-projects/spring-framework/commit/eb0aae066c83ae0b7be280bd5c9e0679ed394a92#diff-ea14ba194c2adb7f8aa7f97a2ca5bcc7, the Spring Web Jackson2 converter introduced an enum set limiting the UTF options. In a testing scenario we use "US-ASCII", which was working until now. Since 5.2.7, the controller rejects the request, failing to decode. Is that the intended behavior, or should the ENCODINGS map include US-ASCII as well?

created time in a month

issue comment reactor/reactor-netty

Stack-overflows in reactor-netty

I'm still very surprised; something did add a lot of operators to get to that stage, and we should identify it (a recursive reactive API call? many chained data operations?)

matiwinnetou

comment created time in a month

issue comment reactor/reactor-netty

Stack-overflows in reactor-netty

This might be different and I'm not sure why the debugging agent didn't help, but a StackOverflowError is a pretty special case. So this looks like a case where then/concat + flatMapMany is involved. If you step-debug the lines it traverses (onComplete in Concat or FlatMapMany), try to look for the parent (you can watch stepNames().forEach(System.out::println) in your IDE). Each stage should be interleaved with a special operator generated by ReactorDebugAgent.init(), and its toString should show where these operators come from.

matiwinnetou

comment created time in a month

pull request comment reactor/reactor-netty

fixes memory-leaks when onNext races with cancellation signal

Overall it's a good finding and a mindful fix regarding efficiency (inlining the Context). Initially I was wondering if it was possible to just consume the source to discard its data, but that might cause other problems and would still be exposed to race conditions. It's really an annoying problem, and I wish there was an extension to the Reactive Streams spec that allowed circular cancellation (the source would error with a specific signal, or complete once it finishes cleanup, allowing the child to gracefully clean up).
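The wished-for "circular cancellation" handshake can be modeled with a CompletableFuture: the child asks for cancellation but defers its own cleanup until the source confirms it has released its resources. This is a sketch of the idea only, not a Reactive Streams feature.

```java
import java.util.concurrent.CompletableFuture;

public class GracefulCancelSketch {
    // Completed by the source once it has finished releasing its resources.
    final CompletableFuture<Void> cleanupDone = new CompletableFuture<>();
    boolean childCleanedUp;

    // Child requests cancellation, then waits for the source's confirmation
    // signal before tearing down its own state.
    public void cancel() {
        cleanupDone.thenRun(() -> childCleanedUp = true);
        sourceCleanup(); // in a real stream this would happen asynchronously
    }

    void sourceCleanup() {
        // release buffers, close channels... then signal back to the child
        cleanupDone.complete(null);
    }

    public static void main(String[] args) {
        GracefulCancelSketch s = new GracefulCancelSketch();
        s.cancel();
        System.out.println(s.childCleanedUp); // true
    }
}
```

In the spec as it stands, `cancel()` is fire-and-forget with no return channel, which is precisely why onNext can race with cancellation and leak buffers, the problem this PR works around.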

OlegDokuka

comment created time in a month

Pull request review comment reactor/reactor-netty

fixes memory-leaks when onNext races with cancellation signal

 public Void get(long timeout, TimeUnit unit) {
 			throw new UnsupportedOperationException();
 		}

+		// this as discard hook
+		@Override
+		public void accept(I i) {
+			parent.sourceCleanup.accept(i);
+			// propagates discard to the downstream
+			Operators.onDiscard(i, actual.currentContext());
+		}
+
+		// Context interface impl
+		@Override
+		@SuppressWarnings("unchecked")
+		public <T> T get(Object key) {
+			if (KEY_ON_DISCARD.equals(key)) {
+				return (T) this;
+			}
+
+			return actual.currentContext().get(key);
+		}
+
+		@Override
+		public boolean hasKey(Object key) {
+			if (KEY_ON_DISCARD.equals(key)) {
+				return true;
+			}
+
+			return actual.currentContext().hasKey(key);
+		}
+
+		@Override
+		public Context put(Object key, Object value) {
+			return actual
+					.currentContext()
+					.put(KEY_ON_DISCARD, this)
+					.put(key, value);
+		}
+
+		@Override
+		public Context delete(Object key) {
+			return actual
+					.currentContext()
+					.put(KEY_ON_DISCARD, this)
+					.delete(key);
+		}
+
+		@Override
+		public int size() {
+			return actual.currentContext()
+			             .put(KEY_ON_DISCARD, this)
+			             .size();
+		}
+
+		@Override
+		public Stream<Map.Entry<Object, Object>> stream() {
+			return actual.currentContext()
+			             .put(KEY_ON_DISCARD, this)
+			             .stream();
+		}
+

The overall put context is a bit convoluted, although I understand why you inline Context. size could be simplified (size + 1), stream too (concat), but for delete and put we might just need to add a putAll or a multi-put (varargs?) in Reactor. Other solutions are probably more complicated, but this gives some good ideas for Context support, like whether we should support lazy merge to avoid looking up currentContext.
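The size (size + 1) and stream (concat) simplifications suggested here amount to treating the discard hook as a lazily merged one-entry overlay that never copies the backing context. A stdlib sketch of that view (illustrative, not Reactor's Context implementation):

```java
import java.util.AbstractMap;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Stream;

public class LazyMergeSketch {

    // size of "backing + one overlay key", computed without building a merged map
    static int size(Map<String, Object> backing, String extraKey) {
        return backing.size() + (backing.containsKey(extraKey) ? 0 : 1);
    }

    // stream of "backing + one overlay entry", again without materializing a copy
    static Stream<Map.Entry<String, Object>> stream(Map<String, Object> backing,
                                                    String extraKey, Object extraValue) {
        return Stream.concat(
                backing.entrySet().stream().filter(e -> !e.getKey().equals(extraKey)),
                Stream.of(new AbstractMap.SimpleEntry<>(extraKey, extraValue)));
    }

    public static void main(String[] args) {
        Map<String, Object> backing = new HashMap<>();
        backing.put("a", 1);
        System.out.println(size(backing, "discardHook"));                    // 2
        System.out.println(stream(backing, "discardHook", "hook").count()); // 2
    }
}
```

Compared with the diff's `currentContext().put(KEY_ON_DISCARD, this).size()`, which allocates a new context per call, this lazy view performs no copy; that is the "lazy merge" idea the comment floats for Context itself.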

OlegDokuka

comment created time in a month

Pull request review comment reactor/reactor-netty

fixes memory-leaks when onNext races with cancellation signal

 void trySchedule(@Nullable Object data) {
 					}
 				}
 			}
+			else {
+				if (this.s == Operators.cancelledSubscription()) {

Why check cancelled here and not let the loop do it?

OlegDokuka

comment created time in a month

Pull request review comment reactor/reactor-netty

fixes memory-leaks when onNext races with cancellation signal

 public void onNext(I t) {
 				return;
 			}

-			if (terminalSignal != null) {
+			if (terminalSignal != null || this.s == Operators.cancelledSubscription()) {

Why? Shouldn't cancellation be checked in the drain loop for safety anyway?

OlegDokuka

comment created time in a month

Pull request review comment reactor/reactor-netty

fixes memory-leaks when onNext races with cancellation signal

 public void onSubscribe(Subscription s) {
 					@SuppressWarnings("unchecked") QueueSubscription<I> f =
 							(QueueSubscription<I>) s;

-					int m = f.requestFusion(Fuseable.ANY/* | Fuseable.THREAD_BARRIER*/);
+					int m = f.requestFusion(Fuseable.ANY | Fuseable.THREAD_BARRIER);

I commented that one out specifically, as it unfortunately reduces a lot of the fusion scope and I couldn't see the impact at the time. But that seems to be on the safer side, yes.

OlegDokuka

comment created time in a month

pull request comment reactor/reactor-core

Suggesting some changes on the Processor API update

Regarding the builder suggestion, here is the proposal:

		Sink<String> test = Sinks.multicast()
								 .onBackpressureBuffer()
								 .get();

		Sink<String> test2 = Sinks.replay()
								  .limit(Duration.ofMillis(100))
								  .get();

		Sink<String> test3 = Sinks.replay()
								  .limit(Duration.ofMillis(100))
								  .get();

		FluxProcessor<String, String> test4 = Sinks.unicast()
												   .onBackpressureBuffer()
												   .fluxProcessor();
smaldini

comment created time in a month

pull request comment reactor/reactor-core

Suggesting some changes on the Processor API update

I'm close to happiness with the changes:

  • Only 2 new APIs : Sinks and Sink
  • Remove the need to use a Processor altogether except for integration purpose
  • Optimized Sink impls to avoid extra wrapper allocation

Follow up: refining Sinks/Sink api if needed:

  • the alloc-free builder API is a good idea to make Sinks more guided; provide toProcessor
  • do we need extra Sink API: scannable(), isTerminated/hasCancelled, hasDownstream/Subscribers...
  • polish doc
smaldini

comment created time in a month

push event smaldini/reactor-core

Stephane Maldini

commit sha 3d171d8870cd3d3ee46d5fc06aac05b98ff3046d

Merge Processors into Sinks Make sure Sinks have minimal or zero extra allocation for use by internal operators Change Sink API to not overlap with operator and provide simple emission ack (boolean)

view details

push time in a month

Pull request review comment reactor/reactor-core

Suggesting some changes on the Processor API update

  *
  * @param <T> the value type emitted
  */
-public interface MonoSink<T> extends ScalarSink<T> {
+public interface MonoSink<T> extends Sink<T> {

That is changed in the next commit. Sink has orthogonal "boolean emitXxx" methods (complete|error|next). Chaining only brings value to FluxSink, if any, and it was impossible for processors to internally be sinks due to the competing error name. Plus it provides a new feature: returning false if the emission failed because the sink is terminated.
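The "emission ack" idea, emitXxx returning false once the sink has terminated instead of throwing or silently dropping, in a minimal stdlib sketch (the class and method names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

public class EmitAckSketch {
    final AtomicBoolean terminated = new AtomicBoolean();
    final List<String> received = new ArrayList<>();

    public boolean emitNext(String value) {
        if (terminated.get()) {
            return false; // emission rejected: the sink already terminated
        }
        received.add(value);
        return true;
    }

    public boolean emitComplete() {
        // only the first terminal signal is acknowledged
        return terminated.compareAndSet(false, true);
    }

    public static void main(String[] args) {
        EmitAckSketch sink = new EmitAckSketch();
        System.out.println(sink.emitNext("a"));    // true
        System.out.println(sink.emitComplete());   // true
        System.out.println(sink.emitNext("late")); // false
        System.out.println(sink.emitComplete());   // false: already terminated
    }
}
```

The boolean return is the key design point: the caller learns synchronously that the signal was not accepted and can decide what to do, instead of the fluent chaining style where a post-termination emit disappears silently.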

smaldini

comment created time in a month

Pull request review comment reactor/reactor-core

Suggesting some changes on the Processor API update

 	 */
 	FluxSink<T> onDispose(Disposable d);

+	@Override
+	default Flux<T> toFlux() {
+		return Flux.error(new IllegalStateException("A FluxSink does not support back referencing the outer Flux"));

It's not impossible to support, but it's easy to make mistakes if supported. At least it's explicit that it doesn't work, unlike using a Processor directly vs. sinks. We can look at the final result once I'm done and see what can go back. Right now I'm aiming for one API entry, Sink/Sinks, and will add more from this.

smaldini

comment created time in a month

Pull request review comment reactor/reactor-core

Suggesting some changes on the Processor API update

-/*
- * Copyright (c) 2011-Present VMware Inc. or its affiliates, All Rights Reserved.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *        https://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package reactor.core.publisher;
-
-/**
- * A {@link FluxProcessor} that has the same input and output types.
- *
- * @author Simon Baslé
- */
-public abstract class FluxIdentityProcessor<T> extends FluxProcessor<T, T> {

Yeah, the rationale is to not use the Processor API at all in the future, so no need to spend any more API surface on them.

smaldini

comment created time in a month

Pull request review comment reactor/reactor-core

Suggesting some changes on the Processor API update

  * @deprecated Prefer clear cut usage of either {@link Processors} or {@link Sinks}, to be removed in 3.5
  */
 @Deprecated
-public final class DirectProcessor<T> extends FluxIdentityProcessor<T> {
+public final class DirectProcessor<T> extends FluxProcessor<T, T> {

The PR is not finished, and there will be more impact from removing Processors and from using Processor directly wherever possible.

smaldini

comment created time in a month

Pull request review comment reactor/reactor-core

Suggesting some changes on the Processor API update

 If, after exploring the above alternatives, you still think you need a `Processo
 the <<processor-overview>> section to learn about the different implementations.
 
 [[sinks]]
-= Safely Produce from Multiple Threads by Using `StandaloneFluxSink` and `StandaloneMonoSink`
+= Safely Produce from Multiple Threads by Using `Sink` and `Sink`

I think that is something I'm not too attached to personally, if you think having a dedicated mono sink is valuable. I'll clean the doc in the final pass. The main problem I'm attacking is removing Processors, which is more impactful IMO.

smaldini

comment created time in a month

Pull request review comment reactor/reactor-core

Suggesting some changes on the Processor API update

 		public void error(Throwable e) {
 		}
 
 		@Override
-		public StandaloneFluxSink<T> next(T t) {
+		public Sink<T> next(T t) {
 			delegateSink.next(t);
 			return this;
 		}
 	}
 
-	//TODO improve synchronization, prefer CAS ?
-	static final class MonoProcessorSink<T> implements StandaloneMonoSink<T> {
+	static final class MonoProcessorSink<T> implements Sink<T> {

MonoProcessors are safe; all the onXxx methods are atomic. I'll actually have an extra commit that moves further and removes the need for MonoProcessorSink by making MonoProcessor implement it directly.

smaldini

comment created time in a month
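The atomicity claim above can be illustrated with a minimal sketch: in a MonoProcessor-style one-shot sink, all terminal paths race on a single compareAndSet, so exactly one signal wins and later signals are simply refused. The names here are illustrative, not the actual implementation.

```java
import java.util.concurrent.atomic.AtomicReference;

// One-shot result holder: success and error compete on the same CAS, no locks.
final class OneShotResult<T> {

    private static final Object EMPTY = new Object();

    private final AtomicReference<Object> state = new AtomicReference<>(EMPTY);

    // Wins only if the sink is still empty.
    boolean trySuccess(T value)       { return state.compareAndSet(EMPTY, value); }
    boolean tryError(Throwable error) { return state.compareAndSet(EMPTY, error); }

    boolean isTerminated() { return state.get() != EMPTY; }

    Object peek() {
        Object current = state.get();
        return current == EMPTY ? null : current;
    }
}
```

The boolean return is the same "emission ack" idea as on the Sink API: a losing terminal signal gets false instead of throwing.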

push event smaldini/reactor-core

Stephane Maldini

commit sha e152ff38435ecb251deefea7fdfbde45ba9516c7

Remove FluxIdentityProcessor.java

view details

push time in a month

PR opened reactor/reactor-core

Suggesting some changes on the Processor API update

Giving a go at my latest comment on #2188

+252 -297

0 comment

13 changed files

pr created time in a month

create branch smaldini/reactor-core

branch : processorApiCont

created branch time in a month

pull request comment reactor/reactor-core

Introduce Sinks and deprecate concrete processors

@dfeist tryNext/offer could be great with other features such as per-item context, so we would have a true no-ack receipt if any downstream operator discards the data. Otherwise, just on the sink, it might prove confusing, as only one type of underlying processor has backpressure support; that would be the only one sometimes returning true, sometimes false.

simonbasle

comment created time in a month
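The concern above can be made concrete with a small sketch: only a buffering, backpressure-aware sink can meaningfully answer false on a tryNext/offer, because "no room" is defined by its buffer. A non-buffering processor has no such notion. Names are illustrative, not a proposed API.

```java
import java.util.concurrent.ArrayBlockingQueue;

// Offer-style sink backed by a bounded buffer.
final class OfferingSink<T> {

    private final ArrayBlockingQueue<T> buffer;

    OfferingSink(int capacity) {
        this.buffer = new ArrayBlockingQueue<>(capacity);
    }

    // Non-blocking emission attempt: false means "buffer full right now", not an error.
    boolean tryNext(T value) {
        return buffer.offer(value);
    }

    // Downstream drains the buffer, freeing capacity for later tryNext calls.
    T poll() {
        return buffer.poll();
    }
}
```

For a direct (non-buffering) processor the same method would have to always return true or always false, which is exactly the inconsistency the comment points at.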

issue comment netty/netty

reactor.core.Exceptions$ReactiveException: io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection timed out

You might want to ask first on https://github.com/reactor/reactor-netty; please share your code snippet as well.

v891

comment created time in a month

pull request comment reactor/reactor-core

sinks vs processors epic

Did a first pass; here are my comments:

  • Don't like having a Processors class vs just the new Sinks. Adding a new top-level class would signal continuity in support for Processor and still leave a double entry point for a user who can just use Sinks. I understand we can't provide an easy bridge with SinkFlux.toProcessor() without allocating a serialized sink, and that would hurt some operator perfs.

  • I like the idea of where Processors and more() was going with an allocation-free guided API. It could be used to differentiate flux vs mono sinks. Also, given my comment above, the concept could be applied to Sinks with unsafe() to produce non-serialized processors. Said differently, that means all sinks would be serialized by default; in fact, I'm not sure whether it's the sink itself that needs serialization or the processor (via delegate), so toProcessor() comes serialized as well.

  • FluxSink vs SinkFlux is going to be confusing. I've suggested in the past to bring back an old naming from history, Broadcaster, the intent being sort of pub-sub.

  • Sinks factories could reuse more of the existing Flux vocabulary: onBackpressureDrop, etc., maybe?

  • Should sinks share some ConnectableFlux concept ?

I'd conclude with this final thought: ultimately processors are all-in-one bundles of behaviors available in fluxes, and I'm excited to see a long-time dream of rationalizing them happening. Today we can write pub-sub bridges using processors with and without sinks, or Flux.create. All of these constructs ask the user what to do with backpressure, completely overlapping with the existing Flux.onBackpressureXXX operators. In addition, EmitterProcessor uniquely overlaps with flux.share(), and UnicastProcessor looks like an Operators util more than anything. A good first step is to get rid of the public exposure of Processor. Because processors are N behaviors in one, they are pretty much a technical optimization (allocation-wise) over a longer flux definition. I'll need to iterate a bit more on that; I just wanted to share my first impression.

cc @rstoyanchev

simonbasle

comment created time in 2 months
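The overlap between EmitterProcessor and flux.share() mentioned above comes down to multicast: one upstream signal fans out to every currently registered subscriber. A Reactor-free sketch of just that behavior (the Broadcaster name nods at the old Reactor naming; backpressure is deliberately ignored here, which is exactly the overlap with the onBackpressureXxx operators):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Minimal multicast bus: every registered subscriber sees each signal.
final class Broadcaster<T> {

    private final List<Consumer<T>> subscribers = new CopyOnWriteArrayList<>();

    // Returns a disposable-style handle to cancel the subscription.
    Runnable subscribe(Consumer<T> subscriber) {
        subscribers.add(subscriber);
        return () -> subscribers.remove(subscriber);
    }

    void next(T value) {
        for (Consumer<T> subscriber : subscribers) {
            subscriber.accept(value);
        }
    }
}
```

Everything a processor adds on top of this core (buffering, replay, demand tracking) is precisely the "N behaviors in one" the comment describes.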

issue comment reactor/reactor-core

EmitterProcessor: cyclical LockSupport.parkNanos(10) gives CPU load 100%

@simonbasle you are correct. Maybe we should be explicit about this behavior when building the processor. Or provide a specific API, "whenSubscriber(Consumer<Sink> xxxx)", and remove this behavior altogether in a future version. This is also a problem of overproducing.

mayras

comment created time in 2 months
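The hypothetical whenSubscriber(Consumer<Sink>) idea floated above can be sketched in a few lines: instead of the producer busy-spinning (the parkNanos loop from the issue title) until a subscriber shows up, its work is registered as a callback and only runs once subscription happens. All names are illustrative.

```java
import java.util.function.Consumer;

// Producer work is deferred until a subscriber arrives; nothing spins.
final class LazySink<T> {

    private Consumer<LazySink<T>> pendingProducer;
    private Consumer<T> subscriber;

    // Register producer work; it does not run yet.
    void whenSubscriber(Consumer<LazySink<T>> producer) {
        this.pendingProducer = producer;
    }

    void subscribe(Consumer<T> downstream) {
        this.subscriber = downstream;
        if (pendingProducer != null) {
            pendingProducer.accept(this); // producer starts only now
        }
    }

    void next(T value) {
        if (subscriber != null) {
            subscriber.accept(value);
        }
    }
}
```

The CPU-load problem disappears because there is no waiting loop at all; the subscription event itself triggers production.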

issue comment reactor/reactor-core

Do we even need multiple Processor implementations? Discussing sinks again

Just another comment, or a nail in the coffin: processors are used for custom operator flows like takeUntil(Publisher), window, etc. The processor contract itself is something I regret exposing; historically it was the first step to stages in the pipeline (see Reactor 2). I think we will still need a toProcessor for some APIs accepting a Subscriber as argument, and for custom operator implementations, but this should not be the entry point for users.

simonbasle

comment created time in 2 months

Pull request review comment reactor/reactor-netty

Support HTTP/2 for HttpClient

 	 * @return a {@link HttpClient}
 	 */
 	public static HttpClient create() {
-		return create(HttpResources.get());
+		return new HttpClientConnect(new HttpConnectionProvider(HttpResources.get(), Http2Resources::get));
 	}
 
 	/**
 	 * Prepare an {@link HttpClient}. {@link UriConfiguration#uri(String)} or
 	 * {@link #baseUrl(String)} should be invoked before a verb
 	 * {@link #request(HttpMethod)} is selected.
 	 *
+	 * @param connectionProvider the {@link ConnectionProvider} to be used
 	 * @return a {@link HttpClient}
 	 */
 	public static HttpClient create(ConnectionProvider connectionProvider) {
-		return new HttpClientConnect(connectionProvider);
+		Objects.requireNonNull(connectionProvider, "connectionProvider");
+		return new HttpClientConnect(new HttpConnectionProvider(connectionProvider));
+	}
+
+	/**
+	 * Prepare an {@link HttpClient}. {@link UriConfiguration#uri(String)} or
+	 * {@link #baseUrl(String)} should be invoked before a verb
+	 * {@link #request(HttpMethod)} is selected.
+	 *
+	 * @param connectionProvider the {@link ConnectionProvider} to be used
+	 * @param maxHttp2Connections the max number of connections that will be used for HTTP/2 requests
+	 * @return a {@link HttpClient}
+	 */
+	public static HttpClient create(ConnectionProvider connectionProvider, int maxHttp2Connections) {

This might be a bit limiting. Is there any other configuration that might apply later? E.g. connection timeout, etc. Should this be a Supplier of PoolProvider or similar? Or a Consumer of a spec?

violetagg

comment created time in 2 months
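The "Consumer of a spec" shape suggested in the review can be sketched in plain Java. All type and field names below are hypothetical (not the reactor-netty API); the point of the shape is that the spec can grow new knobs (timeouts, etc.) later without multiplying factory overloads.

```java
import java.util.function.Consumer;

// Mutable spec the caller customizes; defaults cover untouched knobs.
final class Http2PoolSpec {

    int maxConnections = -1;            // -1: unbounded by default (hypothetical)
    long acquireTimeoutMillis = 45_000; // hypothetical extra knob

    Http2PoolSpec maxConnections(int maxConnections) {
        this.maxConnections = maxConnections;
        return this;
    }

    Http2PoolSpec acquireTimeoutMillis(long acquireTimeoutMillis) {
        this.acquireTimeoutMillis = acquireTimeoutMillis;
        return this;
    }
}

final class Http2ClientFactory {

    // The factory owns the spec instance; the caller only customizes it.
    static Http2PoolSpec configure(Consumer<Http2PoolSpec> customizer) {
        Http2PoolSpec spec = new Http2PoolSpec();
        customizer.accept(spec);
        return spec;
    }
}
```

Compared to a bare `int maxHttp2Connections` parameter, adding a timeout later is a one-line change to the spec instead of a new public overload.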

Pull request review comment reactor/reactor-netty

Support HTTP/2 for HttpClient

 	public final HttpServer host(String host) {
 		return super.host(host);
 	}
 
+	/**
+	 * Apply HTTP/2 configuration
+	 *
+	 * @param http2Settings configures {@link Http2Settings} before requesting
+	 * @return a new {@link HttpServer}
+	 */
+	public final HttpServer http2Setting(Consumer<Http2Settings> http2Settings) {

We should choose a consistent way to expose builders; most of the code uses specs, and I think this would be a unique one. WDYT?

violetagg

comment created time in 2 months

Pull request review comment reactor/reactor-netty

Support HTTP/2 for HttpClient

 	public int initialBufferSize() {
 		return initialBufferSize;
 	}
 
+	/**
+	 * Configure the maximum length of the content of the H2C upgrade request.

Maybe make it clearer, since it's a builder spec, by using the full name of h2c: http2ClearText.

violetagg

comment created time in 2 months

Pull request review comment reactor/reactor-netty

Support HTTP/2 for HttpClient

+/*
+ * Copyright (c) 2011-Present VMware, Inc. or its affiliates, All Rights Reserved.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *       https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package reactor.netty.http;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.channel.ChannelDuplexHandler;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.channel.ChannelPromise;
+import io.netty.handler.codec.http.DefaultHttpContent;
+import io.netty.handler.codec.http2.Http2StreamFrameToHttpObjectCodec;
+
+import static reactor.netty.ReactorNetty.format;
+
+/**
+ * This handler is intended to work together with {@link Http2StreamFrameToHttpObjectCodec}
+ * it converts the outgoing messages into objects expected by
+ * {@link Http2StreamFrameToHttpObjectCodec}.
+ *
+ * @author Violeta Georgieva
+ * @since 1.0.0
+ */
+public class Http2StreamBridgeHandler extends ChannelDuplexHandler {

Nitpicking: good reuse between client/server; unfortunate that it ends up in the public API though. I wonder if, separately, you should think about using an "internal" subpackage for some of these, or an annotation. cc @rstoyanchev

violetagg

comment created time in 2 months

issue opened spring-cloud/spring-cloud-commons

CachedRandomPropertySourceAutoConfiguration fails to downcast RandomPropertySource when wrapped

Libraries such as Jasypt and similar metric wrappers may create a delegate for each Environment propertySource. In this case the downcast fails to resolve, like this:

Caused by: java.lang.ClassCastException: Cannot cast com.ulisesbocchio.jasyptspringboot.wrapper.EncryptablePropertySourceWrapper to org.springframework.boot.env.RandomValuePropertySource
	at java.base/java.lang.Class.cast(Class.java:3605)
	at org.springframework.cloud.util.random.CachedRandomPropertySourceAutoConfiguration.initialize(CachedRandomPropertySourceAutoConfiguration.java:44)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
...

Since that PropertySource relies on conventions around getProperty calls, we don't need the specific type to be passed.

Source: https://github.com/spring-cloud/spring-cloud-commons/blob/master/spring-cloud-context/src/main/java/org/springframework/cloud/util/random/CachedRandomPropertySourceAutoConfiguration.java#L43

created time in 3 months
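The failure mode reported above can be reproduced in a few lines of plain Java: a decorating wrapper breaks Class.cast/instanceof-based detection, while convention-based access (simply calling getProperty) keeps working through the delegate. The names below mimic the Spring shapes but are illustrative only.

```java
// Interface standing in for the property-source abstraction.
interface SimplePropertySource {
    Object getProperty(String name);
}

// Stand-in for the random-value source: resolves by naming convention.
final class RandomValueSource implements SimplePropertySource {
    public Object getProperty(String name) {
        return name.startsWith("random.") ? Integer.valueOf(42) : null;
    }
}

// Stand-in for an encrypting/metrics wrapper: same interface, different class.
final class EncryptingWrapper implements SimplePropertySource {
    private final SimplePropertySource delegate;

    EncryptingWrapper(SimplePropertySource delegate) {
        this.delegate = delegate;
    }

    public Object getProperty(String name) {
        return delegate.getProperty(name); // decryption elided in this sketch
    }
}
```

This is why the issue argues the auto-configuration does not need the concrete type: calling getProperty on whatever source matches by name is enough.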
