If you are wondering where the data on this site comes from, please visit https://api.github.com/users/frekw/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Fredrik Wärnsberg frekw https://www.soundtrackyourbrand.com/ Stockholm, Sweden

frekw/augle 1

Auth + Google = Augle

frekw/absinthe_relay 0

Absinthe support for the Relay framework

frekw/avro-cpp-packaging 0

Avro C++ library packaging

frekw/backburner.js 0

A rewrite of the Ember.js run loop as a generic microlibrary

frekw/broccoli 0

Browser compilation library – a build tool for applications that run in the browser

frekw/broccoli-merge-trees 0

Broccoli plugin to merge multiple trees into one

frekw/caliban 0

Functional GraphQL library for Scala

delete branch frekw/caliban

delete branch : fix/add-to-input-string

delete time in 16 hours

push event frekw/caliban

Fredrik Wärnsberg

commit sha 9c2adfbdbd197bcf302ddc3dfb34c0f67e3abd0b

fix: push toString to the sealed trait

view details

Fredrik Wärnsberg

commit sha cbccf37b0c95ecae477e3804b28593315414ff06

fix: push toString to the sealed trait

view details

push time in 19 hours

push event frekw/caliban

Fredrik Wärnsberg

commit sha 48558df8550ea21c140e4857720749308d426405

fix: reuse toString where possible

view details

push time in 20 hours

push event frekw/caliban

Fredrik Wärnsberg

commit sha d6aae68e991041a14b3722d8008523905ae8b9aa

fix: reuse toString where possible

view details

push time in 20 hours

PR opened ghostdogpr/caliban

fix: Add Value.toInputString
+39 -22

0 comment

2 changed files

pr created time in 20 hours

create branch frekw/caliban

branch : fix/add-to-input-string

created branch time in 20 hours

PR opened ghostdogpr/caliban

fix: Object value formatting

So I realized I got tricked here yesterday. The issue isn't that the values are JSON (which is correct).

From the spec:

defaultValue may return a String encoding (using the GraphQL language) of the default value used by this input value in the condition a value is not provided at runtime. If this input value has no default value, returns null.

So our usage of Parser was correct. However, when we encode the ObjectValue we wrap the fields in double quotes ("), which isn't allowed per the spec:

Input object literal values are unordered lists of keyed input values wrapped in curly‐braces { }. The values of an object literal may be any input value literal or variable (ex. { name: "Hello world", score: 1.0 }). We refer to literal representation of input objects as “object literals.”

So this reverts to using the Parser as before, but fixes toString on ObjectValue.

I'm not sure exactly where toString is used, so this feels a tad scary, but all tests pass.
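For illustration only, here is a minimal sketch with simplified stand-in types (not Caliban's actual ones) of what "fixing toString on ObjectValue" amounts to: field names are rendered bare, while each value keeps its own literal syntax, e.g. { name: "Hello world", score: 1 }.

  // Simplified sketch, not Caliban's real InputValue hierarchy.
  sealed trait InputValue
  object InputValue {
    final case class StringValue(value: String) extends InputValue {
      override def toString: String = "\"" + value + "\""
    }
    final case class IntValue(value: Int) extends InputValue {
      override def toString: String = value.toString
    }
    final case class ObjectValue(fields: List[(String, InputValue)]) extends InputValue {
      // No quotes around the field names, per the "object literal" rules quoted above.
      override def toString: String =
        fields.map { case (name, value) => s"$name: $value" }.mkString("{ ", ", ", " }")
    }
  }

  // InputValue.ObjectValue(List("name" -> InputValue.StringValue("Hello world"), "score" -> InputValue.IntValue(1))).toString
  // => { name: "Hello world", score: 1 }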

+3 -3

0 comment

2 changed files

pr created time in 21 hours

push event frekw/caliban

Fredrik Wärnsberg

commit sha 59dfb0acc4505461d35bea5d3bf6ea96cc1ff9c5

fix: Object value formatting

view details

push time in 21 hours

create branch frekw/caliban

branch : fix/object-value-formatting

created branch time in 21 hours

delete branch frekw/caliban

delete branch : fix/default-value-validation-edge-cases

delete time in a day

Pull request review comment ghostdogpr/caliban

fix: Handle additional default value edge cases

 object IntrospectionClient {
     `type`: Type,
     defaultValue: Option[String]
   ): InputValueDefinition = {
-    val default = defaultValue.flatMap(v => Parser.parseInputValue(v).toOption)
+    val default = defaultValue.flatMap(v => decode[InputValue](v).toOption)

This was just a blunder on my part; I don't know what I was thinking.

frekw

comment created time in a day

PullRequestReviewEvent

PR opened ghostdogpr/caliban

fix: Handle additional default value edge cases

I realized that adding default values broke the stitching example, since there were a couple of edge cases we didn't handle properly:

  1. When we get an enum value from introspection it'll be a string, so we need to accept string values.
  2. null is a valid value for any scalar, since the NON_NULL wrapper is handled before dealing with the scalar itself.

I've verified that we now support the full GitHub API again, which is quite comprehensive.
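For illustration only, a small sketch with simplified stand-in types (not Caliban's actual API) of how the two cases above can be special-cased when checking an introspected default value:

  sealed trait TypeKind
  object TypeKind {
    case object SCALAR extends TypeKind
    case object ENUM   extends TypeKind
  }

  sealed trait DefaultValue
  object DefaultValue {
    case object Null                        extends DefaultValue
    final case class Str(value: String)     extends DefaultValue
    final case class Num(value: BigDecimal) extends DefaultValue
  }

  def acceptsDefault(kind: TypeKind, default: DefaultValue): Boolean =
    (kind, default) match {
      // 2. NON_NULL wrapping is handled before we get here, so null is
      //    acceptable for any remaining (nullable) type.
      case (_, DefaultValue.Null)               => true
      // 1. Introspection renders enum defaults as plain strings, so accept them.
      case (TypeKind.ENUM, DefaultValue.Str(_)) => true
      case (TypeKind.SCALAR, _)                 => true
      case _                                    => false
    }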

+58 -15

0 comment

3 changed files

pr created time in a day

push event frekw/caliban

Fredrik Wärnsberg

commit sha b56117abbeafe4b33b84cf10194a95647c04d5ae

fix: Handle additional default value edgecases

view details

push time in a day

create branch frekw/caliban

branch : fix/default-value-validation-edge-cases

created branch time in a day

delete branch frekw/zio

delete branch : feat/add-zstream-effect-async-managed

delete time in a day

push event frekw/zio

Scala Steward

commit sha 5dd5eb969158011449cf99b56b1177ee51e24a84

Update sbt-bloop to 1.4.6 (#4482)

view details

Scala Steward

commit sha 9659ed40b25947ae9427bcc45e4a018660434198

Update sbt-scalafix to 0.9.24 (#4462) * Update sbt-scalafix to 0.9.24 * Update sbt-scalafix to 0.9.24

view details

Adam Fraser

commit sha 1a80a7aae864c5d6e5b018db294e326522add71b

Implement Unfold Constructors For Chunk (#4470) * implement Chunk.unfold * suspend effects

view details

Devon Stewart

commit sha b543a652d2bb2b6a693c1529ed7642321b2f436d

Add a comment to cond directing towards ifM (#4485)

view details

Itamar Ravid

commit sha 8d33b85ed1d324e983319db00c79f63a797b4091

Generalize ZIO.getOrFail/Unit to getOrFailWith (#4491)

view details

Scala Steward

commit sha 6211601564fcc91306813444f5b461498e5c6681

Update reactor-core to 3.4.1 (#4492)

view details

Ondra Pelech

commit sha a5155aced11779008420b77ea83327d10c883844

Use IntelliJ's default order for Organize Imports (#4449)

view details

Daniel Vigovszky

commit sha 18ecbc7a709f5b8607e199434f3d2b8652dbb2c0

Temporary magnolia-free implementation of zio-test-magnolia on Scala 3 (#4487) * Temporary magnolia-free implementation of zio-test-magnolia on Scala 3 * Extend testJVMDotty with testMagnoliaTests

view details

Ondra Pelech

commit sha cdef32554f166b047fcaa02deebd309c60322f1d

Bintray for sbt-plugins/sbt-jcstress not necessary (#4496) * Bintray for sbt-plugins/sbt-jcstress not necessary * Add scalafix-rule-update to mergify

view details

Scala Steward

commit sha 27ea1fdcd5132f22bd07ccf748521fe898d0a8ab

Update sbt-ci-release to 1.5.5 (#4484)

view details

Scala Steward

commit sha 59a1c7a74ff66ca0ff6181814f8db717d02aaebb

Update sbt-explicit-dependencies to 0.2.16 (#4474) * Update sbt-explicit-dependencies to 0.2.16 * Update sbt-explicit-dependencies to 0.2.16

view details

scalavision

commit sha 7bcc0f7d09bfc9ac3262c08af9b22ed4a4e59f32

add support for Scala 3.0.0-M2 (#4483)

view details

Adam Fraser

commit sha 6a003d01884f79b7e46ad48ddc28b0e47d33ff19

implement unfoldGen (#4497)

view details

Adam Fraser

commit sha 2e2635b1b09f1bb390fe41079851a6def31b1577

Fix Race Condition in ZIO#effectAsyncInterrupt (#4490) * fix race condition in effectAsyncInterrupt * simplify test

view details

Scala Steward

commit sha 0cf07574cbddad5ef3a6681f01bb4fe19bc03333

Update sbt to 1.4.5 (#4501)

view details

Scala Steward

commit sha fe75bd4d783ecccd42cf106feb43f471e2457bb8

Update sbt-dotty to 0.5.0 (#4502)

view details

Devon Stewart

commit sha 40b1753516e68e023348c6950b0fc86333ee2e24

Add includeCase parameter to SummaryBuilder (#4486)

Suppress the second Cause during ZTestFramework output. Currently, the output is of the structure:

- SuiteName
- Test label
Fiber failed.
<snip>
Ran 1 test in 2 s 167 ms: 0 succeeded, 0 ignored, 1 failed
- SuiteName
- Test label
Fiber failed.
<snip>
Done

This obscures the individual test failures, the summary at the very end should only have the SuiteName and Test label, with stacktrace suppressed, to highlight which tests are having a problem to make it easier to scroll up in the test log. With this alteration, the output of the test structure now looks like:

- SuiteName
- Test label
Fiber failed.
<snip>
Ran 1 test in 2 s 167 ms: 0 succeeded, 0 ignored, 1 failed
- SuiteName
- Test label
Done

(notice, the second `Fiber failed.` and stacktrace have been omitted)

view details

Ondra Pelech

commit sha 8aa78a8335fd7dff0ce3ad1491ed81101586d713

Adopt the new IntelliJ import order (#4505) * Adopt the new IntelliJ import order Intellij IDEA has recently changed the order of import blocks. For the good of our contributors, we should follow it. * fix * fix

view details

Scala Steward

commit sha 1eb105b9e40227ea743e4d91074b9711aaf4d7f5

Update scalacheck to 1.15.2 (#4506)

view details

Adam Fraser

commit sha 1f8e1bb8402d26a8a7ca6a193b56752765a37fb8

change logo (#4507)

view details

push time in 2 days

pull request comment zio/zio

Add ZStream.effectAsyncManaged

@adamgfraser done!

frekw

comment created time in 2 days

push event frekw/caliban

Fredrik Wärnsberg

commit sha cc54f222187bf9ec595a853fd9e3277cffd2287a

fix: Default value parsing

view details

push time in 2 days

PR opened ghostdogpr/caliban

fix: Default value propagation
+18 -7

0 comment

2 changed files

pr created time in 2 days

create branch frekw/caliban

branch : fix/propagate-default-value

created branch time in 2 days

Pull request review comment andreabrduque/pubsub-zstreams

Pubsub Subscriber as a ZStream

+package subscriber
+
+import com.google.api.core.ApiService.{ Listener, State }
+import com.google.api.gax.batching.FlowControlSettings
+import com.google.api.gax.core.InstantiatingExecutorProvider
+import com.google.cloud.pubsub.v1.{ AckReplyConsumer, MessageReceiver, Subscriber => GSubscriber }
+import com.google.pubsub.v1.{ ProjectSubscriptionName, PubsubMessage }
+import org.threeten.bp.Duration
+import zio.blocking._
+import zio.stream.ZStream
+import zio.{ IO, Queue, ZIO, ZManaged }
+
+import java.util.concurrent.TimeUnit
+
+object PubSubSubscriber {
+
+  def subscribe[A](
+      projectId: String,
+      subscription: String,
+      config: PubSubSubscriberConfig,
+      decoder: MessageDecoder[A]
+  ): ZManaged[Blocking, PubSubError, ZStream[Any, Throwable, DecodedMessage[A]]] =
+    for {
+      queue <-
+        Queue
+          .bounded[IO[Throwable, RawMessage]](config.maxOutstandingElementCount)
+          .toManaged(_.shutdown)
+
+      runtime <- ZIO.runtime[Any].toManaged_
+      _ <- createSubscriber(
+        projectId,
+        subscription,
+        config,
+        value =>
+          runtime.unsafeRunAsync_(
+            queue.offer(
+              value
+            )
+          )
+      )
+    } yield ZStream.repeatEffect(takeNextAndDecode(queue, decoder))

I would probably opt to use ZStream.async or ZStream.asyncM here instead, since it basically does the queuing etc. for you.
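For concreteness, a rough sketch of the shape this suggestion takes with ZIO 1.x's ZStream.effectAsync. PubSubError and RawMessage are the types from the snippet above, and onMessage is a hypothetical stand-in for the MessageReceiver wiring, so treat this as a sketch rather than a drop-in patch:

  import zio.IO
  import zio.stream.ZStream

  // Hypothetical registration point: however the Google subscriber hands us messages.
  def onMessage(handler: IO[PubSubError, RawMessage] => Unit): Unit = ???

  // ZStream.effectAsync keeps its own bounded internal buffer, so the hand-rolled
  // Queue and the runtime.unsafeRunAsync_(queue.offer(...)) plumbing go away.
  val raw: ZStream[Any, PubSubError, RawMessage] =
    ZStream.effectAsync[Any, PubSubError, RawMessage] { emit =>
      onMessage(value => emit(value.mapError(Some(_))))
    }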

andreabrduque

comment created time in 2 days

PullRequestReviewEvent

Pull request review comment andreabrduque/pubsub-zstreams

Pubsub Subscriber as a ZStream

+package subscriber
+
+import com.google.api.core.ApiService.{ Listener, State }
+import com.google.api.gax.batching.FlowControlSettings
+import com.google.api.gax.core.InstantiatingExecutorProvider
+import com.google.cloud.pubsub.v1.{ AckReplyConsumer, MessageReceiver, Subscriber => GSubscriber }
+import com.google.pubsub.v1.{ ProjectSubscriptionName, PubsubMessage }
+import org.threeten.bp.Duration
+import zio.blocking._
+import zio.stream.ZStream
+import zio.{ IO, Queue, ZIO, ZManaged }
+
+import java.util.concurrent.TimeUnit
+
+object PubSubSubscriber {
+
+  def subscribe[A](
+      projectId: String,
+      subscription: String,
+      config: PubSubSubscriberConfig,
+      decoder: MessageDecoder[A]
+  ): ZManaged[Blocking, PubSubError, ZStream[Any, Throwable, DecodedMessage[A]]] =
+    for {
+      queue <-
+        Queue
+          .bounded[IO[Throwable, RawMessage]](config.maxOutstandingElementCount)
+          .toManaged(_.shutdown)
+
+      runtime <- ZIO.runtime[Any].toManaged_
+      _ <- createSubscriber(
+        projectId,
+        subscription,
+        config,
+        value =>
+          runtime.unsafeRunAsync_(
+            queue.offer(
+              value
+            )
+          )
+      )
+    } yield ZStream.repeatEffect(takeNextAndDecode(queue, decoder))
+
+  private def createSubscriber(
+      projectId: String,
+      subscription: String,
+      config: PubSubSubscriberConfig,
+      callback: IO[PubSubError, RawMessage] => Unit
+  ): ZManaged[Blocking, PubSubError, GSubscriber] = {
+
+    //uses the default executor provider
+    //default number of threads is number of CPUs and opens only one stream parallel pull count (1)
+    val executorProvider = InstantiatingExecutorProvider
+      .newBuilder()
+      .build()
+
+    val flowControlSettings = FlowControlSettings
+      .newBuilder()
+      .setMaxOutstandingElementCount(config.maxOutstandingElementCount)
+      .setMaxOutstandingRequestBytes(config.maxOutstandingRequestBytes)
+      .build()
+
+    val name = ProjectSubscriptionName.of(projectId, subscription)
+    val subscriber =
+      GSubscriber
+        .newBuilder(name, new PubSubMessageReceiver(callback))
+        .setParallelPullCount(1)
+        .setFlowControlSettings(flowControlSettings)
+        .setExecutorProvider(executorProvider)
+        .setMaxAckExtensionPeriod(Duration.ofMillis(config.maxAckExtensionPeriod.toMillis))
+        .build()
+
+    subscriber.addListener(
+      new Listener {
+        override def failed(
+            state: State,
+            t: Throwable
+        ): Unit = callback(ZIO.fail(PubSubError(t)))

I'm not 100% sure this does what you want it to. Since the callback uses runtime.unsafeRunAsync_, I think this will only crash that fiber rather than propagate the error? I'm not sure without having tested it, though; this is tricky!

andreabrduque

comment created time in 2 days

PullRequestReviewEvent

Pull request review comment andreabrduque/pubsub-zstreams

Pubsub Subscriber as a ZStream

+package subscriber
+
+import com.google.api.core.ApiService.{ Listener, State }
+import com.google.api.gax.batching.FlowControlSettings
+import com.google.api.gax.core.InstantiatingExecutorProvider
+import com.google.cloud.pubsub.v1.{ AckReplyConsumer, MessageReceiver, Subscriber => GSubscriber }
+import com.google.pubsub.v1.{ ProjectSubscriptionName, PubsubMessage }
+import org.threeten.bp.Duration
+import zio.blocking._
+import zio.stream.ZStream
+import zio.{ IO, Queue, ZIO, ZManaged }
+
+import java.util.concurrent.TimeUnit
+
+object PubSubSubscriber {
+
+  def subscribe[A](
+      projectId: String,
+      subscription: String,
+      config: PubSubSubscriberConfig,
+      decoder: MessageDecoder[A]
+  ): ZManaged[Blocking, PubSubError, ZStream[Any, Throwable, DecodedMessage[A]]] =
+    for {
+      queue <-
+        Queue
+          .bounded[IO[Throwable, RawMessage]](config.maxOutstandingElementCount)
+          .toManaged(_.shutdown)
+
+      runtime <- ZIO.runtime[Any].toManaged_
+      _ <- createSubscriber(
+        projectId,
+        subscription,
+        config,
+        value =>
+          runtime.unsafeRunAsync_(
+            queue.offer(
+              value
+            )
+          )
+      )
+    } yield ZStream.repeatEffect(takeNextAndDecode(queue, decoder))
+
+  private def createSubscriber(
+      projectId: String,
+      subscription: String,
+      config: PubSubSubscriberConfig,
+      callback: IO[PubSubError, RawMessage] => Unit
+  ): ZManaged[Blocking, PubSubError, GSubscriber] = {
+
+    //uses the default executor provider
+    //default number of threads is number of CPUs and opens only one stream parallel pull count (1)
+    val executorProvider = InstantiatingExecutorProvider
+      .newBuilder()
+      .build()
+
+    val flowControlSettings = FlowControlSettings
+      .newBuilder()
+      .setMaxOutstandingElementCount(config.maxOutstandingElementCount)
+      .setMaxOutstandingRequestBytes(config.maxOutstandingRequestBytes)
+      .build()
+
+    val name = ProjectSubscriptionName.of(projectId, subscription)
+    val subscriber =
+      GSubscriber
+        .newBuilder(name, new PubSubMessageReceiver(callback))
+        .setParallelPullCount(1)
+        .setFlowControlSettings(flowControlSettings)
+        .setExecutorProvider(executorProvider)
+        .setMaxAckExtensionPeriod(Duration.ofMillis(config.maxAckExtensionPeriod.toMillis))
+        .build()
+
+    subscriber.addListener(
+      new Listener {
+        override def failed(
+            state: State,
+            t: Throwable
+        ): Unit = callback(ZIO.fail(PubSubError(t)))
+      },
+      executorProvider.getExecutor()
+    )
+
+    effectBlocking(subscriber.startAsync().awaitRunning())
+      .mapBoth(PubSubError, _ => subscriber)
+      .toManaged(s =>
+        effectBlockingInterrupt(

I'm not sure effectBlockingInterrupt gives you anything here, since a ZManaged's finalizer is uninterruptible. 🤔
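For illustration only, the acquire/release from the diff above with that one change applied; shutdownBlocking is a hypothetical placeholder for whatever the truncated effectBlockingInterrupt call actually runs:

  import zio.blocking._

  // Hypothetical placeholder for the blocking shutdown call in the PR.
  def shutdownBlocking(s: GSubscriber): Unit = ???

  effectBlocking(subscriber.startAsync().awaitRunning())
    .mapBoth(PubSubError, _ => subscriber)
    .toManaged(s =>
      // A ZManaged finalizer runs uninterruptibly, so effectBlockingInterrupt behaves
      // like plain effectBlocking here; .orDie turns finalizer errors into defects.
      effectBlocking(shutdownBlocking(s)).orDie
    )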

andreabrduque

comment created time in 2 days

delete branch frekw/caliban

delete branch : feat/gqldefault

delete time in 2 days

pull request comment ghostdogpr/caliban

feat: Support @GQLDefault

Added docs in 37dd3a1. 👍

Imports management is THE reason I can't use Metals 😆

I've learnt to live with it; having it be automatic is just too big a convenience 😅. But it seems it hadn't run correctly on all files, so I've cleaned up the imports.

Looks great otherwise! 🙇

frekw

comment created time in 2 days