If you are wondering where the data of this site comes from, please visit https://api.github.com/users/expipiplus1/events. GitMemory does not store any data, but only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Joe Hermaszewski expipiplus1 Singapore My code is so strongly typed I'm going to need a new keyboard! He/Him

expipiplus1/aarch64-build-box 0

Config for the Community aarch64 NixOS box.

expipiplus1/actions-playground 0

A place for me to experiment with Github Actions

expipiplus1/ad 0

Automatic Differentiation

expipiplus1/all-hies 0

Cached Haskell IDE Engine Nix builds for all GHC versions

expipiplus1/apply-refact 0

Refactor Haskell source files

expipiplus1/arenacolles 0

A unity game of martian exploration

Pull request review comment NixOS/nixos-hardware

raspberry-pi/4: Add poe-hat option

+{ config, lib, pkgs, ... }:
+
+let
+  cfg = config.hardware.raspberry-pi."4".poe-hat;
+in {
+  options.hardware = {
+    raspberry-pi."4".poe-hat = {
+      enable = lib.mkEnableOption ''
+        Enable support for the Raspberry Pi POE Hat.
+      '';

lib.mkEnableOption prepends the string "Whether to enable " to the string that is provided as its argument. (source)

This way we will get "Whether to enable Enable support for the ...".

I suggest "support for the Raspberry Pi POE Hat."
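As a sketch, the suggested fix would look something like this (a hypothetical, trimmed version of the module from the diff above, not the actual patch):

```nix
{ lib, ... }:
{
  # mkEnableOption already prepends "Whether to enable ", so the
  # description should start directly with the thing being enabled.
  options.hardware.raspberry-pi."4".poe-hat = {
    enable = lib.mkEnableOption ''
      support for the Raspberry Pi POE Hat
    '';
  };
}
```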

walkah

comment created time in 21 hours

issue opened NixOS/nixos-hardware

Support for Asus ROG Zephyrus G14 GA401

Specs: https://rog.asus.com/laptops/rog-zephyrus/rog-zephyrus-g14-series/spec

created time in a day

pull request comment tomjaguarpaw/haskell-opaleye

Make all queries LATERAL all the time

I'm going to prioritise this, but rather than make it LATERAL all the time (which will likely scare other Opaleye users) I am going to rewrite lateral to use runQueryArr rather than runSimpleQueryArr. That should avoid https://github.com/circuithub/rel8/issues/72.

It needs a bit of fiddling but I hope to have it done this week.

duairc

comment created time in a day

Pull request review comment tomjaguarpaw/haskell-opaleye

Improve JSON support

 jsonFieldParser, jsonbFieldParser :: FieldParser String
 jsonFieldParser  = jsonFieldTypeParser (String.fromString "json")
 jsonbFieldParser = jsonFieldTypeParser (String.fromString "jsonb")
+jsonFieldTextParser, jsonbFieldTextParser :: FieldParser ST.Text
+jsonFieldTextParser  = jsonFieldTypeTextParser (String.fromString "json")
+jsonbFieldTextParser = jsonFieldTypeTextParser (String.fromString "jsonb")
+
+jsonFieldLazyTextParser, jsonbFieldLazyTextParser :: FieldParser LT.Text
+jsonFieldLazyTextParser  = jsonFieldTypeLazyTextParser (String.fromString "json")
+jsonbFieldLazyTextParser = jsonFieldTypeLazyTextParser (String.fromString "jsonb")
+
+jsonFieldByteParser, jsonbFieldByteParser :: FieldParser SBS.ByteString
+jsonFieldByteParser  = jsonFieldTypeByteParser (String.fromString "json")
+jsonbFieldByteParser = jsonFieldTypeByteParser (String.fromString "jsonb")
+
+jsonFieldLazyByteParser, jsonbFieldLazyByteParser :: FieldParser LBS.ByteString
+jsonFieldLazyByteParser  = jsonFieldTypeLazyByteParser (String.fromString "json")
+jsonbFieldLazyByteParser = jsonFieldTypeLazyByteParser (String.fromString "jsonb")
+
 -- typenames, not type Oids are used in order to avoid creating
 -- a dependency on 'Database.PostgreSQL.LibPQ'
 --
 -- Eventually we want to move this to postgresql-simple
 --
 --     https://github.com/tomjaguarpaw/haskell-opaleye/issues/329
 jsonFieldTypeParser :: SBS.ByteString -> FieldParser String
-jsonFieldTypeParser jsonTypeName field mData = do
+jsonFieldTypeParser x = (fmap . fmap . fmap) IPT.strictDecodeUtf8 (jsonFieldTypeByteParser x)
+
+jsonFieldTypeTextParser :: SBS.ByteString -> FieldParser ST.Text
+jsonFieldTypeTextParser x = (fmap . fmap . fmap) STE.decodeUtf8 (jsonFieldTypeByteParser x)
+
+jsonFieldTypeLazyTextParser :: SBS.ByteString -> FieldParser LT.Text
+jsonFieldTypeLazyTextParser x = (fmap . fmap . fmap) (LTE.decodeUtf8 . LBS.fromStrict) (jsonFieldTypeByteParser x)
+
+jsonFieldTypeByteParser :: SBS.ByteString -> FieldParser SBS.ByteString
+jsonFieldTypeByteParser jsonTypeName field mData = do
     ti <- typeInfo field
     if TI.typname ti == jsonTypeName
        then convert
        else returnError Incompatible field "types incompatible"
   where
     convert = case mData of
-        Just bs -> pure $ IPT.strictDecodeUtf8 bs
+        Just bs -> pure bs
         _       -> returnError UnexpectedNull field ""
 
+jsonFieldTypeLazyByteParser :: SBS.ByteString -> FieldParser LBS.ByteString
+jsonFieldTypeLazyByteParser x = (fmap . fmap . fmap) LBS.fromStrict (jsonFieldTypeByteParser x)

Done 😄

njaremko

comment created time in a day

Pull request review comment tomjaguarpaw/haskell-opaleye

Improve JSON support

 jsonFieldParser, jsonbFieldParser :: FieldParser String
 jsonFieldParser  = jsonFieldTypeParser (String.fromString "json")
 jsonbFieldParser = jsonFieldTypeParser (String.fromString "jsonb")
+jsonFieldTextParser, jsonbFieldTextParser :: FieldParser ST.Text
+jsonFieldTextParser  = jsonFieldTypeTextParser (String.fromString "json")
+jsonbFieldTextParser = jsonFieldTypeTextParser (String.fromString "jsonb")
+
+jsonFieldLazyTextParser, jsonbFieldLazyTextParser :: FieldParser LT.Text
+jsonFieldLazyTextParser  = jsonFieldTypeLazyTextParser (String.fromString "json")
+jsonbFieldLazyTextParser = jsonFieldTypeLazyTextParser (String.fromString "jsonb")
+
+jsonFieldByteParser, jsonbFieldByteParser :: FieldParser SBS.ByteString
+jsonFieldByteParser  = jsonFieldTypeByteParser (String.fromString "json")
+jsonbFieldByteParser = jsonFieldTypeByteParser (String.fromString "jsonb")
+
+jsonFieldLazyByteParser, jsonbFieldLazyByteParser :: FieldParser LBS.ByteString
+jsonFieldLazyByteParser  = jsonFieldTypeLazyByteParser (String.fromString "json")
+jsonbFieldLazyByteParser = jsonFieldTypeLazyByteParser (String.fromString "jsonb")
+
 -- typenames, not type Oids are used in order to avoid creating
 -- a dependency on 'Database.PostgreSQL.LibPQ'
 --
 -- Eventually we want to move this to postgresql-simple
 --
 --     https://github.com/tomjaguarpaw/haskell-opaleye/issues/329
 jsonFieldTypeParser :: SBS.ByteString -> FieldParser String
-jsonFieldTypeParser jsonTypeName field mData = do
+jsonFieldTypeParser x = (fmap . fmap . fmap) IPT.strictDecodeUtf8 (jsonFieldTypeByteParser x)
+
+jsonFieldTypeTextParser :: SBS.ByteString -> FieldParser ST.Text
+jsonFieldTypeTextParser x = (fmap . fmap . fmap) STE.decodeUtf8 (jsonFieldTypeByteParser x)
+
+jsonFieldTypeLazyTextParser :: SBS.ByteString -> FieldParser LT.Text
+jsonFieldTypeLazyTextParser x = (fmap . fmap . fmap) (LTE.decodeUtf8 . LBS.fromStrict) (jsonFieldTypeByteParser x)
+
+jsonFieldTypeByteParser :: SBS.ByteString -> FieldParser SBS.ByteString
+jsonFieldTypeByteParser jsonTypeName field mData = do
     ti <- typeInfo field
     if TI.typname ti == jsonTypeName
        then convert
        else returnError Incompatible field "types incompatible"
   where
     convert = case mData of
-        Just bs -> pure $ IPT.strictDecodeUtf8 bs
+        Just bs -> pure bs
         _       -> returnError UnexpectedNull field ""
 
+jsonFieldTypeLazyByteParser :: SBS.ByteString -> FieldParser LBS.ByteString
+jsonFieldTypeLazyByteParser x = (fmap . fmap . fmap) LBS.fromStrict (jsonFieldTypeByteParser x)

Nice! Do you mind adding another fmap and getting rid of those xs? :D

njaremko

comment created time in a day

Pull request review comment tomjaguarpaw/haskell-opaleye

Improve JSON support

 jsonFieldTypeParser jsonTypeName field mData = do
         Just bs -> pure $ IPT.strictDecodeUtf8 bs
         _       -> returnError UnexpectedNull field ""
 
+jsonFieldTypeTextParser :: SBS.ByteString -> FieldParser ST.Text
+jsonFieldTypeTextParser jsonTypeName field mData = do
+    ti <- typeInfo field
+    if TI.typname ti == jsonTypeName
+       then convert
+       else returnError Incompatible field "types incompatible"
+  where
+    convert = case mData of
+        Just bs -> pure $ STE.decodeUtf8 bs
+        _       -> returnError UnexpectedNull field ""
+
+jsonFieldTypeLazyTextParser :: SBS.ByteString -> FieldParser LT.Text
+jsonFieldTypeLazyTextParser jsonTypeName field mData = do
+    ti <- typeInfo field
+    if TI.typname ti == jsonTypeName
+       then convert
+       else returnError Incompatible field "types incompatible"
+  where
+    convert = case mData of
+        Just bs -> pure . LTE.decodeUtf8 $ LBS.fromStrict bs
+        _       -> returnError UnexpectedNull field ""
+
+jsonFieldTypeByteParser :: SBS.ByteString -> FieldParser SBS.ByteString
+jsonFieldTypeByteParser jsonTypeName field mData = do
+    ti <- typeInfo field
+    if TI.typname ti == jsonTypeName
+       then convert
+       else returnError Incompatible field "types incompatible"
+  where
+    convert = case mData of
+        Just bs -> pure bs
+        _       -> returnError UnexpectedNull field ""
+
+jsonFieldTypeLazyByteParser :: SBS.ByteString -> FieldParser LBS.ByteString
+jsonFieldTypeLazyByteParser jsonTypeName field mData = do
+    ti <- typeInfo field
+    if TI.typname ti == jsonTypeName
+       then convert
+       else returnError Incompatible field "types incompatible"
+  where
+    convert = case mData of
+        Just bs -> pure $ LBS.fromStrict bs
+        _       -> returnError UnexpectedNull field ""

I can 😛. I was being lazy.

One moment.

njaremko

comment created time in a day

Pull request review comment tomjaguarpaw/haskell-opaleye

Improve JSON support

 jsonFieldTypeParser jsonTypeName field mData = do
         Just bs -> pure $ IPT.strictDecodeUtf8 bs
         _       -> returnError UnexpectedNull field ""
 
+jsonFieldTypeTextParser :: SBS.ByteString -> FieldParser ST.Text
+jsonFieldTypeTextParser jsonTypeName field mData = do
+    ti <- typeInfo field
+    if TI.typname ti == jsonTypeName
+       then convert
+       else returnError Incompatible field "types incompatible"
+  where
+    convert = case mData of
+        Just bs -> pure $ STE.decodeUtf8 bs
+        _       -> returnError UnexpectedNull field ""
+
+jsonFieldTypeLazyTextParser :: SBS.ByteString -> FieldParser LT.Text
+jsonFieldTypeLazyTextParser jsonTypeName field mData = do
+    ti <- typeInfo field
+    if TI.typname ti == jsonTypeName
+       then convert
+       else returnError Incompatible field "types incompatible"
+  where
+    convert = case mData of
+        Just bs -> pure . LTE.decodeUtf8 $ LBS.fromStrict bs
+        _       -> returnError UnexpectedNull field ""
+
+jsonFieldTypeByteParser :: SBS.ByteString -> FieldParser SBS.ByteString
+jsonFieldTypeByteParser jsonTypeName field mData = do
+    ti <- typeInfo field
+    if TI.typname ti == jsonTypeName
+       then convert
+       else returnError Incompatible field "types incompatible"
+  where
+    convert = case mData of
+        Just bs -> pure bs
+        _       -> returnError UnexpectedNull field ""
+
+jsonFieldTypeLazyByteParser :: SBS.ByteString -> FieldParser LBS.ByteString
+jsonFieldTypeLazyByteParser jsonTypeName field mData = do
+    ti <- typeInfo field
+    if TI.typname ti == jsonTypeName
+       then convert
+       else returnError Incompatible field "types incompatible"
+  where
+    convert = case mData of
+        Just bs -> pure $ LBS.fromStrict bs
+        _       -> returnError UnexpectedNull field ""

Can you remove the duplication in here by doing all the others as an (fmap . fmap) f jsonFieldTypeByteParser? f will be IPT.strictDecodeUtf8 for String, for example.
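To illustrate the suggested refactor as a hedged sketch (with toy types standing in for postgresql-simple's Field and Conversion, which are assumptions here): a FieldParser is roughly two function layers around a result functor, so once the type-name argument is applied, each fmap reaches through one layer and mapping over the parsed result takes three.

```haskell
-- Toy stand-in for FieldParser: two function layers plus a result functor.
-- (postgresql-simple's real Conversion monad is replaced by Either here.)
type ToyFieldParser a = String -> Maybe String -> Either String a

-- Base parser returning the raw value, playing the role of
-- jsonFieldTypeByteParser.
byteParser :: ToyFieldParser String
byteParser _field = maybe (Left "unexpected null") Right

-- Each fmap peels one layer: (->) String, (->) (Maybe String), and
-- finally Either String. Three fmaps map a function over the result.
lengthParser :: ToyFieldParser Int
lengthParser = (fmap . fmap . fmap) length byteParser

main :: IO ()
main = do
  print (lengthParser "f" (Just "json"))  -- Right 4
  print (lengthParser "f" Nothing)        -- Left "unexpected null"
```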

njaremko

comment created time in a day

PR opened tomjaguarpaw/haskell-opaleye

Improve JSON support
  • Improve JSON function documentation
  • Add ByteString instances for JSON types, so query response can be used directly.
  • Add Text instances for JSON types
+133 -4

0 comments

5 changed files

pr created time in a day

pull request comment tomjaguarpaw/haskell-opaleye

Make all queries LATERAL all the time

AFAIK using a lateral join will force a nested loop (rather than hash join or merge join) unless the postgresql optimizer is able to eliminate the lateral join by pulling up the lateral subquery. The subquery is only able to be pulled up if it satisfies a number of conditions outlined here. It cannot have any sort of aggregation, limit, offset, window function, etc.

So, to give an example, consider this query:

explain
  select
    cl.relname,
    co.conkey
  from pg_class cl
  left outer join pg_constraint co on cl.oid = co.conrelid
  ;

which generates the following plan:

QUERY PLAN
Hash Left Join  (cost=1.54..22.58 rows=446 width=87)
  Hash Cond: (cl.oid = co.conrelid)
  ->  Seq Scan on pg_class cl  (cost=0.00..17.46 rows=446 width=68)
  ->  Hash  (cost=1.24..1.24 rows=24 width=27)
        ->  Seq Scan on pg_constraint co  (cost=0.00..1.24 rows=24 width=27)

We could instead write this with a lateral query

explain
  select
    cl.relname,
    t.conkey
  from pg_class cl
  left outer join lateral (
    select *
    from pg_constraint co
    where cl.oid = co.conrelid
  ) as t on true
  ;

and generate an identical plan because the subquery satisfies the is_simple_subquery predicate.

If we make the subquery slightly more complex in a way that causes it to not satisfy the is_simple_subquery predicate then we cannot eliminate the subquery and the lateral remains, which forces the planner to use a nested loop for the lateral join.

Some examples:

explain
  select
    cl.relname,
    t.conkey
  from pg_class cl
  left outer join lateral (
    select array_agg(co.conkey) as conkey
    from pg_constraint co
    where cl.oid = co.conrelid
  ) as t on true
  ;
QUERY PLAN
Nested Loop Left Join  (cost=1.31..613.99 rows=446 width=96)
  ->  Seq Scan on pg_class cl  (cost=0.00..17.46 rows=446 width=68)
  ->  Aggregate  (cost=1.31..1.32 rows=1 width=32)
        ->  Seq Scan on pg_constraint co  (cost=0.00..1.30 rows=2 width=23)
              Filter: (cl.oid = conrelid)

compared to a non-lateral equivalent (assuming cl.relname is unique)

explain
  select
    cl.relname,
    array_agg(co.conkey)
  from pg_class cl
  left outer join pg_constraint co on cl.oid = co.conrelid
  group by cl.relname
  ;
QUERY PLAN
HashAggregate  (cost=24.81..30.39 rows=446 width=96)
  Group Key: cl.relname
  ->  Hash Left Join  (cost=1.54..22.58 rows=446 width=87)
        Hash Cond: (cl.oid = co.conrelid)
        ->  Seq Scan on pg_class cl  (cost=0.00..17.46 rows=446 width=68)
        ->  Hash  (cost=1.24..1.24 rows=24 width=27)
              ->  Seq Scan on pg_constraint co  (cost=0.00..1.24 rows=24 width=27)

Or we can even force the subquery to not be pulled up with the much sillier offset 0

explain
  select
    cl.relname,
    t.conkey
  from pg_class cl
  left outer join lateral (
    select *
    from pg_constraint co
    where cl.oid = co.conrelid
    offset 0
  ) as t on true
  ;
QUERY PLAN
Nested Loop Left Join  (cost=0.00..615.10 rows=892 width=87)
  ->  Seq Scan on pg_class cl  (cost=0.00..17.46 rows=446 width=68)
  ->  Seq Scan on pg_constraint co  (cost=0.00..1.30 rows=2 width=320)
        Filter: (cl.oid = conrelid)

The example @ocharles gives above

explain
select *
from
  (values (1), (2), (3)) as s1(x)
  cross join lateral (
    select *
    from (select 0) as q
    left join lateral (
      select *
      from (values (1), (2), (3)) as s2(x)
      where s1.x = s2.x
    ) as s2 on true
  ) as q;
QUERY PLAN
Nested Loop  (cost=0.00..0.29 rows=3 width=12)
  ->  Values Scan on "*VALUES*"  (cost=0.00..0.04 rows=3 width=4)
  ->  Nested Loop Left Join  (cost=0.00..0.07 rows=1 width=8)
        ->  Result  (cost=0.00..0.01 rows=1 width=0)
        ->  Values Scan on "*VALUES*_1"  (cost=0.00..0.05 rows=1 width=4)
              Filter: ("*VALUES*".column1 = column1)

fails to satisfy the is_simple_subquery predicate for a trickier reason. Have a look at this excerpt

 * If the subquery is LATERAL, check for pullup restrictions from that.
 */
if (rte->lateral)
{
        bool      restricted;
        Relids        safe_upper_varnos;

        /*
         * The subquery's WHERE and JOIN/ON quals mustn't contain any lateral
         * references to rels outside a higher outer join (including the case
         * where the outer join is within the subquery itself).  In such a
         * case, pulling up would result in a situation where we need to
         * postpone quals from below an outer join to above it, which is
         * probably completely wrong and in any case is a complication that
         * doesn't seem worth addressing at the moment.
         */
        if (lowest_outer_join != NULL)
        {
                restricted = true;
                safe_upper_varnos = get_relids_in_jointree((Node *) lowest_outer_join,
                                                                                                   true);
        }
        else
        {
                restricted = false;
                safe_upper_varnos = NULL; /* doesn't matter */
        }

        if (jointree_contains_lateral_outer_refs((Node *) subquery->jointree,
                                                 restricted, safe_upper_varnos))
                return false;

I believe that this means that the subquery

(
  select *
  from (values (1), (2), (3)) as s2(x)
  where s1.x = s2.x
)

fails to be pulled up since we have a qual s1.x = s2.x with lateral references to a relation outside of the lowest outer join (the join with select 0). It would be wrong to pull this subquery up since that qual would have to be moved from the nullable side of an outer join to the non-nullable side, and would begin filtering rows from the non-nullable side, changing the query.

We then consider pulling up the lateral subquery

(
      select *
      from (select 0) as q
      left join lateral (
        select *
        from (values (1), (2), (3)) as s2(x)
        where s1.x = s2.x
      ) as s2 on true
    )

which fails for essentially the same reason: If we pulled up this subquery then we cannot have the lateral qual s1.x = s2.x, so this condition would need to be pulled up to a where clause, but we refuse to pull it out of the nullable side of an outer join as it would change the meaning of the query. So, we do not pull this subquery up.

So, I believe any sort of query with the structure:

  select
    ...
  from src
  cross join lateral (
    select
      ...
    from blerg
    left outer join (lateral or not) (
      subquery with some lateral reference to src
    )
  )

will fail to have the lateral subquery pulled up, and thus will force a nested loop.

duairc

comment created time in a day

issue opened expipiplus1/vector-sized

type-safe lens

Hi

Wouldn't it make sense to add a type-safe lens? Using type applications it becomes quite compact.

ix' :: forall m (n :: Nat) a (f :: * -> *).
  (Functor f, KnownNat m, KnownNat n, m+1 <= n) =>
  (a -> f a) -> Vec.Vector n a -> f (Vec.Vector n a)
ix' = ix $ natToFinite (Proxy::Proxy m)

This can then be used like so:

ix' @7
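For comparison, here is a base-only sketch of the type-application idea (not vector-sized's actual API; elemAt is a hypothetical name, and unlike ix' over a sized vector it gets no compile-time bounds check from a plain list):

```haskell
{-# LANGUAGE AllowAmbiguousTypes #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeApplications #-}

import Data.Proxy (Proxy (..))
import GHC.TypeLits (KnownNat, Nat, natVal)

-- Hypothetical helper: select an element by a type-level index, so the
-- index can be passed with a type application, as in the ix' proposal.
elemAt :: forall (m :: Nat) a. KnownNat m => [a] -> a
elemAt xs = xs !! fromInteger (natVal (Proxy @m))

main :: IO ()
main = print (elemAt @2 "abcde")  -- 'c'
```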

created time in 2 days

issue comment tomjaguarpaw/haskell-opaleye

Port Rel8's eval

It probably would be good to have a release if you're going to merge this, because we will have to make changes to Rel8 to accommodate this, and at least if it's in a release we can set the version bounds appropriately. However, probably the more urgent problem for us is #480.

tomjaguarpaw

comment created time in 2 days


pull request comment tomjaguarpaw/haskell-opaleye

Make all queries LATERAL all the time

This actually can't be solved in the optimizer alone. This is what such an optimization pass would look like (not sure if 100% correct, but something like this):

optional :: Opaleye.PrimQuery' a -> Opaleye.PrimQuery' a
optional = Opaleye.foldPrimQuery Opaleye.primQueryFoldDefault
  { Opaleye.product = product
  }
  where
    product ps es = case go (toList ps) of
      [] -> Opaleye.Unit
      [p] -> snd p
      p : ps' -> Opaleye.Product (p :| ps') es
      where
        go ((x, q) : (x', Opaleye.Join jt on lb rb Opaleye.Unit rq) : rest) =
          (x <> x', Opaleye.Join jt on lb rb q rq) : rest
        go as = as

That will basically turn @ocharles's latter example into the former, but it won't be LEFT JOIN LATERAL, it will only be LEFT JOIN, which will generate an invalid query.

duairc

comment created time in 2 days

pull request comment tomjaguarpaw/haskell-opaleye

Make all queries LATERAL all the time

We've just discovered some more motivation for this. In https://github.com/circuithub/rel8/issues/72, it's reported that Rel8 generates a bad plan for a LEFT JOIN. What's happening is that we're using optional :: Query a -> Query (MaybeTable a) to introduce the LEFT JOIN, which works by taking the LEFT JOIN of the current Query with whatever Query is to be made optional. That is, in something like:

foo >> optional bar

We want something like Join LeftJoin fooQ barQ.

This isn't quite what happens at the moment, because >>= is implemented in terms of lateral, which itself uses runSimpleQueryArr. This means what you actually get is:

Product [fooQ, Join LeftJoin Unit barQ]

This spurious Unit wreaks havoc with PostgreSQL's optimizer. First, here's some "ideal" SQL:

postgres=# explain select * from (values (1), (2), (3)) s1 (x) left join lateral (select * from (select 0) q, (values (1), (2), (3)) s2 (x) where s1.x = s2.x) s2 on true;
                                QUERY PLAN                                 
---------------------------------------------------------------------------
 Hash Left Join  (cost=0.08..0.15 rows=3 width=12)
   Hash Cond: ("*VALUES*".column1 = "*VALUES*_1".column1)
   ->  Values Scan on "*VALUES*"  (cost=0.00..0.04 rows=3 width=4)
   ->  Hash  (cost=0.04..0.04 rows=3 width=8)
         ->  Values Scan on "*VALUES*_1"  (cost=0.00..0.04 rows=3 width=8)
(5 rows)

But look what happens if we introduce a Unit:

explain select * from (values (1), (2), (3)) s1 (x), lateral (select * from (select 0) q left join lateral (select * from (values (1), (2), (3)) s2 (x) where s1.x = s2.x) s2 on true) q;
                                QUERY PLAN                                 
---------------------------------------------------------------------------
 Nested Loop  (cost=0.00..0.32 rows=3 width=12)
   ->  Values Scan on "*VALUES*"  (cost=0.00..0.04 rows=3 width=4)
   ->  Nested Loop Left Join  (cost=0.00..0.07 rows=1 width=8)
         ->  Result  (cost=0.00..0.01 rows=1 width=4)
         ->  Values Scan on "*VALUES*_1"  (cost=0.00..0.05 rows=1 width=4)
               Filter: ("*VALUES*".column1 = column1)
(6 rows)

The worst-case cost has doubled, and we've deteriorated to a loop.

This could be solved in the optimizer, by spotting this unit and re-arranging the query appropriately. I think the simpler option is to just do this PR. Writing optimization passes is really hard and easy to get wrong!

duairc

comment created time in 2 days

issue comment tomjaguarpaw/haskell-opaleye

Port Rel8's eval

It's now up-to-date at https://github.com/tomjaguarpaw/haskell-opaleye/tree/endo.

Do you want this in a release or is in master good enough? I definitely will make a release with it in soon, but just checking whether you want me to expedite it.

tomjaguarpaw

comment created time in 2 days

push event tomjaguarpaw/haskell-opaleye

Tom Ellis

commit sha d2f39e056b9b0434e6ebacce0653c25e961d6192

Add another way of generating Kleisli arrows

This generates Kleisli arrows by composing a `Select F -> Select F` with a `F -> Select F`. Without such a rule we fail to generate

    (aggregate sumInt4 . pure) =<< values [1, 2, 3]

In particular we need to check the above to be sure we are using LATERAL correctly when aggregating subqueries that contain free variables.

view details

Tom Ellis

commit sha 3127b7469ee719d3d42e3ae101f3cadb7b33db88

Explanatory comment

view details

Tom Ellis

commit sha 0e60a2d389ab519880a4b8be955d6ac1732292f7

version -> 0.7.2.0

view details

Tom Ellis

commit sha 8b1e49d500e4751c601ef676f057ddc12a4394bc

The Tag shouldn't depend on the PrimQuery

view details

push time in 2 days

issue comment tomjaguarpaw/haskell-opaleye

Port Rel8's eval

And actually, also this comment as well. Basically, we actually do want that endo refactoring you did.

tomjaguarpaw

comment created time in 2 days


issue opened NixOS/nixos-hardware

Issues connecting nixos-21.05 on a Raspberry Pi 4 to multiple LG monitors

Hello there,

The Pi connects fine to a single LG monitor: xrandr detects it, and I'm able to do customizations with it (except that I'm unable to change brightness with xrandr, so I have to press the monitor's buttons to set the brightness manually). But whenever I try to connect the Pi to a second LG monitor (the exact same model), it stays stuck on the rainbow screen, and xrandr does not detect the second monitor either. I enabled home-manager's autorandr option, but that does not fix the issue. I'm posting this issue here because I'm not sure whether it's a hardware problem. I also haven't touched the Pi's bootloader config.txt file, so I don't know if that would help. If you could at least give some tips on how to troubleshoot this, that would be great. Thanks in advance.

created time in 2 days

MemberEvent

pull request comment NixOS/nixos-hardware

Add Dell XPS 13 9310

I just built nixos-21.05 with the latest kernel 5.12.11 and the bluetooth patch. I can confirm that wifi and bluetooth work for me, and the built-in microphone and speaker work as well. A PR to update would be great @terinjokes

mitchmindtree

comment created time in 2 days

created repository obsidiansystems/ledger-platform

Infrastructure for writing ledger apps, built with Nix

created time in 2 days

issue comment tomjaguarpaw/haskell-opaleye

Port Rel8's eval

For what it's worth, you might be interested in this comment here explaining the technique we eventually ended up using to get this working.

tomjaguarpaw

comment created time in 3 days

issue comment NixOS/cabal2nix

hackage2nix reproducibility

Pending tasks are:

  • [ ] Document at least the regenerate-hackage-packages.sh script in the nixpkgs manual
  • [ ] Remove the legacy hackage2nix scripts from this repository (I think they now only serve to confuse users)
roberth

comment created time in 3 days

pull request comment tomjaguarpaw/haskell-opaleye

Make all queries LATERAL all the time

@tomjaguarpaw I'm still interested in this by the way, I don't know if you're any more open to it now. I've rebased it onto the latest master anyway.

duairc

comment created time in 3 days

issue closed NixOS/cabal2nix

hackage2nix reproducibility

I've tried to run hackage2nix in order to test a change I made to the yaml file in Nixpkgs. I didn't make it through. I'll summarize my experience.

  1. I used git worktree add and git checkout --detach instead of git clone nixpkgs to speed it up. Didn't work because update-nixpkgs.sh pulls in the nixpkgs worktree.
  2. I had to add cabal, ghc, zlib and openssl to my shell before invoking update-nixpkgs.sh
  3. Still, cabal failed with
    $ ./update-nixpkgs.sh 
    <command line>: can't load .so/.DLL for: libz.so (libz.so: cannot open shared object file: No such file or directory)
    cabal: Failed to build cabal2nix-2.15.0 (which is required by exe:hackage2nix
    from cabal2nix-2.15.0).
    

Proposed solution

I think the script can be nixified. I suggest adding the following to Nixpkgs:

  • pin files for
    • cabal2nix repo revision
    • all-cabal-hashes repo revision
  • a port of update-nixpkgs.sh, limited to generation
    • using the pins
    • no assumption about the caller's environment (nix-shell shebang or similar)
    • no git logic, just generation
  • a script to update the pins and call the codegen script
  • assuming the code generation is now pure, a derivation that tests whether the generated code is up to date, to prevent hand-editing and ensure reproducibility

This will let anyone regenerate the package set in a reproducible way. This also makes #219 trivial. Just mention the pin files in a comment.

Did I miss something that makes the above not work?

closed time in 3 days

roberth

issue comment NixOS/cabal2nix

hackage2nix reproducibility

The steps in Nixpkgs to regenerate the packages have been reduced to essentially two steps, which is close enough for me.

My notes for a quick test with updated hackage, distilled from the more complete pkgs/development/haskell-modules/HACKING.md (or its permalink)

Test newly uploaded release

Check that all-cabal-hashes is up to date with your upload. Run:

./maintainers/scripts/haskell/update-hackage.sh
./maintainers/scripts/haskell/regenerate-hackage-packages.sh
nix-build -A haskellPackages.foo
roberth

comment created time in 3 days

PR opened expipiplus1/vulkan

Update Vulkan to v1.2.182

Integrity holding at ninety percent.

Diff without documentation changes

+1 -1

0 comments

1 changed file

pr created time in 3 days

created tag expipiplus1/vulkan

tag v3.11

Haskell bindings for Vulkan

created time in 3 days

created tag expipiplus1/vulkan

tag vma-v0.6

Haskell bindings for Vulkan

created time in 3 days