Divesh Otwani (Divesh-Otwani), Tweag I/O, Princeton, NJ

I'm a Haskeller and I like solving hard problems with rigorous math. I graduated with a B.S. in Math & CS from Haverford College.

bgamari/the-thoralf-plugin 22

This is a type-checker plugin to rule all type-checker plugins involving type-equality reasoning, using SMT solvers.

Divesh-Otwani/haskell-final-project 1

This is my final project for my introductory Haskell class.

Divesh-Otwani/better-bionic-workshop 0

This is a growing repository for FIG, Haverford's CS club.

Divesh-Otwani/chapel-vim 0

A repo for Vundle to install Chapel Vim highlighting (instead of doing things locally).

Divesh-Otwani/cs-thesis 0

These are my optimizations of syr2k from PolyBench 4.0 for my undergraduate thesis.

Divesh-Otwani/online-workspace 0

This is my generic coding repository.

Divesh-Otwani/Simple-Clock 0

A combination of a clock, alarm, stopwatch and timer.

Divesh-Otwani/therapy-assistant 0

This is an app to assist therapists with certain therapies, starting with CBT. It does not replace therapy but serves to complement human therapy.

Divesh-Otwani/type-verified-concurrent-linkedlist 0

This is my final project for my Concurrency class.

pull request comment tweag/linear-base

Changing (>>) in monad to accept consumable first inputs

LGTM.

saolof

comment created time in 5 days

pull request comment input-output-hk/cardano-node

Improve GenTx ToObject and ToJSON instances for logs

bors merge

Divesh-Otwani

comment created time in a month

PR opened input-output-hk/cardano-node

Improve GenTx ToObject and ToJSON instances for logs

Description

This PR works on https://github.com/input-output-hk/cardano-node/issues/721 towards rendering transaction ids in logs nicely.

The culprit was the reuse of the Condense instances from the consensus repo.

As far as I understand, the tracers that write to the log file (or elsewhere) rely on

type TraceConstraints blk =
    ( ConvertTxId blk
    , HasTxs blk
    , HasTxId (GenTx blk)
    , LedgerQueries blk
    , ToJSON   (TxId (GenTx blk))
    , ToJSON   (TxOut (AlonzoEra StandardCrypto))
    , ToJSON   (PParamsUpdate (AlonzoEra StandardCrypto))
    , ToObject (ApplyTxErr blk)
    , ToObject (GenTx blk)
    , ToObject (Header blk)
    , ToObject (LedgerError blk)
    , ToObject (LedgerEvent blk)
    , ToObject (OtherHeaderEnvelopeError blk)
    , ToObject (ValidationErr (BlockProtocol blk))
    , ToObject (CannotForge blk)
    , ToObject (ForgeStateUpdateError blk)
    , ToObject (UtxoPredicateFailure (AlonzoEra StandardCrypto))
    , ToObject (AlonzoBbodyPredFail (AlonzoEra StandardCrypto))
    , ToObject (AlonzoPredFail (AlonzoEra StandardCrypto))
    , Show blk
    , Show (Header blk)
    )

Currently, these constraints don't mention Condense. As far as I can tell, the only uses are in Cardano.Tracing.OrphanInstances.{Consensus, Shelley, Byron, HardFork}, specifically in the ToObject (GenTx blk) and ToJSON (TxId (GenTx blk)) instances in the Shelley and Byron modules, so I only touched those.
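
As an illustration only (not the actual patch), the shape of the change is to stop rendering transaction ids through Condense and to emit an explicit "txid" field in the JSON object instead. The types and the helper below are hypothetical stand-ins, not the real consensus types:

{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (ToJSON (..), object, (.=))
import qualified Data.Text as Text

-- Hypothetical stand-ins for the real GenTx / TxId types; for illustration only.
newtype ExampleTxId = ExampleTxId Text.Text

newtype ExampleGenTx = ExampleGenTx { exTxId :: ExampleTxId }

-- Instead of going through a Condense instance (which produced the noisy output
-- reported in issue #721), render the txid as a short, explicit JSON field,
-- matching the truncated hashes in the log excerpt below.
instance ToJSON ExampleGenTx where
  toJSON tx = object ["txid" .= renderTxId (exTxId tx)]
    where
      renderTxId (ExampleTxId t) = Text.take 8 t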

Testing

I tested this as follows, relying on simple-tx.sh from https://github.com/input-output-hk/cardano-node/pull/3400:

$ ./scripts/byron-to-alonzo/mkfiles.sh alonzo
$ ./example/run/all.sh
$ ./scripts/byron-to-alonzo-simple-tx.sh
$ grep "txid" example/node-bft1/node.log
[haldane:cardano.node.Mempool:Info:143] [2021-12-08 21:54:40.57 UTC] fromList [("mempoolSize",Object (fromList [("bytes",Number 239.0),("numTxs",Number 1.0)])),("tx",Object (fromList [("txid",String "ba5e446c")])),("kind",String "TraceMempoolAddedTx")]
[haldane:cardano.node.Mempool:Info:31] [2021-12-08 21:54:41.14 UTC] fromList [("txs",Array [Object (fromList [("txid",String "ba5e446c")])]),("mempoolSize",Object (fromList [("bytes",Number 0.0),("numTxs",Number 0.0)])),("kind",String "TraceMempoolRemoveTxs")]

Notes

  • I tried earlier to convert GenTx blk into a Cardano.API.Tx.Tx era and then render that as JSON. This has some issues, since not all branches of GenTx ByronBlock can be converted to Tx ByronEra (a delegation certificate, for example).
  • There are several other uses of Condense that should also be removed.
+6 -13

0 comment

2 changed files

pr created time in a month

push event input-output-hk/cardano-node

Divesh Otwani

commit sha 6f3e4506bacdcba4d6ac720a5b980db6c810c4a2

Improve GenTx ToObject and ToJSON instances for logs

view details

push time in a month

push event input-output-hk/cardano-node

Divesh Otwani

commit sha 3e101c144649f2f7a4d8e9216c886b03344e6cd9

Improve GenTx ToObject and ToJSON instances for logs

view details

push time in a month

create branch input-output-hk/cardano-node

branch : divesh/txid-rendering

created branch time in a month

push event input-output-hk/cardano-node

Divesh Otwani

commit sha 5bad2ca83e65455c10f26f556cfb42c76793819c

scripts: simple transaction example in alonzo era

view details

push time in 2 months

Pull request review comment input-output-hk/cardano-node

scripts: simple transaction example in alonzo era

+#!/usr/bin/env bash
+
+set -e
+set -x
+
+# This script creates, signs, and submits a simple transaction in the Alonzo era.
+# We simply generate a new user and send him some money from a pre-build utxo account.
+# To use: start an alonzo cluster and then run as ./scripts/byron-to-alonzo/simple-tx.sh
+
+ROOT=example
+TESTNET=42
+WORK=simple-tx-files
+CLI=cardano-cli
+pushd ${ROOT}
+mkdir ${WORK}
+sleep 15 # Wait for things to initialize and the socket to be ready

Yes, I'll omit that and just put a note at the top to run the script after the cluster has started and you can query the tip from node-bft1, say.

Divesh-Otwani

comment created time in 2 months

PullRequestReviewEvent
PullRequestReviewEvent

Pull request review comment input-output-hk/cardano-node

scripts: simple transaction example in alonzo era

 Contains all the scripts relevant to benchmarking `cardano-node`. See the benchm
 #### buildkite
 Contains scripts relevant to IOHK's CI.
-#### byron-shelley-allegra-mary
-Contains a script that sets up a cluster beginning in the Byron era and can transition to the Shelley era. You can also start a cluster in the Shelley, Allegra or Mary era by supplying an argument to `mk-files.sh`.
+### Clusters
+
+Most clusters sub-folders contain a `mkfiles.sh` script which creates a top level

Yes, makes sense :+1:

Divesh-Otwani

comment created time in 2 months

Pull request review comment input-output-hk/cardano-node

scripts: simple transaction example in alonzo era

+#!/usr/bin/env bash
+
+set -e
+set -x
+
+# This script creates, signs, and submits a simple transaction in the Alonzo era.
+# We simply generate a new user and send him some money from a pre-build utxo account.

:+1:

Divesh-Otwani

comment created time in 2 months

PullRequestReviewEvent

PR opened input-output-hk/cardano-node

scripts: simple transaction example in alonzo era

This PR improves the docs of /scripts and adds a script for making a simple transaction in the Alonzo era.

This should also help with https://github.com/input-output-hk/cardano-node/issues/721 by providing an easy example to check the rendering of txids in the node log files.

+101 -6

0 comment

2 changed files

pr created time in 2 months

create branch input-output-hk/cardano-node

branch : divesh/alonzo-cluster-doc-and-ex

created branch time in 2 months

started katychuang/getting-started-with-haskell

started time in 2 months

issue comment tweag/linear-base

Dupable wars, episode IV: a new plan

The last definition seems elegant to me:

-- Requires the GADTs and LinearTypes extensions.
-- Fields: extract an element, split the seed, discard the seed, and the seed itself.
data Streamish a where
  Streamish :: (s %1 -> a) -> (s %1 -> (s, s)) -> (s %1 -> ()) -> s %1 -> Streamish a

I generally don't like super-polymorphic definitions unless we see a strong need, but I think this addresses (1) n-generation, (2) an Applicative instance, and (3) seed-friendly replication, if the way you duplicate a Movable value is by making s some Ur a.
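
For (3), a minimal sketch of what seed-friendly replication of a Movable value could look like, assuming the Streamish definition above and Ur from linear-base (the helper name replicateUr is made up):

{-# LANGUAGE LinearTypes, ScopedTypeVariables #-}

import Data.Unrestricted.Linear (Ur (..))

-- Hypothetical helper: replicate a value seed-style by instantiating the seed 's' with 'Ur a'.
replicateUr :: forall a. Ur a %1 -> Streamish a
replicateUr ur = Streamish extract split discard ur
  where
    -- Ur's field is unrestricted, so after matching we may use it any number of times.
    extract :: Ur a %1 -> a
    extract (Ur a) = a

    split :: Ur a %1 -> (Ur a, Ur a)
    split (Ur a) = (Ur a, Ur a)

    discard :: Ur a %1 -> ()
    discard (Ur _) = ()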

aspiwack

comment created time in 3 months

issue comment tweag/linear-base

Optimizing linear arrays

@utdemir Here are the benchmark results I get on my machine:

benchmarking arrays/toList/Data.Array.Mutable.Linear ... took 10.09 s, total 56 iterations
benchmarked arrays/toList/Data.Array.Mutable.Linear
time                 168.9 ms   (162.7 ms .. 175.4 ms)
                     0.998 R²   (0.994 R² .. 1.000 R²)
mean                 188.3 ms   (181.4 ms .. 198.8 ms)
std dev              14.52 ms   (7.924 ms .. 22.36 ms)
variance introduced by outliers: 19% (moderately inflated)

benchmarking arrays/toList/Data.Vector ... took 10.23 s, total 56 iterations
benchmarked arrays/toList/Data.Vector
time                 173.6 ms   (169.4 ms .. 176.2 ms)
                     0.999 R²   (0.998 R² .. 1.000 R²)
mean                 190.5 ms   (184.5 ms .. 202.4 ms)
std dev              13.83 ms   (7.228 ms .. 21.45 ms)
variance introduced by outliers: 19% (moderately inflated)

benchmarking arrays/map/Data.Array.Mutable.Linear ... took 45.81 s, total 56 iterations
benchmarked arrays/map/Data.Array.Mutable.Linear
time                 788.9 ms   (775.1 ms .. 803.4 ms)
                     0.999 R²   (0.999 R² .. 1.000 R²)
mean                 854.8 ms   (826.4 ms .. 919.7 ms)
std dev              71.13 ms   (25.08 ms .. 111.6 ms)
variance introduced by outliers: 28% (moderately inflated)

benchmarking arrays/map/Data.Vector ... took 45.97 s, total 56 iterations
benchmarked arrays/map/Data.Vector
time                 800.0 ms   (781.5 ms .. 812.7 ms)
                     0.999 R²   (0.999 R² .. 1.000 R²)
mean                 846.4 ms   (831.0 ms .. 874.0 ms)
std dev              34.90 ms   (17.92 ms .. 57.73 ms)

benchmarking arrays/reads/Data.Array.Mutable.Linear ... took 8.839 s, total 56 iterations
benchmarked arrays/reads/Data.Array.Mutable.Linear
time                 145.3 ms   (139.3 ms .. 149.5 ms)
                     0.998 R²   (0.997 R² .. 1.000 R²)
mean                 166.0 ms   (159.1 ms .. 177.7 ms)
std dev              15.09 ms   (8.732 ms .. 23.39 ms)
variance introduced by outliers: 28% (moderately inflated)

benchmarked arrays/reads/Data.Vector
time                 50.73 ms   (50.33 ms .. 51.21 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 58.73 ms   (55.92 ms .. 67.09 ms)
std dev              7.917 ms   (3.382 ms .. 13.57 ms)
variance introduced by outliers: 52% (severely inflated)

Benchmark mutable-data: FINISH

I think the remaining focus is `get`, as you've pointed out.

utdemir

comment created time in 3 months

issue comment tweag/linear-base

Dupable wars, episode IV: a new plan

That's what this does (though, as @aspiwack mentioned, it does too many deep copies when the original list only needs to be deep-copied once):

instance Dupable a => Dupable [a] where
  dupS [] = pure []
  dupS (a:as) = (:) <$> dupS a <*> dupS as
aspiwack

comment created time in 3 months

issue comment tweag/linear-base

Dupable wars, episode IV: a new plan

For lists, dupS [1,2,3,...] should be [[1,2,3,...], [1,2,3,...], ...], but what you wrote produces [[1,1,1,...], [2,2,2,...], ...].

aspiwack

comment created time in 3 months

issue comment tweag/linear-base

Dupable wars, episode IV: a new plan

Hmm, yes I see. I'm thinking about how to implement it. But the question still remains -- shouldn't it be copies of the original stream?

aspiwack

comment created time in 3 months

issue comment tweag/linear-base

Dupable wars, episode IV: a new plan

I think I'm missing something with (3) -- could you help clarify? If Dupable just duplicates the original item into a BetterStream, how is the instance you provided for Stream a correct? It should be a BetterStream, each of whose elements is the original stream.

I don't understand why you are stepping the state instead of doing this:

instance Dupable a => Dupable (Stream a) where
  dupS :: Stream a %1-> BetterStream (Stream a)
  dupS (Stream (s0 :: s) nxt fin) = Sequential $ Stream s0 nxt' fin'
    where
      nxt' :: s %1-> (s, Stream a)
      nxt' s = (s, Stream s nxt fin)
      
      fin' :: s %1-> Stream a
      fin' s = Stream s nxt fin
aspiwack

comment created time in 3 months

issue comment tweag/linear-base

Dupable wars, episode IV: a new plan

I think the original stream approach by @aspiwack (with BetterStream) does work and is a fine solution to (1) n-generation, (2) Applicative support, and (3) minimizing unneeded deep copies. It's not perfectly elegant, but it is a good improvement on V n a.

I would suggest compacting the definition, but maybe there are reasons why this doesn't work:


data Stream a where
  Repeat :: a -> Stream a
  Affine :: x %1 -> (x %1 -> (x, a)) -> (x %1 -> ()) -> Stream a

class Dupable a where
  dupS :: a %1 -> Stream a

instance Applicative Stream where
  pure = Repeat
  Repeat f <*> Repeat x = Repeat (f x)
  Repeat f <*> xs = fmap f xs
  fs <*> Repeat x = fmap ($ x) fs
  -- other case as before
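
Since fmap is used in the Repeat clauses of (<*>) but no Functor instance is shown, here is a minimal sketch of one (an assumption on my part, mapping element-wise by post-composing onto the step function):

instance Functor Stream where
  fmap g (Repeat a) = Repeat (g a)
  fmap g (Affine x step done) =
    -- Apply g to each element the stream produces, leaving the seed machinery alone.
    Affine x (\s -> case step s of (s', a) -> (s', g a)) done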
aspiwack

comment created time in 3 months
