If you are wondering where the data on this site comes from, please visit https://api.github.com/users/LaurentMazare/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

apple/swift 57262

The Swift Programming Language

apple/swift-lldb 652

This is the version of LLDB that supports the Swift programming language & REPL.

LaurentMazare/deep-models 140

Implementation of a couple deep learning models using TensorFlow

LaurentMazare/npy-ocaml 28

Numpy file format support for ocaml.

LaurentMazare/btc-ocaml 24

A toy implementation of the bitcoin protocol in ocaml.

LaurentMazare/ocaml-dataframe 22

Simple and type-safe dataframe api implemented in pure ocaml

LaurentMazare/ocaml-matplotlib 20

Plotting for ocaml based on matplotlib.pyplot

LaurentMazare/ocaml-minipy 10

Naive interpreter for a Python like language

LaurentMazare/ocaml-bert 8

Transformer-based models for Natural Language Processing in OCaml

push event LaurentMazare/tch-rs

laurent

commit sha 2407ca401ec53d26c6d7baa76ef4b4f9f7a41b12

Cosmetic readme changes.

view details

push time in a day

issue comment LaurentMazare/ocaml-torch

Reading a .pt file gives an error

That seems very odd. If you look at the pytorch code triggering this here, it uses kMaxSupportedFileFormatVersion, which is set to 6 in pytorch 1.9; this constant was set to 1 only for versions <= 1.3. Also, the line number in your error message doesn't match 1.9, so maybe you're not building or linking with pytorch 1.9 properly?
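As context for the version check discussed above: a zip-based .pt archive stores its serialization format version in a small `version` member inside the zip container, so it can be inspected without loading the model. A minimal sketch using only the standard library (`pt_format_version` is a helper name invented here):

```python
import zipfile

def pt_format_version(path):
    """Return the serialization format version declared inside a
    zip-based PyTorch archive (its small `version` member), or None
    if no such member is found."""
    with zipfile.ZipFile(path) as zf:
        for name in zf.namelist():
            if name == "version" or name.endswith("/version"):
                return int(zf.read(name).decode().strip())
    return None
```

Comparing this number against the kMaxSupportedFileFormatVersion of the libtorch actually linked (6 in 1.9, per the comment above) helps tell apart a too-new file from a too-old library.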

Het-Shah

comment created time in 2 days

pull request comment LaurentMazare/tch-rs

VarStore extensions

Thanks for the PR!

guillaume-be

comment created time in 4 days

push event LaurentMazare/tch-rs

guillaume-be

commit sha d29c65fbee189f685fb8b66d31a54521f520d890

VarStore extensions (#416)

* Varstore extensions (float kind and device change)
* Varstore extensions for path
* Addition of `to_kind` methods to var store and path
* Replaced `to_[...]` methods by `set_[...]` for inplace operations

view details

push time in 4 days

PR merged LaurentMazare/tch-rs

VarStore extensions

Hello,

This PR is a first attempt at implementing the ideas discussed in https://github.com/LaurentMazare/tch-rs/issues/415. I implemented:

  • VarStore-level casting between the different precision levels for Float types
  • VarStore-level migration between devices
  • Path-level casting between the different precision levels for Float types.

Please let me know what you think

+386 -0

0 comments

2 changed files

guillaume-be

pr closed time in 4 days

PullRequestReviewEvent

Pull request review comment LaurentMazare/tch-rs

VarStore extensions

 impl<'a> Path<'a> {
         }
     }
+    /// Casts all float-like variables in a sub-path to half-precision (Half kind).
+    ///
+    /// Only the variables in the path sub-tree are cast to half-precision:
+    /// other VarStore variables are unaffected.
+    pub fn half(&mut self) {
+        let path_root = self.path.join(SEP.to_string().as_str());
+        let mut variables = self.var_store.variables_.lock().unwrap();
+        for (variable_name, variable) in variables.named_variables.iter_mut() {
+            if variable_name.starts_with(&path_root) & variable.is_floating_point() {
+                *variable = variable.to_kind(Kind::Half);
+            }
+        }
+    }

Would it be possible to have a function that takes a kind argument? It would avoid duplicating the code four times, and it could also be exposed to end users who want to pass a kind themselves.
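The refactoring suggested above can be sketched in toy form. This is hypothetical Python, not tch-rs code: `ToyVarStore`, `set_kind`, and `FLOAT_KINDS` are all invented here to illustrate one generic casting function backing thin convenience wrappers.

```python
# Toy stand-in for a var store: maps variable names to a kind tag.
FLOAT_KINDS = {"Float", "Double", "Half", "BFloat16"}

class ToyVarStore:
    def __init__(self, variables):
        self.variables = dict(variables)

    def set_kind(self, kind, path_root=""):
        # One generic implementation: cast every float-like variable
        # under `path_root` to `kind`; non-float variables (e.g.
        # integer step counters) are left alone.
        for name, old_kind in self.variables.items():
            if name.startswith(path_root) and old_kind in FLOAT_KINDS:
                self.variables[name] = kind

    # Thin wrappers keep the convenient names without duplicating logic.
    def half(self, path_root=""):
        self.set_kind("Half", path_root)

    def float(self, path_root=""):
        self.set_kind("Float", path_root)
```

With this shape, adding a new precision is one wrapper rather than another copy of the loop, and `set_kind` itself can be public for callers who already hold a kind value.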

guillaume-be

comment created time in 5 days

PullRequestReviewEvent

issue comment LaurentMazare/tch-rs

Module-like methods for VarStore

Ah, that makes sense. Your plan of adding these half and float methods sounds good, and indeed if it's done in place it should hopefully work well with model creation. If you want to take a stab at it, I'm happy to review a PR. I certainly like the idea of being able to do it at a path level (and for all subpaths); maybe it's not needed for a first cut, but it would feel pretty convenient. When it comes to loading in half-precision, I was more thinking that the load function would perform the to_kind, so it would not really depend on the data to deserialize but rather on a parameter of the function (or some characteristic of the var-store).
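The "cast at load time" idea mentioned above can be sketched as a toy. This is hypothetical Python, not the actual VarStore::load: `ToyTensor`, `load_with_cast`, and the raw-weights representation are all made up here to show the shape of the API, where the target kind/device is a parameter of the load rather than a property of the serialized data.

```python
from dataclasses import dataclass

@dataclass
class ToyTensor:
    data: list
    kind: str
    device: str

def load_with_cast(raw_weights, kind="Half", device="cuda:0"):
    """Deserialize `raw_weights` (here simply a name -> list-of-floats
    dict) directly into tensors of the requested kind on the requested
    device, so no full-precision intermediate copy is kept around."""
    return {
        name: ToyTensor(data=values, kind=kind, device=device)
        for name, values in raw_weights.items()
    }
```

The point of the sketch is the signature: the caller states the desired kind once, instead of loading and then converting each variable.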

guillaume-be

comment created time in 5 days

issue comment LaurentMazare/tch-rs

Module-like methods for VarStore

Thanks for the detailed write-up. Indeed, a var-store is attached to a device, but it's not really attached to a Kind (float/double/half) at the moment, as mostly everything is done with float. Having only one Kind per var-store is an interesting idea and probably an improvement over the current setup, but I'm wondering how commonly models use mixed precision in a way that would be a problem here.

When it comes to moving tensors across devices, VarStore::load and VarStore::copy let you load tensors to a device based respectively on a file or on another var store. Typically, I would have thought the workflow you describe would work by loading the weights directly to the gpu in half precision via VarStore::load, but maybe there is some interest in having the intermediary steps?

Another thing is that tensors get added to a var-store when creating the model (e.g. the ModuleT), so if there was a way to move a var-store, I'm not really sure how it would then be used to create a model: somehow this would need to re-use the variables rather than create fresh ones. I think the current implementation of VarStore::add would just create fresh tensors with new paths.

guillaume-be

comment created time in 7 days

issue comment LaurentMazare/tch-rs

Discrepancy in output between same model (with same weights) in tch-rs and PyTorch

I actually gave this a try but got exactly the same values from the Python and the Rust sides. To give more details about what I did, I extracted the initial weights from Python via the following:

nps = {}
for k, v in simple_net.state_dict().items():
    nps[k] = v.numpy()
np.savez('/tmp/mymodel.npz', **nps)

Then I converted this to the format expected by the crate via:

cargo run --example tensor-tools cp /tmp/mymodel.npz /tmp/mymodel.ot
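As a sanity check between the export and conversion steps above: an .npz file is a plain zip whose members are `<name>.npy` files, so the exported weight names can be listed without even importing numpy. A small sketch (`npz_names` is a helper invented here):

```python
import zipfile

def npz_names(path):
    """List the array names stored in an .npz archive.

    An .npz file is a zip whose members are `<name>.npy` entries,
    so the names can be recovered by stripping that suffix.
    """
    with zipfile.ZipFile(path) as zf:
        return sorted(
            name[:-len(".npy")]
            for name in zf.namelist()
            if name.endswith(".npy")
        )
```

Comparing this list against the variable names expected on the Rust side can catch a naming mismatch before any numerical debugging.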

I ran your Rust code and finally the following Python code, which output only zeros.

imgs = torch.ones([4,1,28,28],dtype=torch.float32,requires_grad=True)
output = simple_net(imgs)
(output - torch.load("/tmp/outputs.pt").state_dict()["0"]).pow(2).sum()

One thing that might be different is that I used the cpu backend, so you may want to give that a try rather than go through cuda. If you still see a diff, one thing you may want to try on the Python side is to load the weights from the same file as used on the Rust side, e.g. via the following code.

_mdl = torch.load("/tmp/mymodel.ot")
for k, v in _mdl.state_dict().items():
    _k1, _k2 = k.split("|")
    setattr(getattr(simple_net, _k1), _k2, nn.Parameter(v))

marc-dlf

comment created time in 12 days

push event LaurentMazare/tch-rs

laurent

commit sha ee005c62fb024441329646b43c83eeabe64b6a30

Add the python script to extract weights from PyTorch defined models.

view details

push time in 12 days

pull request comment thierry-martinez/pyml

Add support for traceback

@thierry-martinez we now rely on these changes in pythonlib and want to push forward with follow-up modifications (and also drop support for python 2), so we're going to embed a modified version of pyml in pythonlib for the time being. This means we care less about this PR being merged; I'll leave it open in case you want to integrate it, but feel free to close it if you prefer.

LaurentMazare

comment created time in 14 days

pull request comment ocaml/opam-repository

Add the torch 0.13 package for PyTorch 1.9.

No problem at all, thanks for the merge and I've merged the change upstream too.

LaurentMazare

comment created time in 20 days

push event LaurentMazare/ocaml-torch

laurent

commit sha ab4fdc2db2921a4300f63c5a89371b13bc7493e1

Bump the ctypes dependency version.

view details

push time in 20 days

pull request comment ocaml/opam-repository

Add the torch 0.13 package for PyTorch 1.9.

Ah, that makes sense. It turns out that I use some ctypes bits that were only exposed in 0.11; I'll try tweaking the bounds until I get it to work.

LaurentMazare

comment created time in 20 days

push event LaurentMazare/opam-repository

Laurent Mazare

commit sha 81b3bef75e29656f3e3e18dbdb010e1eae43c1bc

Use ctypes >= 0.11.

view details

push time in 20 days

pull request comment ocaml/opam-repository

Add the torch 0.13 package for PyTorch 1.9.

That's a bit odd. I had a quick look and it seems that the proper dependencies are fetched to start with, torch 0.13 is compiled/installed, but then it is removed, the dependencies are downgraded substantially for some reason, and the process tries to compile the package again, which fails with these dependencies. I haven't found anything in the logs that would explain the downgrade/recompile bits; would you know what could explain this kind of thing?

LaurentMazare

comment created time in 20 days

pull request comment thierry-martinez/pyml

Add support for traceback

@thierry-martinez would you have any thought on this PR?

LaurentMazare

comment created time in 20 days

pull request comment ocaml/opam-repository

Add the torch 0.13 package for PyTorch 1.9.

Nothing really urgent here, but could anyone take a look when they have a chance? @kit-ty-kate @avsm (not sure who I should notify, so sorry for the noisy ping).

LaurentMazare

comment created time in 20 days

issue comment LaurentMazare/tch-rs

support cross compiling with "wasm32-unknown-unknown"

This crate depends on the PyTorch C++ library, so you would need to get that compiled to wasm too. I'm not sure if anyone has attempted this already, but it's likely to be quite tricky. I'm not very knowledgeable on the wasm side, but I would expect much of the difficulty to be there.

amiralipour

comment created time in 22 days

issue comment LaurentMazare/tch-rs

clion and pycharm debuggers can't find libtorch_cpu.so

Sorry, I wouldn't really know about this as I'm not using clion. The libraries are found via some rpath setup; maybe just setting your LD_LIBRARY_PATH (if on linux) to include the directory that contains libtorch_cpu.so would help.

drewm1980

comment created time in 24 days

issue comment LaurentMazare/tch-rs

Make the transitive dependency on curl and openssl optional

Thanks for the feedback, I've just pushed some changes to move the curl download bits behind a feature gate. Note that this is in the default feature set for the tch crate, so you will need to use default-features = false or similar when specifying the dependency. Let me know if you run into any issues using this.
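For reference, a sketch of what such a dependency entry might look like with the default features disabled, as the comment above describes; the version number is illustrative, and the crate's documentation should be checked for the exact feature names.

```toml
# Opt out of the default feature set (and with it the curl-based
# libtorch download); version shown here is only an example.
[dependencies]
tch = { version = "0.5", default-features = false }
```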

drewm1980

comment created time in 24 days

push event LaurentMazare/tch-rs

laurent

commit sha 9d1bc4fd99d74dfe4af2654f6bfa35cd5ae155c7

Move the libtorch download behind a feature gate (activated by default).

view details

push time in 24 days

push event LaurentMazare/ocaml-arrow

Laurent Mazare

commit sha ebd5384933e0f4abeb459b9f03ba07fbda49cbff

Fix the release of the ocaml lock.

view details

push time in a month

PR opened ocaml/opam-repository

Add the torch 0.13 package for PyTorch 1.9.

Now that the libtorch packages have been updated for PyTorch 1.9, update the torch package to use these.

+48 -0

0 comments

1 changed file

pr created time in a month

create branch LaurentMazare/opam-repository

branch : torch1.9

created branch time in a month

push event LaurentMazare/opam-repository

Thomas Gazagnaire

commit sha 02344ff4af871720dc6a175ef7b8e6a5974ea613

jekyll-format: fix lower bounds for yaml

view details

Thomas Gazagnaire

commit sha 16c9a418d48df65633e8a22db81f54603f0499d8

ppx_deriving_yaml: fix lower bounds for yaml

view details

Cameron Wong

commit sha bc1dc138337c4cc6a5a9339bf9eabba9c076ae5a

ppx_js_style v0.14.1

Signed-off-by: Cameron Wong <cwong@janestreet.com>

view details

Thomas Gazagnaire

commit sha bf916a5ffd666568a2c27a41ba7869b947a6f49f

integers: ocamlbuild 0.9.0 is broken

view details

Thomas Gazagnaire

commit sha 702919dd9d76e45990d622a9bb220b875b7adc59

user-agent-parser: use a recent version of yaml to fix the tests

view details

Thomas Gazagnaire

commit sha a58deeb4c9a29a19c628469324b1246c4f0bede9

graphql-cohttp: use a recent version of yojson

view details

Sonja Heinze

commit sha 68672719e1bf6ff3ab099429ba3e5eb90c7d5f28

[new release] ppxlib (0.22.2)

CHANGES:
- Make ppxlib compatible with 4.13 compiler (ocaml-ppx/ppxlib#260, @kit-ty-kate)

view details

Bikal Lem

commit sha 8d51bf61f3a675ebb47ac43f4aef98019735225f

[new release] reparse, reparse-lwt and reparse-lwt-unix (3.0.0)

CHANGES:
- Overhaul parser implementation - use functor based implementation. Introduce `Make_buffered_input`, `Make_unbuffered_input` and `Make` functors.
- Remove `reparse-unix` package
- Remove base dependency
- Facilitate IO promise monads such as `Lwt` and `Async`
- Add package `reparse-lwt` which defines `Lwt_stream.t` as one of the input sources.
- Add package `reparse-lwt-unix` which defines `Lwt_unix.file_descr` and `Lwt_io.input_channel` as parser input sources.

view details

Thomas Gazagnaire

commit sha bf2d2486a52cf20f8fb25408a807e328e7e0c844

graphql-cohttp: add missing license field

view details

Bikal Lem

commit sha f6ff4273d63b0d3adc857ce58e4b3e10e66c64ce

add upperbound version to reparse dependency

view details

Guillaume Claret

commit sha 706770deece5fd75b73598609be962bdf38cf5ab

Add coq-of-ocaml version 2.5.1

view details

Kate

commit sha 4b1bea8621dca72af0f2083e46fafdcbb83829d4

Merge pull request #18920 from clarus/add-coq-of-ocaml-2.5.1

Add coq-of-ocaml version 2.5.1

view details

Kate

commit sha cd3e4b62eebae1b1e26ea05bb64c840b2e5934a2

Merge pull request #18918 from bikallem/http-multipart-formdata-upper-bound

add upperbound version to reparse dependency

view details

Kate

commit sha 2ed3c46b15fa20df95c29560b961273526d92bc5

ppxlib.0.22.2: Add missing constraint (uses Sexplib0.Sexpable)

view details

Kate

commit sha e7f20d10fcc0ae7ec851cb1fe2374d253f580b5f

Merge pull request #18908 from cwong-ocaml/master

ppx_optcomp.v0.14.2

view details

Kate

commit sha d9f88a1e78e48e3d41226ec53f237c7c9133d863

Merge pull request #18915 from pitag-ha/release-ppxlib-0.22.2

[new release] ppxlib (0.22.2)

view details

Kate

commit sha 985b299f0f1879ca4e4f63c8007b4f78841fdad3

Merge pull request #18916 from bikallem/release-reparse-v3.0.0

[new release] reparse, reparse-lwt and reparse-lwt-unix (3.0.0)

view details

Louis Gesbert

commit sha 2b428884ba8f31408c46b10136211990a3702768

2 packages from OCamlPro/ocp-index at 1.3.0

view details

Kate

commit sha 4891d504f1ee61cc3e9ff3f9081ea8c92b7450d1

Package unison.2.51.4

view details

Kate

commit sha d9e8ee6e7876a93f648e697bb226f0f0d333fe5c

unison.2.51.4: Add missing license field

view details

push time in a month