Lukas Geiger (lgeiger) · @plumerai @larq · London · www.linkedin.com/in/lgeiger
deep learning scientist | astroparticle physicist | software engineer

larq/larq 216

An Open-Source Library for Training Binarized Neural Networks

lgeiger/black-action 127

A GitHub action that runs black code formatter for Python

lgeiger/addons 0

Useful extra functionality for TensorFlow maintained by SIG-addons

lgeiger/altair 0

Declarative statistical visualization library for Python

lgeiger/angular 0

One framework. Mobile & desktop.

lgeiger/anser 0

:tv: A low level parser for ANSI sequences.

lgeiger/aperture.js 0

A library for screen recording

lgeiger/apm 0

Atom Package Manager

lgeiger/atom 0

The hackable text editor

lgeiger/atom-busy 0

:wavy_dash: A generic display and progress for things that take time

PR opened larq/larq

Add support for experimental_aggregate_gradients feature

This PR adds support for experimental_aggregate_gradients in apply_gradients for Bop and the CaseOptimizer.

experimental_aggregate_gradients=False allows users to aggregate gradients themselves. For example, Keras uses this to manually aggregate scaled fp16 gradients in multi-GPU mixed precision training before passing them to the optimizer.

Adding support for it also works around a bug in TF 2.2-rc2 where, in mixed precision training, _HAS_AGGREGATE_GRAD is not checked, so experimental_aggregate_gradients is always passed to the wrapped optimizer, which would otherwise fail with a cryptic error message.

This PR also adds the correct name scopes in apply_gradients which previously weren't used.
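The aggregation contract described above can be sketched in plain Python. This is a hypothetical `SketchOptimizer`, not the actual Keras or larq implementation; it only illustrates how the flag decides who aggregates:

```python
class SketchOptimizer:
    """Hypothetical sketch (not the real Keras/larq code) of the optimizer
    contract around `experimental_aggregate_gradients`."""

    # Setting this tells Keras that `apply_gradients` accepts the
    # `experimental_aggregate_gradients` argument at all; the TF 2.2-rc2 bug
    # mentioned above was that mixed-precision training did not check it.
    _HAS_AGGREGATE_GRAD = True

    def __init__(self):
        self.aggregated = False

    def apply_gradients(self, grads_and_vars, experimental_aggregate_gradients=True):
        if experimental_aggregate_gradients:
            grads_and_vars = self._aggregate(grads_and_vars)
        # ... apply the (possibly pre-aggregated) gradients to the variables
        return list(grads_and_vars)

    def _aggregate(self, grads_and_vars):
        self.aggregated = True  # stand-in for a cross-replica all-reduce
        return grads_and_vars

# With the flag set to False, the caller (e.g. Keras fp16 loss scaling) has
# already aggregated the gradients, so the optimizer skips its own step:
opt = SketchOptimizer()
opt.apply_gradients([(0.1, "w")], experimental_aggregate_gradients=False)
print(opt.aggregated)  # False
```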

+22 -10

0 comment

3 changed files

pr created time in 27 minutes

push event larq/larq

Lukas Geiger

commit sha e54afa7d00c2351999cb831c24ee5ad2ef784137

Run unit tests with 2.2.0rc2

view details

push time in 30 minutes

push event larq/larq

Lukas Geiger

commit sha 470890c4743751a2f2fd64480c282198679150e7

Remove _resource_apply_sparse since it default to NotImplemented

view details

push time in 39 minutes

create branch larq/larq

branch : opt_aggregate_gradients

created branch time in 44 minutes

push event larq/docs

Lukas Geiger

commit sha 975c7b53c1bea7e9ec20dcab383f48b44ac9ba4c

Correct model accuracies (#52) It looks like the accuracies quoted here were taken from TensorBoard which might not be 100% accurate since the eval dataset is repeated during training so the last batch of the epoch can include duplicated samples. This PR corrects this by quoting the accuracies from `model.evaluate` with the models downloaded from larq zoo.

view details

push time in 19 hours

delete branch larq/docs

delete branch : update-accuracies

delete time in 19 hours

PR merged larq/docs

Reviewers
Correct model accuracies

It looks like the accuracies quoted here were taken from TensorBoard which might not be 100% accurate since the eval dataset is repeated during training so the last batch of the epoch can include duplicated samples. This PR corrects this by quoting the accuracies from model.evaluate with the models downloaded from larq zoo.
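The discrepancy described here can be illustrated with a toy calculation (plain Python, not the actual evaluation code): when the eval dataset is repeated so the last batch is filled, the samples that pad the final batch are counted twice, skewing the reported accuracy.

```python
def batched_accuracy(labels, preds, batch_size, repeat_to_fill=True):
    """Toy illustration: if the eval set is repeated so the last batch is
    padded with samples from the start, those samples are counted twice."""
    n = len(labels)
    if repeat_to_fill and n % batch_size:
        pad = batch_size - n % batch_size
        labels = labels + labels[:pad]
        preds = preds + preds[:pad]
    correct = sum(l == p for l, p in zip(labels, preds))
    return correct / len(labels)

labels = [1, 1, 0, 0, 1]
preds = [1, 0, 0, 0, 1]
# Exact accuracy (as `model.evaluate` reports on the un-repeated data): 4/5
print(batched_accuracy(labels, preds, batch_size=4, repeat_to_fill=False))  # 0.8
# With the first 3 samples duplicated to fill the last batch of 4: 6/8
print(batched_accuracy(labels, preds, batch_size=4))  # 0.75
```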

+3 -3

0 comment

1 changed file

lgeiger

pr closed time in 19 hours

push event larq/compute-engine

Lukas Geiger

commit sha 492723d27acfa7b5d905baddbc2a79e78e02121a

(ci-skip) Correct model accuracies (#315) It looks like the accuracies quoted here were taken from TensorBoard which might not be 100% accurate since the eval dataset is repeated during training so the last batch of the epoch can include duplicated samples. This PR corrects this by quoting the accuracies from model.evaluate with the models downloaded from larq zoo. See https://github.com/larq/docs/pull/52

view details

push time in 19 hours

delete branch larq/compute-engine

delete branch : correct-model-accuracies

delete time in 19 hours

PR merged larq/compute-engine

Reviewers
(ci-skip) Correct model accuracies documentation

It looks like the accuracies quoted here were taken from TensorBoard which might not be 100% accurate since the eval dataset is repeated during training so the last batch of the epoch can include duplicated samples. This PR corrects this by quoting the accuracies from model.evaluate with the models downloaded from larq zoo.

See https://github.com/larq/docs/pull/52

+3 -3

0 comment

1 changed file

lgeiger

pr closed time in 19 hours

PR opened larq/compute-engine

Reviewers
(ci-skip) Correct model accuracies documentation

It looks like the accuracies quoted here were taken from TensorBoard which might not be 100% accurate since the eval dataset is repeated during training so the last batch of the epoch can include duplicated samples. This PR corrects this by quoting the accuracies from model.evaluate with the models downloaded from larq zoo.

See https://github.com/larq/docs/pull/52

+3 -3

0 comment

1 changed file

pr created time in a day

create branch larq/compute-engine

branch : correct-model-accuracies

created branch time in a day

PR opened larq/docs

Reviewers
Correct model accuracies

It looks like the accuracies quoted here were taken from TensorBoard which might not be 100% accurate since the eval dataset is repeated during training so the last batch of the epoch can include duplicated samples. This PR corrects this by quoting the accuracies from model.evaluate with the models downloaded from larq zoo.

+3 -3

0 comment

1 changed file

pr created time in a day

create branch larq/docs

branch : update-accuracies

created branch time in a day

pull request comment larq/compute-engine

Add ImageNet evaluation tool from TFLite

Let's take a step back here.

I think the evaluation tool is super useful and it will be necessary to evaluate how our int8 quantization performs.

Whether we want to have this inside the compute-engine repo or somewhere else is a valid discussion we should have; both approaches have different tradeoffs.

To be very clear, nobody is forcing anything here. The reason why we are using PRs is so we can have those discussions and I think disagreement is necessary and incredibly useful to ultimately arrive at a better solution. However, please keep the discussion constructive so we can work on addressing those issues step by step and not get ourselves wrapped up here.

AdamHillier

comment created time in a day

push event larq/compute-engine

Lukas Geiger

commit sha da3b0ae632aab0f4293f974b9490e6c8c1e9dd8d

Add support for fusing Relu1. Support for this has been added in TF 2.2.

view details

push time in a day

pull request comment larq/compute-engine

Upgrade to TF 2.2

The error in compiling XNNPACK is due to an old compiler. You need at least Android NDK r18b to build XNNPACK.

@Maratyszcza thanks for the help! Indeed we were able to fix it by upgrading the compiler in f8beda0f8968c282d7b5be50b50448bb9d99fd04 :+1:

lgeiger

comment created time in a day

pull request comment larq/compute-engine

Add ImageNet evaluation tool from TFLite

Generated the necessary label .txt files to use with the binary (corresponding to the ImageNet validation set). This was a massive pain and took ages because the conversion script they gave was terrible so I'm checking in those two files if that's okay?

I'd be fine with that, I've seen many other repos also have similar lists of ImageNet labels checked in 👍

The files look very similar to imagenet2012_validation_labels.txt and imagenet2012_labels.txt, are they the same? If so we could also download them from there on the fly if we don't want to check them in.

AdamHillier

comment created time in 2 days

push event larq/compute-engine

Lukas Geiger

commit sha 675ef5b88b393a1dce18c9d9ab00f8f1a9498a19

Fix pip package build

view details

push time in 2 days

push event larq/compute-engine

Lukas Geiger

commit sha 074e1b708b92f975eb62f4a563eec120b5e077bf

Add uint8 to quantized conversion test

view details

push time in 2 days

PR opened larq/compute-engine

Reviewers
Add MLIR status handler to log converter errors internal-improvement

What do these changes do?

This adds an MLIR status handler that will log errors to the TensorFlow C++ logger. I tried to correctly forward the location info to Python, but this resulted in some very unreadable error messages with duplication and weird unknown logs all over the place. With propagate=true this will just reuse the TensorFlow C++ logger which logs errors nicely to stderr.

How Has This Been Tested?

I manually set emit_custom_ops=False in the flatbuffer export which will throw a RuntimeError in Python and the MLIR location info is correctly forwarded.
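The pattern described here can be sketched in plain Python. This is a hypothetical `StatusHandler`, not the actual MLIR or converter API; it only illustrates collecting diagnostics during conversion and surfacing them as a single RuntimeError:

```python
class StatusHandler:
    """Hypothetical sketch of the pattern: collect diagnostics (with their
    source locations) during conversion and raise one readable error,
    instead of forwarding raw location info piecemeal to Python."""

    def __init__(self):
        self.errors = []

    def emit(self, location, message):
        # Record a diagnostic; a real handler would also log it to stderr.
        self.errors.append(f"{location}: {message}")

    def raise_if_failed(self):
        if self.errors:
            raise RuntimeError("Conversion failed:\n" + "\n".join(self.errors))

handler = StatusHandler()
handler.emit("model:12", "custom op not allowed (emit_custom_ops=False)")
try:
    handler.raise_if_failed()
except RuntimeError as e:
    print(e)
```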

+3 -0

0 comment

1 changed file

pr created time in 2 days

create branch larq/compute-engine

branch : propagate_erros

created branch time in 2 days

push event larq/compute-engine

Lukas Geiger

commit sha 6ede5d3bba028ecb4003673a4db6029477574ef9

Add quantization example to converter

view details

push time in 2 days

push event larq/compute-engine

Lukas Geiger

commit sha b8151a0623aeb46fda003491509357a9442b6127

Move calibrate_and_quantize to converter and add end2end test

view details

push time in 2 days

push event larq/compute-engine

Lukas Geiger

commit sha 77b89f275af07ba09af7b78793da0156c4216118

Remove allow_float since it is not used anywhere

view details

push time in 2 days

pull request comment larq/compute-engine

(ci-skip) Fix comment typos.

I thought the (ci-skip) in the commit message would stop CI running?

Looks like this doesn't work anymore since we moved to run CI on PRs instead of on push :(

AdamHillier

comment created time in 3 days

issue comment larq/larq

Training from presaved weights?

We have a community chat that you can join at https://spectrum.chat/larq, for questions and general discussions about BNNs.

rsandler00

comment created time in 3 days

push event larq/compute-engine

Lukas Geiger

commit sha 64f4a218757fa0844aedbc9e8d659b6cdff9e933

Fix header guard name

view details

push time in 3 days

issue comment larq/larq

Training from presaved weights?

larq layers with kernel_quantizer=None and input_quantizer=None are equivalent to tf.keras layers. So if you use larq layers for the non-BNN as well, reloading with model.load_weights should work fine.

@timdebruin has more experience with pretraining, so he might have some more insights here.
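The equivalence can be illustrated with a toy sketch. The `quant_dot` helper below is hypothetical (not the larq API); it only shows that with `kernel_quantizer=None` the layer degenerates to a plain dense computation, which is why weights pretrained with tf.keras layers reload cleanly:

```python
def quant_dot(x, w, kernel_quantizer=None):
    """Toy stand-in for a larq-style layer: quantize the kernel only if a
    quantizer is given, otherwise behave like a plain dense layer."""
    if kernel_quantizer is not None:
        w = kernel_quantizer(w)
    return sum(a * b for a, b in zip(x, w))

def binarize(w):
    # Sign-style binarization (forward pass only, for illustration).
    return [1.0 if v >= 0 else -1.0 for v in w]

x = [1.0, 2.0]
pretrained_w = [0.5, -0.25]  # e.g. weights restored via model.load_weights

print(quant_dot(x, pretrained_w))            # 0.0  -- identical to tf.keras
print(quant_dot(x, pretrained_w, binarize))  # -1.0 -- binarized kernel
```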

rsandler00

comment created time in 3 days

push event larq/compute-engine

Lukas Geiger

commit sha 706d62ab8c02713cd7ba7675ec494bd39874db4f

Add pass to sanitize LCE ops when reimporting flatbuffer

view details

Lukas Geiger

commit sha bc4a2b4bed6a4d0136a403a1cac70ad84b4dba06

Cleanup quantized model

view details

Lukas Geiger

commit sha 013606680d0edb865e183e3c5fab01f25da9fa82

Cleanup calibration wrapper

view details

Lukas Geiger

commit sha eb509cdfb8e1c31b9354c3f1d267da27e5627346

rename optimize --> quantization

view details

push time in 3 days

delete branch lgeiger/tensorflow

delete branch : cleanup-fwd-compat

delete time in 4 days

delete branch larq/blog

delete branch : imgbot

delete time in 4 days

push event larq/blog

imgbot[bot]

commit sha 951bcb2485f215a911d829b4ccc630a1dcf2eebb

[ImgBot] Optimize images (#6) *Total -- 753.12kb -> 614.71kb (18.38%) /themes/larq-blog/static/images/larq-hero.png -- 475.44kb -> 364.54kb (23.33%) /static/images/lce-announcement-hero.png -- 271.56kb -> 244.08kb (10.12%) /themes/larq-blog/static/images/larq-logo.svg -- 3.30kb -> 3.27kb (0.89%) /themes/larq-blog/static/images/larq-logo-text.svg -- 2.82kb -> 2.81kb (0.07%) Signed-off-by: ImgBotApp <ImgBotHelp@gmail.com> Co-authored-by: ImgBotApp <ImgBotHelp@gmail.com>

view details

push time in 4 days

PR merged larq/blog

[ImgBot] Optimize images

Beep boop. Your images are optimized!

Your image file size has been reduced by 18% 🎉

Details

File | Before | After | Percent reduction
/themes/larq-blog/static/images/larq-hero.png | 475.44kb | 364.54kb | 23.33%
/static/images/lce-announcement-hero.png | 271.56kb | 244.08kb | 10.12%
/themes/larq-blog/static/images/larq-logo.svg | 3.30kb | 3.27kb | 0.89%
/themes/larq-blog/static/images/larq-logo-text.svg | 2.82kb | 2.81kb | 0.07%
Total | 753.12kb | 614.71kb | 18.38%



+2 -31

0 comment

4 changed files

imgbot[bot]

pr closed time in 4 days

push event larq/docs

dependabot-preview[bot]

commit sha a53383acdab8b571195e0032f1ca16d61a3715c8

Bump larq from 0.9.2 to 0.9.3 (#50) Bumps [larq](https://github.com/larq/larq) from 0.9.2 to 0.9.3. - [Release notes](https://github.com/larq/larq/releases) - [Commits](https://github.com/larq/larq/compare/v0.9.2...v0.9.3) Signed-off-by: dependabot-preview[bot] <support@dependabot.com> Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>

view details

push time in 4 days

delete branch larq/docs

delete branch : dependabot/pip/larq-0.9.3

delete time in 4 days

PR merged larq/docs

Bump larq from 0.9.2 to 0.9.3 dependencies

Bumps larq from 0.9.2 to 0.9.3.

Release notes (v0.9.3) — :bug: Bug Fixes:

  • Use static tensor shapes if possible for one-padding (#463) @lgeiger

Commits:

  • e838858 :arrow_up: 0.9.3 (#464)
  • da595ef Use static tensor shapes if possible for one-padding (#463)
  • See full diff at https://github.com/larq/larq/compare/v0.9.2...v0.9.3

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot badge me will comment on this PR with code to add a "Dependabot enabled" badge to your readme

Additionally, you can set the following in the .dependabot/config.yml file in this repo:

  • Update frequency
  • Out-of-range updates (receive only lockfile updates, if desired)
  • Security updates (receive only security updates, if desired)

+1 -1

0 comment

1 changed file

dependabot-preview[bot]

pr closed time in 4 days

push event larq/larq

Lukas Geiger

commit sha e83885858fa96dec298e43b948b603b75f192b70

:arrow_up: 0.9.3 (#464)

view details

push time in 4 days

delete branch larq/larq

delete branch : 0.9.3

delete time in 4 days

PR merged larq/larq

Update version number to 0.9.3 skip-changelog
+1 -1

0 comment

1 changed file

lgeiger

pr closed time in 4 days

PR opened larq/larq

Update version number to 0.9.3 skip-changelog
+1 -1

0 comment

1 changed file

pr created time in 4 days

create branch larq/larq

branch : 0.9.3

created branch time in 4 days

push event larq/larq

Lukas Geiger

commit sha eecade8729a2088be0ee36223b5f3c3e18eab179

Add one padding to model summary test

view details

push time in 4 days

pull request comment larq/larq

Use static tensor shapes if possible for one-padding

Not sure why codecov doesn't like this; the test should fully cover the changes.

lgeiger

comment created time in 4 days

PR opened larq/larq

Use static tensor shapes if possible for one-padding bug

In general the batch dimension is undefined, which means that tf.pad doesn't correctly set the static shape of the output tensor. This makes the model summary fail since the TensorShape is not correctly inferred.

This PR uses static shapes whenever the spatial dimensions are fully defined to avoid this problem.
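A toy shape computation (plain Python, with a hypothetical `padded_shape` helper, not the larq implementation) shows why this works: the padded spatial dimensions are statically known whenever the input's spatial dimensions are, even though the batch dimension stays undefined.

```python
def padded_shape(shape, padding):
    """Compute the static output shape of a pad op.

    `shape` uses None for unknown dimensions (like an undefined batch dim);
    `padding` gives (before, after) amounts per dimension, as in tf.pad.
    """
    return tuple(
        None if dim is None else dim + before + after
        for dim, (before, after) in zip(shape, padding)
    )

# Spatial dims are fully defined, so the padded spatial shape is static
# even with an unknown batch dimension:
print(padded_shape((None, 32, 32, 64), ((0, 0), (1, 1), (1, 1), (0, 0))))
# (None, 34, 34, 64)
```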

+21 -10

0 comment

2 changed files

pr created time in 4 days

push event larq/larq

Lukas Geiger

commit sha add75e2d138aebf63e91f771f0f35014490a8343

Use static tensor shapes if possible for one-padding

view details

push time in 4 days

create branch larq/larq

branch : use-static-shapes

created branch time in 4 days

delete branch lgeiger/compute-engine

delete branch : lgeiger-patch-1

delete time in 5 days

push event larq/compute-engine

Lukas Geiger

commit sha 4a8caf38989ea23a1177eed2b3646d2c7d1818c7

(ci-skip) Fix docs link in contributing guide (#299)

view details

push time in 5 days

PR merged larq/compute-engine

Fix docs link in contributing guide documentation
+1 -1

1 comment

1 changed file

lgeiger

pr closed time in 5 days

push event lgeiger/compute-engine

Lukas Geiger

commit sha 39ef0797b871c6d63c88ec49b66ebc68641cb25d

Don't use cache if authentication fails (#301)

view details

Lukas Geiger

commit sha f2304bc6d6c79d9baee6904db7197d1ee6e2d950

(ci-skip) Fix docs link in contributing guide

view details

push time in 5 days

pull request comment larq/compute-engine

Fix docs link in contributing guide

The test will pass when rebased onto #301

lgeiger

comment created time in 5 days

delete branch larq/docs

delete branch : dependabot/pip/larq-compute-engine-0.2.0

delete time in 5 days

push event larq/docs

dependabot-preview[bot]

commit sha 9c6d6b799df83a0ec40a72d3bd51b1ba6da3d52c

Bump larq-compute-engine from 0.1.2 to 0.2.0 (#49) Bumps [larq-compute-engine](https://github.com/larq/compute-engine) from 0.1.2 to 0.2.0. - [Release notes](https://github.com/larq/compute-engine/releases) - [Commits](https://github.com/larq/compute-engine/compare/v0.1.2...v0.2.0) Signed-off-by: dependabot-preview[bot] <support@dependabot.com> Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>

view details

push time in 5 days

push event lgeiger/compute-engine

Lukas Geiger

commit sha a3ac62d61313453416300b6ecbb7d27f97b961d2

(ci-skip) Fix docs link in contributing guide

view details

push time in 5 days

PR opened larq/compute-engine

Don't use cache if authentication fails internal-improvement

What do these changes do?

This PR disables caching if authentication with the server fails. This allows CI to run from a fork without crashing due to authentication problems.

How Has This Been Tested?

CI

+2 -2

0 comment

1 changed file

pr created time in 5 days

create branch larq/compute-engine

branch : ci-from-fork

created branch time in 5 days

push event lgeiger/compute-engine

Lukas Geiger

commit sha fff2562b64b18e96a1ce00c06ff2c43208174eb6

Don't use cache if authentication fails

view details

push time in 5 days

pull request comment larq/zoo

Add names to quicknet models

This doesn't super matter but I don't think this needs to be a field as opposed to simply a class attribute, because I don't think anyone will ever override the value.

I agree. This doesn't need to be a field.

koenhelwegen

comment created time in 5 days

push event lgeiger/compute-engine

Arash Bakhtiari

commit sha 12a2ba6c13de0f45c3c4aa332901625652440b2b

update the benchmark results (#294) * update the benchmark results * Apply suggestions from code review Co-Authored-By: Koen Helwegen <koen@plumerai.com> * fix QuickNet naming convention * fix the missing parentes * added quicknet API links Co-authored-by: Koen Helwegen <koen@plumerai.com>

view details

Arash Bakhtiari

commit sha ce9c2d7e56a5f3f9d55259a7df781201821f812d

:arrow_up: v0.2.0 (#300)

view details

Lukas Geiger

commit sha 53cac28ba32985a085ed3540df19e376b4a53724

Consolidate MLIR and End2End tests (#298)

view details

Lukas Geiger

commit sha 8a676a537a0850b2b98624fb14a4f08f42ba8913

Fix docs link in contributing guide

view details

Lukas Geiger

commit sha 1e16046ee265da32c5f1607d949af492aa696be3

Fallback to local execution if cache authentication fails

view details

push time in 5 days

push event larq/compute-engine

Lukas Geiger

commit sha 53cac28ba32985a085ed3540df19e376b4a53724

Consolidate MLIR and End2End tests (#298)

view details

push time in 5 days

delete branch larq/compute-engine

delete branch : consolidate-mlir-and-end2end-tests

delete time in 5 days

PR merged larq/compute-engine

Consolidate MLIR and End2End tests internal-improvement

What do these changes do?

This consolidates the MLIR and End2End tests. This should reduce the amount of network traffic and general overhead on CI since we now have proper caching, so splitting the tests doesn't improve speed anymore.

+1 -22

2 comments

1 changed file

lgeiger

pr closed time in 5 days

push event larq/compute-engine

Arash Bakhtiari

commit sha ce9c2d7e56a5f3f9d55259a7df781201821f812d

:arrow_up: v0.2.0 (#300)

view details

push time in 5 days

delete branch larq/compute-engine

delete branch : v0.2.0

delete time in 5 days

PR merged larq/compute-engine

Reviewers
:arrow_up: v0.2.0 skip-changelog

What do these changes do?

Releasing LCE v0.2.0.

+1 -1

0 comment

1 changed file

arashb

pr closed time in 5 days

push event larq/compute-engine

Arash Bakhtiari

commit sha 12a2ba6c13de0f45c3c4aa332901625652440b2b

update the benchmark results (#294) * update the benchmark results * Apply suggestions from code review Co-Authored-By: Koen Helwegen <koen@plumerai.com> * fix QuickNet naming convention * fix the missing parentes * added quicknet API links Co-authored-by: Koen Helwegen <koen@plumerai.com>

view details

push time in 5 days

delete branch larq/compute-engine

delete branch : update-readme

delete time in 5 days

PR merged larq/compute-engine

update the benchmark results documentation

What do these changes do?

Updated the benchmark results:

  • [x] add quicknet benchmark results for pixel and RPi
  • [x] add BiRealNet benchmark result for pixel
  • [x] update the links to Larq Zoo
+9 -8

0 comment

1 changed file

arashb

pr closed time in 5 days

push event larq/docs

dependabot-preview[bot]

commit sha 1c154301bb71ddb0f7c6f47259737749b1817870

Bump larq-zoo from 1.0.b3 to 1.0b4 (#48) * Bump larq-zoo from 1.0.b3 to 1.0b4 Bumps [larq-zoo](https://github.com/plumerai/larq-zoo) from 1.0.b3 to 1.0b4. - [Release notes](https://github.com/plumerai/larq-zoo/releases) - [Commits](https://github.com/plumerai/larq-zoo/compare/v1.0.b3...v1.0.b4) Signed-off-by: dependabot-preview[bot] <support@dependabot.com> * Add QuickNetXL to API docs * Add QuickNetXL and update model accuracies Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com> Co-authored-by: Lukas Geiger <lgeiger@users.noreply.github.com>

view details

push time in 5 days

delete branch larq/docs

delete branch : dependabot/pip/larq-zoo-1.0b4

delete time in 5 days

PR merged larq/docs

Reviewers
Bump larq-zoo from 1.0.b3 to 1.0b4 dependencies

Bumps larq-zoo from 1.0.b3 to 1.0b4.

Release notes (v1.0.b4) — :tada: Features:

  • Implement QuickNet-XL (#137) @koenhelwegen
  • Implement one-padding and reduce number of SE blocks for QuickNet (#136) @koenhelwegen

:construction_worker_man: Internal Improvements:

  • Refactor quicknets and remove duplication (#138) @koenhelwegen

Commits:

  • 2bead18 :arrow_up: 1.0.b4 (#139)
  • 3667df7 Implement QuickNet-XL (#137)
  • 68209a1 Refactor quicknets and remove duplication (#138)
  • d3e96b1 Implement one-padding and reduce number of SE blocks for QuickNet (#136)
  • 27a577c Explicitly compute softmax in float32 (#135)
  • 13fade7 Add support for TensorFlow 2.2 (#134)
  • See full diff at https://github.com/plumerai/larq-zoo/compare/v1.0.b3...v1.0.b4

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Note: This repo was added to Dependabot recently, so you'll receive a maximum of 5 PRs for your first few update runs. Once an update run creates fewer than 5 PRs we'll remove that limit.

You can always request more updates by clicking Bump now in your Dependabot dashboard.

Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot badge me will comment on this PR with code to add a "Dependabot enabled" badge to your readme

Additionally, you can set the following in the .dependabot/config.yml file in this repo:

  • Update frequency
  • Out-of-range updates (receive only lockfile updates, if desired)
  • Security updates (receive only security updates, if desired)

+5 -3

0 comment

3 changed files

dependabot-preview[bot]

pr closed time in 5 days

push event larq/docs

Lukas Geiger

commit sha 8266d1fc2867e34e003fd4291b6d268eb6a67409

Add QuickNetXL and update model accuracies

view details

push time in 5 days

push event larq/docs

Lukas Geiger

commit sha 6f49207cdb708f46d3ce66ddcc681f8078fdd34c

Add QuickNetXL to API docs

view details

push time in 5 days

push event larq/zoo

Lukas Geiger

commit sha 2bead182135136ce547897241ebea0967aa8850c

:arrow_up: 1.0.b4 (#139)

view details

push time in 5 days

delete branch larq/zoo

delete branch : lgeiger-patch-1

delete time in 5 days

PR merged larq/zoo

Update version number 1.0.b4 skip-changelog
+1 -1

0 comment

1 changed file

lgeiger

pr closed time in 5 days

PR opened larq/zoo

Update version number 1.0.b4 skip-changelog
+1 -1

0 comment

1 changed file

pr created time in 5 days

create branch larq/zoo

branch : lgeiger-patch-1

created branch time in 5 days

Pull request review comment tensorflow/tensorflow

Remove expired forward compatibility horizons

 def __init__(self, input_dataset, num_workers, index):
     self._input_dataset = input_dataset
     self._element_spec = input_dataset.element_spec
-    if (compat.forward_compatible(2019, 11, 25) or

compat.forward_compatible(2019, 11, 25) will always evaluate to True, so the second condition will never be checked: an expression like `if True or auto_shard_policy_condition` is always True regardless of auto_shard_policy_condition.
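The short-circuit can be sketched in plain Python, using a simplified stand-in for tf.compat.forward_compatible (which compares against a compatibility horizon date):

```python
from datetime import date

def forward_compatible(year, month, day):
    """Simplified stand-in: True once the horizon date has passed."""
    return date.today() > date(year, month, day)

def auto_shard_policy_condition():
    # Never reached: `or` short-circuits as soon as the left operand is True.
    raise AssertionError("never evaluated")

# The 2019-11-25 horizon expired long ago, so the left operand is always
# True and the right operand is never even called:
print(forward_compatible(2019, 11, 25) or auto_shard_policy_condition())  # True
```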

lgeiger

comment created time in 5 days

Pull request review comment tensorflow/tensorflow

Remove expired forward compatibility horizons

 def __init__(self,
           f=self._map_func.function,
           deterministic=deterministic_string,
           **self._flat_structure)
-    elif deterministic is not None or compat.forward_compatible(2020, 2, 20):
+    else:

This won't make any difference since compat.forward_compatible(2020, 2, 20) will always be True, and `if deterministic is not None or True` therefore evaluates to True regardless of the value of deterministic.

lgeiger

comment created time in 5 days

Pull request review comment tensorflow/tensorflow

Remove expired forward compatibility horizons

 def testMatMul(self):
           np.array([[8]], dtype=dtype),
           expected=np.array([[-2]], dtype=dtype))
-      with compat.forward_compatibility_horizon(2019, 10, 19):

The git diff here is a bit misleading: the same test is executed directly above, so just removing the compat line would run the same test twice, which would be unnecessary. The same is true for the other changes in this file.

lgeiger

comment created time in 5 days

push event larq/zoo

Lukas Geiger

commit sha 41148204fa2475c8ec7a35c83628e342d37318e6

Make sure input dimesion is int to make TF 1.14 happy

view details

push time in 5 days

Pull request review comment larq/zoo

Refactor quicknets and remove duplication

 def residual_fast_block(
             activation="relu",
         )(x)
         x = tf.keras.layers.BatchNormalization(momentum=0.9, epsilon=1e-5)(x)
-        if downsample:
-            return tf.keras.layers.concatenate(
-                [residual, tf.keras.layers.add([x, residual])]
-            )
-        else:
-            return tf.keras.layers.add([x, residual])
+
+        if use_squeeze_and_excite:
+            x *= y
+
+        return x
+
+    def residual_block(self, x: tf.Tensor, use_squeeze_and_excite: bool) -> tf.Tensor:
+        """Standard residual block, without strides or filter changes."""
+        infilters = x.get_shape().as_list()[-1]
+        residual = x
+        x = self.conv_block(x, infilters, use_squeeze_and_excite)
+        return tf.keras.layers.add([x, residual])
+
+    def concat_transition_block(
+        self, x: tf.Tensor, filters: int, strides: int, use_squeeze_and_excite: bool
+    ) -> tf.Tensor:
+        """Concat transition block.
+
+        Doubles number of filters by concatenating shortcut with x + shortcut.
+        This module is loosely inspired by
+        [MeliusNet](https://arxiv.org/abs/2001.05936).
+        """
+        infilters = x.get_shape().as_list()[-1]
+        assert filters == 2 * infilters
+
+        residual = tf.keras.layers.MaxPool2D(pool_size=strides, strides=strides)(x)
+        residual = tf.keras.layers.BatchNormalization(momentum=0.9, epsilon=1e-5)(
+            residual
+        )
+        x = self.conv_block(x, infilters, use_squeeze_and_excite, strides)
+        x = tf.keras.layers.add([x, residual])
+
+        return tf.keras.layers.concatenate([residual, x])
+
+    def pointwise_transition_block(
+        self, x: tf.Tensor, filters: int, strides: int, use_squeeze_and_excite: bool
+    ) -> tf.Tensor:
+        """Pointwise transition block.
+
+        Transition to arbitrary number of filters by inserting pointwise
+        full-precision convolution in shortcut.
+        """
+        residual = tf.keras.layers.MaxPool2D(pool_size=strides, strides=strides)(x)
+        residual = tf.keras.layers.Conv2D(
+            filters, kernel_size=1, use_bias=False, kernel_initializer="glorot_normal"
+        )(residual)
+        residual = tf.keras.layers.BatchNormalization(momentum=0.9, epsilon=1e-5)(
+            residual
+        )
+        x = self.conv_block(x, filters, use_squeeze_and_excite, strides)
+        return tf.keras.layers.add([x, residual])

     def build(self) -> tf.keras.models.Model:
-        x = LCEFirstLayer(self.initial_filters, self.image_input)
+        x = stem_module(self.stem_filters, self.image_input)

-        for block, (layers, filters) in enumerate(zip(*self.spec)):
+        for block, (layers, filters, use_squeeze_and_excite) in enumerate(
+            zip(*self.spec)
+        ):
             for layer in range(layers):
-                strides = 1 if block == 0 or layer != 0 else 2
-                x = self.residual_fast_block(x, filters, strides=strides)
+                if filters == x.get_shape().as_list()[-1]:
                if filters == x.shape[-1]:
koenhelwegen

comment created time in 5 days

Pull request review comment larq/zoo

Refactor quicknets and remove duplication

 def build(self) -> tf.keras.models.Model:
         model = tf.keras.Model(inputs=self.image_input, outputs=x, name="quicknet")

+        return model
+
+
+@factory
+class QuickNetFactory(QuickNetBaseFactory):
+    """Quicknet - A model designed for fast inference using [Larq Compute Engine](https://github.com/larq/compute-engine)"""
+
+    spec = Field(
+        lambda: ([2, 3, 4, 4], [64, 128, 256, 512], [False, False, False, False])

I am not sure whether we should really have this as a field, or whether we should just make the total number of layers configurable. I don't have a strong opinion about this, but to me it is not really intuitive what the different config values here mean.

koenhelwegen

comment created time in 5 days

Pull request review comment larq/zoo

Refactor quicknets and remove duplication

 def residual_fast_block(
             activation="relu",
         )(x)
         x = tf.keras.layers.BatchNormalization(momentum=0.9, epsilon=1e-5)(x)
-        if downsample:
-            return tf.keras.layers.concatenate(
-                [residual, tf.keras.layers.add([x, residual])]
-            )
-        else:
-            return tf.keras.layers.add([x, residual])
+
+        if use_squeeze_and_excite:
+            x *= y
+
+        return x
+
+    def residual_block(self, x: tf.Tensor, use_squeeze_and_excite: bool) -> tf.Tensor:
+        """Standard residual block, without strides or filter changes."""
+        infilters = x.get_shape().as_list()[-1]
        infilters = x.shape[-1]
koenhelwegen

comment created time in 5 days

Pull request review comment larq/zoo

Refactor quicknets and remove duplication

 def residual_fast_block(
             activation="relu",
         )(x)
         x = tf.keras.layers.BatchNormalization(momentum=0.9, epsilon=1e-5)(x)
-        if downsample:
-            return tf.keras.layers.concatenate(
-                [residual, tf.keras.layers.add([x, residual])]
-            )
-        else:
-            return tf.keras.layers.add([x, residual])
+
+        if use_squeeze_and_excite:
+            x *= y
+
+        return x
+
+    def residual_block(self, x: tf.Tensor, use_squeeze_and_excite: bool) -> tf.Tensor:
+        """Standard residual block, without strides or filter changes."""
+        infilters = x.get_shape().as_list()[-1]
+        residual = x
+        x = self.conv_block(x, infilters, use_squeeze_and_excite)
+        return tf.keras.layers.add([x, residual])
+
+    def concat_transition_block(
+        self, x: tf.Tensor, filters: int, strides: int, use_squeeze_and_excite: bool
+    ) -> tf.Tensor:
+        """Concat transition block.
+
+        Doubles number of filters by concatenating shortcut with x + shortcut.
+        This module is loosely inspired by
+        [MeliusNet](https://arxiv.org/abs/2001.05936).
+        """
+        infilters = x.get_shape().as_list()[-1]
        infilters = x.shape[-1]
koenhelwegen

comment created time in 5 days

Pull request review comment larq/zoo

Refactor quicknets and remove duplication

 def LCEFirstLayer(filters: int, x: tf.Tensor) -> tf.Tensor:
     return tf.keras.layers.BatchNormalization(momentum=0.9, epsilon=1e-5)(x)

+def squeeze_and_excite(inp: tf.Tensor, filters: int, r: int = 16):
+    """Squeeze and Excite as per [Squeeze-and-Excitation Networks](https://arxiv.org/abs/1709.01507)"""
+    C = inp.get_shape().as_list()[-1]
+
+    out = utils.global_pool(inp)
+    out = tf.keras.layers.Dense(
+        C // r,
+        activation="relu",
+        kernel_initializer="he_normal",
+        use_bias=False,
+        kernel_regularizer=tf.keras.regularizers.l2(1e-5),
+    )(out)
+
+    out = tf.keras.layers.Dense(
+        filters,
+        activation="sigmoid",
+        kernel_initializer="he_normal",
+        use_bias=False,
+        kernel_regularizer=tf.keras.regularizers.l2(1e-5),
+    )(out)
+
+    return tf.reshape(out, [-1, 1, 1, filters])
+
+
 @factory
-class QuickNetFactory(ModelFactory):
-    """Quicknet - A model designed for fast inference using [Larq Compute Engine](https://github.com/larq/compute-engine)"""
+class QuickNetBaseFactory(ModelFactory):

-    num_layers: int = Field(15)
+    spec: Tuple[Sequence[int], Sequence[int], Sequence[bool]] = Field(None)
+    transition_block: MethodType = Field(None)

I think the transition_block name is unintuitive

Transition block is quite widely used in the literature; do you have an alternative?

koenhelwegen

comment created time in 5 days

Pull request review comment larq/zoo

Refactor quicknets and remove duplication

 def LCEFirstLayer(filters: int, x: tf.Tensor) -> tf.Tensor:
     return tf.keras.layers.BatchNormalization(momentum=0.9, epsilon=1e-5)(x)

+def squeeze_and_excite(inp: tf.Tensor, filters: int, r: int = 16):
+    """Squeeze and Excite as per [Squeeze-and-Excitation Networks](https://arxiv.org/abs/1709.01507).
+
+    Use of S&E in BNNs was pioneered in [Training binary neural networks with
+    real-to-binary convolutions](https://openreview.net/forum?id=BJg4NgBKvH).
+    """
+    C = inp.get_shape().as_list()[-1]
+
+    out = utils.global_pool(inp)
+    out = tf.keras.layers.Dense(
+        C // r,
    out = utils.global_pool(inp)
    out = tf.keras.layers.Dense(
        inp.shape[-1] // r,
koenhelwegen

comment created time in 5 days

Pull request review comment larq/zoo

Refactor quicknets and remove duplication

 from larq_zoo.core.model_factory import ModelFactory

-def LCEFirstLayer(filters: int, x: tf.Tensor) -> tf.Tensor:
+def stem_module(filters: int, x: tf.Tensor) -> tf.Tensor:

I would prefer stem_module as well. Stem is a pretty general name for the first few input layers, also used in https://arxiv.org/pdf/1812.01187.pdf, and since it is defined in the quicknet file I think it is clear that it relates to quicknet and not to another model.

koenhelwegen

comment created time in 5 days

push event larq/zookeeper

Adam Hillier

commit sha 85686f8d131fff41dc55ede9307a777cf46c7688

Fix immutability check: sets are not immutable. (#132)

view details

push time in 5 days

PR merged larq/zookeeper

Fix immutability check: sets are not immutable. bug

Prompted by discussion in larq/zoo#138.

+4 -3

0 comment

1 changed file

AdamHillier

pr closed time in 5 days

PR closed lgeiger/compute-engine

Fix docs link in contributing guide documentation
+1 -1

0 comment

1 changed file

lgeiger

pr closed time in 5 days

PR opened lgeiger/compute-engine

Fix docs link in contributing guide documentation
+1 -1

0 comment

1 changed file

pr created time in 5 days

create branch lgeiger/compute-engine

branch : lgeiger-patch-1

created branch time in 5 days

fork lgeiger/compute-engine

Highly optimized inference engine for Binarized Neural Networks

https://docs.larq.dev/compute-engine

fork in 5 days

started CRPropa/CRPropa3

started time in 5 days

Pull request review comment larq/zoo

Refactor quicknets and remove duplication

 def build(self) -> tf.keras.models.Model:
         model = tf.keras.Model(inputs=self.image_input, outputs=x, name="quicknet")

+        return model
+
+
+@factory
+class QuickNetFactory(QuickNetBaseFactory):
+    """Quicknet - A model designed for fast inference using [Larq Compute Engine](https://github.com/larq/compute-engine)"""
+
+    spec = Field(
+        lambda _: ([2, 3, 4, 4], [64, 128, 256, 512], [False, False, False, False])

Yes, this is intended behaviour. Field doesn't allow mutable default values, since those could lead to unexpected behaviour. This is in line with what Python dataclasses do; take a look at https://docs.python.org/3/library/dataclasses.html#mutable-default-values for an example of why mutable default values would be a bad idea.

You could get rid of the lambda here if you use a frozenset or a tuple of frozensets. Though the lambda is also OK.
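The dataclasses rationale linked above can be sketched in a few lines (class and field names here are illustrative, not from zookeeper):

```python
from dataclasses import dataclass, field

# A shared mutable default leaks state across instances:
class BadConfig:
    shared = []  # one list, shared by every instance

a, b = BadConfig(), BadConfig()
a.shared.append(1)
assert b.shared == [1]  # b sees a's mutation

# dataclasses refuse `x: list = []` outright; the fix is a factory,
# so each instance gets a fresh value:
@dataclass
class Spec:
    layers: list = field(default_factory=lambda: [2, 3, 4, 4])

s1, s2 = Spec(), Spec()
s1.layers.append(5)
assert s2.layers == [2, 3, 4, 4]  # s2 is unaffected
```

The `lambda` in the Field default plays the same role as `default_factory` here: it defers construction so every instance gets its own tuple of lists.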

koenhelwegen

comment created time in 6 days

PR opened tensorflow/tensorflow

Remove expired forward compatibility horizons

This PR removes expired forward compatibility statements that always evaluate to True.
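Why these statements are unconditionally True can be sketched with dates alone (a simplified model of the horizon check; the real `tf.compat.forward_compatible` also honors an environment-variable override, which is omitted here):

```python
from datetime import date

def forward_compatible(year, month, day):
    # Simplified sketch: the check passes once today is past the
    # given horizon date, so old horizons are permanently True.
    return date.today() > date(year, month, day)

# The horizons removed in this PR expired long ago:
assert forward_compatible(2019, 11, 25)
assert forward_compatible(2020, 2, 20)
```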

+30 -111

0 comment

6 changed files

pr created time in 6 days

create barnchlgeiger/tensorflow

branch : cleanup-fwd-compat

created branch time in 6 days
