
PR opened larq/larq

This PR adds support for `experimental_aggregate_gradients` in `apply_gradients` for `Bop` and the `CaseOptimizer`.

`experimental_aggregate_gradients=False` allows users to aggregate gradients themselves. E.g. this is used by Keras to manually aggregate scaled gradients in fp16 when using multi-GPU mixed precision training, before passing them to the optimizer.

Adding support for it also works around a bug in TF 2.2-rc2 where, in mixed precision training, `_HAS_AGGREGATE_GRAD` is not checked, so `experimental_aggregate_gradients` is always passed to the wrapped optimizer, which would otherwise fail with a cryptic error message.

This PR also adds the correct name scopes in `apply_gradients`, which previously weren't used.

pr created time in 27 minutes
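The interaction between `_HAS_AGGREGATE_GRAD` and `experimental_aggregate_gradients` described in the PR can be sketched in plain Python (a schematic of the control flow only; `SketchOptimizer` and `aggregate_across_replicas` are made-up stand-ins, not the real TensorFlow classes):

```python
def aggregate_across_replicas(grads_and_vars):
    """Placeholder for the optimizer's built-in cross-replica all-reduce."""
    return [(g, v) for g, v in grads_and_vars]  # identity in this sketch

class SketchOptimizer:
    # Mirrors the private _HAS_AGGREGATE_GRAD marker that Keras checks
    # before forwarding the flag to a wrapped optimizer.
    _HAS_AGGREGATE_GRAD = True

    def __init__(self):
        self.applied = []

    def apply_gradients(self, grads_and_vars, experimental_aggregate_gradients=True):
        # With the flag set to False, skip built-in aggregation and trust
        # the caller to have aggregated the gradients already.
        if experimental_aggregate_gradients:
            grads_and_vars = aggregate_across_replicas(grads_and_vars)
        for grad, var in grads_and_vars:
            self.applied.append((grad, var))

opt = SketchOptimizer()
# E.g. Keras mixed-precision training aggregates scaled fp16 gradients
# itself and then passes experimental_aggregate_gradients=False:
opt.apply_gradients([(0.1, "kernel")], experimental_aggregate_gradients=False)
```

The flag only changes who performs the aggregation step; the variable update itself is unchanged.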

push event larq/larq

commit sha e54afa7d00c2351999cb831c24ee5ad2ef784137

Run unit tests with 2.2.0rc2

push time in 30 minutes

push event larq/larq

commit sha 470890c4743751a2f2fd64480c282198679150e7

Remove _resource_apply_sparse since it default to NotImplemented

push time in 39 minutes

push event larq/docs

commit sha 975c7b53c1bea7e9ec20dcab383f48b44ac9ba4c

Correct model accuracies (#52) It looks like the accuracies quoted here were taken from TensorBoard, which might not be 100% accurate since the eval dataset is repeated during training, so the last batch of the epoch can include duplicated samples. This PR corrects this by quoting the accuracies from `model.evaluate` with the models downloaded from larq zoo.

push time in 19 hours

PR merged larq/docs

It looks like the accuracies quoted here were taken from TensorBoard, which might not be 100% accurate since the eval dataset is repeated during training, so the last batch of the epoch can include duplicated samples. This PR corrects this by quoting the accuracies from `model.evaluate` with the models downloaded from larq zoo.

pr closed time in 19 hours
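The accuracy skew described above can be illustrated with made-up numbers (a sketch of the effect, not the real Larq Zoo metrics): when a repeated eval dataset is batched, the last batch of the epoch wraps around and re-includes samples from the start, so averaging over batches counts some samples twice.

```python
# Hypothetical per-sample correctness for a 10-sample eval set.
correct = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
batch_size = 4

true_accuracy = sum(correct) / len(correct)  # 8/10 = 0.8

# Repeated dataset: the third batch of 4 wraps around and re-includes
# samples 0 and 1, so 12 results are averaged instead of 10.
padded = correct + correct[:2]
skewed_accuracy = sum(padded) / len(padded)

print(true_accuracy, round(skewed_accuracy, 3))  # 0.8 0.833
```

Evaluating with `model.evaluate` on the non-repeated dataset avoids the duplicated samples entirely.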

push event larq/compute-engine

commit sha 492723d27acfa7b5d905baddbc2a79e78e02121a

(ci-skip) Correct model accuracies (#315) It looks like the accuracies quoted here were taken from TensorBoard which might not be 100% accurate since the eval dataset is repeated during training so the last batch of the epoch can include duplicated samples. This PR corrects this by quoting the accuracies from model.evaluate with the models downloaded from larq zoo. See https://github.com/larq/docs/pull/52

push time in 19 hours

PR merged larq/compute-engine

It looks like the accuracies quoted here were taken from TensorBoard which might not be 100% accurate since the eval dataset is repeated during training so the last batch of the epoch can include duplicated samples. This PR corrects this by quoting the accuracies from model.evaluate with the models downloaded from larq zoo.

See https://github.com/larq/docs/pull/52

pr closed time in 19 hours

PR opened larq/compute-engine

It looks like the accuracies quoted here were taken from TensorBoard which might not be 100% accurate since the eval dataset is repeated during training so the last batch of the epoch can include duplicated samples. This PR corrects this by quoting the accuracies from model.evaluate with the models downloaded from larq zoo.

See https://github.com/larq/docs/pull/52

pr created time in a day

PR opened larq/docs

It looks like the accuracies quoted here were taken from TensorBoard, which might not be 100% accurate since the eval dataset is repeated during training, so the last batch of the epoch can include duplicated samples. This PR corrects this by quoting the accuracies from `model.evaluate` with the models downloaded from larq zoo.

pr created time in a day

pull request comment larq/compute-engine

Add ImageNet evaluation tool from TFLite

Let's take a step back here.

I think the evaluation tool is super useful and it will be necessary to evaluate how our int8 quantization performs.

Whether we want to have this inside the compute-engine repo or somewhere else is a valid discussion we should have; both approaches have different tradeoffs.

To be very clear, nobody is forcing anything here. The reason why we are using PRs is so we can have those discussions and I think disagreement is necessary and incredibly useful to ultimately arrive at a better solution. However, please keep the discussion constructive so we can work on addressing those issues step by step and not get ourselves wrapped up here.

comment created time in a day

push event larq/compute-engine

commit sha da3b0ae632aab0f4293f974b9490e6c8c1e9dd8d

Add support for fusing Relu1 Support for this has been added in tf 2.2

push time in a day

pull request comment larq/compute-engine

The error in compiling XNNPACK is due to an old compiler. You need at least Android NDK r18b to build XNNPACK.

@Maratyszcza thanks for the help! Indeed we were able to fix it by upgrading the compiler in f8beda0f8968c282d7b5be50b50448bb9d99fd04 :+1:

comment created time in a day

pull request comment larq/compute-engine

Add ImageNet evaluation tool from TFLite

Generated the necessary label .txt files to use with the binary (corresponding to the ImageNet validation set). This was a massive pain and took ages because the conversion script they gave was terrible so I'm checking in those two files if that's okay?

I'd be fine with that, I've seen many other repos also have similar lists of ImageNet labels checked in 👍

The files look very similar to `imagenet2012_validation_labels.txt` and `imagenet2012_labels.txt`; are they the same? If so we could also download them from there on the fly if we don't want to check them in.

comment created time in 2 days

push event larq/compute-engine

commit sha 675ef5b88b393a1dce18c9d9ab00f8f1a9498a19

Fix pip package build

push time in 2 days

push event larq/compute-engine

commit sha 074e1b708b92f975eb62f4a563eec120b5e077bf

Add uint8 to quantized conversion test

push time in 2 days

PR opened larq/compute-engine

## What do these changes do?

This adds a MLIR status handler that will log errors to the TensorFlow Cpp logger.
I tried to correctly forward the location info to Python, but this resulted in some very unreadable error messages with duplication and weird `unknown` logs all over the place. With `propagate=true` this will just reuse the TensorFlow Cpp logger, which logs errors nicely to stderr.

## How Has This Been Tested?

I manually set `emit_custom_ops=False` in the flatbuffer export, which will throw a `RuntimeError` in Python, and the MLIR location info is correctly forwarded.

pr created time in 2 days

push event larq/compute-engine

commit sha 6ede5d3bba028ecb4003673a4db6029477574ef9

Add quantization example to converter

push time in 2 days

push event larq/compute-engine

commit sha b8151a0623aeb46fda003491509357a9442b6127

Move calibrate_and_quantize to converter and add end2end test

push time in 2 days

push event larq/compute-engine

commit sha 77b89f275af07ba09af7b78793da0156c4216118

Remove allow_float since it is not used anywhere

push time in 2 days

pull request comment larq/compute-engine

I thought the (ci-skip) in the commit message would stop CI running?

Looks like this doesn't work anymore since we moved to running CI on PRs instead of on push :(

comment created time in 3 days

issue comment larq/larq

Training from presaved weights?

We have a community chat that you can join at https://spectrum.chat/larq, for questions and general discussions about BNNs.

comment created time in 3 days

push event larq/compute-engine

commit sha 64f4a218757fa0844aedbc9e8d659b6cdff9e933

Fix header guard name

push time in 3 days

issue comment larq/larq

Training from presaved weights?

`larq` layers with `kernel_quantizer=None` and `input_quantizer=None` are equivalent to `tf.keras` layers. So if you use `larq` layers for the non-BNN as well, reloading with `model.load_weights` should work fine.

@timdebruin has more experience with pretraining, so he might have some more insights here.

comment created time in 3 days

push event larq/compute-engine

commit sha 706d62ab8c02713cd7ba7675ec494bd39874db4f

Add pass to sanitize LCE ops when reimporting flatbuffer

commit sha bc4a2b4bed6a4d0136a403a1cac70ad84b4dba06

Cleanup quantized model

commit sha 013606680d0edb865e183e3c5fab01f25da9fa82

Cleanup calibration wrapper

commit sha eb509cdfb8e1c31b9354c3f1d267da27e5627346

rename optimize --> quantization

push time in 3 days

push event larq/blog

commit sha 951bcb2485f215a911d829b4ccc630a1dcf2eebb

[ImgBot] Optimize images (#6) *Total -- 753.12kb -> 614.71kb (18.38%) /themes/larq-blog/static/images/larq-hero.png -- 475.44kb -> 364.54kb (23.33%) /static/images/lce-announcement-hero.png -- 271.56kb -> 244.08kb (10.12%) /themes/larq-blog/static/images/larq-logo.svg -- 3.30kb -> 3.27kb (0.89%) /themes/larq-blog/static/images/larq-logo-text.svg -- 2.82kb -> 2.81kb (0.07%) Signed-off-by: ImgBotApp <ImgBotHelp@gmail.com> Co-authored-by: ImgBotApp <ImgBotHelp@gmail.com>

push time in 4 days

PR merged larq/blog

## Beep boop. Your images are optimized!

Your image file size has been reduced by **18%** 🎉

<details> <summary> Details </summary>

File | Before | After | Percent reduction
---|---|---|---
/themes/larq-blog/static/images/larq-hero.png | 475.44kb | 364.54kb | 23.33%
/static/images/lce-announcement-hero.png | 271.56kb | 244.08kb | 10.12%
/themes/larq-blog/static/images/larq-logo.svg | 3.30kb | 3.27kb | 0.89%
/themes/larq-blog/static/images/larq-logo-text.svg | 2.82kb | 2.81kb | 0.07%
**Total** | 753.12kb | 614.71kb | 18.38%
</details>


pr closed time in 4 days
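The percent-reduction figures in the ImgBot table follow from simple arithmetic, reduction = (before - after) / before * 100; a quick check (file names shortened for readability):

```python
# (before_kb, after_kb) pairs from the table above.
sizes_kb = {
    "larq-hero.png": (475.44, 364.54),
    "lce-announcement-hero.png": (271.56, 244.08),
    "larq-logo.svg": (3.30, 3.27),
    "larq-logo-text.svg": (2.82, 2.81),
}

def reduction_percent(before, after):
    """Percent size reduction from before to after."""
    return (before - after) / before * 100

total_before = sum(b for b, _ in sizes_kb.values())
total_after = sum(a for _, a in sizes_kb.values())
print(round(reduction_percent(total_before, total_after), 2))  # 18.38
```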

push event larq/docs

commit sha a53383acdab8b571195e0032f1ca16d61a3715c8

Bump larq from 0.9.2 to 0.9.3 (#50) Bumps [larq](https://github.com/larq/larq) from 0.9.2 to 0.9.3. - [Release notes](https://github.com/larq/larq/releases) - [Commits](https://github.com/larq/larq/compare/v0.9.2...v0.9.3) Signed-off-by: dependabot-preview[bot] <support@dependabot.com> Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>

push time in 4 days

PR merged larq/docs

Bumps [larq](https://github.com/larq/larq) from 0.9.2 to 0.9.3.

<details> <summary>Release notes</summary>

Sourced from [larq's releases](https://github.com/larq/larq/releases), v0.9.3:

:bug: Bug Fixes

- Use static tensor shapes if possible for one-padding (#463) @lgeiger

</details>

<details> <summary>Commits</summary>

- `e838858` :arrow_up: 0.9.3 (#464)
- `da595ef` Use static tensor shapes if possible for one-padding (#463)
- See full diff in [compare view](https://github.com/larq/larq/compare/v0.9.2...v0.9.3)

</details>

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

<details> <summary>Dependabot commands and options</summary> <br />

You can trigger Dependabot actions by commenting on this PR:

- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot badge me` will comment on this PR with code to add a "Dependabot enabled" badge to your readme

Additionally, you can set the following in the `.dependabot/config.yml` file in this repo:

- Update frequency
- Out-of-range updates (receive only lockfile updates, if desired)
- Security updates (receive only security updates, if desired)

</details>

pr closed time in 4 days

push event larq/larq

commit sha e83885858fa96dec298e43b948b603b75f192b70

:arrow_up: 0.9.3 (#464)

push time in 4 days

PR merged larq/larq

pr closed time in 4 days

PR opened larq/larq

pr created time in 4 days

push event larq/larq

commit sha eecade8729a2088be0ee36223b5f3c3e18eab179

Add one padding to model summary test

push time in 4 days

pull request comment larq/larq

Use static tensor shapes if possible for one-padding

Not sure why codecov doesn't like this; the test should fully cover the changes.

comment created time in 4 days

PR opened larq/larq

In general the batch dimension is undefined. This means that `tf.pad` doesn't correctly set the shapes of the tensor. This makes model summary fail since the `TensorShape` is not correctly inferred.

This PR uses static shapes whenever the spatial dimensions are fully defined to avoid this problem.

pr created time in 4 days
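The idea behind the fix can be sketched without TensorFlow (a hedged illustration; `padded_static_shape` is a made-up helper, not the actual larq code): when the spatial dimensions are known statically, the padded shape can be computed directly instead of relying on shape inference through `tf.pad`, which loses the static `TensorShape` when the batch dimension is `None`.

```python
def padded_static_shape(shape, padding=1):
    """Compute the NHWC shape after symmetric spatial padding.

    `shape` uses None for unknown dimensions, e.g. [None, 32, 32, 64].
    Returns None when the spatial dims are unknown, signalling that the
    caller must fall back to dynamic shape inference.
    """
    batch, height, width, channels = shape
    if height is None or width is None:
        return None  # fall back to dynamic shapes
    return [batch, height + 2 * padding, width + 2 * padding, channels]

print(padded_static_shape([None, 32, 32, 64]))  # [None, 34, 34, 64]
```

The batch dimension stays `None`, but the spatial dimensions are now fully defined, which is enough for `model.summary()` to report shapes.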

push event larq/larq

commit sha add75e2d138aebf63e91f771f0f35014490a8343

Use static tensor shapes if possible for one-padding

push time in 4 days

push event larq/compute-engine

commit sha 4a8caf38989ea23a1177eed2b3646d2c7d1818c7

(ci-skip) Fix docs link in contributing guide (#299)

push time in 5 days

PR merged larq/compute-engine

pr closed time in 5 days

push event lgeiger/compute-engine

commit sha 39ef0797b871c6d63c88ec49b66ebc68641cb25d

Don't use cache if authentication fails (#301)

commit sha f2304bc6d6c79d9baee6904db7197d1ee6e2d950

(ci-skip) Fix docs link in contributing guide

push time in 5 days

pull request comment larq/compute-engine

Fix docs link in contributing guide

The test will pass when rebased onto #301

comment created time in 5 days

delete branch larq/docs

delete branch: dependabot/pip/larq-compute-engine-0.2.0

delete time in 5 days

push event larq/docs

commit sha 9c6d6b799df83a0ec40a72d3bd51b1ba6da3d52c

Bump larq-compute-engine from 0.1.2 to 0.2.0 (#49) Bumps [larq-compute-engine](https://github.com/larq/compute-engine) from 0.1.2 to 0.2.0. - [Release notes](https://github.com/larq/compute-engine/releases) - [Commits](https://github.com/larq/compute-engine/compare/v0.1.2...v0.2.0) Signed-off-by: dependabot-preview[bot] <support@dependabot.com> Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>

push time in 5 days

push event lgeiger/compute-engine

commit sha a3ac62d61313453416300b6ecbb7d27f97b961d2

(ci-skip) Fix docs link in contributing guide

push time in 5 days

PR opened larq/compute-engine

## What do these changes do?

This PR disables caching if authentication with the server fails. This allows CI to run from a fork without crashing due to authentication problems.

## How Has This Been Tested?

CI

pr created time in 5 days

push event lgeiger/compute-engine

commit sha fff2562b64b18e96a1ce00c06ff2c43208174eb6

Don't use cache if authentication fails

push time in 5 days

pull request comment larq/zoo

This doesn't super matter but I don't think this needs to be a field as opposed to simply a class attribute, because I don't think anyone will ever override the value.

I agree. This doesn't need to be a field.

comment created time in 5 days

push event lgeiger/compute-engine

commit sha 12a2ba6c13de0f45c3c4aa332901625652440b2b

update the benchmark results (#294) * update the benchmark results * Apply suggestions from code review Co-Authored-By: Koen Helwegen <koen@plumerai.com> * fix QuickNet naming convention * fix the missing parentes * added quicknet API links Co-authored-by: Koen Helwegen <koen@plumerai.com>

commit sha ce9c2d7e56a5f3f9d55259a7df781201821f812d

:arrow_up: v0.2.0 (#300)

commit sha 53cac28ba32985a085ed3540df19e376b4a53724

Consolidate MLIR and End2End tests (#298)

commit sha 8a676a537a0850b2b98624fb14a4f08f42ba8913

Fix docs link in contributing guide

commit sha 1e16046ee265da32c5f1607d949af492aa696be3

Fallback to local execution if cache authentication fails

push time in 5 days

push event larq/compute-engine

commit sha 53cac28ba32985a085ed3540df19e376b4a53724

Consolidate MLIR and End2End tests (#298)

push time in 5 days

delete branch larq/compute-engine

delete branch: consolidate-mlir-and-end2end-tests

delete time in 5 days

PR merged larq/compute-engine

## What do these changes do?

This consolidates the MLIR and End2End tests. This should reduce the amount of network traffic and general overhead on CI since we now have proper caching, so splitting the tests doesn't improve speed anymore.

pr closed time in 5 days

push event larq/compute-engine

commit sha ce9c2d7e56a5f3f9d55259a7df781201821f812d

:arrow_up: v0.2.0 (#300)

push time in 5 days

PR merged larq/compute-engine

## What do these changes do?

Releasing LCE v0.2.0.

pr closed time in 5 days

push event larq/compute-engine

commit sha 12a2ba6c13de0f45c3c4aa332901625652440b2b

update the benchmark results (#294) * update the benchmark results * Apply suggestions from code review Co-Authored-By: Koen Helwegen <koen@plumerai.com> * fix QuickNet naming convention * fix the missing parentes * added quicknet API links Co-authored-by: Koen Helwegen <koen@plumerai.com>

push time in 5 days

PR merged larq/compute-engine

## What do these changes do?

Updated the benchmark results:

- [x] add quicknet benchmark results for pixel and RPi
- [x] add BiRealNet benchmark result for pixel
- [x] update the links to Larq Zoo

pr closed time in 5 days

push event larq/docs

commit sha 1c154301bb71ddb0f7c6f47259737749b1817870

Bump larq-zoo from 1.0.b3 to 1.0b4 (#48) * Bump larq-zoo from 1.0.b3 to 1.0b4 Bumps [larq-zoo](https://github.com/plumerai/larq-zoo) from 1.0.b3 to 1.0b4. - [Release notes](https://github.com/plumerai/larq-zoo/releases) - [Commits](https://github.com/plumerai/larq-zoo/compare/v1.0.b3...v1.0.b4) Signed-off-by: dependabot-preview[bot] <support@dependabot.com> * Add QuickNetXL to API docs * Add QuickNetXL and update model accuracies Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com> Co-authored-by: Lukas Geiger <lgeiger@users.noreply.github.com>

push time in 5 days

PR merged larq/docs

Bumps [larq-zoo](https://github.com/plumerai/larq-zoo) from 1.0.b3 to 1.0b4.

<details> <summary>Release notes</summary>

Sourced from [larq-zoo's releases](https://github.com/plumerai/larq-zoo/releases), v1.0.b4:

:tada: Features

- Implement QuickNet-XL (#137) @koenhelwegen
- Implement one-padding and reduce number of SE blocks for QuickNet (#136) @koenhelwegen

:construction_worker_man: Internal Improvements

- Refactor quicknets and remove duplication (#138) @koenhelwegen

</details>

<details> <summary>Commits</summary>

- `2bead18` :arrow_up: 1.0.b4 (#139)
- `3667df7` Implement QuickNet-XL (#137)
- `68209a1` Refactor quicknets and remove duplication (#138)
- `d3e96b1` Implement one-padding and reduce number of SE blocks for QuickNet (#136)
- `27a577c` Explicitly set compute softmax in float32 (#135)
- `13fade7` Add support for TensorFlow 2.2 (#134)
- See full diff in [compare view](https://github.com/plumerai/larq-zoo/compare/v1.0.b3...v1.0.b4)

</details>

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

**Note:** This repo was added to Dependabot recently, so you'll receive a maximum of 5 PRs for your first few update runs. Once an update run creates fewer than 5 PRs we'll remove that limit.

You can always request more updates by clicking `Bump now` in your Dependabot dashboard.


pr closed time in 5 days

push event larq/docs

commit sha 8266d1fc2867e34e003fd4291b6d268eb6a67409

Add QuickNetXL and update model accuracies

push time in 5 days

push event larq/docs

commit sha 6f49207cdb708f46d3ce66ddcc681f8078fdd34c

Add QuickNetXL to API docs

push time in 5 days

push event larq/zoo

commit sha 2bead182135136ce547897241ebea0967aa8850c

:arrow_up: 1.0.b4 (#139)

push time in 5 days

PR merged larq/zoo

pr closed time in 5 days

PR opened larq/zoo

pr created time in 5 days

Pull request review comment tensorflow/tensorflow

Remove expired forward compatibility horizons

```
def __init__(self, input_dataset, num_workers, index):
    self._input_dataset = input_dataset
    self._element_spec = input_dataset.element_spec
-   if (compat.forward_compatible(2019, 11, 25) or
```

`compat.forward_compatible(2019, 11, 25)` will always evaluate to `True`, so the second condition will never be called, since a logic like `if True or auto_shard_policy_condition` will always be `True` regardless of the `auto_shard_policy_condition`.

comment created time in 5 days
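The short-circuiting behaviour the review comment describes is plain Python semantics; a minimal demonstration with a stub standing in for `compat.forward_compatible` (the stub and `auto_shard_policy_condition` below are illustrative, not the real TensorFlow code):

```python
def forward_compatible_stub(year, month, day):
    # Stands in for tf.compat.forward_compatible once the horizon date
    # has passed: it then always returns True.
    return True

evaluated = []

def auto_shard_policy_condition():
    evaluated.append(True)  # records whether short-circuiting skipped us
    return False

# `or` short-circuits: the right operand is never even evaluated.
result = forward_compatible_stub(2019, 11, 25) or auto_shard_policy_condition()
print(result, evaluated)  # True []
```

This is why removing the expired horizon check cannot change behaviour: the second condition was already dead code.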

Pull request review comment tensorflow/tensorflow

Remove expired forward compatibility horizons

```
            f=self._map_func.function,
            deterministic=deterministic_string,
            **self._flat_structure)
-   elif deterministic is not None or compat.forward_compatible(2020, 2, 20):
+   else:
```

This won't make any difference since `compat.forward_compatible(2020, 2, 20)` will always be `True`, and `if deterministic is not None or True` will therefore evaluate to `True` regardless of the value of `deterministic`.

comment created time in 5 days

Pull request review comment tensorflow/tensorflow

Remove expired forward compatibility horizons

```
def testMatMul(self):
        np.array([[8]], dtype=dtype),
        expected=np.array([[-2]], dtype=dtype))
-   with compat.forward_compatibility_horizon(2019, 10, 19):
```

The git diff here is a bit misleading. The same test is executed directly above, so just removing the `compat` line would execute the same test twice, which would be unnecessary. The same is true for the other changes in this file.

comment created time in 5 days

push event larq/zoo

commit sha 41148204fa2475c8ec7a35c83628e342d37318e6

Make sure input dimesion is int to make TF 1.14 happy

push time in 5 days

Pull request review comment larq/zoo

Refactor quicknets and remove duplication

[diff context from larq/zoo `quicknet.py`: refactored residual and transition blocks and the `build` method, where the filter count is checked via `if filters == x.get_shape().as_list()[-1]:`]

```
if filters == x.shape[-1]:
```

comment created time in 5 days

Pull request review comment larq/zoo

Refactor quicknets and remove duplication

[diff context from larq/zoo `quicknet.py`: `QuickNetFactory` with `spec = Field(lambda: ([2, 3, 4, 4], [64, 128, 256, 512], [False, False, False, False]))`]

I am not sure if we should really have this as a field, or if we should just make the total number of layers configurable. I don't have a strong opinion about this, but for me it is not really intuitive what the different config values are here.

comment created time in 5 days

Pull request review comment larq/zoo

Refactor quicknets and remove duplication

[diff context from larq/zoo `quicknet.py`: `residual_block`, which computes `infilters = x.get_shape().as_list()[-1]`]

```
infilters = x.shape[-1]
```

comment created time in 5 days

Pull request review comment larq/zoo

Refactor quicknets and remove duplication

[diff context from larq/zoo `quicknet.py`: `concat_transition_block`, which computes `infilters = x.get_shape().as_list()[-1]`]

```
infilters = x.shape[-1]
```

comment created time in 5 days

Pull request review comment larq/zoo

Refactor quicknets and remove duplication

```diff
 def LCEFirstLayer(filters: int, x: tf.Tensor) -> tf.Tensor:
     return tf.keras.layers.BatchNormalization(momentum=0.9, epsilon=1e-5)(x)

+def squeeze_and_excite(inp: tf.Tensor, filters: int, r: int = 16):
+    """Squeeze and Excite as per [Squeeze-and-Excitation Networks](https://arxiv.org/abs/1709.01507)"""
+    C = inp.get_shape().as_list()[-1]
+
+    out = utils.global_pool(inp)
+    out = tf.keras.layers.Dense(
+        C // r,
+        activation="relu",
+        kernel_initializer="he_normal",
+        use_bias=False,
+        kernel_regularizer=tf.keras.regularizers.l2(1e-5),
+    )(out)
+
+    out = tf.keras.layers.Dense(
+        filters,
+        activation="sigmoid",
+        kernel_initializer="he_normal",
+        use_bias=False,
+        kernel_regularizer=tf.keras.regularizers.l2(1e-5),
+    )(out)
+
+    return tf.reshape(out, [-1, 1, 1, filters])
+
+
 @factory
-class QuickNetFactory(ModelFactory):
-    """Quicknet - A model designed for fast inference using [Larq Compute Engine](https://github.com/larq/compute-engine)"""
+class QuickNetBaseFactory(ModelFactory):

-    num_layers: int = Field(15)
+    spec: Tuple[Sequence[int], Sequence[int], Sequence[bool]] = Field(None)
+    transition_block: MethodType = Field(None)
```

I think the `transition_block` name is unintuitive.

"Transition block" is quite widely used in the literature; do you have an alternative?

comment created time in 5 days

Pull request review comment larq/zoo

Refactor quicknets and remove duplication

```diff
 def LCEFirstLayer(filters: int, x: tf.Tensor) -> tf.Tensor:
     return tf.keras.layers.BatchNormalization(momentum=0.9, epsilon=1e-5)(x)

+def squeeze_and_excite(inp: tf.Tensor, filters: int, r: int = 16):
+    """Squeeze and Excite as per [Squeeze-and-Excitation Networks](https://arxiv.org/abs/1709.01507).
+
+    Use of S&E in BNNs was pioneered in [Training binary neural networks with
+    real-to-binary convolutions](https://openreview.net/forum?id=BJg4NgBKvH).
+    """
+    C = inp.get_shape().as_list()[-1]
+
+    out = utils.global_pool(inp)
+    out = tf.keras.layers.Dense(
+        C // r,
```

```
out = utils.global_pool(inp)
out = tf.keras.layers.Dense(
inp.shape[-1] // r,
```
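The two `Dense` calls in this hunk implement squeeze-and-excite channel gating. A framework-free numpy sketch of the same computation may help; the weight matrices `w1` and `w2` stand in for the trained `Dense` kernels and are illustrative names, not part of the PR:

```python
import numpy as np

def squeeze_and_excite(x, w1, w2):
    """Compute per-channel gates for a (batch, H, W, C) feature map.

    w1: (C, C // r) bottleneck weights, w2: (C // r, filters) expansion weights.
    """
    squeezed = x.mean(axis=(1, 2))                # squeeze: global average pool -> (batch, C)
    hidden = np.maximum(squeezed @ w1, 0.0)       # excite, step 1: Dense(C // r) + ReLU
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))  # excite, step 2: Dense(filters) + sigmoid
    return gates[:, None, None, :]                # (batch, 1, 1, filters), broadcastable over H, W

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 4, 4, 8))
g = squeeze_and_excite(x, rng.normal(size=(8, 2)), rng.normal(size=(2, 8)))
scaled = x * g  # gates in (0, 1) rescale each channel, as in `x *= y` above
```

The final broadcastable shape is what the `tf.reshape(out, [-1, 1, 1, filters])` in the diff achieves.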

comment created time in 5 days

Pull request review comment larq/zoo

Refactor quicknets and remove duplication

```diff
 from larq_zoo.core.model_factory import ModelFactory

-def LCEFirstLayer(filters: int, x: tf.Tensor) -> tf.Tensor:
+def stem_module(filters: int, x: tf.Tensor) -> tf.Tensor:
```

I would prefer `stem_module` as well. "Stem" is a pretty general name for the first few input layers, also used in https://arxiv.org/pdf/1812.01187.pdf, and since it is defined in the `quicknet` file it is clear that it relates to QuickNet and not to another model.

comment created time in 5 days

push event larq/zookeeper

commit sha 85686f8d131fff41dc55ede9307a777cf46c7688

Fix immutability check: sets are not immutable. (#132)
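The commit message notes that Python `set`s are mutable; `frozenset` is the immutable, hashable counterpart an immutability check should accept. A quick stdlib illustration:

```python
# A set can be mutated in place and is unhashable, so it cannot serve as an
# immutable default value; frozenset has neither problem.
s = {1, 2, 3}
s.add(4)                          # in-place mutation
fs = frozenset({1, 2, 3})
assert not hasattr(fs, "add")     # frozenset exposes no mutating methods
assert isinstance(hash(fs), int)  # hashable, unlike a plain set
try:
    hash({1, 2})
    raised = False
except TypeError:                 # plain sets are unhashable
    raised = True
assert raised
```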

push time in 5 days

PR merged larq/zookeeper

Prompted by discussion in larq/zoo#138.

pr closed time in 5 days

PR closed lgeiger/compute-engine

pr closed time in 5 days

PR opened lgeiger/compute-engine

pr created time in 5 days

Highly optimized inference engine for Binarized Neural Networks

https://docs.larq.dev/compute-engine

fork in 5 days

started CRPropa/CRPropa3

started time in 5 days

Pull request review comment larq/zoo

Refactor quicknets and remove duplication

```diff
 def build(self) -> tf.keras.models.Model:
     model = tf.keras.Model(inputs=self.image_input, outputs=x, name="quicknet")

+    return model
+
+
+@factory
+class QuickNetFactory(QuickNetBaseFactory):
+    """Quicknet - A model designed for fast inference using [Larq Compute Engine](https://github.com/larq/compute-engine)"""
+
+    spec = Field(
+        lambda _: ([2, 3, 4, 4], [64, 128, 256, 512], [False, False, False, False])
```

Yes, this is intended behaviour. `Field` doesn't allow mutable default values since these could lead to unexpected behaviour. This is in line with what Python dataclasses do; take a look at https://docs.python.org/3/library/dataclasses.html#mutable-default-values for an example of why mutable default values would be a bad idea.

You could get rid of the lambda here if you use a `frozenset` or a tuple of `frozenset`s, though the lambda is also OK.
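The dataclasses behaviour referenced above can be reproduced with the standard library alone; the class names here are illustrative, not zookeeper's API:

```python
from dataclasses import dataclass, field

# A mutable default is rejected outright at class-creation time:
try:
    @dataclass
    class Bad:
        spec: list = [2, 3, 4, 4]  # ValueError: mutable default not allowed
except ValueError:
    pass

@dataclass
class Good:
    spec: tuple = (2, 3, 4, 4)                              # immutable default is fine
    layers: list = field(default_factory=lambda: [2, 3])    # factory runs per instance

a, b = Good(), Good()
a.layers.append(99)
assert b.layers == [2, 3]  # instances do not share one list
```

The `default_factory` lambda plays the same role as the `lambda _: (...)` passed to `Field` in the diff: each instance gets a fresh value instead of a shared mutable one.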

comment created time in 6 days

PR opened tensorflow/tensorflow

This PR removes expired forward compatibility statements that always evaluate to `True`.
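For context, a hedged sketch of the forward-compatibility pattern such statements implement. The function name mirrors `tf.compat.forward_compatible`, but the horizon value and internals here are illustrative assumptions, not TensorFlow's actual code:

```python
from datetime import date, timedelta

# Assumed horizon: some window beyond the build date (the real value and
# mechanism live inside TensorFlow; 21 days is a placeholder).
_FORWARD_COMPATIBILITY_HORIZON = date.today() + timedelta(days=21)

def forward_compatible(year: int, month: int, day: int) -> bool:
    # True once the guarded change's date lies behind the horizon, i.e.
    # every supported consumer already understands the new behaviour.
    return _FORWARD_COMPATIBILITY_HORIZON > date(year, month, day)

# An "expired" statement: its date is long past, so the check is always True
# and the guard (plus its dead else-branch) can simply be deleted.
assert forward_compatible(2019, 8, 1)
```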

pr created time in 6 days