
MatteoArm/ML-examples
Arm Machine Learning tutorials and examples

MatteoArm/model-optimization
A suite of tools that users, both novice and advanced, can use to optimize machine learning models for deployment and execution.

MatteoArm/tensorflow
An Open Source Machine Learning Framework for Everyone

push event MatteoArm/model-optimization

Jaehong Kim
commit sha c954287e8e567f84bcbdcc51554ac869187de931
Change the compress and training function input argument format to match decompress. PiperOrigin-RevId: 338989166

Pulkit Bhuwalka
commit sha 16aa75b340073fcfa6c7b1522b0ea30b4b848455
Increase tolerance for numerical checking. Recent x86 kernel rounding changes lead to an error in the single_conv model; loosening the tolerances so tests stop failing for now. A single rounding difference appears to cause a "scale"-sized difference in the tensor values, affecting ~0.5% of values. PiperOrigin-RevId: 339273280

Jaehong Kim
commit sha a5dfe1a733c15f16c4c95d6f3dc3c48579db0896
Initial API for specifying 'compressible_weights'. For testing, added a bias_only algorithm that compresses each bias vector to a single shared weight per layer. PiperOrigin-RevId: 339374777

push time: 2 days ago

push event MatteoArm/model-optimization

Elena Zhelezina
commit sha 75fe2f116b8f3be9987c99c92e24adae232e4407
Added the kmeans_plus_plus type of centroid initialization to the test. Change-Id: Id3ab5a93fedbf697c3aac99447362d1a95beb6d5

A. Unique TensorFlower
commit sha f1beeb7ccd00262e68770d1b1a3618fbc872fcc3
Merge pull request #567 from wwwind:clustering_tidy_up_test PiperOrigin-RevId: 337365407

Benjamin Klimczak
commit sha ad4ef29f90919d01ee8492aa8a58e838911782bc
Revert "Small tidy up: Removed unused leftover variables. The variable layer.trainable_weights has all we need." This reverts commit 7b7fe538640c484388dbce4390d126d57c070f91.

Jakub Konecny
commit sha 58875832bf43277e627afb44ae927f94c8791f05
Partially addresses memory overhead in bitpacking. This change provides an alternative implementation of bitpacking for specific bit ranges. The current implementation can temporarily allocate unnecessary memory; the implementation provided here was confirmed not to. A more general solution is desired, as this one only applies to special cases. PiperOrigin-RevId: 338018083

Alan Chiao
commit sha 99b3fd63e9fe67d9de2977b7bd42bf500c05ffa0
Fix automation of PR labels for pruning. Add automation for the weight compression API. PiperOrigin-RevId: 338130936

Alan Chiao
commit sha 15eccc0c523c340b60f5ec00cb65286d61d13ed5
Weight compression API implementation for the simplest case, where the original weights and graph differ from the training/inference weights and graph but the training and inference graphs are the same. Tests that weights are converted and that pretrained weights are preserved. The TFLite prevention of constant folding currently doesn't work. PiperOrigin-RevId: 338138622

Alan Chiao
commit sha 8c875fc4730e6e0d0eca690af68f5c2074cc0e45
Increase algorithm coverage for algorithms that need to modify the weights after training (the `compress` API) or whose inference graph differs from the training graph (`decompress` vs `training`). PiperOrigin-RevId: 338149184

Utku Evci
commit sha caee359e3a8a0cad277d5a20911f4dfb14621288
Ceil the number of remaining weights to ensure there is at least one connection. Fixes #215. PiperOrigin-RevId: 338257075

A. Unique TensorFlower
commit sha 2e22094ba56c7533bfa073569a33efd88b59e8bc
Merge pull request #575 from benkli01:toupstream/revert_tidy_up PiperOrigin-RevId: 338340110

Jaehong Kim
commit sha a05a2cc7c7de726321408e29863d679a567c1fb9
Move the point where _prevent_constant_folding is applied from after the algorithm function to before it. This change makes the ReducesTFLiteModelSize tests pass. PiperOrigin-RevId: 338386984

push time: 8 days ago
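
The compression commits above describe a three-method algorithm contract: `compress` runs once on the trained weights, `training` mirrors the argument format of `decompress` during fine-tuning, and `decompress` reconstructs the inference-time weight. Below is a minimal, hypothetical sketch of that contract using the bias_only example from the first push; the class and method names are illustrative, not the released tfmot API.

```python
import tensorflow as tf


class BiasOnly:
  """Toy algorithm: compress a bias vector to a single shared scalar."""

  def compress(self, weight):
    # Runs once after training; remembers the shape so decompress can
    # rebuild a full-size tensor.
    self._shape = tf.shape(weight)
    return [tf.reduce_mean(weight, keepdims=True)]

  def training(self, *compressed):
    # Same input argument format as decompress, per the commit
    # "Change the compress, training function input argument format".
    return self.decompress(*compressed)

  def decompress(self, mean):
    # Runs in the inference graph: reconstruct a usable weight tensor.
    return tf.broadcast_to(mean, self._shape)


algo = BiasOnly()
packed = algo.compress(tf.constant([0.1, 0.2, 0.3]))
print(algo.decompress(*packed))  # [0.2, 0.2, 0.2]
```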

push event MatteoArm/model-optimization

Elena Zhelezina
commit sha e29f87270090118e02d6063577720f880cd4af14
Clustering for models with deep layers. Change-Id: I694055fda544e0d9f714c3009122b28f764b968b

Elena Zhelezina
commit sha 6c9941e5ea846fb0ee82c40fa421b3e8e5491813
Added a check before clone_model when we copy layers: if a layer is a subclassed model, we throw an exception. This addresses a reviewer's comment. Change-Id: I0bd72324fe60da7eda3d3c440c68d1797beecd6c

Elena Zhelezina
commit sha 39089cee94922c529968d78963ceb776a906363d
Replaced SubClass with subclassed. Change-Id: Ibf43764cbf4024890b68a2550ae44c6e52aade31

Elena Zhelezina
commit sha 7b7fe538640c484388dbce4390d126d57c070f91
Small tidy-up: removed unused leftover variables. The variable layer.trainable_weights has all we need. Change-Id: I271319fd9ff09f71df3178bf2459bde55e09b72c

Benjamin Klimczak
commit sha 1dafd570fbdbd906daa432174c0f7728134951e2
Refactoring of the clustering example.

A. Unique TensorFlower
commit sha d1a23df99b01002703372f762462ccc2d7f6d0a6
Merge pull request #557 from wwwind:clustering_tidy_up_trainable_weights PiperOrigin-RevId: 335478807

A. Unique TensorFlower
commit sha 6f51b3dbf0bd5992ddfb0255045fb1c296d268af
Merge pull request #553 from wwwind:clustering_deep_layers PiperOrigin-RevId: 335505128

A. Unique TensorFlower
commit sha b894380d6e3f6e8e43dbf6d4fa698b8b0694cf9b
Merge pull request #562 from benkli01:toupstream/clustering-example-refactoring PiperOrigin-RevId: 335509565

Karim Nosir
commit sha 6b32758442867b2b72b0350ebd96c0a89d6ad37d
Remove the param to disable the new converter; the new converter has already launched. PiperOrigin-RevId: 335515066

Jaehong Kim
commit sha bba3f17034deea561c1b5a23f92545dffa1ae5f8
Add keras:compat dependency to clustering callbacks. PiperOrigin-RevId: 335719012

Jaehong Kim
commit sha fa3f85545d6e07ae0daa4467f08adef5be8f9d38
Initial commit for the compression API core part. PiperOrigin-RevId: 335770102

Pulkit Bhuwalka
commit sha 066e3f7e272ab35663939ea88cf92111c3379d1b
Add sigmoid as a supported activation in QAT. Handling of sigmoid is similar to softmax: we place an FQ before the activation but not after, to prevent a large set of values turning to zeros and potentially leading to NaNs. PiperOrigin-RevId: 336213731

push time: 17 days ago
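
The last commit above explains the placement choice: a fake-quant (FQ) op goes before sigmoid but not after it. A simplified, hypothetical illustration of that placement, not the tfmot implementation:

```python
import tensorflow as tf


def quantized_sigmoid(x):
  # FQ before the activation quantizes the values entering sigmoid...
  x = tf.quantization.fake_quant_with_min_max_args(x, min=-6.0, max=6.0)
  # ...but there is no FQ after it: re-quantizing the (0, 1) outputs could
  # collapse a large set of small values to zero, potentially leading to NaNs.
  return tf.sigmoid(x)
```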

push event MatteoArm/model-optimization

Elena Zhelezina
commit sha e0e4c09e4eff209532baa6a10e1c7a1c897de955
Simplification of the clustering registry. Added a test for DepthwiseConv2D. Change-Id: Ida1598ac91e2044b3a13fd8fd0cf5a6f3279133c

Elena Zhelezina
commit sha 33a803013e386004f3c12d037c9c968caec43c91
Improved the test for non_clusterable_layer to demonstrate that a layer can be clusterable if weights are not allocated. Change-Id: If26bf41355c380dc564aff5e2309dc502abe91f2

Elena Zhelezina
commit sha e29f87270090118e02d6063577720f880cd4af14
Clustering for models with deep layers. Change-Id: I694055fda544e0d9f714c3009122b28f764b968b

Ruomei Yan
commit sha 76f2d4e6747e457b7536fbe8fafcc818f9fbd978
Add support for tf.distribute after enabling updates of cluster indices

A. Unique TensorFlower
commit sha 270435dae82f6135ace72fb1b653e8acae5b6f18
Merge pull request #539 from wwwind:clustering_registry_improvement PiperOrigin-RevId: 333336338

A. Unique TensorFlower
commit sha 0f6dd5aeb818c5f61123fc1d5642435ea0f5cd70
Merge pull request #531 from Ruomei:toupstream/distribution_cluster_indices PiperOrigin-RevId: 333337676

Elena Zhelezina
commit sha 6c9941e5ea846fb0ee82c40fa421b3e8e5491813
Added a check before clone_model when we copy layers: if a layer is a subclassed model, we throw an exception. This addresses a reviewer's comment. Change-Id: I0bd72324fe60da7eda3d3c440c68d1797beecd6c

Elena Zhelezina
commit sha 39089cee94922c529968d78963ceb776a906363d
Replaced SubClass with subclassed. Change-Id: Ibf43764cbf4024890b68a2550ae44c6e52aade31

Elena Zhelezina
commit sha 7b7fe538640c484388dbce4390d126d57c070f91
Small tidy-up: removed unused leftover variables. The variable layer.trainable_weights has all we need. Change-Id: I271319fd9ff09f71df3178bf2459bde55e09b72c

A. Unique TensorFlower
commit sha 9926e780372d6326333d60fed0ba2c2a66bda781
BUILD cleanup PiperOrigin-RevId: 334680830

Benjamin Klimczak
commit sha 1dafd570fbdbd906daa432174c0f7728134951e2
Refactoring of the clustering example.

Rick Chao
commit sha ed086c8cdd3f74e5af928681725cdef1ccdbb234
PSv2: Replace existing `tf.distribute.experimental.ParameterServerStrategy` usage with `tf.compat.v1.distribute.experimental.ParameterServerStrategy` to prepare for the upcoming TF2 ParameterServerStrategy API release. In practice, the only difference from this endpoint switch is that usage is monitored as V1 rather than V2 for those who were using `tf.distribute.experimental.ParameterServerStrategy`; it is not supported in V2 and should be tracked as V1 anyway. PiperOrigin-RevId: 334847114

Pulkit Bhuwalka
commit sha 6a084addc3f13ca11d43059001d6775aa36edb23
Disable the activation_softmax test due to numerical error. PiperOrigin-RevId: 335080348

A. Unique TensorFlower
commit sha d1a23df99b01002703372f762462ccc2d7f6d0a6
Merge pull request #557 from wwwind:clustering_tidy_up_trainable_weights PiperOrigin-RevId: 335478807

A. Unique TensorFlower
commit sha 6f51b3dbf0bd5992ddfb0255045fb1c296d268af
Merge pull request #553 from wwwind:clustering_deep_layers PiperOrigin-RevId: 335505128

A. Unique TensorFlower
commit sha b894380d6e3f6e8e43dbf6d4fa698b8b0694cf9b
Merge pull request #562 from benkli01:toupstream/clustering-example-refactoring PiperOrigin-RevId: 335509565

Karim Nosir
commit sha 6b32758442867b2b72b0350ebd96c0a89d6ad37d
Remove the param to disable the new converter; the new converter has already launched. PiperOrigin-RevId: 335515066

Matteo Martincigh
commit sha 0fdaf14183de819cf4a9fe148b7d60931118a98b
Add Anti Zero-Drift functionality for Sparsity-Aware clustering
* Implemented the zero-centroid initialization for all clustering methods
* Implemented the sparsity masks for forward and backward propagation
* Added a preserve_sparsity class member to ClusterWeights to make sparsity preservation optional for all clustering methods
* Refactored AbstractCentroidsInitialisation to include zero-centroid initialization for all init types
* Added unit tests around the new changes

Matteo Martincigh
commit sha 24a6ac0e68d8f1c2be7ef834f57984d1ea013b40
Add Anti Zero-Drift functionality for Sparsity-Aware clustering (experimental)
* Created a new experimental API for sparsity-aware clustering
* Kept the original API implementation
* Moved the new feature to a new experimental package, making the original implementation private
* Updated the unit tests accordingly
* Created init and BUILD files for the new experimental package

Matteo Martincigh
commit sha 864de3caf19a9a293557eb203adcacc832cd6e6e
Add Anti Zero-Drift functionality for Sparsity-Aware clustering (experimental)
* Fixed the signature of the public cluster_weights method

push time: 23 days ago



push event MatteoArm/model-optimization

Benjamin Klimczak
commit sha 5603678fd8dd5288e20ca11f734dd21b9541ab08
Copybara import of the project:
-- 5d75774d3cc187e443ba1d5c849f8042d2846505 by Benjamin Klimczak <benjamin.klimczak@arm.com>:
Add visualization output via TensorBoard to the clustering example. Notes:
- The current example seems to be over-parameterized, so there is not very much to see in the visualization. This should be addressed by another PR making the example leaner.
- Writing the visualization summaries batch-wise leads to a warning message like: "WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.102097). Check your callbacks." Whether this can be avoided should be investigated further.
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/model-optimization/pull/508 from benkli01:toupstream/clustering-visualization 5d75774d3cc187e443ba1d5c849f8042d2846505 PiperOrigin-RevId: 330547925

Pulkit Bhuwalka
commit sha 14b724b7604ebd85e977540e8cfee6b9e059fb82
Add missing copyright blurb. PiperOrigin-RevId: 331031480

Alan Chiao
commit sha d80eed6da03e7596e0327813ec3011888359b612
Update colabs to follow the TF docs nbfmt formatting (https://www.tensorflow.org/community/contribute/docs#notebook_formatting), and reference the formatting in the CONTRIBUTORS doc. PiperOrigin-RevId: 331220386

Pulkit Bhuwalka
commit sha 525accb4d3ed3bc6d345143fb0fa1d8faa0ce23d
Bump version to 0.5.0 for new release. PiperOrigin-RevId: 331473494

Pulkit Bhuwalka
commit sha 68e9e8444f180126a35f6b863f0619bf287ffe49
Update RELEASE.md with proper release notes. PiperOrigin-RevId: 331477367

Mark Daoust
commit sha 48c08d13629ff062ce1720d53a035bbfa0331b83
Adjust books to account for changes to the tensorflow_docs api_generator. PiperOrigin-RevId: 331859959

Ruomei Yan
commit sha 87c06ebb4e8c3250ad78222b4cb4815c3be4bfd4
Copybara import of the project:
-- a248f898581139ddb318e0dc26e89327484cf014 by Ruomei Yan <ruomei.yan@arm.com>:
Enable differentiable training and update cluster indices
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/model-optimization/pull/519 from Ruomei:toupstream/enable_differentiable_training a248f898581139ddb318e0dc26e89327484cf014 PiperOrigin-RevId: 333108062

Matteo Martincigh
commit sha 26dcec29ca792de6494ee4661eb457e997cd40dd
Add Anti Zero-Drift functionality for Sparsity-Aware clustering
* Implemented the zero-centroid initialization for all clustering methods
* Implemented the sparsity masks for forward and backward propagation
* Added a preserve_sparsity class member to ClusterWeights to make sparsity preservation optional for all clustering methods
* Refactored AbstractCentroidsInitialisation to include zero-centroid initialization for all init types
* Added unit tests around the new changes

Matteo Martincigh
commit sha da331068e61148813124249f4fdb79c0a303af91
Add Anti Zero-Drift functionality for Sparsity-Aware clustering (experimental)
* Created a new experimental API for sparsity-aware clustering
* Kept the original API implementation
* Moved the new feature to a new experimental package, making the original implementation private
* Updated the unit tests accordingly
* Created init and BUILD files for the new experimental package

Matteo Martincigh
commit sha 006f7d1987ca824c0db9556275e98c63976a83dd
Add Anti Zero-Drift functionality for Sparsity-Aware clustering (experimental)
* Fixed the signature of the public cluster_weights method

Matteo Martincigh
commit sha 99365229afa0e312b5493f89f32ed60363892a2a
Add Anti Zero-Drift functionality for Sparsity-Aware clustering (experimental)
* Set the random seed in the sparsity preservation test to a specific value to make sure that some of the weights are null

push time: a month ago


Pull request review comment: tensorflow/model-optimization

Add Anti Zero-Drift functionality for Sparsity-Aware clustering

  def testValuesRemainClusteredAfterTraining(self):
    unique_weights = set(weights_as_list)
    self.assertLessEqual(len(unique_weights), self.params["number_of_clusters"])

+  @keras_parameterized.run_all_keras_modes
+  def testSparsityIsPreservedDuringTraining(self):
+    """Verifies that training a clustered model does not destroy the sparsity of the weights."""
+    original_model = keras.Sequential([
+        layers.Dense(5, input_shape=(5,)),

Fixed in the latest revision. I've set the random seed to a value that guarantees that some of the weights are zero for that test (there are now 5 null weights at each test run).

MatteoArm

comment created: 2 months ago

push event MatteoArm/model-optimization

Matteo Martincigh
commit sha 63cb72e682ed8407e5656d508c1732946b9647a7
Add Anti Zero-Drift functionality for Sparsity-Aware clustering (experimental)
* Set the random seed in the sparsity preservation test to a specific value to make sure that some of the weights are null

push time: 2 months ago

push event MatteoArm/model-optimization

Benjamin Klimczak
commit sha 849a969fae45e2851069c9b55c48991d9bc7e8c9
Simplify the clustering example. The trainable parameters were reduced (from 3,274,698 to 20,410) to:
- reduce over-parameterization,
- make the example faster to run, and
- make it easier to visualize the clustering results.

A. Unique TensorFlower
commit sha d668624f4b01657cd3432c66e8a48ddcd0378758
Merge pull request #516 from benkli01:toupstream/simplify-cluster-example PiperOrigin-RevId: 329721031

Matteo Martincigh
commit sha 8edab76b248fa37ba71c89316e8ce49704ce0f32
Add Anti Zero-Drift functionality for Sparsity-Aware clustering
* Implemented the zero-centroid initialization for all clustering methods
* Implemented the sparsity masks for forward and backward propagation
* Added a preserve_sparsity class member to ClusterWeights to make sparsity preservation optional for all clustering methods
* Refactored AbstractCentroidsInitialisation to include zero-centroid initialization for all init types
* Added unit tests around the new changes

Matteo Martincigh
commit sha 996e89aa8079522ef5e68a2af39d9f6a363b044b
Add Anti Zero-Drift functionality for Sparsity-Aware clustering (experimental)
* Created a new experimental API for sparsity-aware clustering
* Kept the original API implementation
* Moved the new feature to a new experimental package, making the original implementation private
* Updated the unit tests accordingly
* Created init and BUILD files for the new experimental package

Matteo Martincigh
commit sha 3236ce4cda617d373e9a7e2050735eb2e10d057a
Add Anti Zero-Drift functionality for Sparsity-Aware clustering (experimental)
* Fixed the signature of the public cluster_weights method

push time: 2 months ago

Pull request review comment: tensorflow/model-optimization

Add Anti Zero-Drift functionality for Sparsity-Aware clustering

 def cluster_scope():

 def cluster_weights(to_cluster,
                     number_of_clusters,
                     cluster_centroids_init,
+                    preserve_sparsity=False,

That's actually a mistake! In fact, we don't have it in our internal master. I've now removed preserve_sparsity from the method's signature.

MatteoArm

comment created: 2 months ago


pull request comment: tensorflow/model-optimization

Add Anti Zero-Drift functionality for Sparsity-Aware clustering

I've updated the changes to make the new feature experimental, as per suggestions. The relevant code is now only a single method in experimental/cluster.py that calls a generic private method _cluster_weights (not part of the API) in the original cluster.py file. To keep things working for users not interested in sparsity-aware clustering, the API method calls the private _cluster_weights with the new 'preserve_sparsity' argument forced to 'False'. The API init files have been updated accordingly. One last remark: I kept the cluster_scope() method where it was (in cluster.py) instead of moving it to experimental/cluster.py as suggested, since I'm not quite sure that that's the proper thing to do. It makes more sense to me that cluster_scope() should be consumed from cluster.py, and not from an experimental file (but maybe I misunderstood the suggestion). A sketch of this layering follows below.

MatteoArm

comment created: 2 months ago
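
A minimal sketch, with hypothetical bodies, of the layering described in the comment above: the public cluster_weights keeps its old signature and forces preserve_sparsity off, while the experimental package is the only place the new flag is exposed.

```python
# cluster.py -- original module
def _cluster_weights(to_cluster, number_of_clusters,
                     cluster_centroids_init, preserve_sparsity, **kwargs):
  """Generic implementation; private, not part of the public API."""
  ...


def cluster_weights(to_cluster, number_of_clusters,
                    cluster_centroids_init, **kwargs):
  # Public API unchanged: sparsity preservation is forced off here.
  return _cluster_weights(to_cluster, number_of_clusters,
                          cluster_centroids_init,
                          preserve_sparsity=False, **kwargs)


# experimental/cluster.py -- new experimental package
def cluster_weights(to_cluster, number_of_clusters,
                    cluster_centroids_init, preserve_sparsity=False,
                    **kwargs):
  # Experimental entry point exposing the new flag.
  return _cluster_weights(to_cluster, number_of_clusters,
                          cluster_centroids_init,
                          preserve_sparsity=preserve_sparsity, **kwargs)
```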

push event MatteoArm/model-optimization

Matteo Martincigh
commit sha c139613f79e5436b2c5b2936d1025883a5c8be07
Add Anti Zero-Drift functionality for Sparsity-Aware clustering
* Implemented the zero-centroid initialization for all clustering methods
* Implemented the sparsity masks for forward and backward propagation
* Added a preserve_sparsity class member to ClusterWeights to make sparsity preservation optional for all clustering methods
* Refactored AbstractCentroidsInitialisation to include zero-centroid initialization for all init types
* Added unit tests around the new changes

Matteo Martincigh
commit sha 5aa8748823be801cb5de186352d83455965acc67
Add Anti Zero-Drift functionality for Sparsity-Aware clustering (experimental)
* Created a new experimental API for sparsity-aware clustering
* Kept the original API implementation
* Moved the new feature to a new experimental package, making the original implementation private
* Updated the unit tests accordingly
* Created init and BUILD files for the new experimental package

push time: 2 months ago

push event MatteoArm/model-optimization

Matteo Martincigh
commit sha 996c0a9c37268ddcc121f284644a192f25a2b1c9
Add Anti Zero-Drift functionality for Sparsity-Aware clustering (experimental)
* Created a new experimental API for sparsity-aware clustering
* Kept the original API implementation
* Moved the new feature to a new experimental package, making the original implementation private
* Updated the unit tests accordingly
* Created init and BUILD files for the new experimental package

push time: 2 months ago

push event MatteoArm/model-optimization

Elena Zhelezina
commit sha 53a3100daa7199255b2cb7543e12d0a8c527bee6
Fix for the bug: weights/bias names should be the same for the original and stripped model. Change-Id: Ib81f4598356cd96af224ed29e9391d2f26b6bc58

Elena Zhelezina
commit sha 7dc76a4b409dd4cb78c2550080957d852b40143a
Addressed reviewer's comments. Change-Id: Ie5395e5cafc3544f909294650ff1b8de4c6153f7

Elena Zhelezina
commit sha 307ebd834b7a6859af7c938a1be8dd319b8738cf
Fix to make the test more stable. Change-Id: Id9edf30a211cdace462737e2f390d6d697ec458c

Alan Chiao
commit sha 82b698d2798a04b283515e6321b12c8d9c5c7959
Update docs on contributing and create an initial maintainers doc. PiperOrigin-RevId: 328425526

A. Unique TensorFlower
commit sha 3535acd352643dd5dc00124b32f1f500765a9df7
Merge pull request #517 from wwwind:bug_inconsistency_weights_name PiperOrigin-RevId: 328666954

Matteo Martincigh
commit sha 19984a8e4dd99c11fc70bfa3c6b31f4eb9aa79d9
Add Anti Zero-Drift functionality for Sparsity-Aware clustering (experimental)
* Added a new experimental module for sparsity-aware clustering
* Implemented the zero-centroid initialization for all clustering methods
* Implemented the sparsity masks for forward and backward propagation
* Added a preserve_sparsity class member to ClusterWeights to make sparsity preservation optional for all clustering methods
* Refactored AbstractCentroidsInitialisation to include zero-centroid initialization for all init types
* Added new unit tests around the new changes

push time: 2 months ago



pull request comment: tensorflow/model-optimization

Add Anti Zero-Drift functionality for Sparsity-Aware clustering

Experimental results:

Model        Initialization  Clusters  Accuracy  Accuracy w/ SAC  Size    Size w/ SAC
MobileNetV2  Linear          32        63.16%    63.11%           1.93Mb  2.02Mb
MobileNetV2  Kmeans++        32        65.10%    65.33%           2.16Mb  2.28Mb
MobileNetV1  Linear          64        61.60%    60.98%           2.33Mb  2.33Mb
MobileNetV1  Kmeans++        64        64.51%    65.48%           2.82Mb  2.66Mb

Notes:
- SAC = Sparsity-Aware Clustering (this PR)
- The size refers to the compressed size of the model
- All conv2d layers except depthwise conv2d were clustered in all tests
- Kmeans++ centroid initialization comes from the new feature in #443

MatteoArm

comment created: 2 months ago

pull request comment: tensorflow/model-optimization

Add Anti Zero-Drift functionality for Sparsity-Aware clustering

This PR introduces a feature called Sparsity-Aware Clustering.

In the model optimization workflow, clustering typically follows pruning. The clustering operation therefore needs to ensure that the level of sparsity is not destroyed during the fine-tuning process.

To avoid sparsity degradation, we introduce two new operations:

  1. We insert sparsity-preserving nodes consisting of an element-wise multiplication of the weights with a binary sparsity mask, which ensures that zero weights stay at zero during re-training. The sparsity masks are initialized when the clustering wrapper is created (right after pruning) and kept constant during re-training. The sparsity masks are used when the weights are updated during both forward and backward passes (see the sketch below).
  2. Sparsity preservation needs to be considered during cluster initialization as well. For that, we introduce a simple and effective novel centroid initialization. First, we set one centroid to zero explicitly to preserve sparsity during clustering. The remaining centroids are then proportionally allocated to the positive and negative intervals using the selected initialization method (linear, density-based, etc.). We refer to this technique as sparsity-aware centroid initialization.

The idea is that the zero-point centroid will be assigned to the weights that have been set to zero by pruning, while the sparsity masks will keep those zero weights constant throughout re-training.
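
A minimal sketch of the mask mechanism in item 1, assuming a Keras Dense layer that has already been built and pruned; the names are illustrative and this is not the code in the PR:

```python
import tensorflow as tf


class SparsityPreservingDense(tf.keras.layers.Layer):
  """Wraps a built Dense layer and pins its pruned (zero) weights at zero."""

  def __init__(self, dense_layer):
    super().__init__()
    self.dense = dense_layer
    # The binary mask is computed once, when the wrapper is created
    # (i.e. right after pruning), and kept constant during re-training.
    self.mask = tf.cast(tf.not_equal(dense_layer.kernel, 0.0), tf.float32)

  def call(self, inputs):
    # Element-wise multiplication with the constant mask. Because the mask
    # is constant, the gradient flowing back to masked positions is zero,
    # so pruned weights stay at zero in both forward and backward passes.
    masked_kernel = self.dense.kernel * self.mask
    return tf.matmul(inputs, masked_kernel) + self.dense.bias
```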

Implementation details:

A new boolean parameter called "preserve_sparsity" has been added to the clustering API to enable/disable sparsity preservation. If sparsity preservation is desired, simply setting it to 'True' enables both the new zero-centroid initialization and the application of the new sparsity masks during forward and backward propagation.

When the cluster wrapper is applied to a layer, a new sanity check is performed: if sparsity preservation is enabled, the minimum allowed number of clusters is 2 (instead of 1), to allow for at least one non-zero centroid in addition to the newly reserved zero centroid.

If sparsity preservation is enabled, a different approach is followed to initialize the centroids: one centroid is always set to zero, and the remaining clusters are proportionally allocated among negative and positive values, depending on the initial weight distribution of the layer. For example, if the selected number of clusters is 32, the chosen initialization strategy is 'linear', and 40% of the weights are negative while the rest are positive, then 12 centroids will be linearly distributed between the minimum weight value and zero (excluded), followed by the zero centroid, followed by the remaining 19 positive centroids linearly distributed between zero (excluded) and the maximum weight value. After the centroids have been initialized and the weights have been clustered, the sparsity masks are generated based on the distribution of the null clustered weights. The sparsity masks are then stored in the wrapper to be used later during the re-training process. A worked sketch of this allocation follows below.
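
A worked sketch, in NumPy, of the allocation described above for the linear strategy; a simplified illustration under stated assumptions, not the PR's implementation:

```python
import numpy as np


def sparsity_aware_linear_centroids(weights, number_of_clusters):
  # Sanity check from the PR: at least 2 clusters, so there is at least
  # one non-zero centroid besides the reserved zero centroid.
  if number_of_clusters < 2:
    raise ValueError("preserve_sparsity requires at least 2 clusters.")
  w = weights[weights != 0]
  # One centroid is reserved for zero; the rest are split proportionally
  # between the negative and positive weight ranges.
  remaining = number_of_clusters - 1
  num_neg = int(round(remaining * np.mean(w < 0)))
  num_pos = remaining - num_neg
  # Linearly spaced centroids; zero itself is excluded from both ranges.
  neg = np.linspace(w.min(), 0.0, num_neg, endpoint=False)
  pos = np.linspace(w.max(), 0.0, num_pos, endpoint=False)[::-1]
  return np.concatenate([neg, [0.0], pos])


# With 32 clusters and ~40% negative weights this yields 12 negative
# centroids, the zero centroid, and 19 positive centroids, matching the
# example above.
```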

MatteoArm

comment created: 2 months ago

pull request comment: tensorflow/model-optimization

Add Anti Zero-Drift functionality for Sparsity-Aware clustering

@googlebot I signed it!

MatteoArm

comment created: 2 months ago

pull request comment: tensorflow/model-optimization

Add Anti Zero-Drift functionality for Sparsity-Aware clustering

Regarding the CLA, it seemed like there was an issue on our side. Now sorted.

MatteoArm

comment created: 2 months ago

pull request comment: tensorflow/model-optimization

Add Anti Zero-Drift functionality for Sparsity-Aware clustering

@googlebot I signed it!

MatteoArm

comment created: 2 months ago

PR opened: tensorflow/model-optimization

Add Anti Zero-Drift functionality for Sparsity-Aware clustering
  • Implemented the zero-centroid initialization for all clustering methods
  • Implemented the sparsity masks for forward and backward propagation
  • Added a preserve_sparsity class member to ClusterWeights to make sparsity preservation optional for all clustering methods
  • Refactored AbstractCentroidsInitialisation to include zero-centroid initialization for all init types
  • Added unit tests around the new changes

+515 -63

0 comments

7 changed files

PR created: 2 months ago

create branch MatteoArm/model-optimization

branch: feature/sparsity_aware_clustering

branch created: 2 months ago

push event MatteoArm/model-optimization

Alan Chiao
commit sha 8169858ec9cf8bd04a53674ccb9ab7fee792bfa1
Create RELEASE.md file for release notes. People creating PRs will be responsible for updating the release notes themselves for major features and bug fixes. Whatever is in the release notes at the time of a release will go out, with some possible editing. PiperOrigin-RevId: 321267749

Pulkit Bhuwalka
commit sha 8465279d52056baf244d7344a1da140ea639a4c9
Use tf_inspect for quantize_wrapper for py2. TFL Micro has a model which uses py2 and fails, so adding this in for support. PiperOrigin-RevId: 321405008

Mohamed Nour Abouelseoud
commit sha 1e7f263decb4505bb8dd4d8deded55782f486bd8
Reflect the clustered layer's name in the new layer's name

Aron Virginas-Tar
commit sha dcb5f921ded02afb5e404409dcc98d02d55f8ae3
Add clustering module to the API docs generator

Mohamed Nour Abouelseoud
commit sha 982abc8919c61918e1cfabe348cadf12db73fef3
Fixed a minor bug in the mnist example

Aron Virginas-Tar
commit sha 362242c49cc6a49339cedc49c338bdec7a751ed3
Add docstring for cluster_config.CentroidInitialization

A. Unique TensorFlower
commit sha 3472c1aac255622d9209dbd7e6a1114a9c6eff6c
Merge pull request #465 from MohamedNourArm:cluster_layer_name PiperOrigin-RevId: 322165432

A. Unique TensorFlower
commit sha 1bf87edf8f1abf4aea8963ce6ef66486a1f28829
Merge pull request #466 from MohamedNourArm:master PiperOrigin-RevId: 322165616

A. Unique TensorFlower
commit sha 20b20c15a92407d3bd43852e3478cf156001f959
Merge pull request #467 from arovir01:toupstream/centroid_init_doc PiperOrigin-RevId: 322167185

Alan Chiao
commit sha 0cf788277dc7d96c525af9d3a1b6761310f78718
Start enabling contributions for individual issues. PiperOrigin-RevId: 322172830

A. Unique TensorFlower
commit sha 15e730ab4788f9775c77e3576078d844bc4afcc3
Merge pull request #421 from arovir01:toupstream/clustering_docgen PiperOrigin-RevId: 322239236

Saoirse Stewart
commit sha efe9af4de8e333bd9c7b9a4c9167af09bc3dd61b
Added kmeans++ initialization to the clustering API

Ruomei Yan
commit sha e616f48bd97b9a06e099969fe0c844072f6b8f96
Create Jupyter notebooks for clustering

A. Unique TensorFlower
commit sha 9b12529b6d99bdbac553204e15a2c3d3c8144424
Merge pull request #330 from Ruomei:toupstream/clustering_jupyter_notebook PiperOrigin-RevId: 322805917

Alan Chiao
commit sha 34dfaddf23c2824532f74ab1eee36a60dcc7726e
Set up continuous-integration build scripts for open source. Verified by running the TFMOT unit tests with ci/kokoro/build.sh against TF 2.0.X. PiperOrigin-RevId: 322877301

Benjamin Klimczak
commit sha 614c9b19b6b2b1fb50762874bc5ed9462db709a5
Release notes for the clustering release

A. Unique TensorFlower
commit sha fe4df5cbcf3118486e55ff3571e2ad30a7b00cb3
Merge pull request #474 from benkli01:toupstream/clustering-release-notes PiperOrigin-RevId: 323416347

Alan Chiao
commit sha 020a0b2bb46d4232067967b5ed17bfc95dcfa7e2
Add a labeler workflow to automatically attach which technique type a PR affects. PiperOrigin-RevId: 323427385

A. Unique TensorFlower
commit sha adcbe15fdcc0befbb70bbb5c3846e870d810ad22
Internal cleanup PiperOrigin-RevId: 323439014

Alan Chiao
commit sha 22de15ddc2a1e0fde84b8ec294530b74ed553528
Bump version to 0.4.0 for new release. PiperOrigin-RevId: 323448628

push time: 2 months ago
