Abhipray Sahoo (Abhipray), NeoSensory, Houston

Abhipray/audio-codec 2

Stanford Music 422 final project. Implements block switching, spectral band replication, and gain-shape quantization.

cnwalker/savesun 1

Monitoring solar energy usage for the NASA HI-SEAS exploration team

Abhipray/Adafruit_BluefruitLE_nRF51 0

Arduino library for nRF51822-based Adafruit Bluefruit LE modules

Abhipray/CMSIS_5 0

CMSIS Version 5 Development Repository

Abhipray/gestures4vr 0

Gestures for smartphone VR apps using the phone camera

delete branch Abhipray/embedded-sigproc

delete branch: feature/masters

deleted 20 days ago

push event Abhipray/embedded-sigproc

Abhipray Sahoo

commit sha e98647083d8a2f2366f4d3300d57d75fb374d887

ee masters post initial draft

view details

Abhipray Sahoo

commit sha 2774f3b391bad8543fa4a4f7f0401046249fef29

master's post

view details

Abhipray Sahoo

commit sha eb54747dc7abbea6bf1d9a9c6b94e59fd7ba4514

Merge pull request #5 from Abhipray/feature/masters Feature/masters

view details

pushed 20 days ago

PR opened Abhipray/embedded-sigproc

Feature/masters
+389 -5

0 comments

23 changed files

PR created 20 days ago

push event Abhipray/embedded-sigproc

Abhipray Sahoo

commit sha 2774f3b391bad8543fa4a4f7f0401046249fef29

master's post

view details

pushed 20 days ago

push event neosensory/ext_tensorflow

Abhipray Sahoo

commit sha 2c72cee588c95f5b41b0f4c7b80d23aa3e217bdd

remove fully connected check for bias

view details

pushed 23 days ago

create branch Abhipray/embedded-sigproc

branch: feature/masters

created branch 24 days ago

public event

create branch neosensory/tflite_micro_compiler

branch: neo/add_ops

created branch a month ago

push event neosensory/ext_tensorflow

Abhipray Sahoo

commit sha f70363874fe072538f607f9cf3910683c0248fdd

Merge branch 'master' of github.com:neosensory/ext_tensorflow into upstream_master

# Conflicts:
#	tensorflow/lite/micro/kernels/cmsis-nn/mul.cc
#	tensorflow/lite/micro/kernels/micro_ops.h
#	tensorflow/lite/micro/kernels/mul.cc

view details

pushed a month ago

push event neosensory/ext_tensorflow

Yuanzhong Xu

commit sha 9396e98574418ae24165d66def572466902d7f9b

[XLA:SPMD] A couple of improvements to avoid resharding

1. Lower broadcast priority during propagation. Broadcast(scalar) often gets CSE'ed, and their users could have different sharding.
2. Track the dimensions from broadcast in SPMD builder, so that we can skip reshard with collective permutes.

PiperOrigin-RevId: 331379273
Change-Id: If84400716ef3bb6b73d80f27e3112e63f634bb9f

view details

A. Unique TensorFlower

commit sha f1eaa0a60f4c7bfb4480314eb4e283e2fb09671b

compat: Update forward compatibility horizon to 2020-09-13 PiperOrigin-RevId: 331393344 Change-Id: Iee0126804a303e1ff07f69bc2a200df0a2444777

view details

A. Unique TensorFlower

commit sha cd8a2bef151b8d870872f8ccb8d798dffd0cfc5a

Update GraphDef version to 523. PiperOrigin-RevId: 331393345 Change-Id: I6d64da434dd7ad13a313fcbd8a0d6b0eecea6c11

view details

yuanbopeng

commit sha c9a8a0385bb8ea121012a605cb5d23eb462db34c

Adding HDFS connection cache on tensorflow side can improve performance by 20+ times #43187

view details

Vo Van Nghia

commit sha 5721ce12d47be87ccd2878fb2067423cf6eeb8e3

Use gcs partial response and absl strcat

view details

A. Unique TensorFlower

commit sha 1e3e627444e4f09649f64ab49a27d0b39d49cbd6

Integrate LLVM at llvm/llvm-project@f086e85eea94 Updates LLVM usage to match [f086e85eea94](https://github.com/llvm/llvm-project/commit/f086e85eea94) PiperOrigin-RevId: 331422936 Change-Id: I5d0657393e3c4ef2009a338e73e6d8b8e6cc924d

view details

Peter Hawkins

commit sha 1796238b6abe54e15856fd1b1d84e47e277ef7a1

[XLA] In the QR decomposition implementation, change the Compact WY transformation to use a matrix multiplication instead of a sequence of matrix-vector multiplications in a loop. Happily this seems to be a case where we don't need to avoid materializing the matrix-matrix product: the output matrix is only [n, n] where n is the block size. PiperOrigin-RevId: 331429315 Change-Id: I804296fe7b404cea231759bd67671b53c47420db

view details

amturati

commit sha e51c27f54b4ac4f8ad114483bc00afcb481080c6

deleted entry in BUILD

view details

A. Unique TensorFlower

commit sha 987d8175716502637a9f8cd8b224e75c91348691

Integrate LLVM at llvm/llvm-project@c0bcd11068fc Updates LLVM usage to match [c0bcd11068fc](https://github.com/llvm/llvm-project/commit/c0bcd11068fc) PiperOrigin-RevId: 331438229 Change-Id: I2e494ff752090c31667f12a41e5312ea3ea2c907

view details

Yuanzhong Xu

commit sha 0398fbc19eb1f4923ec3b3f4f853107448aa5617

[XLA] A few fixes for sharding propagation

1. The previous workset algorithm is wrong, and could early stop before fix point. Changed to a more straightforward implementation.
2. Make forward broadcast propagation lower priority. This is to prefer resharding before broadcast.
3. If a new sharding isn't compatible with existing tiled sharding, do not change merely because the new sharding has more tiles. One scenario: a subgraph is 32-way partitioned well at low-aggressiveness propagation, but a high-aggressiveness propagation chooses 64-way for a broadcast operand; if we propagate the 64-way down, the original subgraph will likely have resharding.

PiperOrigin-RevId: 331443488
Change-Id: I6c11f7d62f44a6c2240dd18c4b33421a26422bcc

view details

Shlomi Regev

commit sha ffa116ef3f471d61dbe8c1ba571f1f2f22f7a656

Build CMSIS as part of the CortexM test code, instead of the library. Avoid symbol conflicts when the application links against CMSIS. PiperOrigin-RevId: 331446242 Change-Id: I79adc01abf2ef201e7ec14b1a89e519fc58c1e59

view details

Mao Yunfei

commit sha 611cc3652383a9fbbec277acf361f86e50189c0c

Merge branch 'master' into yunfeimao/leaky_relu_fusion_mkl

view details

Taehee Jeong

commit sha 3f947dd2c8cac5ab8eaf8ee5e0aae3076aa1e531

Change `allowsPrecisionLoss` to `isPrecisionLossAllowed` to make style consistent.

https://swift.org/documentation/api-design-guidelines/#strive-for-fluent-usage
"Uses of Boolean methods and properties should read as assertions about the receiver when the use is nonmutating, e.g. x.isEmpty, line1.intersects(line2)."

PiperOrigin-RevId: 331467999
Change-Id: I6eb753cb2078fba25eb77f3eeb2d7dcf41aed64f

view details

Eugene Brevdo

commit sha 2cb2f064794b6c24158ed09fe2245c5c95e93248

Disable new keras Recurrent code paths. They break deepcopy of LSTM layer. PiperOrigin-RevId: 331469732 Change-Id: Icb6046012b5e4ad5984a74a98e3df5f2eebca49d

view details

Yuanzhong Xu

commit sha 2f3d1f0a4e7104f6aec461e1788c4364ce218540

[XLA] Fix a live-range heuristic in all-gather CSE earlier AG's live range only needs to cover the current AG, not its last user, to be beneficial. PiperOrigin-RevId: 331481736 Change-Id: Id710ebe4b5c0d4b539f25ca108a8649eae2e97a5

view details

yuanbopeng

commit sha 8a773b71258e627ee51bee30b017c27d790a1b3f

Fix a bug that the file copied by TF from HDFS to local may be wrong, when HDFS file is being overwritten #42597

view details

Jacques Pienaar

commit sha ed1e38edcab9bd8d5e4c70221dbfe7dec8692b7b

Use location instead of unknown loc Follow up on TODO as placeholder has been added to rewriter. PiperOrigin-RevId: 331488583 Change-Id: Ia645a449d5d7200f7f3bb819a55994c0a919b31c

view details

A. Unique TensorFlower

commit sha c548d5a9ada064712cc80d8e220810cffa55bb8a

Integrate LLVM at llvm/llvm-project@cb3e1dd6c31e Updates LLVM usage to match [cb3e1dd6c31e](https://github.com/llvm/llvm-project/commit/cb3e1dd6c31e) PiperOrigin-RevId: 331490764 Change-Id: I6ca2b7b4928f7af649c77ba5ad4725b90ec3f2b2

view details

Xunkai Zhang

commit sha d036d6fee33e3e615562d36eb413737f0248798f

Add Java Interpreter#setCancellationFlag API to cancel inference invocation.

Usage:
- Interpreter.Options.setCancellable(true) to turn on the feature
- Interpreter.setCancelled(true) to cancel (and prevent) any execution
- Interpreter.setCancelled(false) to resume (and enable) any execution

PiperOrigin-RevId: 331494300
Change-Id: I017e647bcf720b21cf4b82f257c5852a74e88f12

view details

Stephan Herhut

commit sha 512e6c767bf76ba67be2b92af13343d8cf3d1680

Add two more passes to the kernel generator pipeline to support more complex inputs. When generating just a cubin, we can now also handle some shape computations. PiperOrigin-RevId: 331495533 Change-Id: I90fd7f392507c89192e92b59d9770e7296d15638

view details

pushed a month ago

issue comment tensorflow/tensorflow

[TFLu] int8 ops slower than f32

@yair-ehrenwald I am not sure all of it can be attributed to the presence of an FPU. For instance, the MUL CMSIS-NN implementation is slower than the reference op even though the CMSIS-NN kernel uses SIMD instructions.

Abhipray

comment created a month ago

started chrisbartley/aq-buzz

started a month ago

pull request comment tensorflow/tensorflow

Add CMSIS-NN SVDF kernel.

Hi @jenselofsson, I am curious how you use this kernel. Is there a Python implementation of SVDF that converts to this TF Lite op?

jenselofsson

comment created a month ago

push event neosensory/ext_tensorflow

Yong Tang

commit sha b8137ab1037c6fca293ed74995e9e7902b913f4d

Fix messy rendering of docs for tf.image.extract_patches While working on using tf.image.extract_patches, noticed that the docs rendering of tf.image.extract_patches was messed. The issue was the missing backtick ("`") at the first line of args section. This PR fixes the docs rendering issue. Signed-off-by: Yong Tang <yong.tang.github@outlook.com>

view details

jgehw

commit sha 8445db1f630fc94ee6589a98160e7087743dbf87

make debug build on Windows MSVC compile

- fix missing-return-statement errors
- add partial dummy template specializations (which are not needed but without optimization the compiler doesn't find out about it)

view details

jgehw

commit sha 786c85a58bb8c142911cbbe309c7f14b9e86200c

replace unreachable returns by aborts

functionally identical but nicer, as discussed in PR

view details

jgehw

commit sha 8cc5a3a9dea9173d5a3eec089de2ee2e8b0791d7

rephrase wording

view details

Vignesh Kothapalli

commit sha 52204b8e32e3f2984f75131a038b5fa59259923a

fix the pylint hang issue

view details

Alexey Ivanov

commit sha 78ffc027e97bae384a16c8b7c14b77a98a075985

[SE] Include absl/memory/memory.h

view details

anencore94

commit sha 83c9227a4b8faa502d34e70e1e6752123f2adb93

Remove redundant lines to make dockerfiles

- remove exact identical lines in spec.yml which used to make dockerfiles

Signed-off-by: anencore94 <anencore94@kaist.ac.kr>

view details

Vignesh Kothapalli

commit sha 19d13e8b412faa4d940f0213f058e2d79f8411c6

bash syntax change

view details

Jens Elofsson

commit sha e0800968d9971de780b7f9837a2e02fb2a89b087

Specify the optimization level in a variable. Make it possible to specify the optimization level from the command line and make it the same regardless of the BUILD_TYPE.

view details

sshiddib

commit sha b5404a7f9b7b71f04590251b41f7e1f222b888be

Fix UT failures due to explicit padding for MKL-DNN

view details

Sharada Shiddibhavi

commit sha 5c56676cef0292f62b3acb2e5bff737ec07c623c

Update nn_ops.cc

view details

Eugene Burmako

commit sha bee0d8ead2c4445546c089fd59c5c9ff98bbae0a

Add support for legalizing mhlo.slice to lmhlo.slice PiperOrigin-RevId: 330153599 Change-Id: I8b62f003b20742ab11fce19f50e38039be898606

view details

A. Unique TensorFlower

commit sha 3630a71ba33817d4a6474bef47f0bfdbe90c6444

compat: Update forward compatibility horizon to 2020-09-05 PiperOrigin-RevId: 330170758 Change-Id: I142d71689813e90ffe5491fdce1f3bf5e7bd9ef5

view details

A. Unique TensorFlower

commit sha 5bb9e81e1ac7d3944986504bb37a0b1c6b27576c

Update GraphDef version to 515. PiperOrigin-RevId: 330170759 Change-Id: Ice7f7edfce48539d7c60f83b9200d97381e33d7c

view details

David Majnemer

commit sha 38c3134bb1e0150a4376127c46a7eb482b0c66e3

[XLA] Don't use the unblocked expansion for 1 element cholesky sub-blocks PiperOrigin-RevId: 330216235 Change-Id: Iddd545ea047e990e1491758bddb1bdf9f03338a7

view details

TensorFlower Gardener

commit sha de8ecf4592fb3b0bdb0565529981936c5a35fcae

Merge pull request #42942 from SaveTheRbtz:absl_include PiperOrigin-RevId: 330233127 Change-Id: Ib4723d58c7d90f4d4fc68877663980c920b9c03b

view details

A. Unique TensorFlower

commit sha 2d16bd8612c6b2201dd6775838a2f96a24786846

compat: Update forward compatibility horizon to 2020-09-06 PiperOrigin-RevId: 330251052 Change-Id: Ie887edba7af0ccc2ce6c59a9cf783632607b4b79

view details

A. Unique TensorFlower

commit sha bb49eafc080caf205d5ba4478d9c93552ce46d57

Update GraphDef version to 516. PiperOrigin-RevId: 330251072 Change-Id: I13b6ba568be717243aa92e2439a13191dcf115c5

view details

A. Unique TensorFlower

commit sha e9f0135b7ee366d9751e66532f011d4f8e5a1351

[XLA] Convert Abs(a)*Abs(a) to a*a and add an option to allow for numerically unsafe algebraic simplifications PiperOrigin-RevId: 330288395 Change-Id: Iece65fa1cc28a9eb5bcebc2faf8d34235c47e56a

view details

Tian Lin

commit sha 31dfc1deeda674a7be98f5357cc1d993ceb925d4

Add instructions to install stable or nightly versions of tflite-model-maker. PiperOrigin-RevId: 330312338 Change-Id: Id40fe343dc21e71efa9dcbffaa18c3942a36c8f0

view details

pushed 2 months ago

issue opened tensorflow/tensorflow

int8 ops slower than f32

@tensorflow/micro

System information

  • Host OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
  • TensorFlow installed from (source or binary): pip install tf-nightly
  • Tensorflow version (commit SHA if source): 2.4.0-dev20200908
  • Target platform (e.g. Arm Mbed OS, Arduino Nano 33 etc.): Cortex M4f

Describe the problem
I compared the time spent by MicroInterpreter::Invoke() on different ops in the same model, with int8 quantization and without. I also tried the CMSIS-NN kernels for some of the ops. The problem is that, aside from the fully connected op, every op is the same speed or slower with the int8 kernels.

Here is a table showing the average time in ticks spent by each op's Eval(). The first column shows the int8-quantized model using CMSIS-NN kernels for mul, add, and fully connected; the second column uses the reference kernels; the third is floating point.

op              q7 cmsis   q7 ref   f32
fullyconnected      5990     8611
tanh               15114    15122
add                 2686     2887
mul                 2202     1834
sub                 3301     3299
split_v              898      915
split                794      817
reshape              441      443
The tanh kernel performs the worst, at about 13x slower than the floating-point equivalent. Is this expected or known behavior?
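As a quick sanity check, the relative performance of the two int8 kernel sets can be computed directly from the ticks above (a minimal Python sketch using only the two columns that survived in this table; the f32 ticks are not reproduced here):

```python
# Average ticks per Eval(), copied from the table above:
# (CMSIS-NN int8 kernel, reference int8 kernel) per op.
ticks = {
    "fullyconnected": (5990, 8611),
    "tanh": (15114, 15122),
    "add": (2686, 2887),
    "mul": (2202, 1834),
    "sub": (3301, 3299),
    "split_v": (898, 915),
    "split": (794, 817),
    "reshape": (441, 443),
}

for op, (cmsis, ref) in ticks.items():
    # ratio > 1 means the CMSIS-NN kernel is faster than the reference kernel
    ratio = ref / cmsis
    print(f"{op:15s} cmsis={cmsis:6d} ref={ref:6d} ref/cmsis={ratio:.2f}")
```

Only fully connected shows a clear CMSIS-NN win (about 1.44x); mul comes out slower with CMSIS-NN (ratio about 0.83), which matches the earlier comment in this feed that the MUL CMSIS-NN implementation is slower than the reference op.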

Please provide the exact sequence of commands/steps when you ran into the problem
I have attached the models that I used for profiling: profiling_models.zip

created 2 months ago

push event neosensory/ext_tensorflow

Dmitry Zakharov

commit sha a729898a0904611f970c990b81f8d36b7f79fae9

Fix of problem with copying of just downloaded TCF

view details

Dmitry Zakharov

commit sha a3db6fe20edfe76e6cc4425261608eea4489fc81

- default heap size was increased to cover all tests
- minor fix in examples readmes

view details

Dmitry Zakharov

commit sha a1045ae332fb5b16d0b15aa28b842cc67dc568c2

ARC EMSDP: Return uboot.env (was strangely deleted during merge)

view details

Dmitry Zakharov

commit sha dd37e351117357bdd9688183285d574a710e2400

Fix height slicing in fit branch for odd input

view details

Dmitry Zakharov

commit sha 0d5787ca06a06032a80d8ae99caeca14e37a054c

replace MLI switchers with specializations in person_detection_generation

view details

Dmitry Zakharov

commit sha 6971de110a15f1a8ed6b328c227f7c9924b87c2f

ARC EMSDP - MicroSpeech int8 integration - Patch sources to use specialization during project generation

view details

Dmitry Zakharov

commit sha 0df881cdf814d72716250375dd082636e18eed38

ARC EMSDP: update common LCF to be not limit codesize. Update MLI link to more recent one

view details

Dmitry Zakharov

commit sha 5a3f58ef75da826fc3bc78af810510e8b7b5f27a

ARC MLI: scales to be present in 32 bits now

view details

Dmitry Zakharov

commit sha 5b8cb124e3957bcd139912c00116c250a155a130

Update arc_mli_iss_fix branch with the latest state from upstream master

view details

Dmitry Zakharov

commit sha 8f91b0b504fc578159a1796b37ac208294c7e785

Use embarc MLI RC3 for arc targets

view details

Dmitry Zakharov

commit sha 0577515fc3f60049aafc0cdf3b63f67f6d24044c

Update branch with the latest state from upstream master

view details

daria

commit sha 1d7577e984f0d5ec0c8262600136daea3514581a

Update according to tflm changes

view details

daria

commit sha 620b38ab988e03d77ec764e8a8a7a6c6594236df

updates for XY functions to align with tflm changes

view details

daria

commit sha cb69b88d5a2206b293f5fd226a1ad3918c24e55f

added uint32 to int cast, mli_is_applicable check

view details

daria

commit sha a0a042d66cc0f89942ececb1c9278c9a1848b4ef

Updated copyrights

view details

daria

commit sha daeb6be82e2bf175017fdf178bcf36a93b63e238

Avoiding using mli_is_aplicable global var

view details

Ilya Persky

commit sha 3bda04ddc16efa8dad7fce0ac69a3099a91967b3

Add full implementation of PR #37400

view details

Ilya Persky

commit sha cc9faf537f32ff250929821f8a191b840431e6f8

Fixing issues in PR #37400

view details

Ilya Persky

commit sha 21507a97c18c72406511ec0cbaf4b7ee46b2ddc0

Fixing old "spec" term to new "signature"

view details

Ilya Persky

commit sha ddd59fa3c0c00cb5c2fe97eb66f4731f47b9d264

Update goldens for tf.data.Dataset.from_generator

view details

pushed 2 months ago

push event neosensory/ext_tensorflow

Abhipray Sahoo

commit sha e82a7c8faf8dd1a93095697027e6fb364f13184c

fixes from different PRs

view details

pushed 2 months ago

push event neosensory/ext_tensorflow

bzhao

commit sha 57bd6e0acf5549102c620840a9dc4fd66d79e6f8

Update the right logic like previous

The Arm judgment branch is different with previous, and cause the build fail on linux_aarch64.

view details

danielyou0230

commit sha 44aaa40a078a34975634f71809dd55aea2774006

TFLM: added optimized int8/uint8 DepthwiseConv2D for vexriscv

view details

danielyou0230

commit sha e97ecdd36bc8c2e74046970a3fd07a3a9a718f1a

TFLM: fixed C++ styling in vexriscv optimized kernel (DepthwiseConv2d)

view details

bzhao

commit sha bcd2ab52cd19d5d76e7fb56d3cdc02ff5c9d1428

Update PR with reviews

view details

Måns Nilsson

commit sha 246319e5866754aa4144d777a7f0320d829eb499

TFlu: Update third party downloads with Ethos-U Update Ethos-U driver download version.

view details

danielyou0230

commit sha 5639edf23e333bbc56e4e8e6ca1777af38361364

TFLM: Added design doc for optimized DepthwiseConv2D (vexriscv)

view details

danielyou0230

commit sha 8b1a8acd6ac3e4eb44c78839bec0651875c54cdd

TFLM: updated design doc for optimized DepthwiseConv2D (vexriscv)

view details

Advait Jain

commit sha 5deb0e96783156a225ca187501e26f0bcf486e8f

Make effective scale in quantize kernel consistent b/w Lite and Micro. There is no reason for the Micro implementation to do a division in single precision. Fixes #42648

view details

bzhao

commit sha 13049d927a1822972cf65edf2472c8db686d7749

remove the elif branch which already covered by else

view details

Måns Nilsson

commit sha ad75f571a97d3abd73b58d67bb59e086dff910c5

TFLu: update cmsis kernels

- Cast to double in mul.cc.
- Add missing initializers in Register_MUL().
- Fix compile issue in conv.cc reference (fall back) case.

view details

Advait Jain

commit sha 71e00fff8251d0f34e4fc5fe2421b77f74afaa4d

Merge remote-tracking branch 'upstream/master' into fix-42648

view details

danielyou0230

commit sha 6d48337d3f5090bcd630c275f4a8584a3d5b6f72

TFLite (tools) added reverse_xxd_dump_from_cc.py

view details

Jaesung Chung

commit sha d06067f2e9dec1be082e96a4749fe8d4a7e44d46

Add StringLower op to Flex delegate PiperOrigin-RevId: 329420736 Change-Id: I59dc70405462fb23782ca71e64fd2624edeb70cc

view details

Michael Gester

commit sha fb0cbfb68441287a7ff73329d41278156846d6cc

Add reference type support for tf.Add Also changed type constraint names. PiperOrigin-RevId: 329427710 Change-Id: Ib8bed81489e5d6fc64e3a6f5635b07c6ec40d7a0

view details

Amit Patankar

commit sha 0bf9cf704f96a185dbc1e2e6d2877ab838e03dd4

Create BUILD files and corresponding targets for `tensorflow/core/profiler/internal/testdata`. PiperOrigin-RevId: 329429073 Change-Id: I2655b0b0132e1a4375e8fd80fb53f10deefb1539

view details

Mehdi Amini

commit sha df56f011bd79f8182960a29e73ac9b4c0c4db700

Integrate LLVM at llvm/llvm-project@1d3d9b9cd808 Updates LLVM usage to match [1d3d9b9cd808](https://github.com/llvm/llvm-project/commit/1d3d9b9cd808) PiperOrigin-RevId: 329432069 Change-Id: I4a783860e534e18f769a5312ec7d9305ab1e9587

view details

Pankaj Kanwar

commit sha b8c0f25a4dcfba9da6643eeb0e2003ce23af822d

fix typo in script PiperOrigin-RevId: 329442701 Change-Id: Ia2032ec6603e6d08b7b589ef5bdef19d0afb20df

view details

Blake Hechtman

commit sha 22f5d50f9a868c4c4632d0e2d7d72c4f67b641ef

[TF2XLA] Make dequantize support dynamic range. PiperOrigin-RevId: 329444190 Change-Id: Icac703969a95093dd7982820c1706887a50d1bce

view details

Amit Patankar

commit sha dac18d7a054eaac4d10beb2200fb99d932b0ce2f

Create BUILD files and corresponding targets for `tensorflow/core/lib/bmp`. PiperOrigin-RevId: 329447307 Change-Id: I63aa8cce75f6bd754e83b993182572ad1e0be887

view details

Jongbin Park

commit sha a272ba2ec4892f84f3be1f78130c55516de83bcc

Replace zip/unzip command with wheel pack/unpack. PiperOrigin-RevId: 329453192 Change-Id: I4967c8a5862399cfab9a98fe68dc2e40df3c35a9

view details

pushed 2 months ago

create branch neosensory/ext_tensorflow

branch: feature/upgrade

created branch 2 months ago

issue comment tensorflow/tensorflow

SPLIT_V for tensorflow lite micro

Pete, just to add to this: before May, I was able to create my own SPLIT_V op and add it to the resolver easily. Since then, changes have been made to how ops are added and read from the flatbuffer, and I can't figure out the necessary steps. What would be super helpful to me right now is a link to an example PR that adds a built-in op that I can emulate. Thanks!

Abhipray

comment created 3 months ago

issue comment tensorflow/tensorflow

SPLIT_V for tensorflow lite micro

@jvishnuvardhan Sorry about that. It's fixed now: https://colab.research.google.com/drive/1XzrvDvxOVizrY5AT23hokcJHdgUcRd4r?usp=sharing

Abhipray

comment created 3 months ago

issue comment tensorflow/tensorflow

SPLIT_V for tensorflow lite micro

@Saduf2019 Here is a Colab with a model that generates a SPLIT_V op via the TFLite converter: https://colab.research.google.com/drive/1XzrvDvxOVizrY5AT23hokcJHdgUcRd4r?usp=sharing

It generates a .tflite file that you can download and view with Netron (https://github.com/lutzroeder/netron). You will see that the model uses the SPLIT_V op, which is currently not supported in TF Lite Micro.
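For context, SPLIT_V is the variable-size variant of SPLIT: it splits a tensor along one axis into pieces whose sizes are given explicitly rather than forced to be equal. A NumPy sketch of the semantics (the sizes below are hypothetical, chosen only for illustration):

```python
import numpy as np

x = np.arange(10, dtype=np.float32)
sizes = [3, 5, 2]  # hypothetical size_splits; SPLIT would require equal pieces

# Split at the cumulative offsets (3 and 8), mirroring SPLIT_V's size list.
pieces = np.split(x, np.cumsum(sizes)[:-1])
print([p.tolist() for p in pieces])
```

Each output piece has the requested length along the split axis, which is why a model using variable-size splits lowers to SPLIT_V instead of SPLIT in the converter.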

Abhipray

comment created 3 months ago

started PeteBlackerThe3rd/tflite_analyser

started 3 months ago
