profile
viewpoint
Samuel siju-samuel Bangalore An aspiring AI/ML/DL developer and a technological evangalist.

siju-samuel/DeepLearning.ai-Summary 3

Some notes on deep learning

siju-samuel/100-Days-Of-ML-Code 1

100 Days of ML Coding

siju-samuel/darkflow-yolo 1

Translate darknet to tensorflow. Load trained weights, retrain/fine-tune using tensorflow, export constant graph def to mobile devices

siju-samuel/darknet 1

Convolutional Neural Networks

siju-samuel/nnvm 1

Bring deep learning to bare metal

GreatLearningAIML1/bangalore-feb-batch-siju-samuel 0

bangalore-feb-batch-siju-samuel created by GitHub Classroom

siju-samuel/Arduino 0

ESP8266 core for Arduino

siju-samuel/awesome-deep-learning 0

A curated list of awesome Deep Learning tutorials, projects and communities.

push eventsiju-samuel/tvm

Siju Samuel

commit sha 60e5e087bca84a3288e7c4717f86545e53e96197

Review comment fixed

view details

push time in 5 days

pull request commentapache/incubator-tvm

[DOCS] Fix the QNN TFLite tutorial built

Thanks @tqchen. Our CI is running with tf 2.1 version, right? Anything need to updated for spinhx as well after upgrading tf version?

tqchen

comment created time in 6 days

Pull request review commentapache/incubator-tvm

[TOPI,RELAY][TFLITE] Sparse to dense operator

 inline Tensor one_hot(const Tensor& indices, const PrimExpr on_value, const Prim       name, tag); } +/*!+ * \brief Get a dense tensor.+ * \param sparse_indices sparse_indices[i] contains the complete index where sparse_values[i] will be placed.+ * \param sparse_values is a 0-D or 1-D tensor. Values corresponding to each row of sparse_indices+ * \param default_value is a 0-D tensor. Defaults to zero.+ * \param output_shape is the shape of the dense output tensor+ * \param name output tensor name.+ * \param tag output tensor tag.+ * \return Tensor of output_shape.+ */+inline Tensor sparse_to_dense(const Tensor& sparse_indices,+                              const Tensor& sparse_values,+                              const Tensor& default_value,+                              const Array<Integer>& output_shape,+                              const std::string name = "T_sparse_to_dense",+                              const std::string tag = kInjective) {+  CHECK(sparse_indices->dtype.is_int())+    << "sparse_indices only accepts integer values";++  CHECK_LE(sparse_indices->shape.size(), 3)+    << "sparse_indices tensor should be 0D, 1D, or 2D only";++  CHECK_LE(sparse_values->shape.size(), 2)+    << "sparse_values tensor should be 0D or 1D only";++  const auto rank_sparse_indices = static_cast<int>(sparse_indices->shape.size());

validate_indices flag has any change in behavior? if so, consider that as well. Reference

dhruvaray

comment created time in 6 days

Pull request review commentapache/incubator-tvm

[TOPI,RELAY][TFLITE] Sparse to dense operator

 inline Tensor one_hot(const Tensor& indices, const PrimExpr on_value, const Prim       name, tag); } +/*!+ * \brief Get a dense tensor.+ * \param sparse_indices sparse_indices[i] contains the complete index where sparse_values[i] will be placed.+ * \param sparse_values is a 0-D or 1-D tensor. Values corresponding to each row of sparse_indices+ * \param default_value is a 0-D tensor. Defaults to zero.+ * \param output_shape is the shape of the dense output tensor+ * \param name output tensor name.+ * \param tag output tensor tag.+ * \return Tensor of output_shape.+ */+inline Tensor sparse_to_dense(const Tensor& sparse_indices,+                              const Tensor& sparse_values,+                              const Tensor& default_value,+                              const Array<Integer>& output_shape,

is output_shape mandatory to compute the outputs? tf2.0 removed output_shape. if your topi func is compatible with all frameworks, then it will be better.

dhruvaray

comment created time in 6 days

Pull request review commentapache/incubator-tvm

[TOPI,RELAY][TFLITE] Sparse to dense operator

 def unravel_index(indices, shape):     """      return cpp.unravel_index(indices, shape)++def sparse_to_dense(sparse_indices, sparse_values, default_value, output_shape):

Can you please assign default_value=0 in all places so that it will be optional.

dhruvaray

comment created time in 6 days

PR opened apache/incubator-tvm

[PYTORCH]Padding op support

Handled the below padding cases in pytorch.

  • torch.nn.functional.pad
  • torch.nn.ZeroPad2d
  • torch.nn.ConstantPad1d
  • torch.nn.ConstantPad2d
  • torch.nn.ConstantPad3d @masahi please help to review this PR. TIA. Note: i removed the exisiting handling of input pad values using list(zip(padding, padding) , i was not able to simualte the current code with any of the above cases where i will receive input with single pad value which need duplicating and which can be directly used.
+73 -3

0 comment

2 changed files

pr created time in 6 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha e901e01e9d3bcb6dae244e8403d375a36e8de01b

[PYTORCH]Padding support

view details

push time in 6 days

create barnchsiju-samuel/tvm

branch : pytorch_padding

created branch time in 6 days

issue commenttensorflow/tensorflow

How to quickly add extract_image_patch op support in tflite?

It was quite some time back. First i tried to implement ExtractImagePatches from the eigen. eigen already have api to support ExtractImagePatches. You can add as a op in lite. For reference you can use this .

Eigen impl link

But for me there were some other issues as well. So i abandoned this method and i splitted the tflite graph into two and i did ExtractImagePatches outside tflite network.

I strongly feel tflite must add support for this op. If its in middle of network, its difficult to process.

siju-samuel

comment created time in 7 days

pull request commentapache/incubator-tvm

[TUTORIAL]TFLite QNN Tutorial

@anijain2305 @masahi Please help to review and merge this PR. TIA

siju-samuel

comment created time in 7 days

pull request commentapache/incubator-tvm

[TFLITE]Quantize & Dequantize op

@FrozenGene @masahi @anijain2305 Please help to review and merge this PR. TIA.

siju-samuel

comment created time in 7 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha 0a303f8366de6cd830d158f471955b3d57370922

[RELAY][PYTORCH]Resize3d, Upsample3d op support

view details

push time in 7 days

PR opened apache/incubator-tvm

[RELAY][PYTORCH]Resize3d, Upsample3d op support
  • Resize3d implementation in relay
  • aten::upsample_trilinear3d
  • aten::upsample_nearest3d

@masahi please help me to review this PR. TIA.

+241 -0

0 comment

7 changed files

pr created time in 7 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha 130bc0af54a18fa36fce3044ee3d93094bceb57e

[RELAY][PYTORCH]Resize3d, Upsample3d op support

view details

push time in 7 days

create barnchsiju-samuel/tvm

branch : resize3d

created branch time in 7 days

issue commentapache/incubator-tvm

[Torch] A list of missing op conversion in need of help

@ShivaKothuru #5624

masahi

comment created time in 8 days

PR opened apache/incubator-tvm

[PYTORCH]ReflectionPad2d op

Added support for below ops. aten::reflection_pad2d aten::elu_

https://github.com/apache/incubator-tvm/issues/5133#issuecomment-630673045 @masahi please help to review this PR. TIA

+28 -0

0 comment

2 changed files

pr created time in 8 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha fda93025390bdb5f6e1944e122471991db6263bb

[PYTORCH]ReflectionPad2d op

view details

push time in 8 days

create barnchsiju-samuel/tvm

branch : pytorch_reflectionpad2d

created branch time in 8 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha 329a8238d3955f07322abe26f3252d4043d4ee65

[MXNET]MaxPool3d and AvgPool3d Ops support added

view details

push time in 9 days

PR opened apache/incubator-tvm

[MXNET]MaxPool3d and AvgPool3d Ops support added

MaxPool3D and AvgPool3D op support added in mxnet.

@FrozenGene @masahi Please help to review this PR. TIA

+48 -10

0 comment

2 changed files

pr created time in 9 days

create barnchsiju-samuel/tvm

branch : mxnet_pool3d

created branch time in 9 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha 84d4b7631eecc7a259ce25df839f6d00cf3d0af0

Review comment fixed

view details

push time in 9 days

Pull request review commentapache/incubator-tvm

[TFLITE]Quantize & Dequantize op

 def test_forward_squeeze():     _test_squeeze(np.arange(6).reshape((2, 1, 3, 1)), [1, 3])  +#######################################################################+# Quantize/DeQuantize+# -------------------++def _test_quantize_dequantize(data):+    """ One iteration of quantize and dequantize """++    import tensorflow as tf2

It's importing tensorflow.compat.v1 as tf. It doesn't have tf2.lite.TFLiteConverter.from_keras_model

siju-samuel

comment created time in 9 days

Pull request review commentapache/incubator-tvm

[TFLITE]Quantize & Dequantize op

 def test_forward_squeeze():     _test_squeeze(np.arange(6).reshape((2, 1, 3, 1)), [1, 3])  +#######################################################################+# Quantize/DeQuantize+# -------------------++def _test_quantize_dequantize(data):+    """ One iteration of quantize and dequantize """++    import tensorflow as tf2+    # Define a dummy model+    data_in = tf2.keras.layers.Input(shape=data.shape[1:])+    act_func =  tf2.keras.layers.Activation('linear')+    keras_model = tf2.keras.models.Model(data_in, act_func(data_in))++    # Load the model+    converter = tf2.lite.TFLiteConverter.from_keras_model(keras_model)++    # To create quantized values with dynamic range of activations, needs representative dataset+    def representative_data_gen():+        for i in range(100):+            yield [data]++    converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]+    converter.representative_dataset = representative_data_gen+    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]+    converter.inference_input_type = tf.uint8+    converter.inference_output_type = tf.uint8++    # Convert the model to TensorFlow Lite format+    tflite_model_quant = converter.convert()++    tflite_output = run_tflite_graph(tflite_model_quant, data)+    tvm_output = run_tvm_graph(tflite_model_quant, data, 'input_1')+    tvm.testing.assert_allclose(np.squeeze(tvm_output[0]), np.squeeze(tflite_output[0]),+                                rtol=1e-5, atol=1e-5)+++def test_forward_quantize_dequantize():+    """ Quantize Dequantize """+    data = np.random.uniform(0, 1, (1, 4, 4, 3)).astype("float32")+    _test_quantize_dequantize(data)

No need because the ci is already upgraded to tf2.0.

siju-samuel

comment created time in 9 days

pull request commentapache/incubator-tvm

[TFLITE]Quantize & Dequantize op

@masahi @anijain2305 @inadob Please help to review and merge this PR. TIA

siju-samuel

comment created time in 9 days

pull request commentapache/incubator-tvm

[KERAS]Global MaxPool3d and AvgPool3d support

@masahi Please help to review and merge. TIA.

siju-samuel

comment created time in 9 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha c1d2e49221b9a92ff05d5d2881bbc61fdae8eafa

[KERAS]Global MaxPool3d and AvgPool3d support

view details

push time in 9 days

push eventsiju-samuel/tvm

Andrew Reusch

commit sha be54c9844a7dc96a38b19cda50887dade9863a6a

Fix test_ir_type. (#5390) * The void return type is not None/nullptr, it's VoidType or TupleType([]).

view details

Ramana Radhakrishnan

commit sha 72f2aea2dd219bf55c15b3cf4cfc21491f1f60dd

Tf2 test fixups (#5391) * Fix oversight in importing tf.compat.v1 as tf. * Actually disable test for lstm in TF2.1 Since the testing framework actually uses pytest, the version check needs to be moved.

view details

Tianqi Chen

commit sha d3277874a24e775d2476b0eb0ad89f3a46964a14

[PTYTHON] Migrate VTA TIR passes to the new pass manager. (#5397)

view details

Krzysztof Parzyszek

commit sha ef61fd5049eaee6f780fcff5069910fb202ad84c

[LLVM] Use ArrayRef<int> in calls to CreateShuffleVector (#5399) This switch was made in LLVM 11. Previously this function was expecting mask indices of type uint32_t. This variant is now deprecated.

view details

Samuel

commit sha 24f686538578485d02156b6f0239e6ca70c2abf7

[KERAS]Minimum & AlphaDropout op support (#5380)

view details

Ramana Radhakrishnan

commit sha 3e3ccce1135c25dd1d99dc7c2b8ff589c93ee7ea

Factor out import of common tflite.Operator in tflite frontend. (#5355) * Restructure imports in tflite frontend. These python modules are needed for every tflite file parsed. Factorize out imports of the common most ones. Now that the import of operator is common, asserts can be commonized. Loses 473 lines of duplication. * Only restrict to tflite.Operator

view details

Haichen Shen

commit sha 56941fb9dc2a04615e442030b163808d24e719fd

[Fix] Remove the duplicate PrintIR pass in Relay (#5403)

view details

Tianqi Chen

commit sha 8c0f7790266dfd69ee7e7e45ca52da613c116be1

Update dmlc-core to latest (#5401)

view details

Tianqi Chen

commit sha 6cb5b882b149d05cc392a3d70d9cbc57d5e9c7bc

[TIR] Enhance Substitute, python bindings for Substitute/PostOrderVisit/IRTransform. (#5400) Substitute now takes a std::function to customize more replacing behaviors. Co-authored-by: Siyuan Feng <hzfengsy@sjtu.edu.cn> Co-authored-by: Siyuan Feng <hzfengsy@sjtu.edu.cn>

view details

Haichen Shen

commit sha 8f9796bd976874afe28845be7ce19f3acc8f1883

[Relay] Fix memory leak when accessing NDArray (#5413)

view details

Andrew Reusch

commit sha f5c9bc93883d426284057657463942dfdfef2fd8

Customize SI prefix in logging (#5411) * Customize SI prefix in logging * Include unit test

view details

Krzysztof Parzyszek

commit sha 3ab37512193bccc639cf8bd0df45528dea9a7540

[LLVM] Replace calls to Type::getVectorNumElements (#5398) This function has recently been removed from LLVM 11. Use alternative way to obtain vector element count (VectorType::getNumElements) which works for all LLVM versions.

view details

Andrew Reusch

commit sha 8f433febea4d2e16ab15aaf949c355dd85055de5

Don't remove() TempDirectory in __del__ after atexit hook runs. (#5414) * Use atexit to remove TempDirectory before interpreter shutdown. * Can't rely on complex functions from __del__ anyway. * Fixes warning message on my box: Exception ignored in: <function TempDirectory.__del__ at 0x12be10680> Traceback (most recent call last): File ".../tvm/python/tvm/contrib/util.py", line 55, in __del__ File ".../tvm/python/tvm/contrib/util.py", line 51, in remove File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/shutil.py", line 509, in rmtree AttributeError: 'NoneType' object has no attribute 'path'

view details

Tianqi Chen

commit sha b8c23d660dabfde2e63fb9cac4bd4885f125bc60

[TIR][REFACTOR] Remove ir_pass in favor of analysis/transform. (#5415) This PR removes ir_pass(old style pass functions) in favor of analysis/transform(new style pass manager).

view details

MORITA Kazutaka

commit sha 1acad98edc1a74c0f17faf233af3f28918da6acf

[RUNTIME][CONTRIB] CoreML Runtime (#5283) * [RUNTIME][CONTRIB] CoreML Runtime * fix lint * fix CI * use xcrun to compile coreml model

view details

MORITA Kazutaka

commit sha 708fd9a9165b1c6e177fd50ffc67aa297e221d5e

[DOCS] Migrate HLS documents from md to rst (#5419)

view details

samwyi

commit sha dbd0114011906fe713ddda599bcba33a33b1fb73

fix [RUNTIME][VULKAN] vkBuffer released before memory copy command send to GPU (#5388) (#5418)

view details

Zhao Wu

commit sha a3b1397363ea86172fdfc2d631c2be03c01e042f

[Frontend] Asymmetric padding of convolution support (#4803)

view details

Wei Pan

commit sha 9c12ec81206b948a180590e387b49d84d39ede35

[cuDNN] Add cuDNN grouped convolutions support (#5319) Signed-off-by: Wei Pan <weip@nvidia.com>

view details

Ramana Radhakrishnan

commit sha d81a4fa13556b02ca6a615df00118558ccc660bc

[CI] Migrate Tensorflow and Tensorflow lite in CI to 2.1.0 (#5392) * Migrate Tensorflow and TFLite in the CI up to 1.15.2 The latest stable version of Tensorflow and Tensorflow lite in the 1.x series is 1.15.2. The tflite frontend is receiving support for versions of tflite > 1.14 but there is no consistent testing. There are 2 failures already in the source base with tf 1.15 and I'm concerned this will just get exacerbated over time if we don't have CI picking this up and I view this as a stepping stone towards stepping CI to TF2.x. The test failures that I have commented will get issues raised for them as issues to be fixed. * Comment out run of qnn_mobilenet_v3_net This is another test that fails with TFlite 1.15.2 * Skip the qnn_mobilenet_v3 test in the pytest fashion. * Switch docker versions to support Tensorflow 2.1.0 * Fix up pytest imports and usage. * Skip these tests currently for Tensorflow 2.1.0

view details

push time in 9 days

push eventsiju-samuel/tvm

masahi

commit sha 0145cd504585e25b776bef83688d10ff0ca44082

[Torch] Support Python list, more realistic recurrent networks (#5306) * use funcs from prelude, pass around convert_map * get relay input type from user ishape * handle tuple unpack * experimenting with static tensor array * use prelude concat instead of cons + rev * minor clean up * fix layer norm conversion bug, unwrap tensor array * add infer shape on tensor array * pass around prelude for now * compile worked but runtime error * fix tensor array wrapping * begin list dynamic test * is_list_dynamic first version * finish dynamic list test * a few fix * use shape_of function if Any is found * improve size conversion * working on adding free vars to loop block * fixed inlined inner loop issue * clean up free var handling * add support for tensor array concat * adding ta concat on last axis * fix concat, but got runtime error * disable concat on axis -1 for now * add lstm tests * revert unrelated change * fix stacked bidir test * minor fix to test * relax tol a bit, revert dnnl change to avoid conflict * simplify infer type, use input tensor shape rather than concat shape * more shape fix

view details

Samuel

commit sha 6805d54370ea657a304c58d610e5371c4add4bdf

[PYTORCH]Reduce_ops support added (#5308) * [PYTORCH]Reduce_ops support added * Review comments updated * typo bug in qnn test

view details

windclarion

commit sha 0d48361a0b284dd312c05a6cde08799ede52eedc

[REALY][OP] fix typo (#5315) Signed-off-by: windclarion <windclarion@gmail.com>

view details

Josh Fromm

commit sha 3df8d560f2b6d34ba43a069cd5809560d2c96983

[Topi] Tensorcore support for Conv3D (#5284) * one weird trick. * Added schedule knob for different workloads. * Initial conv3d tensorcore working. * Added conv3d tensorcore strategy. * Added layout conversion to tensorcore friendly format for conv2d and conv3d. * Add target name check. * Fixed bad names and depthwise check. * Removed duplicated attribute assignment.

view details

Tianqi Chen

commit sha fc75de9d680ada8a8ac4b258a60c3f70de1c2e07

[RUNTIME][IR] Allow non-nullable ObjectRef, introduce Optional<T>. (#5314) * [RUNTIME] Allow non-nullable ObjectRef, introduce Optional<T>. We use ObjectRef and their sub-classes extensively throughout our codebase. Each of ObjectRef's sub-classes are nullable, which means they can hold nullptr as their values. While in some places we need nullptr as an alternative value. The implicit support for nullptr in all ObjectRef creates additional burdens for the developer to explicitly check defined in many places of the codebase. Moreover, it is unclear from the API's intentional point of view whether we want a nullable object or not-null version(many cases we want the later). Borrowing existing wisdoms from languages like Rust. We propose to introduce non-nullable ObjectRef, and Optional<T> container that represents a nullable variant. To keep backward compatiblity, we will start by allowing most ObjectRef to be nullable. However, we should start to use Optional<T> as the type in places where we know nullable is a requirement. Gradually, we will move most of the ObjectRef to be non-nullable and use Optional<T> in the nullable cases. Such explicitness in typing can help reduce the potential problems in our codebase overall. Changes in this PR: - Introduce _type_is_nullable attribute to ObjectRef - Introduce Optional<T> - Change String to be non-nullable. - Change the API of function->GetAttr to return Optional<T> * Address review comments * Upgrade all compiler flags to c++14 * Update as per review comment

view details

Zhi

commit sha 5958d60da9995910213d3dedc122a56a269fdaa0

[BYOC] Enhance partitioning and external codegen (#5310) * Remove duplicated output args * address comment * fix codegen c * improve comment * VisitExprDefault_ * deduce type

view details

Tianqi Chen

commit sha 0ab1803604bfc8670b657becd3f361454eda4920

[COMMUNITY] @mbaret -> Reviewer (#5322)

view details

masahi

commit sha 2c1ca60ea38587401a20f11c5e64f452b79fa777

add memoized expr translator for use by backend codegen (#5325)

view details

LiangLiu

commit sha d2e58ad2fd92c09dffe064e4efbfc484cf2de6e5

[CODEGEN][CUDA] Fix vector load (#5226) * Fix high-low bit bug in __pack_half2 * Fix vector load * Add unit8 support for PrintVecElemLoadExpr and BroadcastNode

view details

Mahesh Ambule

commit sha b7545eb5ca87507ea04ccbe96c1a02040bef26be

[Frontend|MXNet] SwapAxis operator support (#5246) * MXNet swap axis * MXNet swap axis * swap axis review comment * swap axis review comment

view details

Wuwei Lin

commit sha 1df6bb6d3080789e21b9eec8924949463593c757

[TE][BuildModule] Fix import in dump pass ir (#5327)

view details

Samuel

commit sha 4720cf8569e5acd6d5f13f95d78c2f4518c67d55

[RELAY][PYTORCH]isNan, isinf, isfinite, ceil, clamp, round ops (#5316) * [RELAY][PYTORCH]isNan, isinf, isfinite, ceil, clamp, round ops * Review comments

view details

Tianqi Chen

commit sha f08d5d78ee000b2c113ac451f8d73817960eafd5

[TIR] Refactor MakePackedAPI to target dependent stage. (#5326) Previously MakePackedAPI was in the target independent stage, but never the less requires the device_type information that will be binded at a later target dependent stage. The previous implementation was due to the limitation of LoweredFunc which can not carry buffer_map info(so they have to be lowered right away). This is no longer the case after the unified IR refactor. This PR migrates MakePackedAPI to a target dependent stage and removes the un-necessary BindDevice pass.

view details

Tianqi Chen

commit sha 275e317c568a75db8a13960bcb9112f7859ef9aa

[RELAY] Remove re-exports of tvm.transform (#5337)

view details

Krzysztof Parzyszek

commit sha e7fcd9e3cb539eb83e6a31308af48fab8157ad0c

[LLVM] Use llvm::FunctionCallee in IRBuilder::CreateCall with LLVM 11+ (#5338) The older variants of CreateCall have been deprecated and were recently removed from LLVM. This caused compilation failures.

view details

Leandro Nunes

commit sha 92c78266a614c59c310512d3dab4c23bd155d52c

[CI] Fix build.sh to propagate --network=host to the docker build command (#5336) * when passing --net=host to build.sh it needs to be also sent as --network=host to "docker build", so that both build and run will use the same network configuration

view details

Jared Roesch

commit sha 9a8ed5b7abacfdb6a605f3ccd412fd929455fb15

[Runtime][Relay][Cleanup] Clean up for memory pass to enable heterogenous execution support. (#5324) * Cleanup type pack and unpack for tuples. * Clean up the memory_pass using common helpers * Clean up memory.cc * Refactor pass * Add doc strings * Fix CPPlint * Fix PyLint * Fix * Apply suggestions from code review Co-Authored-By: Zhi <5145158+zhiics@users.noreply.github.com> * Fix typo Co-authored-by: Zhi <5145158+zhiics@users.noreply.github.com>

view details

jmorrill

commit sha afcf9397b60ae7ccf46601cf29828992ca9d5f57

Windows Support for cpp_rpc (#4857) * Windows Support for cpp_rpc * Add missing patches that fix crashes under Windows * On Windows, use python to untar vs wsl * remove some CMakeLists.txt stuff * more minor CMakeLists.txt changes * Remove items from CMakeLists.txt * Minor CMakeLists.txt changes * More minor CMakeLists.txt changes * Even more minor CMakeLists.txt changes * Modify readme

view details

Samuel

commit sha b1364ebbedb6bf540d1d2610d772ac441e2f7cb5

[PYTORCH]Take, Topk op support (#5332) * [PYTORCH]take, topk op support * Ci Failure fix

view details

Animesh Jain

commit sha 1265983cf443960ebdc5b04182a65955371c371f

[TOPI] Using x86 schedules for ARM conv2d. (#5334)

view details

push time in 9 days

pull request commentapache/incubator-tvm

[TOPI][RELAY]Global MaxPool3d and AvgPool3d topi & relay implementation

Thanks @masahi . I will update.

siju-samuel

comment created time in 11 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha 094dfc4923b9e098c9263c9131482b2b75a24e97

[PYTORCH]Matmul fix for batch_matmul

view details

push time in 12 days

pull request commentapache/incubator-tvm

[PYTORCH]ImplicitTensorToNum support added

pytorch graph

graph(%x : Tensor,
      %y : Tensor):
  %2 : int = prim::ImplicitTensorToNum(%x)
  %3 : int = prim::ImplicitTensorToNum(%y)
  %4 : int[] = prim::ListConstruct(%2, %3)
  %5 : int[] = aten::list(%4)

%5 will be used as reshape's new shape or transpose's axis etc.

siju-samuel

comment created time in 12 days

PR opened apache/incubator-tvm

[PYTORCH]Matmul fix for batch_matmul

Support batched matrix and broadcast matrix matmul Its another issue when running bert model issue

@masahi please help to review this PR. Thanks.

+72 -4

0 comment

2 changed files

pr created time in 12 days

create barnchsiju-samuel/tvm

branch : pytorch_matmul_bugfix

created branch time in 12 days

PR opened apache/incubator-tvm

[PYTORCH]ImplicitTensorToNum support added

#5133 & problems-when-import-bert-model-from-pytorch-relay

prim::ImplicitTensorToNum support is added. @masahi please help to review this PR.

Note: Test_case is not added for this op as we cannot write with simple code. And the processing to just return the input number.

+8 -0

0 comment

1 changed file

pr created time in 12 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha 4c9150be9f228e167653faa7c6b5310e8a66126f

[PYTORCH]ImplicitTensorToNum support added

view details

push time in 12 days

create barnchsiju-samuel/tvm

branch : pytorch_tensortonum

created branch time in 12 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha 7627c9fd9c1f2e07a28ceeb366942299e95fdcce

Review comments

view details

push time in 12 days

PR opened apache/incubator-tvm

[TUTORIAL]TFLite QNN Tutorial

QNN Tutorial series on tflite. Continuation of PRs #5321(PyTorch) & #5362(MxNet) This PR can be merged after #5362

@anijain2305 @masahi Please help to review this PR.

+244 -0

0 comment

1 changed file

pr created time in 13 days

create barnchsiju-samuel/tvm

branch : tutorial_tflite

created branch time in 13 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha 96d027454d7833a8243964f8eed8fe156c0537c7

[MXNET]abs, round, reciprocal, sign, softsign, hard_sigmoid

view details

push time in 14 days

PR opened apache/incubator-tvm

[FRONTEND][MXNET]abs, round, reciprocal, sign, softsign, hard_sigmoid ops support

@masahi @FrozenGene Please help to review and merge these ops in Mxnet. TIA

+22 -1

0 comment

2 changed files

pr created time in 14 days

create barnchsiju-samuel/tvm

branch : mxnet_ops1

created branch time in 14 days

PR opened apache/incubator-tvm

[PYTORCH]expand bug fix

#5575 issue handled @masahi Please help to review this PR. Thanks

+23 -5

0 comment

2 changed files

pr created time in 15 days

create barnchsiju-samuel/tvm

branch : pytorch_expand_bug

created branch time in 15 days

PR opened apache/incubator-tvm

[FRONTEND]onnx, mxnet, pytorch mathops added

Mxnet - cosh, sinh, asin, acos, asinh, acosh, atanh Onnx - sin, cos, tan, sinh, cosh, asin, acos, atan, asinh, acosh, atanh Pytorch - acos, asin. arc-hyperbolic functions are not supported in pt.

@masahi @FrozenGene Please help to review. Thanks.

+67 -5

0 comment

6 changed files

pr created time in 16 days

create barnchsiju-samuel/tvm

branch : frontend_mathops

created branch time in 16 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha 26083985f9c1a5489f57f19a984c718c77eb0f79

Review comments fixed

view details

push time in 17 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha 489757d4361af65c17a48bd9cfd65dc48c1506a6

Review comments fixed

view details

push time in 18 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha cbfaf820e6e00d64811a04ae1ec32747e9adb424

Review comments fixed

view details

push time in 19 days

pull request commentapache/incubator-tvm

[CRT]fix to reduce RAM size during loading model

@tqchen Thanks a lot. The 40kb ram reduced for me was due to the free of graph_json I freed immediately after saving to runtime. The other fix on setupstorage could save me only around 200bytes totally, which is not very significant. Sorry that I overlooked the other change.

siju-samuel

comment created time in 19 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha 2e7f0aba891cbf68a49a6798e9256323c0988c20

Release graph_json memory immediately after reading

view details

push time in 19 days

Pull request review commentapache/incubator-tvm

[Frontend][TFLite] Add parser support for shape and range

 def convert_tanh(self, op):          return out +    def convert_range(self, op):+        """Convert TFLite Range"""+        try:+            from tflite.Operator import Operator+            from tflite.TensorType import TensorType+        except ImportError:+            raise ImportError("The tflite package must be installed")++        if self.is_quantized(op):+            raise tvm.error.OpNotImplemented(+                'TFlite quantized RANGE operator is not supported yet.')++        assert isinstance(op, Operator)+        input_tensors = self.get_input_tensors(op)+        assert len(input_tensors) == 3, "input tensors length should be 3"++        start, limit, delta = input_tensors[0], input_tensors[1], input_tensors[2]+        expressions = []++        for t in [start, limit, delta]:+            if self.has_expr(t.tensor_idx):+                expressions.append(self.get_expr(t.tensor_idx))+            else:+                tensor_type = self.get_tensor_type_str(t.tensor.Type())+                tensor_value = self.get_tensor_value(t)+                expressions.append(self.exp_tab.new_const(tensor_value, dtype=tensor_type))+

use get_tensor_expr function from recent prs.

dhruvaray

comment created time in 19 days

Pull request review commentapache/incubator-tvm

[Frontend][TFLite] Add parser support for shape and range

 def convert_tanh(self, op):          return out +    def convert_range(self, op):+        """Convert TFLite Range"""+        try:+            from tflite.Operator import Operator+            from tflite.TensorType import TensorType+        except ImportError:+            raise ImportError("The tflite package must be installed")++        if self.is_quantized(op):+            raise tvm.error.OpNotImplemented(+                'TFlite quantized RANGE operator is not supported yet.')++        assert isinstance(op, Operator)+        input_tensors = self.get_input_tensors(op)+        assert len(input_tensors) == 3, "input tensors length should be 3"++        start, limit, delta = input_tensors[0], input_tensors[1], input_tensors[2]+        expressions = []++        for t in [start, limit, delta]:+            if self.has_expr(t.tensor_idx):+                expressions.append(self.get_expr(t.tensor_idx))+            else:+                tensor_type = self.get_tensor_type_str(t.tensor.Type())+                tensor_value = self.get_tensor_value(t)+                expressions.append(self.exp_tab.new_const(tensor_value, dtype=tensor_type))++        #out type inference

#out -> # out ....... Add space before starting comment. Change in all places

dhruvaray

comment created time in 19 days

Pull request review commentapache/incubator-tvm

[Frontend][TFLite] Add parser support for shape and range

 def test_all_resize():         _test_resize(tf.image.resize_nearest_neighbor, data, align_corners=False)  +#######################################################################+# Range+# -----+def _test_range(start, limit, delta):+    # tflite 1.13 convert method does not accept empty shapes+    if package_version.parse(tf.VERSION) >= package_version.parse('1.14.0'):+        tf.reset_default_graph()+        with tf.Graph().as_default():+            start_scalar, limit_scalar, delta_scalar = \+                tf.placeholder(dtype=start.dtype, shape=(), name="start"), \+                tf.placeholder(dtype=limit.dtype, shape=(), name="limit"), \+                tf.placeholder(dtype=delta.dtype, shape=(), name="delta")++            out = tf.range(start_scalar, limit_scalar, delta_scalar, name="range")++            compare_tflite_with_tvm(+                [start, limit, delta],+                ["start", "limit", "delta"],+                [start_scalar, limit_scalar, delta_scalar],+                [out],+                mode="vm",+                quantized=False+        )++def _test_range_default():+    # tflite 1.13 convert method does not accept empty shapes+    if package_version.parse(tf.VERSION) >= package_version.parse('1.14.0'):+        tf.reset_default_graph()+        with tf.Graph().as_default():++            inputs = [+                tf.placeholder(dtype=tf.int32, shape=(), name="p1"),+                tf.placeholder(dtype=tf.int32, shape=(), name="p2")+            ]+            leaves = [+                tf.range(start = inputs[0], limit = inputs[1]), #use default delta+                tf.range(start = inputs[1]) #use start as limit with 0 as the first item in the range+            ]++            compare_tflite_with_tvm(+                [np.int32(1), np.int32(18)],+                ["p1", "p2"],+                inputs,+                leaves,+                mode="vm",+                quantized=False+        )++def test_forward_range():+   _test_range(np.int32(1), np.int32(18), np.int32(3))+   _test_range(np.int32(1), np.int32(18), np.float32(3.1)) # increment is of type float+   _test_range(np.float32(1.0), np.int32(18), np.int32(3.1)) # start is of type float+   _test_range_default()++#######################################################################+# Shape+# -----+def test_forward_shape():+    # tflite 1.13 convert method does not accept empty shapes+    if package_version.parse(tf.VERSION) >= package_version.parse('1.14.0'):+        tf.reset_default_graph()+        with tf.Graph().as_default():+            data = np.array([1, 18, 3], dtype=np.int32)+            start = tf.placeholder(dtype=tf.int32, shape=[], name="start")+            limit = tf.placeholder(dtype=tf.int32, shape=[], name="limit")+            delta = tf.placeholder(dtype=tf.int32, shape=[], name="delta")+            r = tf.range(start, limit, delta, tf.int32, name="range")+            out = tf.shape(r, out_type=tf.dtypes.int32)+            compare_tflite_with_tvm(+                [x for x in np.nditer(data)],+                ["start", "limit", "delta"],+                [start, limit, delta],+                [out],+                mode="vm",+                quantized=False+            ) #######################################################################

add new lines here

dhruvaray

comment created time in 19 days

Pull request review commentapache/incubator-tvm

[Frontend][TFLite] Add parser support for shape and range

 def compare_tflite_with_tvm(in_data, in_name, input_tensors,                 continue              tvm_output = run_tvm_graph(tflite_model_buffer, in_data, in_node, target=device,-                                       num_output=len(out_names), out_names=out_names)+                                       num_output=len(out_names), out_names=out_names,mode=mode)

out_names,mode -> space after comma

dhruvaray

comment created time in 19 days

Pull request review commentapache/incubator-tvm

[Frontend][TFLite] Add parser support for shape and range

 def convert_tanh(self, op):          return out +    def convert_range(self, op):+        """Convert TFLite Range"""+        try:+            from tflite.Operator import Operator+            from tflite.TensorType import TensorType+        except ImportError:+            raise ImportError("The tflite package must be installed")++        if self.is_quantized(op):+            raise tvm.error.OpNotImplemented(+                'TFlite quantized RANGE operator is not supported yet.')++        assert isinstance(op, Operator)+        input_tensors = self.get_input_tensors(op)+        assert len(input_tensors) == 3, "input tensors length should be 3"++        start, limit, delta = input_tensors[0], input_tensors[1], input_tensors[2]+        expressions = []++        for t in [start, limit, delta]:+            if self.has_expr(t.tensor_idx):+                expressions.append(self.get_expr(t.tensor_idx))+            else:+                tensor_type = self.get_tensor_type_str(t.tensor.Type())+                tensor_value = self.get_tensor_value(t)+                expressions.append(self.exp_tab.new_const(tensor_value, dtype=tensor_type))++        #out type inference+        if delta.tensor.Type() == TensorType.FLOAT32:+            out_type = self.get_tensor_type_str(delta.tensor.Type())+        else:+            out_type = self.get_tensor_type_str(start.tensor.Type())++        #put type here form op+        out = _op.arange(expressions[0], expressions[1], expressions[2], out_type)++        return out++    def convert_shape(self, op):+        """Convert TFLite Shape"""+        try:+            from tflite.Operator import Operator+        except ImportError:+            raise ImportError("The tflite package must be installed")++        if self.is_quantized(op):+            raise tvm.error.OpNotImplemented(+                'TFlite quantized SHAPE operator is not supported yet.')+

Does the shape output have any impact with quantized inputs? do we need this check?

dhruvaray

comment created time in 19 days

Pull request review commentapache/incubator-tvm

[Frontend][TFLite] Add parser support for shape and range

 def convert_tanh(self, op):          return out +    def convert_range(self, op):+        """Convert TFLite Range"""+        try:+            from tflite.Operator import Operator

remove this import op, its already handled

dhruvaray

comment created time in 19 days

pull request commentapache/incubator-tvm

[TOPI,RELAY][TFLITE] Sparse to dense operator

Please rebase and resolve conflicts

dhruvaray

comment created time in 19 days

pull request commentapache/incubator-tvm

[CRT]fix to reduce RAM size during loading model

@tqchen Ideally yes. There may be some calcuation issue. I will verify once again in case of crt what is happening and get back.

siju-samuel

comment created time in 20 days

pull request commentapache/incubator-tvm

[CRT]fix to reduce RAM size during loading model

@tqchen I work in a device with very limited ram availability. Actually even a few kbs is crucial for me.

Im using a quantized model and the pool is 80% unit8 data. So i dont want to allocate all pool with float and 32bit. If i consider the bit i can restrict all pool to allocate 32bit.

So while allocating space, to compute size we either need bits or dytpe. Now both are not available in TVMGraphRuntimePoolEntry. So i added bits in TVMGraphRuntimePoolEntry

As you suggested, we can compute size while finding maximum space needed and save to pool_entry[sid].size But it requires either

  1. while making TVMNDArray_Empty, always use dtype.code = kDLInt & dtype.bits = 8.

  2. while making TVMNDArray_Empty, use Float & 32bit (like before this change) and reduce the size as per bits and align to 4bytes.

I would like to go with 2nd approach. I havnet done the modification yet. I will do the modifications, test and will update this PR.

siju-samuel

comment created time in 20 days

pull request commentapache/incubator-tvm

[TFLITE]Select op support for tflite frontend

@maheshambule Thanks. You can consider it as part of #5528. .

siju-samuel

comment created time in 20 days

pull request commentapache/incubator-tvm

[TFLITE]Select op support for tflite frontend

LGTM. @siju-samuel After thinking about this pr, I think we should make get_expr and exp_tab.new_const be private and only public get_tensor_expr. Other places use get_expr / exp_tab.new_const should be replaced as get_tensor_expr. Wish you could consider this suggestion after this pr.

@FrozenGene Thanks. I will optimize all this and will do a PR.

siju-samuel

comment created time in 20 days

Pull request review commentapache/incubator-tvm

[RUNTIME] Improve PackedFunc robustness

 struct unpack_call_by_signature { template<typename R, typename ...Args> struct unpack_call_by_signature<R(Args...)> {   template<typename F>-  static void run(const F& f,+  TVM_ALWAYS_INLINE static void run(const F& f,                   const TVMArgs& args,                   TVMRetValue* rv) {

Align

tqchen

comment created time in 21 days

pull request commentapache/incubator-tvm

[CRT]fix to reduce RAM size during loading model

@tqchen could you please review. Thanks.

siju-samuel

comment created time in 21 days

pull request commentapache/incubator-tvm

[TOPI][RELAY][TENSORFLOW]Math ops added

@masahi @FrozenGene @kazum Please help to review. TIA

siju-samuel

comment created time in 21 days

pull request commentapache/incubator-tvm

[TFLITE]Select op support for tflite frontend

@FrozenGene could you please help to merge this PR. TIA

siju-samuel

comment created time in 21 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha 19187bd399df56f20c7879cbdd39e3d9a3f2a5f7

[TOPI]][RELAY][MXNET]Reverse/Flip operator

view details

push time in 21 days

Pull request review commentapache/incubator-tvm

[TFLITE]Select op support for tflite frontend

 def __init__(self, model, subgraph, exp_tab):             'PAD': self.convert_pad,             'POW': self.convert_pow,             'PRELU': self.convert_prelu,-            'REDUCE_ANY': self._convert_reduce_any,-            'REDUCE_MAX': self._convert_reduce_max,-            'REDUCE_MIN': self._convert_reduce_min,-            'REDUCE_PROD': self._convert_reduce_prod,+            'REDUCE_ANY': self.convert_reduce_any,+            'REDUCE_MAX': self.convert_reduce_max,+            'REDUCE_MIN': self.convert_reduce_min,+            'REDUCE_PROD': self.convert_reduce_prod,             'RELU':self.convert_relu,

@u99127 Thanks for the review. I totally agree with you. I removed those changes as part of this PR. i will raise another PR[#5515] for those. Could you please check again. TIA.

siju-samuel

comment created time in 22 days

PR opened apache/incubator-tvm

[TFLITE]Nit: Function names made consitent

Some nits in TFLITE. Inconsistencies in the function names in tflite is removed.

@u99127 Could you please review this. Thanks in advance.

+13 -13

0 comment

1 changed file

pr created time in 22 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha e30ef2563417af9f155fff86b9d51644d784ed66

Review comment fixed

view details

push time in 22 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha 3c82b624b92918d7ac6d91bfb1270120f960dde7

[TFLITE]Nit: Function names made consitent

view details

push time in 22 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha 75ac1f74fbb1b8e7f852a6a6dd2a9b5a2cadd99b

[TFLITE]Nit: Function names made consitent

view details

push time in 22 days

create barnchsiju-samuel/tvm

branch : tflite_func_name_consistency

created branch time in 22 days

Pull request review commentapache/incubator-tvm

[Relay-TFLite] FP32 and Quantized Object Detection Model

 def test_forward_qnn_mobilenet_v3_net():     tvm.testing.assert_allclose(tvm_sorted_labels, tflite_sorted_labels)  +#######################################################################+# Quantized SSD Mobilenet+# -------------

nit , align the dashes

anijain2305

comment created time in 22 days

Pull request review commentapache/incubator-tvm

[Relay-TFLite] FP32 and Quantized Object Detection Model

 def get_workload_official(model_url, model_sub_path):     dir_path = os.path.dirname(model_path)      import tarfile+    import zipfile

suggest to move this imports inside if or elif. let it import based on file type.

anijain2305

comment created time in 22 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha 7bc30bb9d096a4dfbf68725498f6de97eac1c920

[TOPI]][RELAY][MXNET]Reverse/Flip operator

view details

push time in 22 days

PR opened apache/incubator-tvm

[TOPI]][RELAY][MXNET]Reverse/Flip operator

reverse/flip added support for axes similar to mxnet/numpy.

@FrozenGene Please help me to review this PR.

+112 -29

0 comment

9 changed files

pr created time in 22 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha db05769d43f8b0edef6a8e65ef6b59b9f3398891

[TOPI]][RELAY][MXNET]Reverse/Flip operator

view details

push time in 22 days

create barnchsiju-samuel/tvm

branch : relay_transform

created branch time in 22 days

Pull request review commentapache/incubator-tvm

[TFLITE]Select op support for tflite frontend

 def __init__(self, model, subgraph, exp_tab):             'PAD': self.convert_pad,             'POW': self.convert_pow,             'PRELU': self.convert_prelu,-            'REDUCE_ANY': self._convert_reduce_any,-            'REDUCE_MAX': self._convert_reduce_max,-            'REDUCE_MIN': self._convert_reduce_min,-            'REDUCE_PROD': self._convert_reduce_prod,+            'REDUCE_ANY': self.convert_reduce_any,+            'REDUCE_MAX': self.convert_reduce_max,+            'REDUCE_MIN': self.convert_reduce_min,+            'REDUCE_PROD': self.convert_reduce_prod,             'RELU':self.convert_relu,

@u99127 for code consistency i changed, its not related to this PR. This 5 methods were starting with _ and i removed those.

siju-samuel

comment created time in 22 days

pull request commentapache/incubator-tvm

[TFLITE] SELECT

@dhruvaray Thanks for the PR. This part is covered in PR #5486

dhruvaray

comment created time in 22 days

PR opened apache/incubator-tvm

[CRT]fix to reduce RAM size during loading model

This fix will help to reduce RAM size during setup storage. I was facing issue with arduino which has only 256kb ram. For my model Setupstorage was consuming 80kb of ram and with this fix i could reduce it to 40kb. In the current code always float32 datatype is used and 4bytes is used to allocate storage pool. If runnng 8/16bit models also it will allocate 32bits.

@liangfu @tmoreau89 Please help to review this code.

Note: This fix can be applied to tvm runtime as well, but not very significant there.

+19 -2

0 comment

2 changed files

pr created time in 23 days

create barnchsiju-samuel/tvm

branch : crt_setupstorage_ram_reduce

created branch time in 23 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha 85f4dc860c0a5a1d4afd7ef06d959d1409da7799

CI fix

view details

push time in 24 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha c471fe20e12191c423c9e73833dcc1112e8d316a

Extra newline removed

view details

push time in 25 days

PR opened apache/incubator-tvm

Reviewers
[TOPI][RELAY][TENSORFLOW]Math ops added

Added relay/topi/tensorflow support for the following ops.

  • Acos
  • Acosh
  • Asin
  • Asinh
  • Atanh
  • Cosh
  • Sinh

@FrozenGene @masahi Please help to review this PR.

+343 -75

0 comment

11 changed files

pr created time in 25 days

create barnchsiju-samuel/tvm

branch : tensorflow_math_ops

created branch time in 25 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha 08ad3eed0d20456904cc54d8da024799e01b0db9

Review comment fixed

view details

push time in a month

push eventsiju-samuel/tvm

Siju Samuel

commit sha 8a9ca25fad4b609b3dd5f78c5dd781f0b04048e6

[TFLITE]Select/Where op support for tflite frontend

view details

push time in a month

PR opened apache/incubator-tvm

[TFLITE]Select op support for tflite frontend

Added the support of select op in tflite frontend.

@FrozenGene @masahi please help me to review and merge this PR. TIA Note: tflite where op is same as select in tflite.

+49 -13

0 comment

2 changed files

pr created time in a month

create barnchsiju-samuel/tvm

branch : tflite_select

created branch time in a month

Pull request review commentapache/incubator-tvm

[Frontend][TFLite] ADD_N operator

 def convert_add(self, op):             return self._convert_elemwise(_qnn.op.add, op)         return self._convert_elemwise(_op.add, op) +    def convert_add_n(self, op):+        """Convert TFLite ADD_N"""+        output_tensors = self.get_output_tensors(op)+        assert len(output_tensors) == 1, "output tensors length should be 1"++        input_tensors = self.get_input_tensors(op)+        assert not input_tensors[0].qnn_params, "TFLite does not support quantized ADD_N."+        lhs_expr = self.get_tensor_or_const_expr(input_tensors[0])+        for rhs_tensor in input_tensors[1:]:

assert if len(input_tensors) <2 ? Othersiwe input_tensors[1: ] will throw error.

maheshambule

comment created time in a month

Pull request review commentapache/incubator-tvm

[Frontend][TFLite] ADD_N operator

 def test_all_elemwise():         _test_forward_elemwise(_test_floor_divide)         _test_forward_elemwise(_test_floor_mod) ++#######################################################################+# AddN+# ----------------------

remove unnecessary dashes

maheshambule

comment created time in a month

push eventsiju-samuel/tvm

Siju Samuel

commit sha 96b507421c148cc4c410666192826abebec4200c

Review comment fixed

view details

push time in a month

PR closed apache/incubator-tvm

[TFLITE]Argmin & Argmax op support

@anijain2305 @masahi please help to review this PR. Thanks.

+57 -12

0 comment

2 changed files

siju-samuel

pr closed time in a month

push eventsiju-samuel/tvm

Siju Samuel

commit sha da6c2cf8e13a1e6090be9f25658d98f27950ceb3

[TFLITE]Argmin & Argmax op support

view details

push time in a month

PR opened apache/incubator-tvm

[TFLITE]Argmin & Argmax op support

@anijain2305 @masahi please help to review this PR. Thanks.

+58 -12

0 comment

2 changed files

pr created time in a month

create barnchsiju-samuel/tvm

branch : tflite_argmax_argmin

created branch time in a month

more