Samuel (siju-samuel), Bangalore. An aspiring AI/ML/DL developer and a technological evangelist.

siju-samuel/DeepLearning.ai-Summary 3

Some notes on deep learning

siju-samuel/100-Days-Of-ML-Code 1

100 Days of ML Coding

siju-samuel/darkflow-yolo 1

Translate darknet to tensorflow. Load trained weights, retrain/fine-tune using tensorflow, export constant graph def to mobile devices

siju-samuel/darknet 1

Convolutional Neural Networks

siju-samuel/nnvm 1

Bring deep learning to bare metal

GreatLearningAIML1/bangalore-feb-batch-siju-samuel 0

bangalore-feb-batch-siju-samuel created by GitHub Classroom

siju-samuel/Arduino 0

ESP8266 core for Arduino

siju-samuel/awesome-deep-learning 0

A curated list of awesome Deep Learning tutorials, projects and communities.

push event siju-samuel/tvm

masahi

commit sha a5e54b1decce06e666f4fe0fa348e97853993dbd

[QNN] Add support for per channel weight scale in dense op (#4880) * add test case for per channel dense * add unit arg in tflite frontend * update qnn legalize test * fix output dim index

view details

Cody Yu

commit sha feda150e34a79af3ea688ab27f9742720390b1e2

[AutoTVM] Support range in index based tuners (#4870) * Support range in index based tuners * Address comments * Remove __*state__ * trigger CI

view details

masahi

commit sha 7e9ec7352d487fddb0be25f4b53cce3abfad71c9

improve antlr import error message (#4888)

view details

wpan11nv

commit sha d50ba721eb5f7c0dbeceeaa78335d6f4c8cf2973

[CodeGen][CUDA] Fix issues in cuda codegen (#4876) - Do not emit __shared__ etc. as part of type for casting - Fix fp16 reduction kernels with compiler errors: "no operator "+" matches these operands, volatile half + volatile half This patch inserts casts to remove volatile type qualifier following volatile loads (fp16 only). CUDA fp16 library headers should add volatile member functions. - Update have_fp16 to include compute 6.1 GPUs, which do support fp16, although their fp16 throughput is low. Updated tests. Signed-off-by: Wei Pan <weip@nvidia.com>

view details

masahi

commit sha 529ee1feb6c96967d8ab28e08b72006c6d7e8887

[Relay] Fix VM compiler for while loop with free vars (#4889) * add additional switch to handle nested call node * Fix VM compiler for while loop with free var

view details

Tianqi Chen

commit sha e7be8bf43de4c1b19ea68134812ea7b0cd8e361f

[CI] Cleanup logfile before tutorial runs (#4896)

view details

Zhi

commit sha 95de08ba4f0d90dde308f4b2b401da8aaa333d2b

Fix alpha_equal bug (#4897)

view details

Baden Hughes

commit sha a43e326fb0250a46c61f726a4633633c2af2bf03

Update faq.md (#4893) various minor editorial updates - style, grammar, typos.

view details

Alex Gladkov

commit sha 13140916eeb6fc33b962f3faf9dbe6b702057865

Fast exponent (#4790)

view details

Tianqi Chen

commit sha 0b2d11a5745779ec139a05e8ece73c93fa6d7db8

[DOCS] Introduce how to add hardware backend to FAQ (#4898)

view details

Jon Soifer

commit sha 27a02844cb52e883a4a66da68a527590d76f7d01

[Relay][Pass] Fix bug in re-processing call node in MergeComposite pass (#4879) * Fix bug in re-processing call node * Add test * Add to main * temp changes to work from another machine * fix rest of tests * fix test_reuse_call_merge * fix merge Co-authored-by: Jon Soifer <jonso@microsoft.com>

view details

Tianqi Chen

commit sha 08338dd5f8089b4fbf61ae8a63f02277dfcca713

[REFACTOR][PY] Establish tvm.te and tvm.driver (#4900) - Move the related files to tvm.te - Move build_module.py to tvm.driver

view details

pankratz

commit sha 976c08ad61cca9989331bfa57e83bcf92ed20798

Fixed bugs that occured when using bitwise operators on floating point type expressions. Further crash when using ops <<, >>, %. Finally added regression tests for both types of bug. (#4892)

view details

Tianqi Chen

commit sha 8310b2526e69d1761a67f6a8566691a0eeb2e652

[CI] Update ci docker to add autodocsumm (#4903)

view details

Tianqi Chen

commit sha 38d1dd24a005e2b6902eec7fafeb9297eeb7b996

[CI] Add autodocsum as dep (#4902)

view details

Tianqi Chen

commit sha d1e1ac49b37210334e543f6c4cd8813cbe80e26d

[REFACTOR][PY] Establish tvm.arith (#4904)

view details

Josh Fromm

commit sha 9d646543098580490b85f5865d10d087f75ea22e

[Relay][Frontend][Keras] NHWC import support. (#4899) * Basic test working * Almost all tests working. * all tests passing. * Fixed lint. * Improved Style.

view details

Jon Soifer

commit sha 41835d176d31bc2f3ba1f0ed9e35bdbfd453dc39

[Relay] Expose FunctionGetAttr to Python (#4905) * [Relay] Expose FunctionGetAttr to Python * add test Co-authored-by: Jon Soifer <jonso@microsoft.com>

view details

Tianqi Chen

commit sha d2ae8c95d56d8788b1bf77ef28701eb50bbfb495

[DOCS] Update API docs to reflect the status after the refactor. (#4907)

view details

Andrew

commit sha 406b5f764d0454e9641880310249a69b2fc59e9b

Fix tvm.target.generic_func runtime detection (#4910)

view details

push time in 5 hours

push event siju-samuel/tvm

Siju Samuel

commit sha 9267f549dee2f4194759ddb0afc6b17664d1da73

Testcase check added to run in tf version above 1.15.0 & review comments

view details

push time in 8 hours

PR opened apache/incubator-tvm

[CI Upgrade]Update existing testcases to run on TF/Lite 2.0 version

The existing CI upgrade, which is part of the TF 2.0 upgrade as per the discussion, is done in this PR.

@FrozenGene @srkreddy1238 Please review.

Two TensorFlow testcases are failing and probably need some updates; I have commented them out for now. They can be worked on later once the 2.0 framework is in place.

Thanks for contributing to TVM! Please refer to guideline https://docs.tvm.ai/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from Reviewers by @ them in the pull request thread.

+290 -221

0 comment

12 changed files

pr created time in 2 days
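The "testcase check" mentioned in this PR's commits amounts to gating tests on the installed TensorFlow version. A minimal sketch of such a gate, assuming `pkg_resources` is available; the helper name `tf_version_at_least` is illustrative, not part of TVM:

```python
# Hedged sketch: skip TFLite testcases unless the installed TensorFlow
# version is at least some minimum, as the commits in this push describe.
from pkg_resources import parse_version

def tf_version_at_least(installed, minimum="1.15.0"):
    """Return True when `installed` (e.g. tf.__version__) >= `minimum`."""
    return parse_version(installed) >= parse_version(minimum)

# A test would then guard itself along these lines:
#   if tf_version_at_least(tf.__version__):
#       run_the_testcase()
```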

create branch siju-samuel/tvm

branch : tf2.0_upgrade

created branch time in 2 days

push event siju-samuel/tvm

Siju Samuel

commit sha 77e897f79e8e61cf5b407db3d4efee138dbf1a0b

Testcase check added to run in tf version above 1.15.0 & review comments

view details

push time in 2 days

push event siju-samuel/tvm

Siju Samuel

commit sha c90efcdedcd3c59b51b23ee7b6edf2fa0e89a7d8

Testcase check added to run in tf version above 1.15.0

view details

push time in 3 days

push event siju-samuel/tvm

Siju Samuel

commit sha a4d90238c02112e05be18127ae01bfad2686cf6c

Testcase check added to run in tf version above 1.15.0

view details

push time in 3 days

PR opened apache/incubator-tvm

[FRONTEND][KERAS]GaussianDropout/Noise parsing support

GaussianDropout & GaussianNoise are active only during training time, so they can be skipped during inference. @FrozenGene please review. TIA.


+2 -0

0 comment

1 changed file

pr created time in 3 days

push event siju-samuel/tvm

Samuel

commit sha 1ca3a5346a760066994103e9073617d9d909a196

[FRONTEND][KERAS]GaussianDropout/Noise parsing support GaussianDropout & GaussianNoise are active only during training time. This can be skipped during inference.

view details

push time in 3 days

PR opened apache/incubator-tvm

[TFLITE][FRONTEND]Reduce_any op parsing support

@FrozenGene @u99127 @kevinthesun Please help to review. Thanks.

+16 -3

0 comment

2 changed files

pr created time in 3 days

create branch siju-samuel/tvm

branch : tflite_reduce_any

created branch time in 3 days

Pull request review comment apache/incubator-tvm

[FRONTEND][TFLITE]Gather, StridedSlice op support added

     def convert_logical_or(self, op):
         """Convert tflite LOGICAL_OR"""
         return self._convert_logical_binary(_op.logical_or, op)
 
+    def convert_gather(self, op):
+        """Method to Convert TFLite GATHER operator"""
+        try:
+            from tflite.BuiltinOptions import BuiltinOptions
+            from tflite.GatherOptions import GatherOptions
+            from tflite.TensorType import TensorType
+        except ImportError:
+            raise ImportError("The tflite package must be installed")
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) == 2, "input tensors length should be 2"
+
+        data = self.get_expr(input_tensors[0].tensor_idx)
+
+        indices = input_tensors[1]
+        indices_type = indices.tensor.Type()
+        assert indices_type in (TensorType.INT32, TensorType.INT64)
+        indices_type_str = self.get_tensor_type_str(indices_type)
+        indices = self.exp_tab.new_const(self.get_tensor_value(indices),
+                                         dtype=indices_type_str)
+
+        assert op.BuiltinOptionsType() == BuiltinOptions.GatherOptions
+        op_options = op.BuiltinOptions()
+        gather_options = GatherOptions()
+        gather_options.Init(op_options.Bytes, op_options.Pos)
+        axis = gather_options.Axis()
+
+        # Check the indices are oob, tflite is unpredictable in case of oob.

According to the latest TF code, out-of-bounds indices return an error on CPU and zero on GPU. With TensorFlow, I was indeed getting zero on GPU. For validation using TFLite I was on TF 1.14, and I was getting random numbers in place of the out-of-bounds indices.

siju-samuel

comment created time in 7 days
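Given that TFLite's out-of-bounds behavior is backend-dependent, one way a frontend can cope is to reject such indices at parse time. A hedged sketch of that check (the function name `check_gather_indices` is mine, not TVM's):

```python
import numpy as np

# Hedged sketch of a parse-time bounds check: because TFLite handles
# out-of-bounds GATHER indices differently on CPU (error) and GPU (zeros),
# a frontend can reject them up front instead of relying on backend behavior.
def check_gather_indices(data_shape, indices, axis):
    dim = data_shape[axis]
    indices = np.asarray(indices)
    # Allow Python-style negative indexing down to -dim.
    if np.any(indices < -dim) or np.any(indices >= dim):
        raise ValueError("GATHER indices out of bounds for axis %d" % axis)
    return indices
```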

Pull request review comment apache/incubator-tvm

[FRONTEND][TFLITE]Gather, StridedSlice op support added

 def test_forward_slice():
     _test_slice(np.arange(8, dtype=np.int32).reshape((2, 4)), begin=[0, 1], size=[-1, -1])
     _test_slice(np.arange(5, dtype=np.int32).reshape((5, )), begin=[4], size=[-1])
 
+#######################################################################
+# Gather
+# ------
+
+def _test_gather(dshape, indices, axis, dtype, quantized=False, oob=False):
+    """ One iteration of Gather """
+    indices = np.asarray(indices).astype('int32')
+    data = np.random.uniform(1, 10, size=dshape)
+    data = data.astype(np.uint8) if quantized else data.astype(dtype)
+    with tf.Graph().as_default():
+        in_data = array_ops.placeholder(shape=data.shape, dtype=data.dtype, name="in_data")
+        if axis:
+            out = array_ops.gather(in_data, indices, axis=axis)
+        else:
+            out = array_ops.gather(in_data, indices) #tflite conversion fails for None axis
+        input_range = {'in_data': (-100, 100)} if quantized else None
+        try:
+            compare_tflite_with_tvm([data], ['in_data:0'], [in_data], [out],
+                                      quantized=quantized, input_range=input_range)
+        except ValueError as e:
+            if not oob:

out of bounds

siju-samuel

comment created time in 7 days

pull request comment apache/incubator-tvm

[FRONTEND][TFLITE]Gather, StridedSlice op support added

@u99127 @FrozenGene @wyc-ruiker Thanks for the review. Could you please recheck once again?

siju-samuel

comment created time in 7 days

Pull request review comment apache/incubator-tvm

[FRONTEND][TFLITE]Gather, StridedSlice op support added

 def test_forward_slice():
     _test_slice(np.arange(8, dtype=np.int32).reshape((2, 4)), begin=[0, 1], size=[-1, -1])
     _test_slice(np.arange(5, dtype=np.int32).reshape((5, )), begin=[4], size=[-1])
 
+#######################################################################
+# Gather
+# ------
+
+def _test_gather(dshape, indices, axis, dtype):
+    """ One iteration of Gather """
+    data = np.random.uniform(1, 10, size=dshape).astype(dtype)
+    indices = np.asarray(indices).astype('int32')
+
+    with tf.Graph().as_default():
+        in_data = array_ops.placeholder(shape=data.shape, dtype=data.dtype)
+        out = array_ops.gather(in_data, indices, axis=axis)
+        compare_tflite_with_tvm(data, 'Placeholder:0', [in_data], [out])
+
+    #Test quantized input
+    data = np.random.uniform(1, 10, size=dshape).astype(np.uint8)
+    with tf.Graph().as_default():
+        in_data = array_ops.placeholder(shape=data.shape, dtype=data.dtype, name="in_data")
+        out = array_ops.gather(in_data, indices, axis=axis)
+        compare_tflite_with_tvm([data], ['in_data:0'], [in_data], [out], quantized=True)
+
+def test_forward_gather():
+    """ GATHER """
+    _test_gather((4,), [1], 0, 'float32')
+    _test_gather((1, 4), [0], 0, 'int32')
+    _test_gather((4,), [[[1, 0], [0, 1]]], 0, 'float32')
+    _test_gather((2, 2), [[[1, 0], [0, 1]]], 0, 'int32')
+    _test_gather((2, 2), [[[1, 0], [0, 1]]], 1, 'int32')
+    _test_gather((2, 2), [[[1, 0], [0, 1]]], 0, 'float32')
+    _test_gather((3, 3, 3), [[[1, 0]]], 0, 'int32')
+    _test_gather((3, 3, 3), [[[1, 0]]], 2, 'int32')
+    _test_gather((4, 3, 5, 6), [[2, 1, 0, 0]], 0, 'float32')
+
+#######################################################################
+# StridedSlice
+# ------------
+
+def _test_stridedslice(ip_shape, begin, end, stride, dtype,
+                       begin_mask=0, end_mask=0, new_axis_mask=0,
+                       shrink_axis_mask=0, ellipsis_mask=0):
+    """ One iteration of a Stridedslice """
+    data = np.random.uniform(size=ip_shape).astype(dtype)
+    with tf.Graph().as_default():
+        in_data = tf.placeholder(dtype, ip_shape, name="in_data")
+        out = array_ops.strided_slice(in_data, begin, end, stride,
+                                      begin_mask=begin_mask,
+                                      end_mask=end_mask, new_axis_mask=new_axis_mask,
+                                      shrink_axis_mask=shrink_axis_mask,
+                                      ellipsis_mask=ellipsis_mask)
+        compare_tflite_with_tvm(data, 'in_data:0', [in_data], [out])
+
+    #Test with quantized inputs
+    data = np.random.uniform(size=ip_shape).astype(np.uint8)
+    with tf.Graph().as_default():
+        in_data = tf.placeholder(dtype, ip_shape, name="in_data")
+        out = array_ops.strided_slice(in_data, begin, end, stride,
+                                      begin_mask=begin_mask,
+                                      end_mask=end_mask, new_axis_mask=new_axis_mask,
+                                      shrink_axis_mask=shrink_axis_mask,
+                                      ellipsis_mask=ellipsis_mask)
+        compare_tflite_with_tvm([data], ['in_data:0'], [in_data], [out], quantized=True)
+
+def test_forward_stridedslice():
+    '''test StridedSlice'''

Even though TF 2.0 supports begin_mask, end_mask, ellipsis_mask, new_axis_mask and shrink_axis_mask, TFLite does not support these and expects their values to be zero. So we do not need to import all the cases from the TensorFlow testcases.

siju-samuel

comment created time in 12 days
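Since TFLite expects all of these masks to be zero, a parser or test harness can assert that up front before doing any TensorFlow-style mask handling. A hedged sketch of such a guard (the helper name `validate_stridedslice_masks` is mine, not TVM's):

```python
# Hedged sketch: reject STRIDED_SLICE options that TFLite cannot represent.
# TFLite expects every mask of STRIDED_SLICE to be zero, so any nonzero
# mask indicates an unsupported model.
def validate_stridedslice_masks(begin_mask=0, end_mask=0, ellipsis_mask=0,
                                new_axis_mask=0, shrink_axis_mask=0):
    masks = {'begin_mask': begin_mask, 'end_mask': end_mask,
             'ellipsis_mask': ellipsis_mask, 'new_axis_mask': new_axis_mask,
             'shrink_axis_mask': shrink_axis_mask}
    nonzero = [name for name, value in masks.items() if value != 0]
    if nonzero:
        raise ValueError("TFLite STRIDED_SLICE expects zero masks, got: %s"
                         % ", ".join(sorted(nonzero)))
```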

Pull request review comment apache/incubator-tvm

[FRONTEND][TFLITE]Gather, StridedSlice op support added

 def convert_not_equal(self, op):
                 'TFlite quantized NOT_EQUAL operator is not supported yet.')
         return self._convert_elemwise(_op.not_equal, op)
 
+    def convert_gather(self, op):
+        """Method to Convert TFLite GATHER operator"""
+        try:
+            from tflite.BuiltinOptions import BuiltinOptions
+            from tflite.GatherOptions import GatherOptions
+            from tflite.TensorType import TensorType
+        except ImportError:
+            raise ImportError("The tflite package must be installed")
+
+        input_tensors = self.get_input_tensors(op)
+        data = self.get_expr(input_tensors[0].tensor_idx)
+
+        indices = input_tensors[1]
+        indices_type = indices.tensor.Type()
+        assert indices_type in (TensorType.INT32, TensorType.INT64)
+        indices_type_str = self.get_tensor_type_str(indices_type)
+        indices = self.exp_tab.new_const(self.get_tensor_value(indices),
+                                         dtype=indices_type_str)
+
+        assert op.BuiltinOptionsType() == BuiltinOptions.GatherOptions
+        op_options = op.BuiltinOptions()
+        gather_options = GatherOptions()
+        gather_options.Init(op_options.Bytes, op_options.Pos)
+        axis = gather_options.Axis()
+
+        out = _op.take(data, indices, axis=axis)
+        return out
+
+    def convert_strided_slice(self, op):
+        """Method to Convert TFLite STRIDED_SLICE operator"""
+        try:
+            from tflite.BuiltinOptions import BuiltinOptions
+            from tflite.StridedSliceOptions import StridedSliceOptions
+        except ImportError:
+            raise ImportError("The tflite package must be installed")
+
+        input_tensors = self.get_input_tensors(op)

Added the assert as per your suggestion

siju-samuel

comment created time in 12 days

Pull request review comment apache/incubator-tvm

[FRONTEND][TFLITE]Gather, StridedSlice op support added

 def convert_not_equal(self, op):
                 'TFlite quantized NOT_EQUAL operator is not supported yet.')
         return self._convert_elemwise(_op.not_equal, op)
 
+    def convert_gather(self, op):
+        """Method to Convert TFLite GATHER operator"""
+        try:
+            from tflite.BuiltinOptions import BuiltinOptions
+            from tflite.GatherOptions import GatherOptions
+            from tflite.TensorType import TensorType
+        except ImportError:
+            raise ImportError("The tflite package must be installed")
+
+        input_tensors = self.get_input_tensors(op)

Added the assert for checking length of input tensors

siju-samuel

comment created time in 12 days

Pull request review comment apache/incubator-tvm

[FRONTEND][TFLITE]Gather, StridedSlice op support added

 def convert_not_equal(self, op):
                 'TFlite quantized NOT_EQUAL operator is not supported yet.')
         return self._convert_elemwise(_op.not_equal, op)
 
+    def convert_gather(self, op):
+        """Method to Convert TFLite GATHER operator"""
+        try:
+            from tflite.BuiltinOptions import BuiltinOptions
+            from tflite.GatherOptions import GatherOptions
+            from tflite.TensorType import TensorType
+        except ImportError:
+            raise ImportError("The tflite package must be installed")
+
+        input_tensors = self.get_input_tensors(op)
+        data = self.get_expr(input_tensors[0].tensor_idx)
+
+        indices = input_tensors[1]
+        indices_type = indices.tensor.Type()
+        assert indices_type in (TensorType.INT32, TensorType.INT64)
+        indices_type_str = self.get_tensor_type_str(indices_type)
+        indices = self.exp_tab.new_const(self.get_tensor_value(indices),
+                                         dtype=indices_type_str)
+
+        assert op.BuiltinOptionsType() == BuiltinOptions.GatherOptions
+        op_options = op.BuiltinOptions()
+        gather_options = GatherOptions()
+        gather_options.Init(op_options.Bytes, op_options.Pos)
+        axis = gather_options.Axis()
+
+        out = _op.take(data, indices, axis=axis)
+        return out
+

Added a comment explaining the same. The strided-slice code is taken from the TensorFlow parsing code and kept as-is for now, so that TVM will not need to change when TFLite upgrades.

siju-samuel

comment created time in 12 days

Pull request review comment apache/incubator-tvm

[FRONTEND][TFLITE]Gather, StridedSlice op support added

 def test_forward_slice():
     _test_slice(np.arange(8, dtype=np.int32).reshape((2, 4)), begin=[0, 1], size=[-1, -1])
     _test_slice(np.arange(5, dtype=np.int32).reshape((5, )), begin=[4], size=[-1])
 
+#######################################################################
+# Gather
+# ------
+
+def _test_gather(dshape, indices, axis, dtype):
+    """ One iteration of Gather """
+    data = np.random.uniform(1, 10, size=dshape).astype(dtype)
+    indices = np.asarray(indices).astype('int32')
+
+    with tf.Graph().as_default():
+        in_data = array_ops.placeholder(shape=data.shape, dtype=data.dtype)
+        out = array_ops.gather(in_data, indices, axis=axis)
+        compare_tflite_with_tvm(data, 'Placeholder:0', [in_data], [out])
+
+    #Test quantized input
+    data = np.random.uniform(1, 10, size=dshape).astype(np.uint8)
+    with tf.Graph().as_default():
+        in_data = array_ops.placeholder(shape=data.shape, dtype=data.dtype, name="in_data")
+        out = array_ops.gather(in_data, indices, axis=axis)
+        compare_tflite_with_tvm([data], ['in_data:0'], [in_data], [out], quantized=True)
+
+def test_forward_gather():
+    """ GATHER """
+    _test_gather((4,), [1], 0, 'float32')
+    _test_gather((1, 4), [0], 0, 'int32')
+    _test_gather((4,), [[[1, 0], [0, 1]]], 0, 'float32')
+    _test_gather((2, 2), [[[1, 0], [0, 1]]], 0, 'int32')
+    _test_gather((2, 2), [[[1, 0], [0, 1]]], 1, 'int32')
+    _test_gather((2, 2), [[[1, 0], [0, 1]]], 0, 'float32')
+    _test_gather((3, 3, 3), [[[1, 0]]], 0, 'int32')
+    _test_gather((3, 3, 3), [[[1, 0]]], 2, 'int32')
+    _test_gather((4, 3, 5, 6), [[2, 1, 0, 0]], 0, 'float32')
+
+#######################################################################
+# StridedSlice
+# ------------
+
+def _test_stridedslice(ip_shape, begin, end, stride, dtype,
+                       begin_mask=0, end_mask=0, new_axis_mask=0,
+                       shrink_axis_mask=0, ellipsis_mask=0):
+    """ One iteration of a Stridedslice """
+    data = np.random.uniform(size=ip_shape).astype(dtype)
+    with tf.Graph().as_default():
+        in_data = tf.placeholder(dtype, ip_shape, name="in_data")
+        out = array_ops.strided_slice(in_data, begin, end, stride,
+                                      begin_mask=begin_mask,
+                                      end_mask=end_mask, new_axis_mask=new_axis_mask,
+                                      shrink_axis_mask=shrink_axis_mask,
+                                      ellipsis_mask=ellipsis_mask)
+        compare_tflite_with_tvm(data, 'in_data:0', [in_data], [out])
+
+    #Test with quantized inputs
+    data = np.random.uniform(size=ip_shape).astype(np.uint8)
+    with tf.Graph().as_default():
+        in_data = tf.placeholder(dtype, ip_shape, name="in_data")
+        out = array_ops.strided_slice(in_data, begin, end, stride,
+                                      begin_mask=begin_mask,
+                                      end_mask=end_mask, new_axis_mask=new_axis_mask,
+                                      shrink_axis_mask=shrink_axis_mask,
+                                      ellipsis_mask=ellipsis_mask)
+        compare_tflite_with_tvm([data], ['in_data:0'], [in_data], [out], quantized=True)

It is merged together as per your suggestion. Thanks.

siju-samuel

comment created time in 12 days

Pull request review comment apache/incubator-tvm

[FRONTEND][TFLITE]Gather, StridedSlice op support added

 def test_forward_slice():
     _test_slice(np.arange(8, dtype=np.int32).reshape((2, 4)), begin=[0, 1], size=[-1, -1])
     _test_slice(np.arange(5, dtype=np.int32).reshape((5, )), begin=[4], size=[-1])
 
+#######################################################################
+# Gather
+# ------
+
+def _test_gather(dshape, indices, axis, dtype):
+    """ One iteration of Gather """
+    data = np.random.uniform(1, 10, size=dshape).astype(dtype)
+    indices = np.asarray(indices).astype('int32')
+
+    with tf.Graph().as_default():
+        in_data = array_ops.placeholder(shape=data.shape, dtype=data.dtype)
+        out = array_ops.gather(in_data, indices, axis=axis)
+        compare_tflite_with_tvm(data, 'Placeholder:0', [in_data], [out])
+
+    #Test quantized input
+    data = np.random.uniform(1, 10, size=dshape).astype(np.uint8)
+    with tf.Graph().as_default():
+        in_data = array_ops.placeholder(shape=data.shape, dtype=data.dtype, name="in_data")
+        out = array_ops.gather(in_data, indices, axis=axis)
+        compare_tflite_with_tvm([data], ['in_data:0'], [in_data], [out], quantized=True)
+
+def test_forward_gather():
+    """ GATHER """
+    _test_gather((4,), [1], 0, 'float32')
+    _test_gather((1, 4), [0], 0, 'int32')
+    _test_gather((4,), [[[1, 0], [0, 1]]], 0, 'float32')
+    _test_gather((2, 2), [[[1, 0], [0, 1]]], 0, 'int32')
+    _test_gather((2, 2), [[[1, 0], [0, 1]]], 1, 'int32')
+    _test_gather((2, 2), [[[1, 0], [0, 1]]], 0, 'float32')
+    _test_gather((3, 3, 3), [[[1, 0]]], 0, 'int32')
+    _test_gather((3, 3, 3), [[[1, 0]]], 2, 'int32')
+    _test_gather((4, 3, 5, 6), [[2, 1, 0, 0]], 0, 'float32')

TFLite has a separate implementation for out-of-bounds indices: on CPU it returns an error and on GPU it returns 0 for the out-of-bounds indices, but in my testing the out-of-bounds cases were not predictable. TVM does not support returning zero for out-of-bounds 'take' indices, so while parsing I check whether the indices are out of bounds and currently throw an exception. Testcases are added for the above scenario.

siju-samuel

comment created time in 12 days
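The "no zero for out-of-bounds take" point can be illustrated with numpy, whose `take` mirrors the out-of-bounds modes commonly exposed for this op: `'raise'`, `'wrap'` and `'clip'`, none of which produces zero (this is an illustration, not TVM code):

```python
import numpy as np

# Hedged illustration: numpy.take offers only 'raise' (default), 'wrap' and
# 'clip' for out-of-bounds indices -- none of them returns zero, which is why
# a frontend raises instead of emulating TFLite's GPU behavior of returning 0.
data = np.array([10, 20, 30])
clipped = np.take(data, [0, 5], mode='clip')  # 5 is clipped to index 2 -> [10, 30]
wrapped = np.take(data, [0, 5], mode='wrap')  # 5 wraps to 5 % 3 == 2 -> [10, 30]
# np.take(data, [0, 5])  # default mode='raise' would throw IndexError
```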

push event siju-samuel/tvm

kshitij12345

commit sha 396095a3e8ad3d15bdc9c52b938d370d4b5ebbf5

fix #4670: add bias for fc layer (#4801)

view details

masahi

commit sha 9963cf38689c1c246d1b087af27d679f09afdfa7

[QNN] Doc fix on convolution and dequantize (#4799) * QNN doc fix on conv and dequantize * fix param name in tflite frontend * make different fix

view details

Animesh Jain

commit sha 00097b195b207c5368eebabbd673eb869f341d38

[QNN] Conv2D with dilation support. (#4796)

view details

vizero1

commit sha c21d1ee8e48d7246f29e080b699fd8d00a87ac5e

Change color channel from BGR to RGB for darknet preprocessing (#4794)

view details

Zhao Wu

commit sha bb1b7db33e322fbca49493b2633539ba24abbe9d

[ThreadPool] Solve ARM BIG.LITTLE heterogeneous multicores (#4747)

view details

mbarrett97

commit sha c39ab93d69ec7d2553446ba9a31055dd5e4579e8

[TIR] Create a StringImm reference type (#4806) This is motivated by the want to send an array of strings across the python/C++ boundary. Arrays only support ObjectRef types and so can't carry StringImmNodes. This creates a string reference type, StringImm, which can be used with tvm::Arrays. Change-Id: I598a44536c156b97dbfe3e9518e0a1f705da850c

view details

Hua Jiang

commit sha 974195defc32f32c16d99635636d5719558247ab

[TOPI] upsample operator 'NCHWinic' format support. (#4791) * [TOPI] upsample operator 'NCHWinic' format support. some hardware accelerator ask packed format data like NCHWinic to fit the hardware resource, here add upsample NCHWinic format support to help such requirement. * address review comments, add assert for 'else must be NCHWxc' logic.

view details

Tianqi Chen

commit sha 6f7d6fa440be2da3c1d0fe8a9e91e2cbb56cb30d

[LINT] Fix -Wextra (#4804) * [LINT] Fix -Wextra * Fix virtual-dtor

view details

Tianqi Chen

commit sha 3fb937fe019ed824de309d09281a99587df17335

[DOCS] Fix vta tutorial (#4809)

view details

Animesh Jain

commit sha d2799915db87107c83ef105a2a628fc54b1cada4

[AutoTVM] Minor bug fixes in AutoTVM for QNN graphs (#4797) * [AutoTVM] Minor bug fixes in AutoTVM for QNN graphs. * Bring back strided_slice. * Replace tvm.nd change.

view details

Haichen Shen

commit sha 60c52e137f3975fd7933dae486d6cb2790547640

fix memory leak (#4811)

view details

Animesh Jain

commit sha 4a39e521aeaed046092dd1e2916d0f5052423254

[TOPI][x86] Injective schedule improvement (#4786) * [TOPI][x86] Injective Schedule Improvement. * Add tiling. * Vectorize when there is an axis.

view details

Tianqi Chen

commit sha f9b46c43976021e0889777f2470397ebf7d6d3bb

[REFACTOR][PY] tvm._ffi (#4813) * [REFACTOR][PY] tvm._ffi - Remove from __future__ import absolute_import in the related files as they are no longer needed if the code only runs in python3 - Remove reverse dependency of _ctypes _cython to object_generic. - function.py -> packed_func.py - Function -> PackedFunc - all registry related logics goes to tvm._ffi.registry - Use absolute references for FFI related calls. - tvm._ffi.register_object - tvm._ffi.register_func - tvm._ffi.get_global_func * Move get global func to the ffi side

view details

Haichen Shen

commit sha 3e7bd70375bfb8aefbcee60e746c7d39bd432175

allow customize mkldnn library location (#4814)

view details

shoubhik

commit sha 7d263c319f8b0ea17080460433a5974594695b8f

Mxnet parser for Qnn dialect (#4714) * - Additional util methods needed for mxnet frontend for qnn dialect. * - Fixing call to quantize. * [QNN] MxNet-MKLDNN parser support for QNN * [QNN] Relax conv check. * - Merge from origin * [QNN] Channel wise changes * [QNN] Dense changes * Dense fix for QNN ops. * - Removed non-mkl code from utils. - Small refactoring - Remove "with_sum" from conv - Simplified code * - Fixing ring buffer name. * - Fixing pylint issues. * - Fixing lint - Removing redundant commented code. * - Adding test cases - Removing unused methods. * [WIP] end to end test case for mxnet qnn parser * Changes to parse large CV models. * Pylint issues. * Fix Conv2D with sum and quantized pooling. * Reverting the changes made for mxnet-mkldnn test cases. Because of #4753, mxnet could not be updated to mxnet-mkldnn. Co-authored-by: Animesh Jain <anijain@umich.edu>

view details

Tianqi Chen

commit sha fc7dd6d701f3fa74169a300d4db7e46ad64479b2

[REFACTOR][PY] Establish tvm.runtime (#4818) * [REFACTOR][PY] Establish tvm.runtime This PR establishes the tvm.runtime namespace that contains the core runtime data structures. The top-level API are kept inact for now via re-exporting. We will followup later to cleanup some of the top-level APIs. * Fix ndarray name

view details

Seyyed Hossein Hasanpour

commit sha 019356f810f7de27ba5f9f346f051d5b89a88c3e

Fixed subprocess creation under windows (#4820) * fixed subprocess creation under windows this addresses the issue #4819 * Update server.py

view details

Ina Dobreva

commit sha 2989d72561a102b009adb45eb6ad78aa9a800804

[Frontend][TFLite] Dynamically calculate input_stats of any fake_quant range (#4789) * [TFLite] Dynamically calculate input_stats of any fake_quant range * pass the input range to the convertor and caclulate (mean, scale) there * change the range of the second tensor in elemwise operations so that we test inputs with different quant params * change the possible output range for elemwise ops wrt the updated ranges * update the comments for (m, s) calculations * add input range dict to reduce_mean op * Apply requested changes * add exception handling for zero division in input_stats * fix range of the input tensor in elemwsie

view details

Animesh Jain

commit sha 23f3988bc6a1e87dcab42f4a414b4cedf667adc0

[QNN] Optimize lowering for requantize and FixedPointMultiply. (#4798) * [QNN] Optimize lowering for requantize and FixedPointMultiply. * Add check for requantize scale gt 1. * Added test case.

view details

Ina Dobreva

commit sha 79ce87f8da5fd5bcfd107e2def243eb02f3819cc

[Relay][Frontend][TFLite] Add parser support for logical operators (#4642) * [Relay][Frontend][TFLite] Add parser support for logical operators * Add parser support for logical_and, logical_or * Add boolean dtype as a valid tensor type * BOOLEAN dtype is supported only from tf 1.15 so logical ops work only in that and newer versions * Logical_not is ommited since tflite can't convert it --> throws errors for addv2 * Add TFLite vesion check in tests for logical ops * Check is added because of boolean dtype lack of support

view details

push time in 13 days

pull request comment apache/incubator-tvm

[FRONTEND][TFLITE]Gather, StridedSlice op support added

@FrozenGene @u99127 Thanks for the review. I have updated the PR as per your review comments; could you please review it again?

siju-samuel

comment created time in 23 days

Pull request review comment apache/incubator-tvm

[FRONTEND][TFLITE]Gather, StridedSlice op support added

 def test_forward_slice():
     _test_slice(np.arange(8, dtype=np.int32).reshape((2, 4)), begin=[0, 1], size=[-1, -1])
     _test_slice(np.arange(5, dtype=np.int32).reshape((5, )), begin=[4], size=[-1])
 
+#######################################################################
+# Gather
+# ------
+
+def _test_gather(dshape, indices, axis, dtype):
+    """ One iteration of Gather """
+    data = np.random.uniform(1, 10, size=dshape).astype(dtype)
+    indices = np.asarray(indices).astype('int32')
+
+    with tf.Graph().as_default():
+        in_data = array_ops.placeholder(shape=data.shape, dtype=data.dtype)
+        out = array_ops.gather(in_data, indices, axis=axis)
+        compare_tflite_with_tvm(data, 'Placeholder:0', [in_data], [out])
+
+def test_forward_gather():

Since this is a tensor manipulation op, the processing does not change for quantized inputs. I have added testcases for quantized input data as well.
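For reference, gather only moves values along an axis without transforming them, which is why quantized inputs need no special handling. NumPy's `take` shows the same semantics (an illustrative sketch, not the TFLite kernel itself):

```python
import numpy as np

# Gather selects slices along `axis` by integer index; values are
# relocated, never recomputed, so quantization parameters are preserved.
data = np.array([[1, 2], [3, 4], [5, 6]], dtype=np.uint8)
indices = np.array([2, 0])
print(np.take(data, indices, axis=0))
```
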

siju-samuel

comment created time in 23 days

push eventsiju-samuel/tvm

abergeron

commit sha 6798ba80d288e7c6132b30606b3cb70812579fe8

Make sure to visit the arguments of inlined functions (#4783)

view details

Ina Dobreva

commit sha 6914963545a10c3c031c154f89a51a587e154743

[Relay][Frontend][TFlite] Add add parser support for relational ops (#4695) Add support for: greater_equal, less, less_equal, equal, not_equal Add tests for the elemwise relational ops

view details

jmorrill

commit sha 24126b42753d46866384b772b4e11f35e2625aa7

Fix parsing of different exception string formats (#4785)

view details

masahi

commit sha 10f85d03e958c2e55189f8521902005ed7884b16

Dedup BindParamByName function in VM compiler (#4793)

view details

Animesh Jain

commit sha 90b2a1eb8f1f3827784b2ddd459deefe3b928351

[Relay][Topi] Use SimplifyInference for L2 Normazlization. (#4795)

view details

Alex Gladkov

commit sha cf173fde1b6794224aba10ada7a4edab1db2fd09

Add schedule for conv3d NDHWC layout (#4775)

view details

masahi

commit sha 73a9e997b097ff09d2896e74355ccf6d16ccd254

[Relay] Expose vm OptimizeModule to Python (#4800) * Expose VM OptimizeModule to python * added missing imports * fix import

view details

Siju Samuel

commit sha 63211193456c961fb73df5fbd4a2be6eacc2f95e

[FRONTEND][TFLITE]Gather, StridedSlice op added

view details

push time in 24 days

push eventsiju-samuel/tvm

Siju Samuel

commit sha 625ec64d678ae74c1c337cbff36c46c88d502c09

Updated testcases for quantized inputs

view details

push time in a month

push eventsiju-samuel/tvm

Siju Samuel

commit sha cfb275daaaf5b1c42ab7c3d8afaaf978c2b7a3c9

Review comment fixed

view details

push time in a month

push eventsiju-samuel/tvm

Siju Samuel

commit sha 949317a70fbf073996eb1edd831edb34b9c4b66a

Ci error fix, None axis not supported

view details

push time in a month

push eventsiju-samuel/tvm

Siju Samuel

commit sha e13576b45d6b3d0bf80a3f59b89e523fb0d87efb

[FRONTEND][TFLITE]Gather, StridedSlice op added

view details

push time in a month

push eventsiju-samuel/tvm

Siju Samuel

commit sha 47ebc7bcfc09c5f8623657e4d8f160ccb838210c

[FRONTEND][TFLITE]Gather, StridedSlice op added

view details

push time in a month

PR opened apache/incubator-tvm

[FRONTEND][TFLITE]Gather, StridedSlice op support added

Thanks for contributing to TVM! Please refer to the guideline https://docs.tvm.ai/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from reviewers by @-ing them in the pull request thread. @FrozenGene @kevinthesun Could you please help review this patch? Thanks in advance.

+206 -0

0 comment

2 changed files

pr created time in a month

create branch siju-samuel/tvm

branch : tflite_gather_stridedSlice

created branch time in a month

push eventsiju-samuel/tvm

Jared Roesch

commit sha c4245e3d05d7ce71ebb6fdabebd71f19f16463a3

[Relay][Prelude] Use the Relay parser to define the Relay prelude (#3043) * Add ability to load Prelude from disk * Port over id * Define compose * Linting errors and style changes * Eliminate unnecessary parens * Rename identType to typeIdent (makes more sense) * Another unnecessary paren * Bump the version number for the text format * Ensure .rly (Relay text files) are permitted * Correct release number and simplify grammar rule * Correct load_prelude docstring * Corrections to _parser * Add Apache headers to prelude source file * Remove test_prelude (redundant) * Correct misleading error message * Add check that parser is enabled in Prelude * Commit pre-generated parser, ensure generated files are treated as binaries, and have parser tests always fire * Permit parser files and git attributes files * Exclude gitattributes and parser files from apache check * Another attempt at appeasing Apache audit checker * Corrections to rat-excludes * Apache should be truly appeased now * Ignore Relay parser files by name * Mark parser files as generated so they don't show up on Github * Add parsing helper function for tests * Mark parser files as not detectable

view details

Alexander Pivovarov

commit sha 7c1c97d2d8d0a99c752d43f95d92618b62b1f015

Add LOGISTIC operator to relay tflite frontend (#3313)

view details

Marcus Shawcroft

commit sha d6c4aba8371c195a8893f65dabf3d8362e3d526b

[CI] Clarify RAT exclude patterns. (#3328)

view details

Tianqi Chen

commit sha c9a2f3da5b4cda38829b933597c48d3bdff28083

[RELAY] Pass infra cleanup (#3336)

view details

Tianqi Chen

commit sha d4ca627a5a5df88f477bd6cc89ee2e3e06931c29

[CI] separate out legacy as a stage (#3337)

view details

hlu1

commit sha 2c41fd2f038e90539479ab08370916c1ecd95d2b

[Topi] Fast mode in take op (#3325)

view details

Luis Vega

commit sha 124f9b7f7c46fd168c892ecc13676974732ad9f2

[VTA][TSIM] update app example (#3343) * add initial support to cycle counter to accelerator * remove prints from c * add event counter support to chisel tsim example * make it more readable * use a config class * update driver * add individual Makefile to chisel * add rule for installing vta package * add makefile for verilog backend * update drivers * update * rename * update README * put default sim back * set counter to zero

view details

Leyuan Wang

commit sha da1ea262ac6e125e8a2e69f33a451974b9e8d50f

Non_maximum_suppression and get_valid_counts add new parameters (#3335)

view details

Marcus Shawcroft

commit sha cedbdfb50bc3e669ebd85536b5f729bee7e3ea3f

[DOC] minor grammatical improvements (#3341)

view details

Marcus Shawcroft

commit sha 499adfdbb8d348368430577433cf42bc44a97982

[DOC] clarfiy explanation (#3340)

view details

Jared Roesch

commit sha d0c45648b56a2eec1f8e44c304f493c6fcb15193

[Relay][Backend] Fix interpreter argument conversion for tuples. (#3349) * Support taking a tuple as an argument * Add test

view details

Haichen Shen

commit sha 29ee8a237badb0707ac88f776c2c532f25f3cc3d

[Relay][Frontend] Fix MxNet RNN without providing state initialization as input (#3326)

view details

Yong Wu

commit sha b67afcd6b9eb064615f2bc69a0ef40e5233bc070

[Relay] add ClipByValue and Neg in tf frontend converter (#3211)

view details

Wei Chen

commit sha 713fc73bda7df5100f915f29b745076f90bdbed8

Support export ADT value in Python (#3299) * Support export ADT value in Python * Cache original functions * Cleanup * Cleanup

view details

Yizhi Liu

commit sha 93e80b3ef71ddb05d1e88498630a0a516e9c0e33

[Team] Jian Weng -> Committer (#3359)

view details

Alexander Pivovarov

commit sha 579e96da4481aa4919d22abba934a0db0f736a6a

Update tflite schema version to 1.13 (#3356)

view details

Zhi

commit sha 6e2c7ede54d0726defc38d404ada9350fc9646e8

[Relay][Transform] quantize opt passes to pass manager (#3289)

view details

Steven S. Lyubomirsky

commit sha a698ad7f4ccb0c8096d0d9a445be084c49f5ea99

[Relay] Check match expressions for completeness (#3203)

view details

Hua

commit sha c9e96d9f2b68d9e30401473974e1f963b2a90942

[Relay] Add Elemwise operator Sub, Divide, Power, Max, Min to tflite frontend. (#3357)

view details

Yong Wu

commit sha 9bb16872b6bb54f1ad5ff24b9146c1bea2cd1ae0

[Relay][Frontend] Add a bunch of ops in tf converter (#3270)

view details

push time in a month

fork siju-samuel/onnxruntime

ONNX Runtime: cross-platform, high performance scoring engine for ML models

https://aka.ms/onnxruntime

fork in a month

startedVandermode/ERRNet

started time in 2 months

issue openedpytorch/android-demo-app

Module.load is giving error "CppException: false CHECK FAILED at aten/src/ATen/Functions.h "

Need help. Module.load is giving the error "CppException: false CHECK FAILED at aten/src/ATen/Functions.h". What is the reason? I can reproduce this issue even with a simple convolution layer.

2019-12-30 08:51:20.487 28171-28328/org.pytorch.demo E/PyTorchDemo: Error during image analysis
com.facebook.jni.CppException: false CHECK FAILED at aten/src/ATen/Functions.h (empty at aten/src/ATen/Functions.h:3535) (no backtrace available)
    at org.pytorch.Module$NativePeer.initHybrid(Native Method)
    at org.pytorch.Module$NativePeer.<init>(Module.java:70)
    at org.pytorch.Module.<init>(Module.java:25)
    at org.pytorch.Module.load(Module.java:21)
    at org.pytorch.demo.vision.ImageClassificationActivity.analyzeImage(ImageClassificationActivity.java:166)
    at org.pytorch.demo.vision.ImageClassificationActivity.analyzeImage(ImageClassificationActivity.java:31)
    at org.pytorch.demo.vision.AbstractCameraXActivity.lambda$setupCameraX$2$AbstractCameraXActivity(AbstractCameraXActivity.java:90)
    at org.pytorch.demo.vision.-$$Lambda$AbstractCameraXActivity$t0OjLr-l_M0-_0_dUqVE4yqEYnE.analyze(Unknown Source:2)
    at androidx.camera.core.ImageAnalysisAbstractAnalyzer.analyzeImage(ImageAnalysisAbstractAnalyzer.java:57)
    at androidx.camera.core.ImageAnalysisNonBlockingAnalyzer$1.run(ImageAnalysisNonBlockingAnalyzer.java:135)
    at android.os.Handler.handleCallback(Handler.java:907)
    at android.os.Handler.dispatchMessage(Handler.java:105)
    at android.os.Looper.loop(Looper.java:216)
    at android.os.HandlerThread.run(HandlerThread.java:65)

The compiled android module is attached below.

android_torch_test.zip

Script to reproduce this issue:

import torch
import torch.nn as nn
import torchvision
from torchsummary import summary

class MyBlock(nn.Module):
    def __init__(self, ninput, noutput):
        super().__init__()
        self.conv = nn.Conv2d(ninput, noutput - ninput, (3, 3), stride=2, padding=1, bias=True)

    def forward(self, input):
        return self.conv(input)

model = MyBlock(3, 16).cuda()
model.eval()

print("-----MODEL---------")
print(model)

example_input = torch.rand(1, 3, 512, 1024).cuda()
print("Input Shape = ", example_input.shape)
print("-----MODEL SUMMARY---------")
print(summary(model, example_input.shape[1:]))

traced_script_module = torch.jit.trace(model, example_input)
traced_script_module.save("android_torch_test.pt")
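One possible cause (an assumption on my part, not confirmed in the thread): the script above traces the module with CUDA tensors, while the Android runtime is CPU-only. Tracing a CPU copy of the model avoids baking CUDA-specific tensors into the saved module, roughly like this:

```python
import torch
import torch.nn as nn

class MyBlock(nn.Module):
    def __init__(self, ninput, noutput):
        super().__init__()
        self.conv = nn.Conv2d(ninput, noutput - ninput, (3, 3),
                              stride=2, padding=1, bias=True)

    def forward(self, input):
        return self.conv(input)

# Trace on CPU so the serialized module carries no CUDA tensors.
model = MyBlock(3, 16).eval()
example_input = torch.rand(1, 3, 512, 1024)
traced = torch.jit.trace(model, example_input)
traced.save("android_torch_test_cpu.pt")
```
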

created time in 2 months

push eventsiju-samuel/road-and-damage-detection

obulirajs

commit sha fa276208da7279133c026e39cdd7dc17de5203c3

IndianRoad_drive3_12_anno1

view details

Siju

commit sha 945e4a3a8d71caf56784b4c8e97741de33f66561

Merge pull request #10 from obulirajs/master IndianRoad_drive3_12_anno1

view details

push time in 2 months

PR merged siju-samuel/road-and-damage-detection

IndianRoad_drive3_12_anno1

IndianRoad_drive3_12_anno1

+1516 -0

0 comment

19 changed files

obulirajs

pr closed time in 2 months

issue commenttensorflow/tensorflow

How to quickly add extract_image_patch op support in tflite?

@haozha111 any help is appreciated! I implemented it as a custom operator and now I'm facing another issue.

The output shape of extract_image_patches is (1, 64, 85, 1536). The next layer after extract_image_patches is a reshape with new_shape=[1, -1, 4, 4, 96], and this is where the error occurs.

ERROR: tensorflow/lite/kernels/reshape.cc:66 num_input_elements != num_output_elements (1 != 0)
ERROR: Node number 150 (RESHAPE) failed to prepare.

It's been quite a long time. Please help!
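For context, TFLite's RESHAPE prepare step only compares total element counts, so the `(1 != 0)` in the error suggests the custom op's output shape was never propagated to the reshape node (my inference, not confirmed). A hypothetical sketch of that check, with illustrative names rather than the actual TFLite API:

```python
import numpy as np

def check_reshape(input_shape, new_shape):
    """Mimic the element-count check in TFLite's reshape prepare:
    the input element count must equal the output element count once
    a single -1 wildcard is resolved."""
    num_input = int(np.prod(input_shape))
    known = int(np.prod([d for d in new_shape if d != -1]))
    if -1 in new_shape:
        if num_input % known != 0:
            raise ValueError("num_input_elements != num_output_elements")
        return [num_input // known if d == -1 else d for d in new_shape]
    if num_input != known:
        raise ValueError("num_input_elements != num_output_elements")
    return list(new_shape)

# Shapes from the report: extract_image_patches output feeding reshape.
print(check_reshape((1, 64, 85, 1536), [1, -1, 4, 4, 96]))  # [1, 5440, 4, 4, 96]
```

With the reported shapes the counts do match, which is why a missing shape on the custom op's output (seen as 0 or 1 elements) looks like the more likely trigger.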

siju-samuel

comment created time in 3 months

push eventsiju-samuel/road-and-damage-detection

debashree-samanta

commit sha c9fbe36b739864e0ffd08ff340637ed64c0cce66

Create tst.txt

view details

debashree-samanta

commit sha c3210f57824cc33cd1ff21384a0cbc3453929aff

Add files via upload

view details

debashree-samanta

commit sha efa144b8d735cf391768101d2c63c42665198e6b

Create tst

view details

debashree-samanta

commit sha 27d6184900b73c6a1c839ce5736064c9f50353b2

Add files via upload

view details

debashree-samanta

commit sha 7bf579c2a0f503e71085fe70f390c439dda1d7d5

Create tst

view details

debashree-samanta

commit sha 05802b2197f2f50151beaebb05739aea98322c7c

Add files via upload

view details

debashree-samanta

commit sha 6f3e162b5fb759050e39f48b2673a4b79059d6ad

Create tst

view details

debashree-samanta

commit sha 7c603268855a5bc847394a7697e8affcffe057e8

Add files via upload

view details

debashree-samanta

commit sha a22cf64248dfe21f70073786c95bb4084f3cffaa

Delete tst

view details

debashree-samanta

commit sha 6307e8484d8411aff72699d3e468ebfb54e71a36

Delete tst.txt

view details

debashree-samanta

commit sha 62484fccdd6f5cac7b141902e798c02f204b4bbc

Delete tst

view details

debashree-samanta

commit sha 011f47ee449c63978da7558d4f609ce39cf83298

Delete tst

view details

debashree-samanta

commit sha 449e7d943f00bce2f3edd9388e62944e1a64e180

Create tst

view details

debashree-samanta

commit sha f77f0e0bb1a3ad9876ba6ac6a0af6e6c620526e0

Add files via upload

view details

debashree-samanta

commit sha deebd2d64085556c38696d99b7e5c5c91597d53d

Delete tst

view details

debashree-samanta

commit sha fd0cad11840d9e90ea2cfb765b075dc134cb44ff

Create IndianRoad_drive3_11_anno3

view details

debashree-samanta

commit sha 8ecbd3249b61987bfaa1753a8ec9134cfbdd3e8f

Delete IndianRoad_drive3_11_anno3

view details

debashree-samanta

commit sha d0ad0924a6d23a59bba2b9384fbebd57ff9041d7

Create tst

view details

debashree-samanta

commit sha ef7db367906c66d8ff3da05a7ed30423459d2410

Add files via upload

view details

debashree-samanta

commit sha 29ca79d9efb3a1e699fd9d0a02a60bbc1435ab2f

Delete tst

view details

push time in 3 months

push eventsiju-samuel/road-and-damage-detection

suchismitade

commit sha bc6ec0341f5fad7ff1682ae11f81a51331ba1e00

Delete Pictures0478_leftImg8bit.json

view details

suchismitade

commit sha 598d916ca355aa4f8965d653a155d7de8b168e04

Delete Pictures0481_leftImg8bit.json

view details

suchismitade

commit sha dfaa7a080d59d95c1d3439f4068a713e571eba35

Delete Pictures0483_leftImg8bit.json

view details

suchismitade

commit sha 0ce7e0d6568752d721d141ca5955d39fdd771107

Delete Pictures0485_leftImg8bit.json

view details

suchismitade

commit sha 764e24ebcfa30374345c4d447bc848b9efa56187

Delete Pictures0502_leftImg8bit.json

view details

suchismitade

commit sha c9ace7e8e98335967e2752bbb5edc145e4e9685f

Delete Pictures0503_leftImg8bit.json

view details

suchismitade

commit sha 34d153b4de2aa704cc83004341f399eb4ef29564

Delete Pictures0508_leftImg8bit.json

view details

suchismitade

commit sha 4c69a074962fcd276a06c73cd51afa2f6bc27e86

Delete Pictures0510_leftImg8bit.json

view details

suchismitade

commit sha 07234502b9f574bc3a3144b67431298d9f489fd9

Delete Pictures0514_leftImg8bit.json

view details

suchismitade

commit sha f2ce6c2f9e22e79547af75fd08200d8e5f61d4d3

Delete Pictures0554_leftImg8bit.json

view details

suchismitade

commit sha 137b0d3d40cecbbf775d39b02541c394ff7b73bf

Delete Pictures0561_leftImg8bit.json

view details

suchismitade

commit sha cf03e671eb89d7c34d27b08244315384dda0f3a8

Delete Pictures0567_leftImg8bit.json

view details

suchismitade

commit sha 328f65d7b06289899b7ce978d3269390ac3a1ef0

Delete Pictures2209_leftImg8bit.json

view details

suchismitade

commit sha 5aff4f6c31a759238fa68e06be81c7259f8cdf4a

Delete Pictures2217_leftImg8bit.json

view details

suchismitade

commit sha 746b30ab2217dd6da919ba33afa4fe46c53d3152

Delete Pictures2225_leftImg8bit.json

view details

suchismitade

commit sha 8a1b91e1ea5a2487d938b1ad00d357729667033a

Delete Pictures2235_leftImg8bit.json

view details

suchismitade

commit sha 1590c84e679c506cfde58d5c5b2eb2bf438774b6

Delete Pictures2238_leftImg8bit.json

view details

suchismitade

commit sha db6e7ad2e503facf75a01821bc8e37ec21389e4b

Delete Pictures2258_leftImg8bit.json

view details

suchismitade

commit sha f7cb48a9a9d90b560eff2040a3682d8adf85c3ec

Delete Pictures2276_leftImg8bit.json

view details

suchismitade

commit sha 75d1c5283e3535bb6dd262b6ace87bec41ebdc0a

Add files via upload

view details

push time in 3 months

push eventsiju-samuel/road-and-damage-detection

Siju Samuel

commit sha bd0c8cd16f438387b77cc805f921bd58f106d212

mine updated

view details

push time in 3 months

push eventsiju-samuel/road-and-damage-detection

Karthikeyan Krishnasamy

commit sha 5ea96cd0887956268eb340dc0ed83287036011fc

Annot_153128

view details

Karthikeyan Krishnasamy

commit sha bc8a96a4c96663091578a3745603d8c3e9d0234f

Add files via upload

view details

Karthikeyan Krishnasamy

commit sha 8d4f9b61803a8eab4b2380d100bcd06135a269ff

Add files via upload

view details

Karthikeyan Krishnasamy

commit sha f5fbe057788052cefc552802c970046e57dca03b

Add files via upload

view details

Karthikeyan Krishnasamy

commit sha 9215890e2d7bb55ac55916b619f15522d0625d1f

Add files via upload

view details

Karthikeyan Krishnasamy

commit sha 2e7f4d4522f423e39d0159a5d7c181da3d2872ef

Add files via upload

view details

Karthikeyan Krishnasamy

commit sha 4088855d3b4845af56723881c107a32137321384

Source Proj

view details

Siju

commit sha 0032ae05e8b94aa019397072255db2fb3ff133c4

Merge pull request #7 from krishnakarthi/master My changes

view details

push time in 3 months

push eventsiju-samuel/road-and-damage-detection

obulirajs

commit sha b61bbe0da1888522395efd7976fe40fe00fdfe9d

Base commit Annot_2_144658_anno4

view details

obulirajs

commit sha ac776089016349aa1f68a03daee1da6d247fd8ce

Base commit IndianRoad_drive3_1_anno3

view details

Siju

commit sha 59e6de1225276a2669426bef0384a0bc6671fc58

Merge pull request #6 from obulirajs/master Base commit Annot_2_144658_anno4

view details

push time in 3 months

PR merged siju-samuel/road-and-damage-detection

Base commit Annot_2_144658_anno4

Base commit Annot_2_144658_anno4

+22500 -0

0 comment

50 changed files

obulirajs

pr closed time in 3 months

fork siju-samuel/awesome-deep-text-detection-recognition

A curated list of resources for text detection/recognition (optical character recognition ) with deep learning methods.

fork in 3 months

issue commenttensorflow/tensorflow

How to quickly add extract_image_patch op support in tflite?

@gbaned Could you please assign someone to help me add this as a custom op?

siju-samuel

comment created time in 3 months

issue openedtensorflow/tensorflow

How to quickly add extract_image_patch op support in tflite?

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):Ubuntu 18.04
  • TensorFlow installed from (source or binary):Source
  • TensorFlow version (or github SHA if from source):1.14

Command used to run the converter or code if you’re using the Python API

import tensorflow as tf
saved_model_dir="./"
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
#converter.allow_custom_ops=True
tflite_model = converter.convert()
open("./converted_model.tflite", "wb").write(tflite_model)

The output from the converter invocation

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime and are not recognized by TensorFlow. If you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CAST, CONCATENATION, CONV_2D, DIV, ELU, LOGISTIC, MAXIMUM, MINIMUM, MUL, RELU, RESHAPE, RESIZE_NEAREST_NEIGHBOR, REVERSE_V2, SOFTMAX, SPLIT, SQRT, SQUARE, STRIDED_SLICE, SUM, TANH, TRANSPOSE, TRANSPOSE_CONV. Here is a list of operators for which you will need custom implementations: ExtractImagePatches.
Traceback (most recent call last):
  File "/home/siju/.local/bin/toco_from_protos", line 8, in <module>
    sys.exit(main())
  File "/home/siju/.local/lib/python3.6/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 59, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "/home/siju/.local/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/siju/.local/lib/python3.6/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/siju/.local/lib/python3.6/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/siju/.local/lib/python3.6/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 33, in execute
    output_str = tensorflow_wrap_toco.TocoConvert(model_str, toco_str, input_str)
Exception: We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
 and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime and are not recognized by TensorFlow. If you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CAST, CONCATENATION, CONV_2D, DIV, ELU, LOGISTIC, MAXIMUM, MINIMUM, MUL, RELU, RESHAPE, RESIZE_NEAREST_NEIGHBOR, REVERSE_V2, SOFTMAX, SPLIT, SQRT, SQUARE, STRIDED_SLICE, SUM, TANH, TRANSPOSE, TRANSPOSE_CONV. Here is a list of operators for which you will need custom implementations: ExtractImagePatches.

Also, please include a link to the saved model or GraphDef

saved_model.zip: https://github.com/tensorflow/tensorflow/files/3890254/saved_model.zip

Failure details: I want to implement a contextual attention block which contains the op extract_image_patches. This op is not the first or last one; it sits in the middle of the model, so I cannot do the processing outside the model.

Any other info / logs: No

created time in 3 months

pull request commenttensorflow/tensorflow

[LITE] ADDN 8bit quantization support

@suharshs Could you please approve again? I rebased onto the latest code. Thanks for the review.

siju-samuel

comment created time in 3 months

push eventsiju-samuel/tensorflow-1

Adrian Kuegel

commit sha 9f210bb595e4c462e69259da23d397ed98681b84

Prepare moving from xla_proto_library to tf_proto_library_cc. This adds the missing piece to allow to pass the use_grpc_namespace parameter. PiperOrigin-RevId: 279026689 Change-Id: I006bcd514a605d00d2b977dad341569c494a6e3a

view details

A. Unique TensorFlower

commit sha d1dc4cc7f99784a72bf185712f34cd44338b8a22

compat: Update forward compatibility horizon to 2019-11-07 PiperOrigin-RevId: 279030581 Change-Id: I89a15a572b168703463cd64f4817117f603f0aa0

view details

Adrian Kuegel

commit sha f45272036ccbccc7c8b5a647bc30da8a156ba576

Create tf2xla_proto with tf_proto_library_cc. This makes it possible to depend on the C++ specific version of the protos without breaking open source build. PiperOrigin-RevId: 279032810 Change-Id: Iba1cd6b5d2a0631a28f076c6a159d68ea3a31283

view details

Tiezhen WANG

commit sha c37435f741bc50ac6755ce65587194089d2709fa

TFL & TFLM: Add memory related methods in Context. PiperOrigin-RevId: 279037987 Change-Id: Icf65223a4209ace5c2575a91f77ac3a0cfb0966a

view details

TensorFlower Gardener

commit sha 6eac303dd8534e65905e379d7f42f7b4a06bebfe

Merge pull request #34052 from Intel-tensorflow:memory-fix PiperOrigin-RevId: 279039959 Change-Id: Ic4a51086582cb18fe555079d5517887a5590eabe

view details

TensorFlower Gardener

commit sha 6977b8f7422b436c02aafde371716319149af542

Merge pull request #33953 from yongtang:33799-_logger_find_caller-3.8 PiperOrigin-RevId: 279049116 Change-Id: Ie1e6ecaf358c16bc4b80077b735239c17aecaed4

view details

Adrian Kuegel

commit sha 63c45aacf30e819b00e74b85bd1c9f11b0760cd3

Migrate host_compute_metadata_proto from xla_proto_library to tf_proto_library_cc. PiperOrigin-RevId: 279060623 Change-Id: Ia09624bfc6ba95731814fe2b7994c72e5caeb0e1

view details

Nicolas Vasilache

commit sha 3878d7aa226126baaf6f9fd21c4f5db392f9c7a0

Update Linalg to use std.view Now that a view op has graduated to the std dialect, we can update Linalg to use it and remove ops that have become obsolete. As a byproduct, the linalg buffer and associated ops can also disappear. PiperOrigin-RevId: 279073591

view details

Nicolas Vasilache

commit sha 148f07323f97ef54998f28cd95c195064ce2c426

Update Linalg to use std.view Now that a view op has graduated to the std dialect, we can update Linalg to use it and remove ops that have become obsolete. As a byproduct, the linalg buffer and associated ops can also disappear. PiperOrigin-RevId: 279073591 Change-Id: I999b9ec25c924cd895b3d72cb301a43d6fc6bd74

view details

Guangda Lai

commit sha b0f61d8fbbc28a09de0017b93171ad99fb3a30be

Add some changes to trigger copybara import again.

view details

Jacques Pienaar

commit sha db3fb53f0964cd0544c137afa831845877491bd6

Add compatible query method to infer type interface A return type that differs from the inferred return type need not indicate that an operation is invalid (e.g., tensor<*xf32> vs tensor<10xf32>) but they should be compatible for the operation to be considered valid. Add method to query if inferred type is compatible with return type. Also add InferTypeOpIntefaceDefault trait that considers equality and compatibility as the same. Currently an op has to opt in to using it explicitly. PiperOrigin-RevId: 279085639

view details

Nicolas Vasilache

commit sha 184d722b6abdc4cde4d04e065dfef2aecaa70feb

Fix parameter name and document option in linalg::promoteSubViews PiperOrigin-RevId: 279086352

view details

TensorFlower Gardener

commit sha a44da74d9408a103e01a009a9f552b905fcf4ea9

Merge pull request #33157 from jvicenti:master PiperOrigin-RevId: 279075967 Change-Id: I2a6b4f6d0be2a8d5d590149acdf98f2904189890

view details

Andy Davis

commit sha 6fc957dba97726ef3c817cf4124fea658fee0833

Add canonicalizer for ViewOp which folds constants into the ViewOp memref shape and layout map strides and offset. PiperOrigin-RevId: 279088023

view details

Christian Sigg

commit sha d38c6a6434455b331f161bd58a2afb489fa2684d

Temporarily disable CUPTI tests on Windows. PiperOrigin-RevId: 279081007 Change-Id: I88810034398dd98ee7c1c3e07f901b87f4bb7c8d

view details

Adrian Kuegel

commit sha a080453a3b8fa2eb1951a9a667fd0462919432bf

Migrate backend_configs from xla_proto_library to tf_proto_library_cc. PiperOrigin-RevId: 279084129 Change-Id: I6b0dd551b14897ec893feef61c8a8056e820e6ca

view details

Adrian Kuegel

commit sha 504753ea04a9a4dd422b5e6e99d49b9432802c00

Migrate hlo_execution_profile_data and hlo_profile_printer_data to tf_proto_library_cc. PiperOrigin-RevId: 279084247 Change-Id: If8346cfcbf3046f4dcccad486eb4c58856922249

view details

Jacques Pienaar

commit sha 7d4eccf17ed1ffd61216bb8b1170bdefcc2c99ab

Add compatible query method to infer type interface A return type that differs from the inferred return type need not indicate that an operation is invalid (e.g., tensor<*xf32> vs tensor<10xf32>) but they should be compatible for the operation to be considered valid. Add method to query if inferred type is compatible with return type. Also add InferTypeOpIntefaceDefault trait that considers equality and compatibility as the same. Currently an op has to opt in to using it explicitly. PiperOrigin-RevId: 279085639 Change-Id: Ic702e4c3f6d0b5fb249ab7ceb9208074df31cd69

view details

Nicolas Vasilache

commit sha af202872507aae6f544e59167a5626d17b9d65bb

Fix parameter name and document option in linalg::promoteSubViews PiperOrigin-RevId: 279086352 Change-Id: Ib16645867f438db8530c353880c349ff29237924

view details

A. Unique TensorFlower

commit sha 700263d02a8b52c0ff4a2fc2d37416f4a8e3b71d

Add canonicalizer for ViewOp which folds constants into the ViewOp memref shape and layout map strides and offset. PiperOrigin-RevId: 279088023 Change-Id: I36794dc276ed15c5b735603981a5d08b2ec5f465

view details

push time in 3 months

issue openedmnicnc404/CartoonGan-tensorflow

Could you please share your dataset

Hi @mnicnc404, thanks for the great work. I would like to train this model myself. Could you please share the dataset you used? Any one style is fine. I need both the image dataset and the style dataset. Thanks for your efforts.

created time in 3 months

push eventsiju-samuel/road-and-damage-detection

obulirajs

commit sha a99326c8ecb0242f34040ddfbfef4e9d37e5b12d

IndianRoad_drive3_11_anno1

view details

obulirajs

commit sha 740efc1613c8adedae090377e49e9763e36132b8

Annot_2_143726_anno2

view details

obulirajs

commit sha 4c69f9833c2c446230e8138950768aeaf55ded45

IndianRoad_drive3_11_anno2

view details

Siju

commit sha 4985e43ff05e0db765c8e7994a6aa13ee7026764

Merge pull request #5 from obulirajs/master IndianRoad_drive3_11_anno1

view details

push time in 3 months

PR merged siju-samuel/road-and-damage-detection

IndianRoad_drive3_11_anno1

IndianRoad_drive3_11_anno1

+18138 -0

0 comment

50 changed files

obulirajs

pr closed time in 3 months

push eventsiju-samuel/road-and-damage-detection

obulirajs

commit sha 4f8f39049274cb0c5288a5f9b5a1eb88c0c4029f

Annot_2_144658_anno5

view details

Siju

commit sha 411fb6d1a81bedb5fa5858537bf6c119a40402c3

Merge pull request #4 from obulirajs/master Annot_2_144658_anno5

view details

push time in 4 months

PR merged siju-samuel/road-and-damage-detection

Annot_2_144658_anno5

Annot_2_144658_anno5

+15667 -0

0 comment

27 changed files

obulirajs

pr closed time in 4 months

push eventsiju-samuel/road-and-damage-detection

obulirajs

commit sha 2e7aa292d2fd9a0c9eecf16e8c36b7b7979a9bd0

Annot_174335_anno7

view details

obulirajs

commit sha d253f801efcf1de6ce0aeec8da1de4f175cdc25b

Annot_2_151126_anno8

view details

obulirajs

commit sha 642319926f5dbbfa8dd2d3d4caafe362b06d83de

Annot_2_150704_anno7

view details

Siju

commit sha 39deb887899f3078fefee670e819a585be6c0aaa

Merge pull request #3 from obulirajs/master Annot_174335_anno7

view details

push time in 4 months

PR merged siju-samuel/road-and-damage-detection

Annot_174335_anno7

Folder Annot_174335_anno7 Labels.

+24432 -0

0 comment

42 changed files

obulirajs

pr closed time in 4 months

fork siju-samuel/Finger_Tracking

Tracking the path of Finger using OpenCV

fork in 4 months

PR opened wuhuikai/FastFCN

Update deeplab_res50_pcontext.sh
+6 -6

0 comment

1 changed file

pr created time in 4 months

push eventsiju-samuel/FastFCN

Siju

commit sha cd1b6ef301c5a6df125787600a3a1c2bc3639400

Update deeplab_res50_pcontext.sh

view details

push time in 4 months

fork siju-samuel/FastFCN

FastFCN: Rethinking Dilated Convolution in the Backbone for Semantic Segmentation.

http://wuhuikai.me/FastFCNProject

fork in 4 months

issue openedwuhuikai/FastFCN

Comparison with deeplab v3

deeplabv3 resnet50: model size 98 MB, parameters 39,638,869

deeplabv3 resnet50 with JPU: model size 489 MB, parameters 63,931,934

https://github.com/wuhuikai/FastFCN/blob/097e7130f77a15eeca12f37a0afcf2f8f7f90439/encoding/models/deeplabv3.py#L16

As the above link shows, the architecture with JPU contains [resnet50 (dilations removed) + JPU + ASPP_Module + FCNHead], whereas normal deeplabv3 consists of [resnet50 + ASPP_Module]. This difference explains the gap in memory and parameter size.

Please correct me if I'm wrong. I'm not able to understand how this improves the memory footprint and speed of the network in comparison with deeplabv3 with/without JPU.
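The parameter counts quoted above can be reproduced with a one-liner over any PyTorch module (a generic sketch; `model` stands in for the deeplabv3 variants, which are not constructed here):

```python
import torch.nn as nn

def count_parameters(model):
    """Total number of trainable parameters, as quoted in the issue."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Tiny example: a 3x3 conv from 3 to 16 channels has
# 16*3*3*3 weights + 16 biases = 448 parameters.
conv = nn.Conv2d(3, 16, 3)
print(count_parameters(conv))  # 448
```
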

created time in 4 months

push eventsiju-samuel/road-and-damage-detection

obulirajs

commit sha f48c6d04f389979054673c6bd35fa5de9fa099c2

Base commit

view details

Siju

commit sha cfb8f8bd8b92e85e6270d1b20efc97fb5ab43b62

Merge pull request #2 from obulirajs/master Base commit folder : "\train\Annot_2_155158_10"

view details

push time in 4 months

PR merged siju-samuel/road-and-damage-detection

Base commit folder : "\train\Annot_2_155158_10"

dataset\labels\road_damage\train\Annot_2_155158_10

+17013 -0

0 comment

11 changed files

obulirajs

pr closed time in 4 months

push eventsiju-samuel/road-and-damage-detection

Siju Samuel

commit sha 376eb6be192f2676466d3147bd5753c486a1b7c7

Updated

view details

push time in 4 months

push eventsiju-samuel/road-and-damage-detection

Siju Samuel

commit sha e820202393d3f266e9ff424b5f827734f0165d65

added IndianRoad_drive3_6_anno2

view details

push time in 4 months

push event siju-samuel/road-and-damage-detection

Siju

commit sha 053376fa88c51be3639be915006585b7c3dbab75

Create dataset_download.sh

view details

push time in 4 months

push event siju-samuel/road-and-damage-detection

suchismitade

commit sha 2a59ace78da7c17a69c7698977846e666c3dbdb3

Create test.txt

view details

suchismitade

commit sha 42771eeedd04fdc23e7d03ef94d52f0f2eea4ff5

Add files via upload

view details

suchismitade

commit sha 8b711f7bd79df39898c9e554d9764b9203336987

Add files via upload

view details

suchismitade

commit sha a4cd58bbf5bfaa504f2fc3631760a8528baca31c

Delete test.txt

view details

suchismitade

commit sha a94070df07db1a364c68cfd47b681a1d069f8c7f

Create test.txt

view details

suchismitade

commit sha 8fe21b15d329a117a76beee7f767777d7a359990

Add files via upload

view details

suchismitade

commit sha 27db387586064ee2a86bb611bb1d0a61f1c5b8b6

Delete test.txt

view details

Siju

commit sha 9fe8a29f9cad9a86f2b8fb22341407283da568c5

Merge pull request #1 from suchismitade/master

updated with annotations

view details

push time in 4 months

push event siju-samuel/road-and-damage-detection

Siju

commit sha 7c3345560c99890c9bf315c533975bb54bde8fe7

Update README.md

view details

push time in 4 months

push event siju-samuel/road-and-damage-detection

Siju Samuel

commit sha 991fafeabf481be5b922dbde7483e42dcc0301e6

labels initial

view details

push time in 4 months

push event siju-samuel/road-and-damage-detection

Siju

commit sha caf390276735be2b5020cc0dbb8d1dec88f9cb7b

Update README.md

view details

push time in 4 months

pull request comment dmlc/tvm

Fix the libdarknet_mac2.0.so path error in from_darknet.py

@tmoreau89 no issues. @lkshore please go ahead and upload the .so built for Mac. Thanks.

lkshore

comment created time in 4 months

started AutoNUE/public-code

started time in 5 months

PublicEvent

push event siju-samuel/road-and-damage-detection

Siju

commit sha 513861581aab262cda6ceecadb909db85b428372

Update README.md

view details

push time in 5 months

push event siju-samuel/road-and-damage-detection

Siju

commit sha f811f66e7c91c999c105be856c7ae844da32b9cc

Set theme jekyll-theme-merlot

view details

push time in 5 months

push event siju-samuel/tensorflow

Siju Samuel

commit sha f98f7e8df74b8ecae734355e48070b09ba1946ea

Synced for local update from master/origin

view details

Siju Samuel

commit sha a9ca335271521a8bc7f4b8db5651333d066cb86a

Synced for local update from master/origin

view details

Siju Samuel

commit sha 0aba2e1bae849bf03e42bea7c6deb258aeaa4d1a

Synced for local update from master/origin

view details

Siju Samuel

commit sha 95e2507c0ec2e929403835fbdb01e60460a20b4b

Synced for local update from master/origin

view details

Siju Samuel

commit sha e4579bf416669c76bdd49db41429bd4cd1c791c3

Synced for local update from master/origin

view details

Siju Samuel

commit sha 3f863c27b6f10b1231820c0496bdcdb971cc7677

Synced for local update from master/origin

view details

Siju Samuel

commit sha b9b72a3fc899b7fa80a01b4a87688f674b42cdd6

Synced for local update from master/origin

view details

Siju Samuel

commit sha bd955616d6c70443a709138bbd1a1319a9e9a1a0

Synced for local update from master/origin

view details

Siju Samuel

commit sha 6835d373518c9dc97b99403ba502012dcdf046fb

Synced for local update from master/origin

view details

Siju Samuel

commit sha e99e8a9639a527d33e2524b7f6e4161a3f9959ce

Synced for local update from master/origin

view details

Siju Samuel

commit sha f401b73d36dc4933767aaef4b7c8bfeb5f5016b4

Synced for local update from master/origin

view details

Siju Samuel

commit sha 39e5cbd84ab6e2577518a03b11f6a59ec6254a75

Synced for local update from master/origin

view details

Siju Samuel

commit sha 76144a01ca10179f47f5ab5090adde08ece0e373

Synced for local update from master/origin

view details

Siju Samuel

commit sha 0a31173d1abb7cdbaee178deaafd69a51ddb94e8

Synced for local update from master/origin

view details

Siju Samuel

commit sha 747398308684732fe069d8aac4dc6f205686d14f

Synced for local update from master/origin

view details

Siju Samuel

commit sha a1cd534ca7ab714f4d021076f85b06a7f9650b2b

Synced for local update from master/origin

view details

Siju Samuel

commit sha 4f8422daed22145159a354da46ed389efbaff246

Synced for local update from master/origin

view details

Siju Samuel

commit sha 95ac0dc0c8b3534eed23381fd4477e867502973a

Synced for local update from master/origin

view details

Siju Samuel

commit sha 96a943a5a87efd7634ff7ebdf3b543576d50078e

Synced for local update from master/origin

view details

Siju Samuel

commit sha 2caed4e03c8eca649ba81d577e4c056771879372

Synced for local update from master/origin

view details

push time in 5 months

push event siju-samuel/dmlc-tvm

Siju Samuel

commit sha 8705cf6534ea5abf1613013759f8ac8fc4a16945

Synced for local update from master/origin

view details

Siju Samuel

commit sha aaeea08d9508082a0749d50b5953d642fd0a3b3d

Synced for local update from master/origin

view details

Siju Samuel

commit sha 5015ef308645c019f1bc7bbe6a961187217708ce

Synced for local update from master/origin

view details

Siju Samuel

commit sha 678a1963481e5ee465d6236d3b446c35b4098077

Synced for local update from master/origin

view details

Siju Samuel

commit sha 41688aac1fafa769e0a641695155ea2fdab50232

Synced for local update from master/origin

view details

Siju Samuel

commit sha b47f584a32b4f6c5cb014dd154af2caa3d6d7247

Synced for local update from master/origin

view details

Siju Samuel

commit sha 2812170e0659fa3ac132a2b34fd2440f7c95693b

Synced for local update from master/origin

view details

Siju Samuel

commit sha 03b87682ddbf3a1ce9ce8d2fa074e851c0a0bac0

Synced for local update from master/origin

view details

Siju Samuel

commit sha 8783958e5941275c7b0f117139781f7c3e2cc3b7

Synced for local update from master/origin

view details

Siju Samuel

commit sha 5c1bc3414ed478a69f1da1fa5c509c52979ab123

Synced for local update from master/origin

view details

Siju Samuel

commit sha 97c222babff5b6a62c5a9557e64cf70b44b863af

Synced for local update from master/origin

view details

Siju Samuel

commit sha d66644fa8a9c007e0baa7c7208bd45f9020075f0

Synced for local update from master/origin

view details

Siju Samuel

commit sha 0661bff1bf2d126fbedd241defaf33dc31cc4d2b

Synced for local update from master/origin

view details

Siju Samuel

commit sha dcef5f8123bf518cc81a81f21540625570cd89f4

Synced for local update from master/origin

view details

Siju Samuel

commit sha f483cfa00c961f5bce5a38502f9e803cd897cfa6

Synced for local update from master/origin

view details

Siju Samuel

commit sha 6f0aa9f9f569786ae41f89d358e3d29ad64a7cd2

Synced for local update from master/origin

view details

Siju Samuel

commit sha 05d91df2bb8ff8850019910166e6da6f3a44ae00

Synced for local update from master/origin

view details

Siju Samuel

commit sha ff6dc7de3d8cbae6d86a506c09234805e295eeec

Synced for local update from master/origin

view details

Siju Samuel

commit sha 609c8275a59e47fdce9f4716fc05151bec91939f

Synced for local update from master/origin

view details

Siju Samuel

commit sha 0cc979fe8a422bc26703f0ae275d46aca6ef5e01

Synced for local update from master/origin

view details

push time in 5 months

more