gmagogsfm/CppCon2014 0

Speaker materials from CppCon 2014

gmagogsfm/ez-vcard 0

A vCard parser library for Java

gmagogsfm/fireplace 0

Hearthstone simulator

gmagogsfm/gitReflections 0

Git course on Udacity

gmagogsfm/jumpybird 0

A Flappy Bird-like web game

gmagogsfm/QRNameCard 0

An Android app that converts contact information into QR code images

gmagogsfm/tensorflow 0

An Open Source Machine Learning Framework for Everyone

gmagogsfm/tensorflow-zh 0

Chinese translation of the official documentation for TensorFlow, Google's newly open-sourced AI system

starred uwsampl/tvm-distro

starred 20 days ago

push event gmagogsfm/tvm

Trevor Morris

commit sha 3188a6829baf4d3fcbb3c795cc4a412e75d966c0

Fix bug with LegalizeLayoutTranform which added duplicate ops (#81)

Trevor Morris

commit sha 747579657922308803a0dde078781475d5833a4c

[Relay/TRT] Support clip for TRT 4 using relu + eltwise (#83) * Support clip for TRT 4 using relu + eltwise * Re-enable consistency check * Invoke convertlayout properly
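To see the trick, here is a minimal NumPy sketch (illustrative only; the helper names are mine, not the TRT integration code) of decomposing clip into ReLU plus elementwise arithmetic, which is what this commit relies on since TensorRT 4 has no native clip layer:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def clip_via_relu(x, lo, hi):
    # Identity: clip(x, lo, hi) == hi - relu((hi - lo) - relu(x - lo)).
    # The inner relu clamps the lower bound, the outer one the upper.
    return hi - relu((hi - lo) - relu(x - lo))

x = np.array([-2.0, 0.5, 3.0])
assert np.allclose(clip_via_relu(x, 0.0, 1.0), np.clip(x, 0.0, 1.0))
```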

pushed 21 days ago

starred apache/incubator-tvm

starred 24 days ago

starred google/jax

starred 24 days ago

starred google/iree

starred a month ago

fork gmagogsfm/tensorflow

An Open Source Machine Learning Framework for Everyone

https://tensorflow.org

forked a month ago

push event gmagogsfm/tvm

Zhi

commit sha 303a47193e40b5f4937c00221b52a2cb45b6dabc

Merge with Apache/incubator-tvm (#71). Squashed changes:

* Change upstream url
* Fix bias_add gradient (#4516): a change caused collapse_sum_like to reject implicit dimension broadcasting for the bias_add gradient, so switch to explicit sum reduction on the non-bias axis dimensions; lint fix
* [Bugfix][Frontend][TFlite] Fix wrong function call in TANH tests (#4517): replace sigmoid() with tanh() in tests for TANH
* Fixed extra reshape parameter bug (#4524)
* Use the best tuner possible (#4397): add comment noting availability of better tuners; fix typos and wording
* [ir] use DataType instead of Type for readability, because Type has been deprecated (#4513)
* add bfloat16 typeflag support (#4525)
* fix empty config caused KeyError (#4520)
* fix onnx shape dtype (#4528)
* fix crash issue in tsim backend (#4527)
* PIL is deprecated and should be replaced with pillow, a fork of PIL (#4533)
* [Relay] External codegen (#4482)
* Update legacy places from nnvm to relay (#4535): prepares the current mainline to remove the nnvm compiler dependency; remove legacy stage
* Implement 1d deconvolution (#4476)
* [relay][op] add expand op (from ONNX) to relay frontend (#4483): add Expand to onnx.py with tests that check values rather than shape only
* [TOPI] Allow batch matmul to be fused into injective ops (#4537)
* [TOPI] Fixed nms max_output_size loop (#4541): one of the loops in hybrid_nms used for the max_output_size reordering was incorrectly designated as parallel, resulting in incorrect behaviour; changed to a serial loop
* [DOCS] Mention Ninja build system in install/from_source.rst (#4554)
* [PYTHON][FFI] Cythonize NDArray.copyto and the shape property (#4549)
* vm external codegen (#4544)
* [COMMUNITY] @cchung100m -> reviewer (#4557)
* [VTA] improved virtual memory mapping (#4545)
* [IR] fix style in ir_mutator and ir_visitor (#4561)
* [RUNTIME][VULKAN] Fix compiler warning (#4559)
* [REFACTOR][DTYPE] Isolate dtype to runtime (#4560): dtype.h -> runtime/data_type.h; rename all old references of tvm::Type to DataType; ExprNode.type -> ExprNode.dtype; Expr.type() -> Expr.dtype(); change Expr-related functions to expr_operator; DataType::min() -> min_value(DataType); DataType::max() -> max_value(DataType); move the type constructors Int, UInt, Float, Handle, Bool into DataType (Int(bits) -> DataType::Int(bits), UInt(bits) -> DataType::UInt(bits))
* Support standardize runtime module (#4532)
* [Relay][Frontend][ONNX] Support auto_pad in Conv and ConvTranspose (#4563)
* [TEST] Remove nnvm related code in topi and test script (#4562); remove docs dependency
* [Relay] add max_pool3d in relay and TF converter (#4551)
* Remove nnvm (#4565)
* [VTA][Chisel] End-to-end Inference with Chisel VTA (#4574); update TensorAlu.scala
* remove unnecessary cast to int32 (#4573)
* Fix llvm-enabled build by adding missing intrinsics headers (#4575)
* [DEPRECATION] Remove NNVM compiler (#4571)
* [Relay/Topi][Op] Added native DepthToSpace and SpaceToDepth operators (#4566): topi subpixel operators added and tested, relay attributes with DCR/CDR modes, NHWC shape bug fixed, lint and formatting fixes
* [DOC] fix doc in api.py (#4580)
* [DEPRECATION] Cleanup legacy verilog support (#4576): cleans up the leftover code for the experimental legacy verilog support; the new hardware backend path is now supported by VTA via TSIM
* [RUNTIME] Remove Extension VTable in favor of the Unified Object system (#4578): the old extension mechanism required types to register their constructor and deleter in a VTable and did not enjoy the self-contained deletion property of the new Object system; the extension example now uses the new object system and the old Extension VTable is removed; the Python-side register_extension function continues to work when the passed argument does not require explicit container copy/deletion, which covers the current use cases of the extension mechanism
* Some Windows and MSVC fixes (#4569): fix Python exception creation on Windows; better string conversion for MSVC; fix a cpp style issue
* [NEWS] add v0.6 release (#4558): remove link prefix; fix issue number
* [DOCS] fix typos in autotvm tutorial (#4585)
* [Quantization, Calibrate] Fix context creation when current_target is explicitly set (#4582)
* [Container] Fix mismatch between NDArray SaveDLTensor declaration and implementation signatures (#4586)
* [TOPI][AutoTVM] NHWC conv2d templates (spatial pack) for ARM (#3859): as some frontends (tflite for example) use NHWC as the default layout, enable NHWC schedule templates in TOPI and AutoTVM; comment fixes
* [FIX][TOPI][X86] schedule dense pack (#4539)
* [Relay] Convert Layout Pass (#4335)
* [Relay][AlterLayout] Broadcast with scalar shape (#4577)
* [TOPI] add 3D upsampling Op (#4584): lint fixes; change align_corners to coordinate_transformation_mode; fix resize3d half_pixel; simplify trilinear_resize3d_python; doc fix
* [Runtime] add necessary const qualifier for NDArray container of parameters (#4590)
* [autotvm] fix typos in comment (#4591)
* fix tf.compat.v1 issue for tf version <= 1.12 (#4593)
* [FRONTEND][TF] conv2d_transpose 'SAME' support kernel more than 1x1 (#4484): revised per review comments; add more fallback workarounds to make all tests pass
* [GraphRuntime] Support parameter out in the graph runtime debug (#4598)
* [Perf] Add CublasLt extern support for better Igemm performance (#4550)
* fix codegenc (#4597)
* [REFACTOR][RUNTIME] Update NDArray to use the Unified Object System (#4581): previously NDArray had its own object reference counting mechanism; this migrates NDArray to the unified object protocol while keeping its calling convention intact (NDArray still has its own type_code and its handle is still DLTensor compatible). A minimum of runtime type detection is added in TVMArgValue and RetValue, only when the corresponding type is a base type (ObjectRef) that could also refer to an NDArray, so even a base ObjectRef referring to an NDArray translates its type_code correctly as kNDArrayContainer; assigning a non-base type (say Expr) known to be incompatible with NDArray at compile time incurs no runtime type detection. Also adopts the object protocol for NDArray subclassing, removes the legacy NDArray subclass protocol, and updates the examples in apps/extension to reflect that. Making NDArray an Object brings all the benefits of the object system, for example storing NDArrays in the Array container
* [Relay][Convert Layout] Handling batch norm layout change (#4600)
* [relay][refactor] Cache Op::Get in passes to reduce lookup overhead (#4594): refactor to use the IsOp utility
* Update dmlc_tvm_commit_id.txt
* disable one test_batch_norm unit test for now to check CI, then re-enable test_batch_norm

Co-authored-by: SWu, Ina Dobreva, Josh Fromm, miheer vaidya, Liang ZOU, YixinBao, Cody Yu, masahi, Liangfu Chen, lhutton1, Tianqi Chen, Alex Gladkov, Takato Yamada, Haichen Shen, mbarrett97, Hideto Ueno, Siyuan Feng, Zhao Wu, Neo Chien, Yong Wu, Dmitri Makarov, Bohan Hou, kice, Yizhi Liu, Wang Yucheng, Zhenhua WANG, deepIgnorance, Animesh Jain, optima2005, zhuochen, and Leyuan Wang

Zhi

commit sha 9701fe78f9fa8e86b498dfa75035c4692d55d827

Revert "Merge with Apache/incubator-tvm (#71)" (#77) This reverts commit 303a47193e40b5f4937c00221b52a2cb45b6dabc.

Trevor Morris

commit sha 792a8a50345cc3b2e5eab657e84e112d4e533284

Remove NNVM/TRT compiler tests. (#76)

Zhi Chen

commit sha b0bfb81986322fe7a94b13ad4193d9710d57503a

Change upstream url

SWu

commit sha eac3fe1493a5289779776047cccbd3f3bf94155f

Fix bias_add gradient (#4516) * Fix bias_add gradient A change caused collapse_sum_like to reject implicit dimension broadcasting for bias_add gradient, so switch to explicit sum reduction on the non-bias axis dimensions. * Lint fix
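The fix is easiest to see in plain NumPy (a hedged sketch with an assumed NCHW layout, not the Relay implementation): the bias gradient is the upstream gradient summed explicitly over every axis except the bias axis.

```python
import numpy as np

def bias_add_grad(grad_y, axis=1):
    # Explicit sum reduction over the non-bias axes, instead of relying on
    # collapse_sum_like's implicit broadcast handling.
    reduce_axes = tuple(i for i in range(grad_y.ndim) if i != axis)
    return grad_y.sum(axis=reduce_axes)

grad_y = np.ones((2, 3, 4, 4))      # e.g. an NCHW upstream gradient
print(bias_add_grad(grad_y).shape)  # (3,): one value per channel
```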

Ina Dobreva

commit sha 4f3bc94ed4ba0e9d05ecb45f147a4c33faf430ab

[Bugfix][Frontend][TFlite] Fix wrong function call in TANH tests (#4517) * Replace sigmoid() with tanh() in tests for TANH

Josh Fromm

commit sha 3f3a3a6dc988feae14588ed52cbcd7776bebf2cf

Fixed extra reshape parameter bug. (#4524)

miheer vaidya

commit sha e4da5e8e9c8547897fb223244df682db07eaf870

Use the best tuner possible (#4397) * Use the best tuner possible * Add comment denoting availability of better tuners * Fix typos and wording

Liang ZOU

commit sha 46d143c0f7947278306b7b349cf1ef045b327609

[ir] use DataType instead of Type for readability because Type has been deprecated (#4513)

YixinBao

commit sha b4e8be22dc8bd42f79442002db6f1cf19652f413

add bfloat16 typeflag support (#4525)

Cody Yu

commit sha 27674bdf076eea550d835178cff45b82c10d7b3f

fix empty config caused KeyError (#4520)

masahi

commit sha bd925da4b3b620bbea526672949cefa0982f25ff

fix onnx shape dtype (#4528)

Liangfu Chen

commit sha 198b1b1510ad6bbb093d9feee2d59921765df30d

fix crash issue in tsim backend (#4527)

lhutton1

commit sha 1e80fba878650a5d7264ebd1b89c962bf2496795

PIL is deprecated and should be replaced with pillow (a fork of PIL) (#4533) Change-Id: If2075df5475505f2da87dae7145af5a7ab83d8a4
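As a general note (my gloss, not part of the commit): Pillow is a drop-in replacement because it installs under the same PIL import namespace, so only the installed package changes, not the code.

```python
# pip install pillow  (instead of the unmaintained PIL package)
from PIL import Image  # the import path is unchanged under Pillow

img = Image.new("RGB", (8, 8))  # works identically with Pillow
```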

Zhi

commit sha 5a0301e6161a64a248ba26b95520003462fb24e6

[Relay] External codegen (#4482)

Tianqi Chen

commit sha 1e36f30694b4e4783b7ec74d8124d1d0b2f71ff6

Update legacy places from nnvm to relay. (#4535) * Update legacy places from nnvm to relay. This PR prepares the current mainline to remove nnvm compiler dep. * remove legacy stage

Alex Gladkov

commit sha e184ef4c072ff5a89eff3b76cb77ea64e4e65412

Implement 1d deconvolution (#4476)

Takato Yamada

commit sha 403174fb01ddac8c119adfd77ce579eb28a562b8

[relay][op] add expand op (from ONNX) to relay frontend (#4483) * Add Expand to onnx.py * add test function for expand * Fix an onnx frontend test * Add tests for the value itself instead of shape only on test_expand * Cleaned up some unnecessary modifications.
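For reference, ONNX Expand is plain shape broadcasting; a minimal NumPy analogue (illustrative, not the frontend code itself):

```python
import numpy as np

x = np.array([[1.0], [2.0], [3.0]])  # shape (3, 1)
y = np.broadcast_to(x, (3, 4))       # Expand to target shape (3, 4)
print(y)  # each column repeats [1, 2, 3]
```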

Haichen Shen

commit sha a8bf7f4eb0d30ebc2bf0330b49c3483c2f57a30c

[TOPI] Allow batch matmul to be fused into injective ops (#4537)

mbarrett97

commit sha 5fa157745eadee9f824699d231916b98eb1df54d

[TOPI] Fixed nms max_output_size loop (#4541) One of the loops in hybrid_nms used for performing the max_output_size reordering was incorrectly designated as parallel resulting in incorrect behaviour. This patch changes that loop to a serial loop. Change-Id: I97184f5887f5f028d8ab339fa2808eb7630a4017
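A sketch of the bug pattern (hypothetical names, not the hybrid_nms source): the reordering loop carries a dependence through its write cursor, so iterations cannot safely run in parallel.

```python
def reorder(keep, boxes, max_output_size):
    out, count = [], 0
    for i in range(len(boxes)):   # loop-carried dependence on `count`,
        if keep[i] and count < max_output_size:
            out.append(boxes[i])  # so this loop must stay serial;
            count += 1            # a parallel loop races on `count`
    return out
```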

pushed a month ago

starred awslabs/multi-model-server

starred a month ago

starred renjie-liu/quantization-kernel-codelab

starred a month ago

fork gmagogsfm/tvm

Open deep learning compiler stack for cpu, gpu and specialized accelerators

https://tvm.ai

forked 2 months ago

push event gmagogsfm/incubator-tvm

Hideto Ueno

commit sha ce0b6d5adcd5e3b0fd6b4e7d2326f16b5c5ca5b8

[DOCS] Mention Ninja build system in install/from_source.rst (#4554) * [DOCS] Mention Ninja build system in install/from_source.rst * Address comments

Tianqi Chen

commit sha bc5367a07bdb12bc282d91ad6d75484863e2b5ba

[PYTHON][FFI] Cythonize NDArray.copyto (#4549) * [PYTHON][FFI] Cythonize NDArray.copyto * Cythonize the shape property

Zhi

commit sha d51119e65f588d1bae1efd2f0150752ced984098

vm external codegen (#4544)

Tianqi Chen

commit sha 76076ec318268acb7fe56af1852a52bf7f15436c

[COMMUNITY] @cchung100m -> reviewer (#4557)

Liangfu Chen

commit sha 44cb10540d9010f028cc36da32e118830ae8fe21

[VTA] improved virtual memory mapping (#4545) * [VTA] improved virtual memory mapping * Update virtual_memory.cc

Siyuan Feng

commit sha a4ea0f4bbc44cfe511a8d2d78a2358cd64d14052

[IR] fix style in ir_mutator and ir_visitor (#4561)

Tianqi Chen

commit sha ad81796210e34aba8c614518c380a5d4a76a683b

[RUNTIME][VULKAN] Fix compiler warning (#4559)

Tianqi Chen

commit sha 7fa8aab563cca45797f4a694c1dfc06186549630

[REFACTOR][DTYPE] Isolate dtype to runtime (#4560): dtype.h -> runtime/data_type.h. Changes:
- Rename all old references of tvm::Type to DataType
- ExprNode.type -> ExprNode.dtype
- Expr.type() -> Expr.dtype()
- Change Expr-related functions to expr_operator
- DataType::min() -> min_value(DataType)
- DataType::max() -> max_value(DataType)
- Move the type constructors Int, UInt, Float, Handle, Bool into DataType: Int(bits) -> DataType::Int(bits), UInt(bits) -> DataType::UInt(bits)

Zhao Wu

commit sha f076c839efe52b853c0941b0e6c351ecf868b296

Support standardize runtime module (#4532)

Neo Chien

commit sha 8acc413cf820d25b4039d8ae7bd455607ee1bae8

[Relay][Frontend][ONNX] Support auto_pad in Conv and ConvTranspose (#4563)

Tianqi Chen

commit sha e6ff3f701b2bf3e2204709aea392b0abdf6a85d0

[TEST] Remove nnvm related code in topi and test script (#4562) * [TEST] Remove nnvm related code in topi and test script * Remove docs dep

Yong Wu

commit sha f277da76e4bf6607fac2c7af77c33886cb2939e6

[Relay] add max_pool3d in relay and TF converter (#4551) * [Relay] add max_pool3d in relay and TF converter * fix comments
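A toy NumPy version of 3D max pooling for the non-overlapping case (stride equal to kernel size, NCDHW layout; a sketch of the op's semantics, not the Relay implementation):

```python
import numpy as np

def max_pool3d(x, k):
    # Fold each spatial dim into (dim // k, k) blocks, then reduce the
    # block axes with max.
    n, c, d, h, w = x.shape
    x = x.reshape(n, c, d // k, k, h // k, k, w // k, k)
    return x.max(axis=(3, 5, 7))

x = np.random.rand(1, 2, 4, 4, 4)
print(max_pool3d(x, 2).shape)  # (1, 2, 2, 2, 2)
```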

Tianqi Chen

commit sha 79581dd742d05fe9bd26e8ad2af1abc886dd0c09

Remove nnvm (#4565)

Liangfu Chen

commit sha dfc4009c5db19c03998de386d4f606d580ac626a

[VTA][Chisel] End-to-end Inference with Chisel VTA (#4574) * [VTA][Chisel] End-to-end Inference with Chisel VTA * Update TensorAlu.scala

masahi

commit sha 9ec0e5ce1a0b5b80754870da28c54c76243f4389

remove unnecessary cast to int32 (#4573)

Dmitri Makarov

commit sha 9bf2bee6dd40d81c9fa862c19945d41a49003bd9

Fix llvm-enabled build by adding missing intrinsics headers (#4575)

Tianqi Chen

commit sha f9bc748f53074ea9b9d65b54e8d2654d85f8590e

[DEPRECATION] Remove NNVM compiler (#4571) * Remove NNVM compiler

Josh Fromm

commit sha 9b92c53913a1594a32613df087f2c3e2557be14d

[Relay/Topi][Op] Added native DepthToSpace and SpaceToDepth Operators (#4566) * Added tvm function stencil for subpixel operations to topi. * Topi subpixel operators added and tested. * Added subpixel attrs. * Added depth_to_space relay attributes. * depth_to_space fully working. * Fixed NHWC shape bug. * SpaceToDepth in and all tests passing. * lint fixes. * Added string include * Fixed topi formatting. * Added DCR/CDR mode to depthtospace operator.
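For intuition, a minimal NumPy sketch of depth_to_space in DCR mode (assuming NCHW layout, mirroring the ONNX definition rather than TVM's TOPI code):

```python
import numpy as np

def depth_to_space_dcr(x, block):
    # Split channels into (block, block, C') groups, then interleave the
    # two block factors into the height and width dimensions.
    n, c, h, w = x.shape
    x = x.reshape(n, block, block, c // (block * block), h, w)
    x = x.transpose(0, 3, 4, 1, 5, 2)
    return x.reshape(n, c // (block * block), h * block, w * block)

x = np.arange(16.0).reshape(1, 4, 2, 2)
print(depth_to_space_dcr(x, 2).shape)  # (1, 1, 4, 4)
```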

Bohan Hou

commit sha c90160b22914e9ee98110e93dc06d06ee36b3946

[DOC] fix doc in api.py (#4580)

Tianqi Chen

commit sha ff65698f403de5404af0bb94f1bc5c45ef58d9a7

[DEPRECATION] Cleanup legacy verilog support (#4576) This PR cleans up the leftover code for legacy verilog support, which was experimental. The new hardware backend path is now supported by VTA via TSIM.

pushed 2 months ago

push event gmagogsfm/incubator-tvm

gmagogsfm

commit sha 3b727adfe0d883d4eafdc4b3bec92a4e0e869d77

Revert "Fix a typo testing" This reverts commit a66a756c1006cf0900794436173c0025f86c4451.

pushed 2 months ago


create branch gmagogsfm/incubator-tvm

branch: fixTypo

created branch 2 months ago

fork gmagogsfm/incubator-tvm

Open deep learning compiler stack for cpu, gpu and specialized accelerators

https://tvm.ai

forked 2 months ago

starred jhuangtw-dev/xg2xg

starred 4 months ago
