
xkszltl/Roaster 9

Build open-source tools from scratch with Roaster!

xkszltl/Siphon 1

Public mirror for official git repo

xkszltl/aws-sdk-cpp 0

AWS SDK for C++

xkszltl/caffe 0

Caffe: a fast open framework for deep learning.

xkszltl/detectron2 0

Detectron2 is FAIR's next-generation platform for object detection and segmentation.

xkszltl/filebench 0

File system and storage benchmark that uses a custom language to generate a large variety of workloads.

xkszltl/jsoncpp 0

A C++ library for interacting with JSON.

xkszltl/leveldb 0

LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values.

xkszltl/llvm 0

Mirror of official llvm git repository located at http://llvm.org/git/llvm. Updated every five minutes.

xkszltl/onnx 0

Open Neural Network Exchange

push event xkszltl/Roaster

Tongliang Liao

commit sha 90273d3d3208b6a1199029b5a96e8db521bd0445

Fix mirroring regression.

view details

push time in 19 hours

push event xkszltl/Roaster

Tongliang Liao

commit sha 601a68cf9604a40592488085ca5688b3294103c6

PyTorch protobuf issue has been fixed. https://github.com/pytorch/pytorch/issues/42939

view details

push time in a day

issue comment NVIDIA/TensorRT

credentials for downloads - please don't

That's exactly what I did for Linux, but when we're building something cross-platform it's more of an all-or-none situation.

bvandenbon

comment created time in 2 days

push event xkszltl/Roaster

Tongliang Liao

commit sha d2b0c7be52ea2936b07af394f3ab95c758e46075

Clean up cache only once. This should improve the success rate of nvidia repo sync in China, if the correct result is returned.

view details

Tongliang Liao

commit sha a2d7f87ae9e7e83f03b7ef892bdc530324e03997

Increase nvidia repo retries.

view details

push time in 2 days

push event xkszltl/Roaster

Tongliang Liao

commit sha 7cc31ba7cf6b39b03bf08c18cbd42ffca812cabb

Expand vars in repo name because they are not supported.

view details

push time in 2 days

push event xkszltl/Roaster

Tongliang Liao

commit sha c7684189cc80f988381c24b8097e3d9e5635b218

Repo name changed in upstream: "cuda" => "cuda-rhel7-x86_64".

view details

push time in 3 days

push event xkszltl/Roaster

Tongliang Liao

commit sha cc74b193873767c9558db7fa2f68c5ce551b1989

Clean dnf cache, or reposync may not update to latest.
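
Presumably something along these lines (a sketch only; the repo id and download path are placeholders, not what Roaster actually uses):

    # stale metadata can make `dnf reposync` miss newly published packages
    dnf clean all
    dnf makecache
    dnf reposync --repoid=example-repo --download-path=/srv/mirror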

view details

push time in 3 days

push event xkszltl/Roaster

Tongliang Liao

commit sha 0151eb9834423788548f888f906c71d5b40ea80f

Fetch all LFS objects.

view details

push time in 3 days

push event xkszltl/Roaster

Tongliang Liao

commit sha 4165732a5c03bbf9575b757e5c32ccf756f90f0b

Reduce unnecessary lfs info.

view details

push time in 3 days

push event xkszltl/Roaster

Tongliang Liao

commit sha ac385f329c23f4bfc1630db00e55f0c8b680f8fd

error: git-lfs died of signal 13

view details

Tongliang Liao

commit sha 9df2c661daa3a051ecb3bc04b5295fd6d511282a

Push all lfs objects.

view details

push time in 3 days

push event xkszltl/Roaster

Tongliang Liao

commit sha 3f62ad0769111403fea7a1eb62310abff1d469b5

Ensure the entire history is examined for LFS. Early exit when any file is found.

view details

push time in 3 days

push event xkszltl/Roaster

Tongliang Liao

commit sha 34d25f5679ce09198265675e3feedf372ef2fe93

Mirror without changing the remote. The old way somehow creates "refs/remotes" on the second `git remote set-url` (the actually effective one), causing a lot of "remote rejected: deny updating a hidden ref" errors. Also pull tags not related to branches in this commit.
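
A rough sketch of the approach as I understand it, with placeholder URLs; the actual script may differ:

    # work in a bare clone and push straight to the mirror URL,
    # instead of rewriting origin via `git remote set-url` (which created refs/remotes/* before)
    git fetch origin '+refs/heads/*:refs/heads/*' '+refs/tags/*:refs/tags/*'
    git push https://mirror.example.com/repo.git 'refs/heads/*:refs/heads/*' 'refs/tags/*:refs/tags/*'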

view details

push time in 3 days

pull request comment microsoft/onnxruntime

Move ort flatbuffers helper functions and value info r/w functions into separated lib

That was another one, so it'll be great if it can be captured by CI: https://github.com/microsoft/onnxruntime/issues/5024

gwang-msft

comment created time in 3 days

pull request comment microsoft/onnxruntime

Move ort flatbuffers helper functions and value info r/w functions into separated lib

These are the build args: https://github.com/xkszltl/Roaster/blob/14e68367389ad0b8fd809a48b4b70a76f183f89e/pkgs/ort.sh#L75-L135 Most of them should be irrelevant to this issue.

gwang-msft

comment created time in 3 days

pull request comment microsoft/onnxruntime

Move ort flatbuffers helper functions and value info r/w functions into separated lib

I use ninja; maybe that generator is more sensitive than the makefile one.

gwang-msft

comment created time in 3 days

pull request comment microsoft/onnxruntime

Downgrade GCC

@snnn This is an interesting one. Could you share more details about which symbol is not there? It will prevent us from falling into the same trap in the future.

Based on PEP 571, a manylinux2010 package should not rely on external OpenMP. Only glibc/libstdc++ are there, not even libstdc++fs. https://www.python.org/dev/peps/pep-0571/

So maybe you can simply ship libgomp in the package and rpath to it? (But yes, using the distro stock lib is better if you ever want people to use the packaged lib outside Python.)
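
A hand-wavy sketch of that "ship libgomp and rpath to it" option, assuming patchelf is available and the wheel layout is onnxruntime/capi/; the library path is a placeholder:

    # copy the OpenMP runtime next to the extension module and make it resolve the copy via $ORIGIN
    cp /usr/lib64/libgomp.so.1 onnxruntime/capi/
    patchelf --set-rpath '$ORIGIN' onnxruntime/capi/onnxruntime_pybind11_state.so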

Besides, devtoolset-9 seems to suggest something different from previous versions. It has the new symbols, but in a static lib for compatibility.

snnn

comment created time in 3 days

pull request comment microsoft/onnxruntime

Move ort flatbuffers helper functions and value info r/w functions into separated lib

@gwang-msft FYI this PR breaks master build. https://github.com/microsoft/onnxruntime/commit/3a3f26f38e049eb0e8b9fbb97281a6e5dd82074f#commitcomment-42736525

gwang-msft

comment created time in 3 days

push event xkszltl/Roaster

Tongliang Liao

commit sha 14e68367389ad0b8fd809a48b4b70a76f183f89e

Fail if scl is not found. This makes sure we don't accidentally use the stock gcc 4.8 when devtoolset is not installed.
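
A minimal sketch of such a guard, assuming devtoolset-9; the exact check in the script may differ:

    # refuse to fall back to the stock gcc 4.8 when devtoolset is missing
    which scl > /dev/null 2>&1 || { echo "scl not found, install devtoolset first" >&2; exit 1; }
    scl enable devtoolset-9 -- gcc --version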

view details

push time in 3 days

issue opened onnx/onnx

Shared vs. static onnx

Feature Request

We had a discussion with the onnxruntime team: https://github.com/microsoft/onnxruntime/pull/5236 They're using onnx-ml.proto in multiple shared libs. This causes a proto re-registration issue when used together with shared protobuf. Currently our fix is to compile onnx-ml.proto as a shared lib to make sure it's loaded only once, even across multiple shared libs. However, the onnxruntime team mentioned that since upstream ships it only as a static lib, they prefer to keep it that way.

Do you think it is better to ship both static and shared versions, and do you have any suggestions for our use case?

Feature Area

Which area in ONNX does this impact? (e.g. model usage, backend, best practices, converters, shape_inference, version_converter, training, test, operators): build/packaging

created time in 4 days

push event xkszltl/Roaster

Tongliang Liao

commit sha b57d0abf4e6377381802a78022d895470bae5042

Adding nvidia repo is hard.

view details

push time in 4 days

push event xkszltl/Roaster

Tongliang Liao

commit sha 24121fbc96e656e55cdc4138f2e737db6e85e169

Bugfix. VERSION_ID has dot.

view details

push time in 4 days

push event xkszltl/Roaster

Tongliang Liao

commit sha 9e5b92503eb977af0eaf0df1285682f413ee5d0d

More -xe.

view details

push time in 4 days

push event xkszltl/Roaster

Tongliang Liao

commit sha 616f10fa808f23d08630ab19994475e3388c2a11

Add a slash.

view details

push time in 4 days

push event xkszltl/Roaster

Tongliang Liao

commit sha 63382b771bd8a01d1a35548be58740ba80236ab2

Clean up yum as well.

view details

push time in 4 days

push event xkszltl/Roaster

Tongliang Liao

commit sha 02eee89288229ccb3ac3807854c44acffcf5ac2d

Drop name from git.

view details

push time in 4 days

push event xkszltl/Roaster

Tongliang Liao

commit sha 400f623204fc294197e0d9924d8d336b1d55fde6

Bugfix for cred env var.

view details

push time in 4 days

push event xkszltl/Roaster

Tongliang Liao

commit sha d18974455ba5336d108f371a1d1d4f252fccd7aa

Manually add CUDA repo for Ubuntu.

view details

push time in 4 days

push event xkszltl/Roaster

Tongliang Liao

commit sha b138d0f6cdbece75d3402460786b6fedbf304fc8

Add/pin some basic utils.

view details

push time in 4 days

CommitCommentEvent

push event xkszltl/Roaster

Tongliang Liao

commit sha b29f6a37f04e32fc6ccbfee2faa808e9e76fd53d

More git config.

view details

push time in 5 days

push event xkszltl/Roaster

Tongliang Liao

commit sha 916c64cc44853654b6a2128da933c2fd9a54efaf

Pin to CUDA 11.0 for now because 11.1 breaks LLVM build and does not have matching cuDNN/TensorRT yet.

view details

push time in 5 days

issue comment pytorch/vision

CMake Error at /usr/local/share/cmake-3.18/Modules/FindPython/Support.cmake:615 (if):

Meanwhile there's a hack to make it work in 3.18.3 without patching cmake. The issue is that there's a \. which should be \\. instead. Could you change this line to 3.1? https://github.com/pytorch/vision/blob/662373f6057bb0d39eaf6e5fde3083639ed93af3/CMakeLists.txt#L1

That will set CMP0053=NEW, which makes the \. recognizable as . (an escaped dot), and in the regex that matches everything, including the dot in the version.

xkszltl

comment created time in 5 days

issue comment pytorch/vision

CMake Error at /usr/local/share/cmake-3.18/Modules/FindPython/Support.cmake:615 (if):

I tried their patch and it works.

xkszltl

comment created time in 5 days

issue comment pytorch/vision

CMake Error at /usr/local/share/cmake-3.18/Modules/FindPython/Support.cmake:615 (if):

That's probably not the root cause. The CMake side found their own regex bug in 3.18.3.

xkszltl

comment created time in 5 days

push event xkszltl/Roaster

Tongliang Liao

commit sha f9c8125a509ee022c5d3d4daaa3d4a6965826bdc

Typo

view details

push time in 7 days

push event xkszltl/Roaster

Tongliang Liao

commit sha 7390bb45a8ca351fbe71fb84debb1d8c6d1d5617

Bugfix: Loop var name duplication.

view details

push time in 7 days

push event xkszltl/Roaster

Tongliang Liao

commit sha 5e444b2d3ee47b42e7c58f6ae8094d10e0677c6b

cuda-compat should be installed on CentOS, or nvidia-docker may use the host CUDA. Ubuntu uses a blacklist to exclude the driver, so it's fine.
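
For reference, a hedged one-liner of what that could look like on CentOS; the exact package name and version (cuda-compat-11-0 here) are an assumption:

    # forward-compatibility libs, so the container does not pick up the host CUDA stack via nvidia-docker
    yum install -y cuda-compat-11-0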

view details

push time in 7 days

push event xkszltl/Roaster

Tongliang Liao

commit sha 392e708498e2aaf0215c9fc4b7c211bad1e96e38

Fix typo.

view details

push time in 7 days

issue comment microsoft/onnxruntime

Website not working in China

FYI it's because you're using Google APIs to get jQuery.

xkszltl

comment created time in 7 days

push event xkszltl/Roaster

Tongliang Liao

commit sha c40e9863e69381c892098bbf2b0970a541b46f12

Merge numpy to the same line since no extra env var is needed now.

view details

push time in 7 days

push event xkszltl/Roaster

Tongliang Liao

commit sha bb1a737ab12901bf4aa93ab6ebbb9a6ea4467322

Add missing URL.

view details

Tongliang Liao

commit sha f303f077da93862601a4fa8b9c40b46eea649b56

Wrong URL.

view details

push time in 7 days

push event xkszltl/Roaster

Tongliang Liao

commit sha 7d2799f2c5f5efe3fd13bc778b9aa4ca44287766

Patch CMake 3.18.3 FindPython issue. https://gitlab.kitware.com/cmake/cmake/-/issues/21223
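
Not necessarily the patch Roaster ships, but one possible in-place workaround, assuming (as discussed above) that escaping the dot is sufficient:

    # turn the offending `\.` into `\\.` in the 3.18.3 FindPython helper
    sed -i 's/version_major}\\\./version_major}\\\\./g' \
        /usr/local/share/cmake-3.18/Modules/FindPython/Support.cmake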

view details

push time in 7 days

issue comment pytorch/vision

CMake Error at /usr/local/share/cmake-3.18/Modules/FindPython/Support.cmake:615 (if):

Also filed the issue here in case it's caused by a cmake bug: https://gitlab.kitware.com/cmake/cmake/-/issues/21223

xkszltl

comment created time in 8 days

issue comment pytorch/vision

CMake Error at /usr/local/share/cmake-3.18/Modules/FindPython/Support.cmake:615 (if):

Note pytorch was built just fine. Only torchvision has the issue.

xkszltl

comment created time in 8 days

issue opened pytorch/vision

CMake Error at /usr/local/share/cmake-3.18/Modules/FindPython/Support.cmake:615 (if):

🐛 Bug

This is a recent issue.

CMake Error at /usr/local/share/cmake-3.18/Modules/FindPython/Support.cmake:615 (if):
  Syntax error in cmake code at

    /usr/local/share/cmake-3.18/Modules/FindPython/Support.cmake:615 

  when parsing string

    python${version_major}\.${version_minor}+([dmu]*)

  Invalid escape sequence \.
Call Stack (most recent call first):
  /usr/local/share/cmake-3.18/Modules/FindPython/Support.cmake:2756 (_python_get_version)
  /usr/local/share/cmake-3.18/Modules/FindPython3.cmake:389 (include)
  CMakeLists.txt:14 (find_package)


-- Configuring incomplete, errors occurred!

Given 3.18.3 was recently released (we use the latest; it was 3.18.2 before), we're not sure whether it's from torchvision or cmake.

Environment

  • PyTorch / torchvision Version (e.g., 1.0 / 0.4.0): master/master
  • OS (e.g., Linux): CentOS 7 / Ubuntu 18.04
  • How you installed PyTorch / torchvision (conda, pip, source): source
  • Build command you used (if compiling from source): cmake-3.18.3 + ninja + (gcc-9 from devtoolset-9/stock gcc-8)
  • Python version: Distro stock python3.
  • CUDA/cuDNN version: 11.0/8

created time in 8 days

issue comment microsoft/onnxruntime

Duplicated onnx-ml

Code path causing the link in unit test: https://github.com/microsoft/onnxruntime/blob/a90ab12589ce3d4a4323889a7bb77ea67b5e356a/cmake/onnxruntime_unittests.cmake#L436 https://github.com/microsoft/onnxruntime/blob/a90ab12589ce3d4a4323889a7bb77ea67b5e356a/cmake/onnxruntime_unittests.cmake#L550

xkszltl

comment created time in 8 days

issue comment microsoft/onnxruntime

Duplicated onnx-ml

For the segfault, so far I found:

  • libonnxruntime_providers_tensorrt.so is in the link line of the unit tests, added through ${onnxruntime_test_providers_libs}.
  • Linking the unit tests to libonnxruntime_providers_tensorrt.so causes the segfault.
  • trt is usable without linking the unit tests to libonnxruntime_providers_tensorrt.so. (Is it lazy loaded?)
  • On Ubuntu+gcc8, readelf and ldd both show there is no dependency on libonnxruntime_providers_tensorrt.so in the final executables (onnx_test_runner and onnxruntime_test_all) even though it's in the link line; see the sketch after this list. Probably there's nothing to resolve and the linker decides to ignore it.
  • On CentOS+gcc9, it is in the dependency list and it segfaults. If I manually edit build.ninja to remove it from the link command, it works.
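
A small sketch of that check, run from the build directory; the binary and library names are the ones from the findings above:

    # does the final executable actually record a DT_NEEDED entry for the TRT provider?
    readelf -d onnxruntime_test_all | grep -i tensorrt
    ldd onnxruntime_test_all | grep -i tensorrt
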
xkszltl

comment created time in 8 days

pull request comment microsoft/onnxruntime

onnx_proto should be build as `.so` when `onnxruntime_BUILD_SHARED_LIB` is set.

If you think onnx_proto should be built as .so, then please tell the ONNX team to do so in their official builds, not us.

Aha, good to get some clarity on that. I used to think onnx and ort were handled by different teams, but I keep hearing people refer to you as the onnx team...

xkszltl

comment created time in 8 days

pull request comment microsoft/onnxruntime

onnx_proto should be build as `.so` when `onnxruntime_BUILD_SHARED_LIB` is set.

BTW if you plan to do something in the onnx repo, I recommend merging this first, before you have the solution there ready.

xkszltl

comment created time in 9 days

pull request comment microsoft/onnxruntime

onnx_proto should be build as `.so` when `onnxruntime_BUILD_SHARED_LIB` is set.

What do you mean by "build in the same way"? BUILD_SHARED_LIBS is configurable in the onnx repo, so it's really up to the final user to decide how to set it.

Regarding the "official package", I recommend a dual build to ship both .so and .a together, so the decision can be made at link time downstream. Regarding the "official python package", usually a python package should not have a .a, right?

xkszltl

comment created time in 9 days

issue comment microsoft/onnxruntime

Duplicated onnx-ml

I did some more experiments and found the segfault is not limited to LTO. More specifically, I tried onnxruntime_test_all with the following combinations of builds:

  1. CentOS 7 + gcc 9 + CPU build, works.
  2. CentOS 7 + gcc 9 + CUDA, works.
  3. CentOS 7 + gcc 9 + CUDA&TRT, segfault.
  4. Ubuntu 18.04 + gcc 8 + CUDA&TRT, works.
  5. Ubuntu 18.04 + gcc 8 + LTO + CUDA&TRT, segfault.
xkszltl

comment created time in 9 days

push event xkszltl/Roaster

Tongliang Liao

commit sha e37d1574071755202f06d7b520d4dabcbed6c500

Update .gitignore.

view details

push time in 9 days

push event xkszltl/Roaster

Tongliang Liao

commit sha e221a7ae17ead4bec888d9eb83bbc63a6c6f5dc4

Clean up trailing spaces.

view details

push time in 9 days

push event xkszltl/Roaster

Tongliang Liao

commit sha 54f1531d4f37e6a0dd2a9f4a3cb6081ae1f0a8c1

Install unit tests for ort on Linux.

view details

push time in 9 days

push event xkszltl/Roaster

Tongliang Liao

commit sha 2bc161bd465418383a24ae1c5a91d7f303d1a9d4

`--force` only available in newer git.

view details

push time in 9 days

create branch xkszltl/onnxruntime

branch : dedup_onnx_proto_140

created branch time in 9 days

issue comment microsoft/onnxruntime

Duplicated onnx-ml

BTW here's also a note in your own code... https://github.com/microsoft/onnxruntime/blob/1f69a58105f320e9b79008698b4275fa3f113f2d/cmake/onnx/CMakeLists.txt#L4

xkszltl

comment created time in 9 days

issue comment microsoft/onnxruntime

Duplicated onnx-ml

The crash in new() seems to be LTO-specific. onnx-ml issue has been fixed by that PR.

xkszltl

comment created time in 9 days

issue comment microsoft/onnxruntime

Duplicated onnx-ml

However, it failed in another place after the fix.

Here's the gdb stacktrace.

root@9408698e9960:/tmp/scratch/onnxruntime/build# gdb -ex r -ex bt --args ./onnxruntime_test_all --gtest_filter=WhereOpTest.BasicNumeric                                                                                                                      
GNU gdb (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./onnxruntime_test_all...done.
Starting program: /tmp/scratch/onnxruntime/build/onnxruntime_test_all --gtest_filter=WhereOpTest.BasicNumeric
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fffd964c700 (LWP 17583)]
[New Thread 0x7fffd8e4b700 (LWP 17584)]
[New Thread 0x7fffd864a700 (LWP 17585)]
[New Thread 0x7fffd7e49700 (LWP 17586)]
[New Thread 0x7fffd7648700 (LWP 17587)]
[New Thread 0x7fffd6e47700 (LWP 17588)]
[New Thread 0x7fffd6646700 (LWP 17589)]
[New Thread 0x7fffd5e45700 (LWP 17590)]
[New Thread 0x7fffd5644700 (LWP 17591)]
[New Thread 0x7fffd4e43700 (LWP 17592)]
[New Thread 0x7fffd4642700 (LWP 17593)]
[New Thread 0x7fffd3e41700 (LWP 17594)]
[New Thread 0x7fffd3640700 (LWP 17595)]
[New Thread 0x7fffd2e3f700 (LWP 17596)]
[New Thread 0x7fffd263e700 (LWP 17597)]
[New Thread 0x7fffd1e3d700 (LWP 17598)]
[New Thread 0x7fffd163c700 (LWP 17599)]
[New Thread 0x7fffd0e3b700 (LWP 17600)]
[New Thread 0x7fffd063a700 (LWP 17601)]
[New Thread 0x7fffcfe39700 (LWP 17602)]
[New Thread 0x7fffcf638700 (LWP 17603)]
[New Thread 0x7fffcee37700 (LWP 17604)]
[New Thread 0x7fffce636700 (LWP 17605)]
[New Thread 0x7fffcde35700 (LWP 17606)]
[New Thread 0x7fffcd634700 (LWP 17607)]
[New Thread 0x7fffcce33700 (LWP 17608)]
[New Thread 0x7fffcc632700 (LWP 17609)]
[New Thread 0x7fffcbe31700 (LWP 17610)]
[New Thread 0x7fffcb630700 (LWP 17611)]
[New Thread 0x7fffcae2f700 (LWP 17612)]
[New Thread 0x7fffca62e700 (LWP 17613)]
[New Thread 0x7fffc9e2d700 (LWP 17614)]
[New Thread 0x7fffc962c700 (LWP 17615)]
[New Thread 0x7fffc8e2b700 (LWP 17616)]
[New Thread 0x7fff93fff700 (LWP 17617)]
[New Thread 0x7fff937fe700 (LWP 17618)]
[New Thread 0x7fff92ffd700 (LWP 17619)]
[New Thread 0x7fff927fc700 (LWP 17620)]
Note: Google Test filter = WhereOpTest.BasicNumeric
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from WhereOpTest
[ RUN      ] WhereOpTest.BasicNumeric
[New Thread 0x7fff90b48700 (LWP 17621)]
[New Thread 0x7fff8bfff700 (LWP 17622)]

Thread 1 "onnxruntime_tes" received signal SIGSEGV, Segmentation fault.
0x00007fff8a25deba in operator new (n=80) at ../onnxruntime/core/providers/shared_library/provider_bridge_provider.cc:48
48      void* operator new(size_t n) { return onnxruntime::g_host->HeapAllocate(n); }
#0  0x00007fff8a25deba in operator new (n=80) at ../onnxruntime/core/providers/shared_library/provider_bridge_provider.cc:48
#1  0x00007fff8a1e5469 in __gnu_cxx::new_allocator<std::__detail::_Hash_node<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::function<onnx2trt::ValueOrStatus<std::vector<onnx2trt::TensorOrWeights, std
::allocator<onnx2trt::TensorOrWeights> > > (onnx2trt::IImporterContext*, onnx::NodeProto const&, std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> >&)> >, true> >::allocate(unsigned long, void const*) (
    this=<optimized out>, __n=<optimized out>, this=<optimized out>, __n=<optimized out>) at /usr/include/c++/8/ext/new_allocator.h:99
#2  std::allocator_traits<std::allocator<std::__detail::_Hash_node<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::function<onnx2trt::ValueOrStatus<std::vector<onnx2trt::TensorOrWeights, std::allocato
r<onnx2trt::TensorOrWeights> > > (onnx2trt::IImporterContext*, onnx::NodeProto const&, std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> >&)> >, true> > >::allocate(std::allocator<std::__detail::_Hash_node<std::pair<std::__
cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::function<onnx2trt::ValueOrStatus<std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> > > (onnx2trt::IImporterContext*, onnx::NodeProto const
&, std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> >&)> >, true> >&, unsigned long) (__a=..., __n=<optimized out>, __a=..., __n=<optimized out>) at /usr/include/c++/8/bits/alloc_traits.h:436
#3  std::__detail::_Hashtable_alloc<std::allocator<std::__detail::_Hash_node<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::function<onnx2trt::ValueOrStatus<std::vector<onnx2trt::TensorOrWeights, std
::allocator<onnx2trt::TensorOrWeights> > > (onnx2trt::IImporterContext*, onnx::NodeProto const&, std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> >&)> >, true> > >::_M_allocate_node<std::pair<std::__cxx11::basic_string<cha
r, std::char_traits<char>, std::allocator<char> > const, std::function<onnx2trt::ValueOrStatus<std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> > > (onnx2trt::IImporterContext*, onnx::NodeProto const&, std::vector<onnx2trt
::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> >&)> > >(std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::function<onnx2trt::ValueOrStatus<std::vector<onnx2trt::TensorOrWeights, std::alloc
ator<onnx2trt::TensorOrWeights> > > (onnx2trt::IImporterContext*, onnx::NodeProto const&, std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> >&)> >&&) (this=<optimized out>)
    at /usr/include/c++/8/bits/hashtable_policy.h:2088
#4  std::_Hashtable<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::function<onnx2trt::ValueOrStatus<std::vector<onnx2t
rt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> > > (onnx2trt::IImporterContext*, onnx::NodeProto const&, std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> >&)> >, std::allocator<std::pair<std::__cxx11::basic
_string<char, std::char_traits<char>, std::allocator<char> > const, std::function<onnx2trt::ValueOrStatus<std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> > > (onnx2trt::IImporterContext*, onnx::NodeProto const&, std::vect
or<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> >&)> > >, std::__detail::_Select1st, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::hash<std::__cxx11::basic_string<char, std::
char_traits<char>, std::allocator<char> > >, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true> >::_M_emplace<std::pair<std::__cxx11::basic_stri
ng<char, std::char_traits<char>, std::allocator<char> > const, std::function<onnx2trt::ValueOrStatus<std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> > > (onnx2trt::IImporterContext*, onnx::NodeProto const&, std::vector<on
nx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> >&)> > >(std::integral_constant<bool, true>, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::function<onnx2trt::ValueOrStatus<std::ve
ctor<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> > > (onnx2trt::IImporterContext*, onnx::NodeProto const&, std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> >&)> >&&) (this=<optimized out>)
    at /usr/include/c++/8/bits/hashtable.h:1655
#5  std::__detail::_Insert<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::function<onnx2trt::ValueOrStatus<std::vector
<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> > > (onnx2trt::IImporterContext*, onnx::NodeProto const&, std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> >&)> >, std::allocator<std::pair<std::__cxx11
::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::function<onnx2trt::ValueOrStatus<std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> > > (onnx2trt::IImporterContext*, onnx::NodeProto const&, st
d::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> >&)> > >, std::__detail::_Select1st, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::hash<std::__cxx11::basic_string<char
, std::char_traits<char>, std::allocator<char> > >, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true>, false>::insert<std::pair<std::__cxx11::b
asic_string<char, std::char_traits<char>, std::allocator<char> > const, std::function<onnx2trt::ValueOrStatus<std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> > > (onnx2trt::IImporterContext*, onnx::NodeProto const&, std::
vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> >&)> >, void>(std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::function<onnx2trt::ValueOrStatus<std::vector<onnx2trt::TensorO
rWeights, std::allocator<onnx2trt::TensorOrWeights> > > (onnx2trt::IImporterContext*, onnx::NodeProto const&, std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> >&)> >&&) (__v=..., this=<optimized out>)
    at /usr/include/c++/8/bits/hashtable_policy.h:1015
#6  std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::function<onnx2trt::ValueOrStatus<std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> > > (onnx2trt::IImporterContext
*, onnx::NodeProto const&, std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> >&)>, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, 
std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::function<onnx2trt::ValueOrStatus<std::vector<onnx2trt::TensorOrWeights, std::allocator<
onnx2trt::TensorOrWeights> > > (onnx2trt::IImporterContext*, onnx::NodeProto const&, std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> >&)> > > >::insert(std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, st
d::allocator<char> > const, std::function<onnx2trt::ValueOrStatus<std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> > > (onnx2trt::IImporterContext*, onnx::NodeProto const&, std::vector<onnx2trt::TensorOrWeights, std::alloc
ator<onnx2trt::TensorOrWeights> >&)> >&&) (__x=..., this=<optimized out>) at /usr/include/c++/8/bits/unordered_map.h:586
#7  onnx2trt::(anonymous namespace)::registerBuiltinOpImporter(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::function<onnx2trt::ValueOrStatus<std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOr
Weights> > > (onnx2trt::IImporterContext*, onnx::NodeProto const&, std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> >&)> const&) [clone .lto_priv.415] () at ../cmake/external/onnx-tensorrt/builtin_op_importers.cpp:110
#8  0x00007fff8a1e1c2b in __static_initialization_and_destruction_0(int, int) [clone .constprop.17] () at ../cmake/external/onnx-tensorrt/builtin_op_importers.cpp:115
#9  0x00007fff8a1e187e in global constructors keyed to 65535_0_provider_bridge_provider.cc.o.75620 () at /usr/include/c++/8/ext/new_allocator.h:86
#10 0x00007ffff7de5783 in call_init (env=0x7fffffffe5c0, argv=0x7fffffffe5a8, argc=2, l=<optimized out>) at dl-init.c:72
#11 _dl_init (main_map=main_map@entry=0x55556241e370, argc=2, argv=0x7fffffffe5a8, env=0x7fffffffe5c0) at dl-init.c:119
#12 0x00007ffff7dea24f in dl_open_worker (a=a@entry=0x7fffffffcb00) at dl-open.c:522
#13 0x00007ffff0d0a51f in __GI__dl_catch_exception (exception=0x7fffffffcae0, operate=0x7ffff7de9e10 <dl_open_worker>, args=0x7fffffffcb00) at dl-error-skeleton.c:196
#14 0x00007ffff7de981a in _dl_open (file=0x55557ece1fe0 "libonnxruntime_providers_tensorrt.so", mode=-2147483646, caller_dlopen=
    0x555555980318 <onnxruntime::(anonymous namespace)::PosixEnv::LoadDynamicLibrary(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void**) const [clone .constprop.2530]+56>, nsid=<optimized out>, argc=2, 
    argv=<optimized out>, env=0x7fffffffe5c0) at dl-open.c:605
#15 0x00007fffe5b5ff96 in dlopen_doit (a=a@entry=0x7fffffffcd30) at dlopen.c:66
#16 0x00007ffff0d0a51f in __GI__dl_catch_exception (exception=exception@entry=0x7fffffffccd0, operate=0x7fffe5b5ff40 <dlopen_doit>, args=0x7fffffffcd30) at dl-error-skeleton.c:196
#17 0x00007ffff0d0a5af in __GI__dl_catch_error (objname=0x55555e4ac510, errstring=0x55555e4ac518, mallocedp=0x55555e4ac508, operate=<optimized out>, args=<optimized out>) at dl-error-skeleton.c:215
#18 0x00007fffe5b60745 in _dlerror_run (operate=operate@entry=0x7fffe5b5ff40 <dlopen_doit>, args=args@entry=0x7fffffffcd30) at dlerror.c:162
#19 0x00007fffe5b60051 in __dlopen (file=<optimized out>, mode=<optimized out>) at dlopen.c:87
#20 0x0000555555980318 in onnxruntime::(anonymous namespace)::PosixEnv::LoadDynamicLibrary(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void**) const [clone .constprop.2530] ()
    at ../onnxruntime/core/platform/posix/env.cc:428
---Type <return> to continue, or q <return> to quit---
#21 0x0000555555b231c2 in onnxruntime::ProviderLibrary::ProviderLibrary(char const*) [clone .lto_priv.5979] () at ../onnxruntime/core/platform/posix/env.cc:181
#22 0x0000555555f617b7 in onnxruntime::test::DefaultTensorrtExecutionProvider() () at /usr/include/c++/8/bits/shared_ptr_base.h:1167
#23 0x00005555561a3f2f in onnxruntime::test::OpTester::Run(onnxruntime::SessionOptions, onnxruntime::test::OpTester::ExpectResult, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_set<std::__cxx11::ba
sic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >,
 std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, OrtRunOptions const*, std::vector<std::unique_ptr<onnxruntime::IExecutionProvider, std::default_delete<onnxruntime::IExecutionProvider> >, std::al
locator<std::unique_ptr<onnxruntime::IExecutionProvider, std::default_delete<onnxruntime::IExecutionProvider> > > >*, std::function<void (std::vector<OrtValue, std::allocator<OrtValue> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, st
d::allocator<char> > const&)> const&, onnxruntime::Graph::ResolveOptions const&) () at ../onnxruntime/test/providers/provider_test_utils.cc:814
#24 0x000055555599a222 in onnxruntime::test::OpTester::Run(onnxruntime::test::OpTester::ExpectResult, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_set<std::__cxx11::basic_string<char, std::char_tr
aits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::__cxx11:
:basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, OrtRunOptions const*, std::vector<std::unique_ptr<onnxruntime::IExecutionProvider, std::default_delete<onnxruntime::IExecutionProvider> >, std::allocator<std::unique_ptr<onnxr
untime::IExecutionProvider, std::default_delete<onnxruntime::IExecutionProvider> > > >*, ExecutionMode, std::function<void (std::vector<OrtValue, std::allocator<OrtValue> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<c
har> > const&)> const&, onnxruntime::Graph::ResolveOptions const&) [clone .constprop.2521] () at ../onnxruntime/test/providers/provider_test_utils.cc:659
#25 0x0000555555f56666 in void onnxruntime::test::(anonymous namespace)::WhereBasicNumericTest<float>() () at /usr/include/c++/8/ext/new_allocator.h:79
#26 0x00007fffe529e52a in testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void> (location=0x7fffe529f992 "the test body", method=<optimized out>, object=0x55555e445d60) at ../googletest/src/gtest.cc:2414
#27 testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void> (object=object@entry=0x55555e445d60, method=<optimized out>, location=location@entry=0x7fffe529f992 "the test body") at ../googletest/src/gtest.cc:2469
#28 0x00007fffe52943a3 in testing::Test::Run (this=0x55555e445d60) at ../googletest/src/gtest.cc:2508
#29 testing::Test::Run (this=0x55555e445d60) at ../googletest/src/gtest.cc:2498
#30 0x00007fffe5294505 in testing::TestInfo::Run (this=0x55555e08d9c0) at ../googletest/src/gtest.cc:2684
#31 testing::TestInfo::Run (this=0x55555e08d9c0) at ../googletest/src/gtest.cc:2657
#32 0x00007fffe52945e5 in testing::TestSuite::Run (this=0x55555e01a400) at ../googletest/src/gtest.cc:2816
#33 testing::TestSuite::Run (this=0x55555e01a400) at ../googletest/src/gtest.cc:2795
#34 0x00007fffe5294b03 in testing::internal::UnitTestImpl::RunAllTests() () at /usr/include/c++/8/bits/stl_vector.h:930
#35 0x00007fffe5294d21 in testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool> (location=0x7fffe52a1be0 "auxiliary test code (environments or event listeners)", method=<optimized out>, object=0x55555e01a510)
    at ../googletest/src/gtest.cc:2414
#36 testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool> (location=0x7fffe52a1be0 "auxiliary test code (environments or event listeners)", 
    method=(bool (testing::internal::UnitTestImpl::*)(testing::internal::UnitTestImpl * const)) 0x7fffe5294680 <testing::internal::UnitTestImpl::RunAllTests()>, object=0x55555e01a510) at ../googletest/src/gtest.cc:2469
#37 testing::UnitTest::Run() () at ../googletest/src/gtest.cc:4925
#38 0x000055555590e6df in RUN_ALL_TESTS () at /usr/local/include/gtest/gtest.h:2473
#39 main (argc=<optimized out>, argv=<optimized out>) at ../onnxruntime/test/providers/test_main.cc:52
(gdb) 
xkszltl

comment created time in 10 days

issue comment microsoft/onnxruntime

Duplicated onnx-ml

I've sent a PR to build onnx-ml as .so. https://github.com/microsoft/onnxruntime/pull/5236

xkszltl

comment created time in 10 days

PR opened microsoft/onnxruntime

onnx_proto should be build as `.so` when `onnxruntime_BUILD_SHARED_LIB` is set.

When the tensorrt provider is moved into its own shared lib, the static onnx-ml.pb.cc causes a duplicated descriptor_table_onnx_2fonnx_2dml_2eproto in both (main/provider) libs, with re-registration.

+3 -0

0 comment

1 changed file

pr created time in 10 days

create branch xkszltl/onnxruntime

branch : dedup_onnx_proto

created branch time in 10 days

push event xkszltl/onnxruntime

Nat Kershaw (MSFT)

commit sha 8a03b6e5c7adebe9bf971d4d3762df7e8c08a7f2

Render Operator documentation as compliant markdown (#3658)

view details

Hariharan Seshadri

commit sha 4fd4b7414968fd1c0ca486cb63897236106f3fe3

Change session option values if they don't work with EPs being registered for the session (#4991)

view details

Dudeldu

commit sha 4a0f6595eb38f4f9fca7c14548d999876fbc1282

Enable metadata and signature changes in graph transformers (#4783) After applying all the graph transformations the metadata and signature could have changes (e.g.: new outputs got added, or the outputs/inputs got renamed). Therefore the local copies of metadata and signature, that InferenceSession administrated for faster lookup, has to be updated. For this the `SaveModelMetadata`, that now has to be idempotent, should be called after resolving the transformed graph

view details

Sherlock

commit sha a935731bd3e01fcd753b1224b70ed9c8b4c2eb49

Neg Gradient (#5022) Co-authored-by: Sherlock Huang <bahuang@OrtTrainingDev3.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>

view details

Ye Wang

commit sha b4e9e98cee1c7695aae6bb695d18257ef0112774

Add more huggingface models in benchmark tools (#4986) * checkin more huggingface models * review comments * review comments

view details

Hariharan Seshadri

commit sha a9db287bd70b1bfd49ca7ea26e45ebedda13a6c5

Return windows error code for library loading and unloading failure (#5036)

view details

Weixing Zhang

commit sha 32687176151128fe590a2250f9948adc4639cb76

Enable TF32 for training on A100 (#4914) * enable TF32 for training on A100 it can be disabled by env: NVIDIA_TF32_OVERRIDE = 0

view details

Ryan Hill

commit sha e0d1cf19a6e54867dc5f15a2977d3685fbdb417c

Fix allocator bug (#5042)

view details

gwang-msft

commit sha fde7a2c84800309d34eeb2f77d6c8f27014f477f

Temporarily switch SafeInt to a fork for an option to disable exceptions (#5041) * Removed submodule * Add safeint fork

view details

Tim Harris

commit sha bbb9d92a5fe12a36d5fb6f9b7f53548b77eaa2e8

Remove SchedulingParams variants of ThreadPool::TryParallelFor (#5050)

view details

Scott McKay

commit sha 28445c88f926aa255de740836b00d741eb7a7263

Changes to enable saving and loading an ORT format model (#4995) * Changes to enable saving and loading an ORT format model via the public APIs. Cleanup session.py to try and make slightly more understandable. More refactoring is needed here. Couple of bug fixes * Fix bug in handling NodeArg serialization for optional inputs which has a name and no type info. * Address PR comments - tweak SessionOptions config to avoid double lookup - merge duplicated functionality in python binding around registering an EP with optional options Fix a couple of build issues. * Update C API to be consistent with python API - only load model in InferenceSession ctor if required - support loading ORT model in minimal build * Fix nodejs test. We get an invalid path error from LoadInterOp first now * Another attempt at fixing nodejs test. Error message depends on whether ENABLE_LANGUAGE_INTEROP_OPS is defined. Make the output consistent. The interop implementation looks suspicious given it appears to be internal code that is going via the public api. TBD if that should be fixed. * Fix couple of build issues. * Disable test temporarily so PR can be checked in. Will fix in separate PR that adds final pieces for minimal build as the test is required there. * Give up on nodejs test and make the match simpler. Fix init call in TrainingSession python to not pass through sess. it wasn't being used in Session anyway so passing it through just adds confusion. * Fix call to Session.__init__ in TrainingSession. Session now initializes Session._sess to None to make it clearer where the 'ownership' of that member is, and that needs to happen before TrainingSession sets it.

view details

Bowen Bao

commit sha 22ba266bd6036c3d01659d51dcf248e08137666b

Add flag to _internal_use to control export of contrib ops in ort trainer (#4968)

view details

Ashwini Khade

commit sha 9ba2cfb71bb15e5d3a26aaacd940fb340bb97b86

fix py packaging pipeline (#5038) * add test skip logic when opset > allowed opset * fix attribute error * plus fix

view details

Suffian Khan

commit sha 546965c2daf031a4b3bf08ad3125cdf5b9c941db

Add deterministic path for AllReduceL2 (used to compute gradient norm) (#5027) * add deterministic path for reduce l2 * add unit tests * memset zero size off by one * eliminate windows warning as error Co-authored-by: suffian khan <sukha@OrtTrainingDev1.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>

view details

Thiago Crepaldi

commit sha 9d1bdef195e869ca75a6c52e114b7fce980c1be9

Update CODEOWNERS and minor docstring fix (#5002) This PR includes: * Previous CODEOWNERS was encompassing more files than just training files * Polynomial optimizer config is missing part of its docstring

view details

Thiago Crepaldi

commit sha 9388d49c0d66ed966abdfb970e8a72923df746fd

Add warning to non pickable models (#5037)

view details

liqunfu

commit sha bb13b522911be49918790ba12f9420fbf4451cbd

to allow parallel training with mpi4py (#4942) to allow parallel training with mpi4py Co-authored-by: liqun <liqun@OrtTrainingDev4.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>

view details

Sergii Dymchenko

commit sha d7984fe6bad25212eea0fd7139af319796775d98

Add packages from training docker to cgmanifest. (#5033)

view details

xkszltl

commit sha 4b9b5b6146dfae96799a05c44c8d4d05e3fc97ca

Imported protoc cannot have compile options. (#5030)

view details

Nat Kershaw (MSFT)

commit sha d7502eff8f93b693d17d0654ba640e7a58153f8e

Add nodejs samples README (#5005)

view details

push time in 10 days

issue comment microsoft/onnxruntime

Duplicated onnx-ml

If onnx-ml is compiled into two .so files loaded into the same address space, this issue is expected to happen.

I don't know how the default build script works. Maybe a special combination happens to hide the issue.

If a proto file needs to be used in multiple .so linked together, it should be compiled into a separate .so to avoid re-registration.

xkszltl

comment created time in 10 days

issue opened microsoft/onnxruntime

Website not working in China

Describe the bug

https://microsoft.github.io/onnxruntime/ This table doesn't work in China. It's not clickable. It's fine when visited from the US.

It has been like this since the first time I found this page.

created time in 10 days

push event xkszltl/Roaster

Tongliang Liao

commit sha c9cbc488beeb86869fbcf4486c4cc8658bbfa915

`fatal: gc is already running on machine 'buildkitsandbox'` Force `git gc` to run in this case since we have exclusive access.
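
Presumably something like the following, though the exact invocation is an assumption (`git gc --force` needs a reasonably recent git):

    # the build sandbox has exclusive access to the repo, so the stale gc.pid lock can be ignored
    git gc --force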

view details

push time in 10 days

issue opened pytorch/pytorch

Too few arguments to vulkanOptimizeForMobile()

🐛 Bug

Build failed. This is a recent regression in the past few days.


This commit may be the reason: https://github.com/pytorch/pytorch/commit/e9941a5dd482e9e80d52edb1833ed9afca5c4557

Environment

  • PyTorch Version (e.g., 1.0): master
  • OS (e.g., Linux): CentOS 7
  • How you installed PyTorch (conda, pip, source): source
  • Build command you used (if compiling from source): cmake+ninja

created time in 10 days

issue comment microsoft/onnxruntime

Duplicated onnx-ml

Here are the proto loading symbols:

root@9408698e9960:/tmp/scratch/onnxruntime/build# for i in $(find . -name '*.so' -type f); do echo "$i"; readelf -a "$i" | grep descriptor_table_onnx_2fonnx_2dml_2eproto; done
./libcustom_op_library.so
./onnxruntime/capi/libonnxruntime_providers_shared.so
./onnxruntime/capi/libonnxruntime_providers_tensorrt.so
   117: 00000000000b5c50     8 OBJECT  LOCAL  DEFAULT    15 _ZL46descriptor_table_onnx_2fonnx_2dml_2eproto_deps
   118: 00000000002d8580   128 OBJECT  LOCAL  DEFAULT    21 _ZL46descriptor_table_onnx_2fonnx_2dml_2eproto_sccs
   119: 00000000002db280     4 OBJECT  LOCAL  DEFAULT    26 _ZL46descriptor_table_onnx_2fonnx_2dml_2eproto_once
  4881: 00000000002da8a0   120 OBJECT  LOCAL  DEFAULT    25 descriptor_table_onnx_2fonnx_2dml_2eproto
./onnxruntime/capi/onnxruntime_pybind11_state.so
   835: 0000000000a5c510     8 OBJECT  LOCAL  DEFAULT    15 _ZL46descriptor_table_onnx_2fonnx_2dml_2eproto_deps
   836: 00000000080663a0   128 OBJECT  LOCAL  DEFAULT    24 _ZL46descriptor_table_onnx_2fonnx_2dml_2eproto_sccs
   837: 000000000808b8a0     4 OBJECT  LOCAL  DEFAULT    30 _ZL46descriptor_table_onnx_2fonnx_2dml_2eproto_once
113800: 000000000808a680   120 OBJECT  LOCAL  DEFAULT    28 descriptor_table_onnx_2fonnx_2dml_2eproto
./onnxruntime/capi/libonnxruntime_providers_dnnl.so
./libonnxruntime_providers_shared.so
./libonnxruntime_providers_tensorrt.so
   117: 00000000000b5c50     8 OBJECT  LOCAL  DEFAULT    15 _ZL46descriptor_table_onnx_2fonnx_2dml_2eproto_deps
   118: 00000000002d8580   128 OBJECT  LOCAL  DEFAULT    21 _ZL46descriptor_table_onnx_2fonnx_2dml_2eproto_sccs
   119: 00000000002db280     4 OBJECT  LOCAL  DEFAULT    26 _ZL46descriptor_table_onnx_2fonnx_2dml_2eproto_once
  4881: 00000000002da8a0   120 OBJECT  LOCAL  DEFAULT    25 descriptor_table_onnx_2fonnx_2dml_2eproto
./onnxruntime_pybind11_state.so
   835: 0000000000a5c510     8 OBJECT  LOCAL  DEFAULT    15 _ZL46descriptor_table_onnx_2fonnx_2dml_2eproto_deps
   836: 00000000080663a0   128 OBJECT  LOCAL  DEFAULT    24 _ZL46descriptor_table_onnx_2fonnx_2dml_2eproto_sccs
   837: 000000000808b8a0     4 OBJECT  LOCAL  DEFAULT    30 _ZL46descriptor_table_onnx_2fonnx_2dml_2eproto_once
113800: 000000000808a680   120 OBJECT  LOCAL  DEFAULT    28 descriptor_table_onnx_2fonnx_2dml_2eproto
./libonnxruntime_providers_dnnl.so

As you can see they can only be found in libonnxruntime.so and libonnxruntime_providers_tensorrt.so. libonnxruntime_providers_tensorrt.so triggered the issue when it's loaded.

xkszltl

comment created time in 11 days

issue comment microsoft/onnxruntime

Duplicated onnx-ml

I think maybe the issue is not related to how protobuf is linked. If onnx-ml.proto.c is linked into multiple libs, it can probably trigger the error as well.

xkszltl

comment created time in 11 days

issue comment microsoft/onnxruntime

Duplicated onnx-ml

That doesn't work.

This error is due to libprotobuf.a (which is not compiled with -fPIC by default) being used in a shared lib.

FAILED: external/onnx-tensorrt/libnvonnxparser.so.7.1.0 
: && /usr/bin/g++-8 -fPIC -fdebug-prefix-map='/tmp/scratch'='/usr/local/src' -g -march=haswell -mtune=generic -Wall -Wextra -ffunction-sections -fdata-sections -Wno-parentheses -Wno-unused-parameter -Wno-missing-field-initializers -Wall -Wno-deprecated-d
eclarations -Wno-unused-function -O3 -DNDEBUG -DGSL_UNENFORCED_ON_CONTRACT_VIOLATION -flto -fno-fat-lto-objects  -Xlinker --allow-shlib-undefined  -Wl,--version-script=/tmp/scratch/onnxruntime/cmake/external/onnx-tensorrt/libnvonnxparser.version -shared 
-Wl,-soname,libnvonnxparser.so.7 -o external/onnx-tensorrt/libnvonnxparser.so.7.1.0 external/onnx-tensorrt/CMakeFiles/nvonnxparser.dir/NvOnnxParser.cpp.o external/onnx-tensorrt/CMakeFiles/nvonnxparser.dir/ModelImporter.cpp.o external/onnx-tensorrt/CMakeF
iles/nvonnxparser.dir/builtin_op_importers.cpp.o external/onnx-tensorrt/CMakeFiles/nvonnxparser.dir/onnx2trt_utils.cpp.o external/onnx-tensorrt/CMakeFiles/nvonnxparser.dir/ShapedWeights.cpp.o external/onnx-tensorrt/CMakeFiles/nvonnxparser.dir/ShapeTensor
.cpp.o external/onnx-tensorrt/CMakeFiles/nvonnxparser.dir/LoopHelpers.cpp.o external/onnx-tensorrt/CMakeFiles/nvonnxparser.dir/RNNHelpers.cpp.o external/onnx-tensorrt/CMakeFiles/nvonnxparser.dir/OnnxAttrs.cpp.o -L/tmp/scratch/onnxruntime/build/dnnl/insta
ll/lib -Wl,-rpath,/tmp/scratch/onnxruntime/build/dnnl/install/lib:  external/onnx/libonnx_proto.a  /usr/lib/x86_64-linux-gnu/libprotobuf.a  /usr/lib/x86_64-linux-gnu/libnvinfer.so  /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so  /usr/lib/x86_64-linux-gnu
/libmyelin.so  /usr/lib/x86_64-linux-gnu/libprotobuf.a  -lpthread && :
/usr/bin/ld: /usr/lib/x86_64-linux-gnu/libprotobuf.a(arena.o): relocation R_X86_64_TPOFF32 against symbol `_ZN6google8protobuf5Arena13thread_cache_E' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: error: ld returned 1 exit status

This one I'm not sure about:

FAILED: external/onnx-tensorrt/getSupportedAPITest 
: && /usr/bin/g++-8 -fdebug-prefix-map='/tmp/scratch'='/usr/local/src' -g -march=haswell -mtune=generic -Wall -Wextra -ffunction-sections -fdata-sections -Wno-parentheses -Wno-unused-parameter -Wno-missing-field-initializers -Wall -Wno-deprecated-declara
tions -Wno-unused-function -O3 -DNDEBUG -DGSL_UNENFORCED_ON_CONTRACT_VIOLATION -flto -fno-fat-lto-objects -Xlinker --allow-shlib-undefined external/onnx-tensorrt/CMakeFiles/getSupportedAPITest.dir/getSupportedAPITest.cpp.o external/onnx-tensorrt/CMakeFil
es/getSupportedAPITest.dir/ModelImporter.cpp.o -o external/onnx-tensorrt/getSupportedAPITest -L/tmp/scratch/onnxruntime/build/dnnl/install/lib -Wl,-rpath,/tmp/scratch/onnxruntime/build/dnnl/install/lib  /usr/lib/x86_64-linux-gnu/libprotobuf.a  external/o
nnx-tensorrt/libnvonnxparser_static.a  -lpthread  -ldl  external/onnx/libonnx_proto.a  /usr/lib/x86_64-linux-gnu/libprotobuf.a  -lpthread  /usr/lib/x86_64-linux-gnu/libprotobuf.a  /usr/lib/x86_64-linux-gnu/libnvinfer.so  /usr/lib/x86_64-linux-gnu/libnvin
fer_plugin.so  /usr/lib/x86_64-linux-gnu/libmyelin.so && :
../cmake/external/onnx-tensorrt/ModelImporter.cpp: In function 'importInputs.constprop':
../cmake/external/onnx-tensorrt/ModelImporter.cpp:250:20: warning: 'tensor_ptr' may be used uninitialized in this function [-Wmaybe-uninitialized]
             tensor = tensor_ptr;
                    ^
../cmake/external/onnx-tensorrt/ModelImporter.cpp:248:32: note: 'tensor_ptr' was declared here
             nvinfer1::ITensor* tensor_ptr;
                                ^
/tmp/ccztaPj9.ltrans0.ltrans.o: In function `global constructors keyed to 65535_0_getSupportedAPITest.cpp.o.56367':
<artificial>:(.text.startup._GLOBAL__I_65535_0_getSupportedAPITest.cpp.o.56367+0x32): undefined reference to `google::protobuf::internal::AddDescriptors(google::protobuf::internal::DescriptorTable const*)'
/usr/bin/ld: Dwarf Error: Invalid abstract instance DIE ref.
/tmp/ccztaPj9.ltrans0.ltrans.o: In function `google::protobuf::internal::ArenaStringPtr::DestroyNoArena(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const*) [clone .isra.234] [clone .constprop.128]':
<artificial>:(.text._ZN6google8protobuf8internal14ArenaStringPtr14DestroyNoArenaEPKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE.isra.234.constprop.128+0x3): undefined reference to `google::protobuf::internal::fixed_address_empty_string[abi:cxx11]
'
/tmp/ccztaPj9.ltrans0.ltrans.o: In function `google::protobuf::internal::ArenaStringPtr::Set(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<c
har> > const&, google::protobuf::Arena*) [clone .constprop.125]':
<artificial>:(.text._ZN6google8protobuf8internal14ArenaStringPtr3SetEPKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERS9_PNS0_5ArenaE.constprop.125+0x9): undefined reference to `google::protobuf::internal::fixed_address_empty_string[abi:cxx11]'
/tmp/ccztaPj9.ltrans0.ltrans.o: In function `onnx2trt::deserialize_onnx_model(void const*, unsigned long, bool, onnx::ModelProto*) [clone .constprop.70]':
<artificial>:(.text._ZN8onnx2trt22deserialize_onnx_modelEPKvmbPN4onnx10ModelProtoE.constprop.70+0x9a): undefined reference to `google::protobuf::io::CodedInputStream::SetTotalBytesLimit(int)'
/tmp/ccztaPj9.ltrans0.ltrans.o: In function `onnx2trt::importInputs(onnx2trt::ImporterContext*, onnx::GraphProto const&, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, onnx2trt::TensorOrWeights, std::h
ash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, onnx2trt::TensorOrWeights> > >*, unsigned int, onnxTensorDescriptorV1 const*) [clone .constprop.74]':
<artificial>:(.text._ZN8onnx2trt12importInputsEPNS_15ImporterContextERKN4onnx10GraphProtoEPSt13unordered_mapINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEENS_15TensorOrWeightsESt4hashISC_ESt8equal_toISC_ESaISt4pairIKSC_SD_EEEjPK22onnxTensorDescriptorV1.constprop.74+0xf5e): undefined reference to `google::protobuf::internal::fixed_address_empty_string[abi:cxx11]'
/usr/bin/ld: Dwarf Error: Invalid abstract instance DIE ref.
/tmp/ccztaPj9.ltrans1.ltrans.o: In function `onnx::OperatorSetIdProto::GetMetadata() const':
<artificial>:(.text._ZNK4onnx18OperatorSetIdProto11GetMetadataEv+0xe): undefined reference to `google::protobuf::internal::AssignDescriptors(google::protobuf::internal::DescriptorTable const*, bool)'
/tmp/ccztaPj9.ltrans1.ltrans.o: In function `void google::protobuf::internal::RepeatedPtrFieldBase::Destroy<google::protobuf::RepeatedPtrField<onnx::OperatorSetIdProto>::TypeHandler>() [clone .isra.269] [clone .lto_priv.318]':
<artificial>:(.text.unlikely._ZN6google8protobuf8internal20RepeatedPtrFieldBase7DestroyINS0_16RepeatedPtrFieldIN4onnx18OperatorSetIdProtoEE11TypeHandlerEEEvv.isra.269.lto_priv.318+0x5a): undefined reference to `google::protobuf::internal::fixed_address_empty_string[abi:cxx11]'
/tmp/ccztaPj9.ltrans1.ltrans.o: In function `void google::protobuf::internal::RepeatedPtrFieldBase::Destroy<google::protobuf::RepeatedPtrField<onnx::ValueInfoProto>::TypeHandler>() [clone .isra.256] [clone .lto_priv.320]':
<artificial>:(.text._ZN6google8protobuf8internal20RepeatedPtrFieldBase7DestroyINS0_16RepeatedPtrFieldIN4onnx14ValueInfoProtoEE11TypeHandlerEEEvv.isra.256.lto_priv.320+0x60): undefined reference to `google::protobuf::internal::fixed_address_empty_string[abi:cxx11]'
/tmp/ccztaPj9.ltrans1.ltrans.o: In function `void google::protobuf::internal::RepeatedPtrFieldBase::Destroy<google::protobuf::RepeatedPtrField<onnx::TensorShapeProto_Dimension>::TypeHandler>() [clone .isra.254] [clone .lto_priv.313]':
<artificial>:(.text.unlikely._ZN6google8protobuf8internal20RepeatedPtrFieldBase7DestroyINS0_16RepeatedPtrFieldIN4onnx26TensorShapeProto_DimensionEE11TypeHandlerEEEvv.isra.254.lto_priv.313+0x2f): undefined reference to `google::protobuf::internal::fixed_address_empty_string[abi:cxx11]'
/tmp/ccztaPj9.ltrans1.ltrans.o: In function `void google::protobuf::internal::RepeatedPtrFieldBase::Destroy<google::protobuf::RepeatedPtrField<onnx::SparseTensorProto>::TypeHandler>() [clone .isra.253] [clone .lto_priv.321]':
<artificial>:(.text._ZN6google8protobuf8internal20RepeatedPtrFieldBase7DestroyINS0_16RepeatedPtrFieldIN4onnx17SparseTensorProtoEE11TypeHandlerEEEvv.isra.253.lto_priv.321+0xe8): undefined reference to `google::protobuf::RepeatedField<long>::~RepeatedField()'
/tmp/ccztaPj9.ltrans1.ltrans.o: In function `void google::protobuf::internal::RepeatedPtrFieldBase::Destroy<google::protobuf::RepeatedPtrField<onnx::StringStringEntryProto>::TypeHandler>() [clone .isra.247] [clone .lto_priv.319]':
<artificial>:(.text._ZN6google8protobuf8internal20RepeatedPtrFieldBase7DestroyINS0_16RepeatedPtrFieldIN4onnx22StringStringEntryProtoEE11TypeHandlerEEEvv.isra.247.lto_priv.319+0x60): undefined reference to `google::protobuf::internal::fixed_address_empty_string[abi:cxx11]'
/tmp/ccztaPj9.ltrans1.ltrans.o: In function `void google::protobuf::internal::RepeatedPtrFieldBase::Destroy<google::protobuf::RepeatedPtrField<onnx::TensorAnnotation>::TypeHandler>() [clone .isra.251] [clone .lto_priv.317]':
<artificial>:(.text.unlikely._ZN6google8protobuf8internal20RepeatedPtrFieldBase7DestroyINS0_16RepeatedPtrFieldIN4onnx16TensorAnnotationEE11TypeHandlerEEEvv.isra.251.lto_priv.317+0x5e): undefined reference to `google::protobuf::internal::fixed_address_empty_string[abi:cxx11]'

xkszltl

comment created time in 11 days

created tagxkszltl/Roaster

tagv7.0.0

Build open-source tools from scratch with Roaster!

created time in 11 days

push eventxkszltl/Roaster

Tongliang Liao

commit sha 5ea29f6618b8312472dba7a7be9eb791ab078497

Set fastestmirror for dnf.
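For context, a minimal sketch of what enabling fastestmirror usually amounts to (illustrative only; the actual commit may do this differently, and the command assumes root):

# Append the fastestmirror switch to dnf's main config.
echo 'fastestmirror=True' >> /etc/dnf/dnf.conf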

view details

push time in 11 days

push eventxkszltl/Roaster

Tongliang Liao

commit sha f4b1a5d527df14be3bffc2da80540481895f1857

makecache still better to be off.

view details

push time in 11 days

issue commentmicrosoft/onnxruntime

Duplicated onnx-ml

What's the reason for enforcing protobuf_USE_STATIC_LIBS=ON with system protobuf? I thought it was to avoid extra deps at delivery time (but we ship our own protobuf as well), so I bypassed that check.

On Sun, Sep 20, 2020 at 1:27 PM Pranav Sharma notifications@github.com wrote:

@xkszltl https://github.com/xkszltl I believe you're using the onnxruntime_PREFER_SYSTEM_LIB cmake option to build ORT. This option is experimental in that we don't support it fully. Moreover, if onnxruntime_PREFER_SYSTEM_LIB and onnxruntime_USE_FULL_PROTOBUF are ON, then protobuf_USE_STATIC_LIBS must be turned ON too.


-- From LTL
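For readers following along, a minimal sketch of the flag combination being discussed (the flag names are the ones quoted in this thread; source/build paths and everything else are illustrative, not Roaster's exact invocation):

# Configure ORT against system libs with full, statically linked protobuf.
cmake -S onnxruntime/cmake -B build \
    -Donnxruntime_PREFER_SYSTEM_LIB=ON \
    -Donnxruntime_USE_FULL_PROTOBUF=ON \
    -Dprotobuf_USE_STATIC_LIBS=ON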

xkszltl

comment created time in 11 days

push eventxkszltl/Roaster

Tongliang Liao

commit sha 3c7fd0a5553f433f0ce8e9502b6c1ca78c5726f7

makecache.

view details

push time in 11 days

push eventxkszltl/Roaster

Tongliang Liao

commit sha a829f32a6595dc95ec3057bdd71d68c22424c503

Migrate to `dnf reposync`. `--norepopath` is replaced by `./repoid/<id>` symlinks. Remember to enable fastestmirror in /etc/dnf/dnf.conf if needed.
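A rough sketch of the idea, with the repo id and paths as placeholders rather than Roaster's actual layout:

# dnf's reposync always creates a per-repo subdirectory, so instead of
# yum-utils' --norepopath the old flat location is kept working via a symlink.
dnf reposync --repoid="$REPO_ID" --download-path="$MIRROR_ROOT" --download-metadata
mkdir -p ./repoid
ln -sfn "$MIRROR_ROOT/$REPO_ID" "./repoid/$REPO_ID"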

view details

push time in 11 days

push eventxkszltl/Roaster

Tongliang Liao

commit sha e4d7d517cf943845e0ed99d4a879883f3577cc55

git gc src.

view details

push time in 12 days

issue commentmicrosoft/onnxruntime

Duplicated onnx-ml

Any progress on this? Seems the issue is still there.

xkszltl

comment created time in 12 days

issue closedmicrosoft/onnxruntime

file INSTALL cannot find "onnxruntime/cmake/../include/onnxruntime/core/providers/shared"

Describe the bug

Cannot install after building. This happens on master; 1.4.0 is fine.

#9 921.2 -- Installing: /tmp/scratch/onnxruntime/install.KvPcvqvz6E/root/usr/local/include/onnxruntime/core/providers/cpu
#9 921.2 -- Installing: /tmp/scratch/onnxruntime/install.KvPcvqvz6E/root/usr/local/include/onnxruntime/core/providers/cpu/cpu_provider_factory.h
#9 921.2 -- Installing: /tmp/scratch/onnxruntime/install.KvPcvqvz6E/root/usr/local/include/onnxruntime/core/providers/cuda
#9 921.2 -- Installing: /tmp/scratch/onnxruntime/install.KvPcvqvz6E/root/usr/local/include/onnxruntime/core/providers/cuda/cuda_provider_factory.h
#9 921.2 CMake Error at cmake_install.cmake:66 (file):
#9 921.2   file INSTALL cannot find
#9 921.2   "/tmp/scratch/onnxruntime/cmake/../include/onnxruntime/core/providers/shared":
#9 921.2   No such file or directory.
#9 921.2
#9 921.2
#9 921.2 FAILED: CMakeFiles/install.util
#9 921.2 cd /tmp/scratch/onnxruntime/build && /usr/local/bin/cmake -P cmake_install.cmake
#9 921.2 ninja: build stopped: subcommand failed.

This is probably the cause: https://github.com/microsoft/onnxruntime/blob/7ca8388dc98cbc23c7c165064bca6b3142d28d79/cmake/onnxruntime_providers.cmake#L296

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): CentOS 7
  • ONNX Runtime installed from (source or binary): source
  • ONNX Runtime version: master
  • GCC/Compiler version (if compiling from source): gcc-8
  • CUDA/cuDNN version: 11.0/8

closed time in 12 days

xkszltl

push eventxkszltl/Roaster

Tongliang Liao

commit sha d67e7859cdf2d10cb9f019bffbccf842ee54ec96

Ort missing dir for installation should have been fixed. https://github.com/microsoft/onnxruntime/issues/5024 https://github.com/microsoft/onnxruntime/pull/5219

view details

push time in 12 days

issue commentmicrosoft/onnxruntime

CUDA 11 still supports 3.5/3.7

Seems the decision is to keep only 3.7? AFAIK the difference between 3.5 and 3.7 is only a few registers, and code built for 3.5 should run on 3.7, but not the other way around. So if there's no noticeable perf difference, maybe keeping 3.5 is better than keeping 3.7?

This is just a general suggestion, not a request, as we enable all of them in our build anyway.
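To make the compatibility direction concrete, here is a minimal nvcc sketch (illustrative only, not ORT's actual build flags; file names are placeholders):

# Build for both compute capabilities:
nvcc -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -c kernel.cu -o kernel.o
# Or keep only 3.5: an sm_35 cubin still loads on sm_37 devices (same major
# revision, newer minor), while an sm_37 cubin will not load on sm_35.
nvcc -gencode arch=compute_35,code=sm_35 -c kernel.cu -o kernel.o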

xkszltl

comment created time in 12 days

push eventxkszltl/Roaster

Tongliang Liao

commit sha 0d7b06304e0b956a99cedba0be1944b6c8dd2f0f

Work around https://github.com/microsoft/onnxruntime/issues/5024
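One plausible shape for such a workaround (a sketch only; the actual commit may differ, and the path comes from the error message in the linked issue):

# Create the header directory that cmake_install.cmake expects, so the
# file(INSTALL ...) step no longer aborts on the missing source dir.
mkdir -p onnxruntime/include/onnxruntime/core/providers/shared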

view details

push time in 13 days

issue closedmicrosoft/onnxruntime

TensorRT Windows build failed

Describe the bug

We got a lot of header-not-found errors when building with TensorRT on Windows. It works fine with just CUDA.

Here are the cmake args for a successful build: https://github.com/xkszltl/Roaster/blob/ca52bfccd4c49b2bcbe9526e2f0473ea810a5f48/win/pkgs/ort.ps1#L150-L190 It fails if I set -Donnxruntime_USE_TENSORRT=ON.

Microsoft (R) Build Engine version 16.6.0+5ff7b0c9e for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.

  Checking File Globs
  Performing update step for 'project_dnnl'
  No patch step for 'project_dnnl'
  Performing configure step for 'project_dnnl'
  -- GPU support is disabled
  -- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)
  -- Configuring done
  -- Generating done
  -- Build files have been written to: D:/roaster-scratch/onnxruntime/build/dnnl/src/project_dnnl-build
  Performing build step for 'project_dnnl'
  Microsoft (R) Build Engine version 16.6.0+5ff7b0c9e for .NET Framework
  Copyright (C) Microsoft Corporation. All rights reserved.

    Checking Build System
    dnnl_common.vcxproj -> D:\roaster-scratch\onnxruntime\build\dnnl\src\project_dnnl-build\src\common\dnnl_common.dir\Release\dnnl_common.lib
    dnnl_cpu.vcxproj -> D:\roaster-scratch\onnxruntime\build\dnnl\src\project_dnnl-build\src\cpu\dnnl_cpu.dir\Release\dnnl_cpu.lib
    dnnl.vcxproj -> D:\roaster-scratch\onnxruntime\build\dnnl\src\project_dnnl-build\src\Release\dnnl.dll
    Building Custom Rule D:/roaster-scratch/onnxruntime/build/dnnl/src/dnnl/src/CMakeLists.txt
  Performing install step for 'project_dnnl'
  Microsoft (R) Build Engine version 16.6.0+5ff7b0c9e for .NET Framework
  Copyright (C) Microsoft Corporation. All rights reserved.

    dnnl_common.vcxproj -> D:\roaster-scratch\onnxruntime\build\dnnl\src\project_dnnl-build\src\common\dnnl_common.dir\Release\dnnl_common.lib
    dnnl_cpu.vcxproj -> D:\roaster-scratch\onnxruntime\build\dnnl\src\project_dnnl-build\src\cpu\dnnl_cpu.dir\Release\dnnl_cpu.lib
    dnnl.vcxproj -> D:\roaster-scratch\onnxruntime\build\dnnl\src\project_dnnl-build\src\Release\dnnl.dll
    -- Install configuration: "Release"
    -- Up-to-date: D:/roaster-scratch/onnxruntime/build/dnnl/install/share/doc/dnnl/LICENSE
    -- Up-to-date: D:/roaster-scratch/onnxruntime/build/dnnl/install/lib/dnnl.lib
    -- Up-to-date: D:/roaster-scratch/onnxruntime/build/dnnl/install/bin/dnnl.dll
    -- Up-to-date: D:/roaster-scratch/onnxruntime/build/dnnl/install/include/dnnl.h
    -- Up-to-date: D:/roaster-scratch/onnxruntime/build/dnnl/install/include/dnnl.hpp
    -- Up-to-date: D:/roaster-scratch/onnxruntime/build/dnnl/install/include/dnnl_debug.h
    -- Up-to-date: D:/roaster-scratch/onnxruntime/build/dnnl/install/include/dnnl_types.h
    -- Up-to-date: D:/roaster-scratch/onnxruntime/build/dnnl/install/include/mkldnn.h
    -- Up-to-date: D:/roaster-scratch/onnxruntime/build/dnnl/install/include/mkldnn.hpp
    -- Up-to-date: D:/roaster-scratch/onnxruntime/build/dnnl/install/include/mkldnn_config.h
    -- Up-to-date: D:/roaster-scratch/onnxruntime/build/dnnl/install/include/mkldnn_debug.h
    -- Up-to-date: D:/roaster-scratch/onnxruntime/build/dnnl/install/include/mkldnn_dnnl_mangling.h
    -- Up-to-date: D:/roaster-scratch/onnxruntime/build/dnnl/install/include/mkldnn_types.h
    -- Up-to-date: D:/roaster-scratch/onnxruntime/build/dnnl/install/include/mkldnn_version.h
    -- Up-to-date: D:/roaster-scratch/onnxruntime/build/dnnl/install/include/dnnl_config.h
    -- Up-to-date: D:/roaster-scratch/onnxruntime/build/dnnl/install/include/dnnl_version.h
    -- Up-to-date: D:/roaster-scratch/onnxruntime/build/dnnl/install/lib/cmake/dnnl/dnnl-config.cmake
    -- Up-to-date: D:/roaster-scratch/onnxruntime/build/dnnl/install/lib/cmake/dnnl/dnnl-config-version.cmake
    -- Up-to-date: D:/roaster-scratch/onnxruntime/build/dnnl/install/lib/cmake/dnnl/dnnl-targets.cmake
    -- Installing: D:/roaster-scratch/onnxruntime/build/dnnl/install/lib/cmake/dnnl/dnnl-targets-release.cmake
    -- Installing: D:/roaster-scratch/onnxruntime/build/dnnl/install/lib/mkldnn.lib
  Completed 'project_dnnl'
  Performing update step for 'pybind11'
  No patch step for 'pybind11'
  No configure step for 'pybind11'
  No build step for 'pybind11'
  No install step for 'pybind11'
  Completed 'pybind11'
  libprotobuf.vcxproj -> D:\roaster-scratch\onnxruntime\build\external\protobuf\cmake\Release\libprotobuf.lib
  libprotoc.vcxproj -> D:\roaster-scratch\onnxruntime\build\external\protobuf\cmake\Release\libprotoc.lib
  protoc.vcxproj -> D:\roaster-scratch\onnxruntime\build\external\protobuf\cmake\Release\protoc.exe
  onnx_proto.vcxproj -> D:\roaster-scratch\onnxruntime\build\onnx\Release\onnx_proto.lib
  NvOnnxParser.cpp
  ModelImporter.cpp
  builtin_op_importers.cpp
  onnx2trt_utils.cpp
  ShapedWeights.cpp
  ShapeTensor.cpp
  OnnxAttrs.cpp
D:\roaster-scratch\onnxruntime\cmake\external\onnx-tensorrt\NvOnnxParser.h(26,10): fatal error C1083: Cannot open include file: 'NvInfer.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\cmake\external\onnx-tensorrt\NvOnnxParser.cpp) [D:\roaster-scratch\onnxruntime\build\external\onnx-tensorrt\nvonnxparser_static.vcxproj]
D:\roaster-scratch\onnxruntime\cmake\external\onnx-tensorrt\NvOnnxParser.h(26,10): fatal error C1083: Cannot open include file: 'NvInfer.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\cmake\external\onnx-tensorrt\ModelImporter.cpp) [D:\roaster-scratch\onnxruntime\build\external\onnx-tensorrt\nvonnxparser_static.vcxproj]
D:\roaster-scratch\onnxruntime\cmake\external\onnx-tensorrt\NvOnnxParser.h(26,10): fatal error C1083: Cannot open include file: 'NvInfer.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\cmake\external\onnx-tensorrt\builtin_op_importers.cpp) [D:\roaster-scratch\onnxruntime\build\external\onnx-tensorrt\nvonnxparser_static.vcxproj]
D:\roaster-scratch\onnxruntime\cmake\external\onnx-tensorrt\ShapedWeights.hpp(25,10): fatal error C1083: Cannot open include file: 'NvInfer.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\cmake\external\onnx-tensorrt\onnx2trt_utils.cpp) [D:\roaster-scratch\onnxruntime\build\external\onnx-tensorrt\nvonnxparser_static.vcxproj]
D:\roaster-scratch\onnxruntime\cmake\external\onnx-tensorrt\ShapedWeights.hpp(25,10): fatal error C1083: Cannot open include file: 'NvInfer.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\cmake\external\onnx-tensorrt\ShapedWeights.cpp) [D:\roaster-scratch\onnxruntime\build\external\onnx-tensorrt\nvonnxparser_static.vcxproj]
D:\roaster-scratch\onnxruntime\cmake\external\onnx-tensorrt\ShapeTensor.hpp(25,10): fatal error C1083: Cannot open include file: 'NvInfer.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\cmake\external\onnx-tensorrt\ShapeTensor.cpp) [D:\roaster-scratch\onnxruntime\build\external\onnx-tensorrt\nvonnxparser_static.vcxproj]
D:\roaster-scratch\onnxruntime\cmake\external\onnx-tensorrt\OnnxAttrs.hpp(25,10): fatal error C1083: Cannot open include file: 'NvInfer.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\cmake\external\onnx-tensorrt\OnnxAttrs.cpp) [D:\roaster-scratch\onnxruntime\build\external\onnx-tensorrt\nvonnxparser_static.vcxproj]
  onnx.vcxproj -> D:\roaster-scratch\onnxruntime\build\onnx\Release\onnx.lib
  onnxruntime_common.vcxproj -> D:\roaster-scratch\onnxruntime\build\Release\onnxruntime_common.lib
  onnxruntime_framework.vcxproj -> D:\roaster-scratch\onnxruntime\build\Release\onnxruntime_framework.lib
  onnxruntime_graph.vcxproj -> D:\roaster-scratch\onnxruntime\build\Release\onnxruntime_graph.lib
  pyop.cc
D:\roaster-scratch\onnxruntime\onnxruntime\core\language_interop_ops\pyop\pyop.h(2,10): fatal error C1083: Cannot open include file: 'core/platform/env.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_pyop.vcxproj]
  platform.cpp
D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\mlasi.h(20,10): fatal error C1083: Cannot open include file: 'mlas.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\platform.cpp) [D:\roaster-scratch\onnxruntime\build\onnxruntime_mlas.vcxproj]
  threading.cpp
  dgemm.cpp
  sgemm.cpp
  qgemm.cpp
  convolve.cpp
  pooling.cpp
  reorder.cpp
  snchwc.cpp
  activate.cpp
  logistic.cpp
  tanh.cpp
  erf.cpp
  compute.cpp
  quantize.cpp
  qladd.cpp
  qladd_avx2.cpp
D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\mlasi.h(20,10): fatal error C1083: Cannot open include file: 'mlas.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\threading.cpp) [D:\roaster-scratch\onnxruntime\build\onnxruntime_mlas.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\mlasi.h(20,10): fatal error C1083: Cannot open include file: 'mlas.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\dgemm.cpp) [D:\roaster-scratch\onnxruntime\build\onnxruntime_mlas.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\mlasi.h(20,10): fatal error C1083: Cannot open include file: 'mlas.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\sgemm.cpp) [D:\roaster-scratch\onnxruntime\build\onnxruntime_mlas.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\mlasi.h(20,10): fatal error C1083: Cannot open include file: 'mlas.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\convolve.cpp) [D:\roaster-scratch\onnxruntime\build\onnxruntime_mlas.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\mlasi.h(20,10): fatal error C1083: Cannot open include file: 'mlas.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\qgemm.cpp) [D:\roaster-scratch\onnxruntime\build\onnxruntime_mlas.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\mlasi.h(20,10): fatal error C1083: Cannot open include file: 'mlas.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\pooling.cpp) [D:\roaster-scratch\onnxruntime\build\onnxruntime_mlas.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\mlasi.h(20,10): fatal error C1083: Cannot open include file: 'mlas.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\reorder.cpp) [D:\roaster-scratch\onnxruntime\build\onnxruntime_mlas.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\mlasi.h(20,10): fatal error C1083: Cannot open include file: 'mlas.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\snchwc.cpp) [D:\roaster-scratch\onnxruntime\build\onnxruntime_mlas.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\mlasi.h(20,10): fatal error C1083: Cannot open include file: 'mlas.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\activate.cpp) [D:\roaster-scratch\onnxruntime\build\onnxruntime_mlas.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\mlasi.h(20,10): fatal error C1083: Cannot open include file: 'mlas.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\logistic.cpp) [D:\roaster-scratch\onnxruntime\build\onnxruntime_mlas.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\mlasi.h(20,10): fatal error C1083: Cannot open include file: 'mlas.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\tanh.cpp) [D:\roaster-scratch\onnxruntime\build\onnxruntime_mlas.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\mlasi.h(20,10): fatal error C1083: Cannot open include file: 'mlas.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\erf.cpp) [D:\roaster-scratch\onnxruntime\build\onnxruntime_mlas.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\intrinsics\avx2\../../mlasi.h(20,10): fatal error C1083: Cannot open include file: 'mlas.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\intrinsics\avx2\qladd_avx2.cpp) [D:\roaster-scratch\onnxruntime\build\onnxruntime_mlas.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\mlasi.h(20,10): fatal error C1083: Cannot open include file: 'mlas.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\qladd.cpp) [D:\roaster-scratch\onnxruntime\build\onnxruntime_mlas.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\mlasi.h(20,10): fatal error C1083: Cannot open include file: 'mlas.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\quantize.cpp) [D:\roaster-scratch\onnxruntime\build\onnxruntime_mlas.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\mlasi.h(20,10): fatal error C1083: Cannot open include file: 'mlas.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\mlas\lib\compute.cpp) [D:\roaster-scratch\onnxruntime\build\onnxruntime_mlas.vcxproj]
  attention_fusion.cc
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\attention_fusion.cc(4,10): fatal error C1083: Cannot open include file: 'core/graph/graph_utils.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
  bias_gelu_fusion.cc
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\bias_gelu_fusion.cc(4,10): fatal error C1083: Cannot open include file: 'core/optimizer/initializer.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
  cast_elimination.cc
  computation_reduction.cc
  constant_folding.cc
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\computation_reduction.cc(4,10): fatal error C1083: Cannot open include file: 'core/common/safeint.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
  conv_activation_fusion.cc
  conv_add_fusion.cc
  conv_bn_fusion.cc
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\constant_folding.cc(4,10): fatal error C1083: Cannot open include file: 'core/optimizer/constant_folding.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
  conv_mul_fusion.cc
  dropout_elimination.cc
  dynamic_quantize_matmul_fusion.cc
  embed_layer_norm_fusion.cc
  expand_elimination.cc
  fast_gelu_fusion.cc
  free_dim_override_transformer.cc
  gelu_approximation.cc
  gelu_fusion.cc
  gemm_activation_fusion.cc
  graph_transformer.cc
  graph_transformer_mgr.cc
  graph_transformer_utils.cc
  identity_elimination.cc
  initializer.cc
  insert_cast_transformer.cc
  layer_norm_fusion.cc
  matmul_add_fusion.cc
  matmul_transpose_fusion.cc
  nchwc_transformer.cc
  optimizer_execution_frame.cc
  relu_clip_fusion.cc
  reshape_fusion.cc
  rule_based_graph_transformer.cc
  shape_to_initializer.cc
  skip_layer_norm_fusion.cc
  slice_elimination.cc
  transformer_memcpy.cc
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\conv_add_fusion.cc(4,10): fatal error C1083: Cannot open include file: 'core/graph/graph_utils.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
  unsqueeze_elimination.cc
  utils.cc
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\conv_bn_fusion.cc(4,10): fatal error C1083: Cannot open include file: 'core/graph/graph_utils.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\conv_mul_fusion.cc(4,10): fatal error C1083: Cannot open include file: 'core/graph/graph_utils.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\embed_layer_norm_fusion.cc(3,10): fatal error C1083: Cannot open include file: 'core/optimizer/initializer.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\fast_gelu_fusion.cc(4,10): fatal error C1083: Cannot open include file: 'core/optimizer/initializer.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\gelu_approximation.cc(4,10): fatal error C1083: Cannot open include file: 'core/optimizer/initializer.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\gemm_activation_fusion.cc(4,10): fatal error C1083: Cannot open include file: 'core/optimizer/initializer.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\gelu_fusion.cc(4,10): fatal error C1083: Cannot open include file: 'core/optimizer/initializer.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\include\onnxruntime\core/optimizer/graph_transformer_utils.h(6,10): fatal error C1083: Cannot open include file: 'gsl/gsl': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\graph_transformer_utils.cc) [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\graph_transformer_mgr.cc(4,10): fatal error C1083: Cannot open include file: 'core/optimizer/graph_transformer_mgr.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\initializer.cc(4,10): fatal error C1083: Cannot open include file: 'core/optimizer/initializer.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\insert_cast_transformer.cc(4,10): fatal error C1083: Cannot open include file: 'core/optimizer/insert_cast_transformer.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\layer_norm_fusion.cc(3,10): fatal error C1083: Cannot open include file: 'core/optimizer/initializer.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\matmul_transpose_fusion.cc(4,10): fatal error C1083: Cannot open include file: 'core/optimizer/initializer.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\matmul_add_fusion.cc(4,10): fatal error C1083: Cannot open include file: 'core/optimizer/initializer.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\relu_clip_fusion.cc(4,10): fatal error C1083: Cannot open include file: 'core/optimizer/relu_clip_fusion.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\reshape_fusion.cc(4,10): fatal error C1083: Cannot open include file: 'core/graph/graph_utils.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\slice_elimination.cc(4,10): fatal error C1083: Cannot open include file: 'core/optimizer/slice_elimination.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\shape_to_initializer.cc(4,10): fatal error C1083: Cannot open include file: 'core/optimizer/shape_to_initializer.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\unsqueeze_elimination.cc(4,10): fatal error C1083: Cannot open include file: 'core/optimizer/unsqueeze_elimination.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\skip_layer_norm_fusion.cc(3,10): fatal error C1083: Cannot open include file: 'core/optimizer/initializer.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\conv_activation_fusion.cc(5,10): fatal error C1083: Cannot open include file: 'core/graph/graph_utils.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\nchwc_transformer.cc(5,10): fatal error C1083: Cannot open include file: 'core/graph/graph_utils.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\include\onnxruntime\core/common/logging/capture.h(7,10): fatal error C1083: Cannot open include file: 'gsl/gsl': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\cast_elimination.cc) [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\include\onnxruntime\core/graph/graph.h(15,10): fatal error C1083: Cannot open include file: 'core/common/path.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\dynamic_quantize_matmul_fusion.cc) [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\include\onnxruntime\core/common/logging/capture.h(7,10): fatal error C1083: Cannot open include file: 'gsl/gsl': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\expand_elimination.cc) [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\include\onnxruntime\core/common/logging/capture.h(7,10): fatal error C1083: Cannot open include file: 'gsl/gsl': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\free_dim_override_transformer.cc) [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\include\onnxruntime\core/common/logging/capture.h(7,10): fatal error C1083: Cannot open include file: 'gsl/gsl': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\dropout_elimination.cc) [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\cmake\external\onnx\onnx/onnx_pb.h(50,10): fatal error C1083: Cannot open include file: 'onnx/onnx-ml.pb.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\utils.cc) [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\include\onnxruntime\core/graph/graph.h(15,10): fatal error C1083: Cannot open include file: 'core/common/path.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\graph_transformer.cc) [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\include\onnxruntime\core/common/logging/capture.h(7,10): fatal error C1083: Cannot open include file: 'gsl/gsl': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\identity_elimination.cc) [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\include\onnxruntime\core/graph/graph.h(15,10): fatal error C1083: Cannot open include file: 'core/common/path.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\rule_based_graph_transformer.cc) [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\include\onnxruntime\core/common/logging/capture.h(7,10): fatal error C1083: Cannot open include file: 'gsl/gsl': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\optimizer_execution_frame.cc) [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
D:\roaster-scratch\onnxruntime\include\onnxruntime\core/common/logging/capture.h(7,10): fatal error C1083: Cannot open include file: 'gsl/gsl': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\optimizer\transformer_memcpy.cc) [D:\roaster-scratch\onnxruntime\build\onnxruntime_optimizer.vcxproj]
  onnxruntime_providers.vcxproj -> D:\roaster-scratch\onnxruntime\build\Release\onnxruntime_providers.lib
  onnxruntime_providers_cuda.vcxproj -> D:\roaster-scratch\onnxruntime\build\Release\onnxruntime_providers_cuda.lib
  onnxruntime_providers_dnnl.vcxproj -> D:\roaster-scratch\onnxruntime\build\Release\onnxruntime_providers_dnnl.dll
  pywrapper.cc
D:\roaster-scratch\onnxruntime\onnxruntime\core\language_interop_ops\pyop\pywrapper.cc(9,10): fatal error C1083: Cannot open include file: 'Python.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_pywrapper.vcxproj]
  IOBinding.cc
D:\roaster-scratch\onnxruntime\onnxruntime\core\session\IOBinding.cc(4,10): fatal error C1083: Cannot open include file: 'core/session/IOBinding.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_session.vcxproj]
  abi_session_options.cc
  custom_ops.cc
  default_cpu_allocator_c_api.cc
  environment.cc
  inference_session.cc
  inference_session_utils.cc
  onnxruntime_c_api.cc
  ort_env.cc
D:\roaster-scratch\onnxruntime\onnxruntime\core\session\inference_session_utils.cc(4,10): fatal error C1083: Cannot open include file: 'core/session/inference_session_utils.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_session.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\session\onnxruntime_c_api.cc(5,10): fatal error C1083: Cannot open include file: 'core/session/allocator_impl.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_session.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\core\session\default_cpu_allocator_c_api.cc(5,10): fatal error C1083: Cannot open include file: 'core/framework/utils.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_session.vcxproj]
D:\roaster-scratch\onnxruntime\cmake\external\onnx\onnx/onnx_pb.h(50,10): fatal error C1083: Cannot open include file: 'onnx/onnx-ml.pb.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\session\abi_session_options.cc) [D:\roaster-scratch\onnxruntime\build\onnxruntime_session.vcxproj]
D:\roaster-scratch\onnxruntime\cmake\external\onnx\onnx/onnx_pb.h(50,10): fatal error C1083: Cannot open include file: 'onnx/onnx-ml.pb.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\session\custom_ops.cc) [D:\roaster-scratch\onnxruntime\build\onnxruntime_session.vcxproj]
D:\roaster-scratch\onnxruntime\cmake\external\onnx\onnx/onnx_pb.h(50,10): fatal error C1083: Cannot open include file: 'onnx/onnx-ml.pb.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\session\inference_session.cc) [D:\roaster-scratch\onnxruntime\build\onnxruntime_session.vcxproj]
D:\roaster-scratch\onnxruntime\include\onnxruntime\core/platform/threadpool.h(24,10): fatal error C1083: Cannot open include file: 'core/platform/env.h': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\session\environment.cc) [D:\roaster-scratch\onnxruntime\build\onnxruntime_session.vcxproj]
D:\roaster-scratch\onnxruntime\include\onnxruntime\core/common/logging/capture.h(7,10): fatal error C1083: Cannot open include file: 'gsl/gsl': No such file or directory (compiling source file D:\roaster-scratch\onnxruntime\onnxruntime\core\session\ort_env.cc) [D:\roaster-scratch\onnxruntime\build\onnxruntime_session.vcxproj]
  gmock.vcxproj -> D:\roaster-scratch\onnxruntime\build\lib\Release\gmock.lib
  gtest.vcxproj -> D:\roaster-scratch\onnxruntime\build\lib\Release\gtest.lib
  compare_ortvalue.cc
  default_providers.cc
  file_util.cc
  temp_dir.cc
  test_allocator.cc
  test_environment.cc
  test_random_seed.cc
D:\roaster-scratch\onnxruntime\onnxruntime\test\util\compare_ortvalue.cc(4,10): fatal error C1083: Cannot open include file: 'test/compare_ortvalue.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_test_utils.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\test\util\temp_dir.cc(4,10): fatal error C1083: Cannot open include file: 'test/util/include/temp_dir.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_test_utils.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\test\util\test_environment.cc(4,10): fatal error C1083: Cannot open include file: 'test/test_environment.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_test_utils.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\test\util\test_random_seed.cc(4,10): fatal error C1083: Cannot open include file: 'test/util/include/test_random_seed.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_test_utils.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\test\util\file_util.cc(1,10): fatal error C1083: Cannot open include file: 'file_util.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_test_utils.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\test\util\default_providers.cc(4,10): fatal error C1083: Cannot open include file: 'default_providers.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_test_utils.vcxproj]
D:\roaster-scratch\onnxruntime\onnxruntime\test\util\test_allocator.cc(4,10): fatal error C1083: Cannot open include file: 'test_allocator.h': No such file or directory [D:\roaster-scratch\onnxruntime\build\onnxruntime_test_utils.vcxproj]
  onnxruntime_util.vcxproj -> D:\roaster-scratch\onnxruntime\build\Release\onnxruntime_util.lib
  re2.vcxproj -> D:\roaster-scratch\onnxruntime\build\external\re2\Release\re2.lib

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows Server 2019
  • ONNX Runtime installed from (source or binary): source
  • ONNX Runtime version: 1.4.0
  • Python version: 3.8.5
  • Visual Studio version (if applicable): 2019
  • CUDA/cuDNN version: 11.0/8

closed time in 13 days

xkszltl

issue commentmicrosoft/onnxruntime

TensorRT Windows build failed

Great, now it works!

xkszltl

comment created time in 13 days

push eventxkszltl/Roaster

Tongliang Liao

commit sha ea3ef4f5132a8fd0d32c206eebb36f295e328bb4

Add more PDB.

view details

Tongliang Liao

commit sha 9a3e0d151f24f7bf6cc606a0bb17046e636a2836

Enable ort trt build on Windows.

view details

push time in 13 days

push eventxkszltl/Roaster

Tongliang Liao

commit sha 7ebd97c63aa24298c24b80baf9e08113f8cd90ed

Double quote

view details

Tongliang Liao

commit sha ea3270a27e8410f7f04e080e977be7bcd8abe690

Update ort nuget

view details

push time in 13 days

issue openedHomebrew/homebrew-cask

Wrong uninstallation path for asix-ax88179

Description of issue

asix-ax88179 update failed.

% brew upgrade 
Updating Homebrew...
==> Casks with `auto_updates` or `version :latest` will not be upgraded
==> Upgrading 1 outdated package:
asix-ax88179 2.16.0 -> 2.17.0
==> Upgrading asix-ax88179
==> Caveats
asix-ax88179 requires a kernel extension to work.
If the installation fails, retry after you enable it in:
  System Preferences → Security & Privacy → General

For more information, refer to vendor documentation or this Apple Technical Note:
  https://developer.apple.com/library/content/technotes/tn2459/_index.html

You must reboot for the installation of asix-ax88179 to take effect.

==> Downloading https://www.asix.com.tw/FrootAttach/driver/AX88179_178A_macOS_10.9_to_10.15_Driver_Installer_v2.17.0.zip
Already downloaded: /Users/xkszltl/Library/Caches/Homebrew/downloads/b135a7bb063e70f2987fb3d1ac34931eaf23412c1d2e1803c99884daef9895fc--AX88179_178A_macOS_10.9_to_10.15_Driver_Installer_v2.17.0.zip
==> Verifying SHA-256 checksum for Cask 'asix-ax88179'.
==> Running uninstall script /usr/local/Caskroom/asix-ax88179/2.16.0/AX88179_178A_Uninstall_v150.command
==> Purging files for version 2.17.0 of Cask asix-ax88179
Error: asix-ax88179: uninstall script /usr/local/Caskroom/asix-ax88179/2.16.0/AX88179_178A_Uninstall_v150.command does not exist.

The path to the uninstallation script is wrong: the cask still points at AX88179_178A_Uninstall_v150.command, while the 2.16.0 Caskroom only contains AX88179_178A_Uninstall_v1.6.0.command.

% sudo ls /usr/local/Caskroom/asix-ax88179/2.16.0 
AX88179_178A_Uninstall_v1.6.0.command	AX88179_178A_v2.16.0.pkg		readme.txt

created time in 13 days

push eventxkszltl/Roaster

Tongliang Liao

commit sha 05b66b7ba4e344e43708a43d6c0e4df58f4becca

Keep git dir in src.

view details

push time in 13 days

issue commentmicrosoft/onnxruntime

TensorRT Windows build failed

TRT is installed. Scroll down and you'll see fatal error C1083: Cannot open include file: 'mlas.h'. So most likely the include paths (-I) are broken.
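For reference, this is roughly how a TensorRT-enabled configure gets pointed at the NvInfer headers when driving cmake directly; treat the onnxruntime_TENSORRT_HOME name and the path as assumptions on my side, not a verified fix:

# Pass the TensorRT root alongside the flag quoted above.
cmake -S onnxruntime/cmake -B build -Donnxruntime_USE_TENSORRT=ON -Donnxruntime_TENSORRT_HOME="D:\TensorRT"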

xkszltl

comment created time in 13 days

issue closedfacebook/rocksdb

CMake error with PyTorch for 6.5.2

Started getting errors when building PyTorch with RocksDB today. It works with 6.4.6, but 6.5.2 doesn't.

Here's what I got:

#9 537.6 CMake Error at modules/rocksdb/CMakeLists.txt:58 (add_library):
#9 537.6   Target "caffe2_rocksdb" links to target "snappy::snappy" but the target was
#9 537.6   not found.  Perhaps a find_package() call is missing for an IMPORTED
#9 537.6   target, or an ALIAS target is missing?
#9 537.6
#9 537.6
#9 537.6 CMake Error at modules/rocksdb/CMakeLists.txt:58 (add_library):
#9 537.6   Target "caffe2_rocksdb" links to target "lz4::lz4" but the target was not
#9 537.6   found.  Perhaps a find_package() call is missing for an IMPORTED target, or
#9 537.6   an ALIAS target is missing?
#9 537.6
#9 537.6
#9 537.6 CMake Error at modules/rocksdb/CMakeLists.txt:58 (add_library):
#9 537.6   Target "caffe2_rocksdb" links to target "zstd::zstd" but the target was not
#9 537.6   found.  Perhaps a find_package() call is missing for an IMPORTED target, or
#9 537.6   an ALIAS target is missing?
#9 537.6
#9 537.6
#9 537.6 CMake Error at modules/rocksdb/CMakeLists.txt:58 (add_library):
#9 537.6   Target "caffe2_rocksdb" links to target "NUMA::NUMA" but the target was not
#9 537.6   found.  Perhaps a find_package() call is missing for an IMPORTED target, or
#9 537.6   an ALIAS target is missing?

Version: CentOS 7 / Ubuntu 18.04 CMake 3.16.1 + Ninja + gcc-8 PyTorch: master branch

Issue was also posted to pytorch repo: https://github.com/pytorch/pytorch/issues/31264

closed time in 15 days

xkszltl

issue commentfacebook/rocksdb

CMake error with PyTorch for 6.5.2

Closing since all 3 PRs are in the latest release.

xkszltl

comment created time in 15 days

push eventxkszltl/Roaster

Tongliang Liao

commit sha 5983f50677f0cd1a58e4b06bcaa33ce4d36d9307

All 3 PRs for https://github.com/facebook/rocksdb/issues/6179 have been released in 6.11.4.

view details

push time in 15 days

Pull request review commentpytorch/pytorch

RNN args renaming in memonger.

 def canonical_name(blob):
             op.output[i] = canonical_name(output)

-def apply_recurrent_blob_assignments(op, blob_assignments, canonical_name):

For backward, it's the same reason. We were only doing inference at that time, so I'm not sure about the things outside our code coverage.

xkszltl

comment created time in 15 days

PullRequestReviewEvent

Pull request review commentpytorch/pytorch

RNN args renaming in memonger.

 def canonical_name(blob):
             op.output[i] = canonical_name(output)

-def apply_recurrent_blob_assignments(op, blob_assignments, canonical_name):

This is to fix an error we found in our model, so it's tested there. Regarding other attributes, I remember there are definitely some of them, but it was too long ago and I don't remember the details. All I remember is: they didn't cause trouble in our model, so I'd rather not touch them, to keep my PR correct.

xkszltl

comment created time in 15 days

PullRequestReviewEvent

issue commentboostorg/build

Windows Defender does not like b2 in 1.74.0

Yes... Windows Defender still kicks in...

PS C:\Users\tolia\Projects\roaster\win> .\setup.ps1 boost
Re-entry "C:\Users\tolia\Projects\roaster\win\setup.ps1" with args "boost" for protection.
rm : There is a mismatch between the tag specified in the request and the tag present in the reparse point
At C:\Users\tolia\Projects\roaster\win\pkgs\env\scratch.ps1:9 char:1
+ rm -Force -Recurse -ErrorAction SilentlyContinue -WarningAction Silen ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Remove-Item], Win32Exception
    + FullyQualifiedErrorId : System.ComponentModel.Win32Exception,Microsoft.PowerShell.Commands.RemoveItemCommand
xkszltl

comment created time in 15 days

issue commentboostorg/build

Windows Defender does not like b2 in 1.74.0

(or maybe Windows Defender somehow decides to scan it regardless)

xkszltl

comment created time in 15 days
