If you are wondering where the data of this site comes from, please visit https://api.github.com/users/hjwdzh/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Jingwei Huang (hjwdzh) · Stanford University, CA · http://stanford.edu/~jingweih · PhD -- Computer Graphics and Vision.

hjwdzh/QuadriFlow 432

QuadriFlow: A Scalable and Robust Method for Quadrangulation

hjwdzh/Manifold 290

Convert any Triangle Mesh to Watertight Manifold

hjwdzh/ManifoldPlus 237

ManifoldPlus: A Robust and Scalable Watertight Manifold Surface Generation Method for Triangle Soups

AutodeskAILab/Fusion360GalleryDataset 150

Data, tools, and documentation of the Fusion 360 Gallery Dataset

hjwdzh/AdversarialTexture 133

Adversarial Texture Optimization from RGB-D Scans (CVPR 2020).

hjwdzh/FrameNet 95

FrameNet: Learning Local Canonical Frames of 3D Surfaces from a Single RGB Image

hjwdzh/DeepLM 71

DeepLM: Large-scale Nonlinear Least Squares on Deep Learning Frameworks using Stochastic Domain Decomposition (CVPR 2021)

hjwdzh/MeshODE 63

MeshODE: A Robust and Scalable Framework for Mesh Deformation

hjwdzh/pyRender 47

Lightweight Cuda Renderer with Python Wrapper.

hjwdzh/PrimitiveNet 20

PrimitiveNet: Primitive Instance Segmentation with Local Primitive Embedding under Adversarial Metric (ICCV 2021)

issue closed hjwdzh/DeepLM

BACore ImportError

After running "python3 bundle_adjuster.py --balFile filename --device cpu" there is an error:

Traceback (most recent call last):
  File "bundle_adjuster.py", line 5, in <module>
    import BACore
ImportError: /home/wlh/DeepLM/build/BACore.cpython-38-x86_64-linux-gnu.so: undefined symbol: _Z16THPVariable_WrapN2at6TensorE

Ubuntu 18.04, without CUDA.

closed time in 11 days

Airplane5

issue comment hjwdzh/DeepLM

BACore ImportError

That's probably caused by the PyTorch shared library (.so). Try setting TORCH_USE_RTLD_GLOBAL=YES, as suggested in example.sh.
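A minimal sketch of that fix, assuming the script is launched from a shell as in the report (the re-run command is only a comment, since the data file path is the reporter's placeholder):

```shell
# Export TORCH_USE_RTLD_GLOBAL before Python imports torch, as example.sh suggests,
# so symbols from the PyTorch shared library resolve globally instead of raising
# "undefined symbol" errors when the BACore extension is imported.
export TORCH_USE_RTLD_GLOBAL=YES
# Then re-run the original command, e.g.:
#   python3 bundle_adjuster.py --balFile filename --device cpu
echo "TORCH_USE_RTLD_GLOBAL=$TORCH_USE_RTLD_GLOBAL"
```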

Airplane5

comment created time in 11 days

started hjwdzh/DeepLM

started time in 16 days

public event

push event hjwdzh/PrimitiveNet

hjwdzh

commit sha 4044c55247b39932493819c56ea9a7f1d6140fc1

readme


push time in 20 days

push event hjwdzh/PrimitiveNet

hjwdzh

commit sha edc1023eda52300ccb33c4f3474af9cd4b36984e

rm...


push time in 20 days

push event hjwdzh/DeepLM

Jingwei Huang

commit sha a5536cb5cf95b9c65400e23f2b4de5847181e1c1

Update README.md


push time in 23 days

push event hjwdzh/DeepLM

Jingwei Huang

commit sha 39acd08bd84d5487168da2dc1833736d9e59d875

Update README.md


push time in 23 days

issue closed hjwdzh/DeepLM

cmake error

cmake .. -DCMAKE_BUILD_TYPE=Release -DWITH_CUDA=ON
-- The CXX compiler identification is GNU 8.3.0
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Found OpenMP: TRUE (found version "4.5")
-- The CUDA compiler identification is unknown
-- Check for working CUDA compiler: /opt/tiger/cuda/bin/nvcc
-- Check for working CUDA compiler: /opt/tiger/cuda/bin/nvcc -- broken
CMake Error at /usr/share/cmake-3.16/Modules/CMakeTestCUDACompiler.cmake:46 (message):
  The CUDA compiler

    "/opt/tiger/cuda/bin/nvcc"

  is not able to compile a simple test program.

  It fails with the following output:
  Change Dir: /opt/tiger/code/DeepLM/build/CMakeFiles/CMakeTmp

closed time in a month

ForrestPi

issue comment hjwdzh/DeepLM

cmake error

Does it mean that the CUDA toolkit is not installed properly? Can you compile and run other CUDA code with nvcc correctly? If so, a simple solution would be "export PATH=$PATH:{directory that contains your nvcc}", then remove your CMake cache files and run the CMake configure step again.
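A hedged sketch of those steps, assuming nvcc lives in /opt/tiger/cuda/bin as the error log above indicates (the cache-clearing commands are comments since they must run from the reporter's build directory):

```shell
# Append the directory containing nvcc to PATH (path taken from the CMake log above).
export PATH="$PATH:/opt/tiger/cuda/bin"
# Then clear the stale CMake cache and reconfigure, from inside the build directory:
#   rm -f CMakeCache.txt && rm -rf CMakeFiles
#   cmake .. -DCMAKE_BUILD_TYPE=Release -DWITH_CUDA=ON
echo "${PATH##*:}"
```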

ForrestPi

comment created time in a month

issue closed hjwdzh/DeepLM

Low speed for shared camera intrinsic optimization in bundle adjustment

Hi, I have implemented a new loss function in bundle adjustment to handle shared camera intrinsics. But I found that when all images share the same camera, optimization is about 20 times slower than when each image has its own camera to be optimized.

closed time in a month

longchao343

issue comment hjwdzh/DeepLM

Low speed for shared camera intrinsic optimization in bundle adjustment

Thank you for bringing this issue up. With shared intrinsics the problem is no longer sparse, and the slow performance is expected.

Our solution is to optimize the intrinsics and the extrinsics alternately, which proves to work effectively. For the sparse extrinsics, our solver is used; for the dense intrinsics, you only need an Eigen solver. :)
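A runnable toy sketch of that alternating scheme (not DeepLM's actual API; the one-parameter "intrinsic" model and all variable names are illustrative assumptions): holding the shared parameter fixed, each per-image parameter has an independent closed-form update, and vice versa.

```python
# Toy alternating optimization (hypothetical model, not DeepLM's API).
# One shared "intrinsic" scale a and per-image "extrinsic" offsets b_i are fit
# to noiseless data y = a*x + b_i by alternating closed-form least-squares steps.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 20))              # 5 "images", 20 observations each
true_shared = 2.0
true_per = rng.normal(size=(5, 1))
y = true_shared * x + true_per

a = 0.0                                   # shared (dense) parameter
b = np.zeros((5, 1))                      # per-image (sparse) parameters
for _ in range(50):
    # Extrinsics step: with a fixed, each b_i minimizes its own residuals.
    b = (y - a * x).mean(axis=1, keepdims=True)
    # Intrinsics step: with b fixed, the shared a is one small dense solve.
    a = ((y - b) * x).sum() / (x * x).sum()
```

Each sub-problem stays cheap: the per-image step is independent across images (the sparse solver's regime), while the shared step is a single small dense solve (the Eigen-solver regime).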

longchao343

comment created time in a month

push event hjwdzh/spconv

Jingwei Huang

commit sha bcbed5ebd1306556737b7d9ad836f1741d1c62a0

Update all.cc


push time in a month

fork hjwdzh/spconv

Spatial Sparse Convolution in PyTorch

fork in a month

push event hjwdzh/hjwdzh

Jingwei Huang

commit sha e0d1a4e56f0e831fc4be44db7057107311fbea55

Create README.md


push time in 3 months

create branch hjwdzh/hjwdzh

branch: main

created branch time in 3 months

created repository hjwdzh/hjwdzh

created time in 3 months