
densechen/AReLU 47

AReLU: Attention-based-Rectified-Linear-Unit

densechen/CASS 6

CASS: Learning Canonical Shape Space for Category-Level 6D Object Pose and Size Estimation

densechen/Bomberman 0

Bomberman with Pommerman

densechen/densechen.github.io 0

densechen.github.io

densechen/DirIP 0

DirIP: Differentiable Renderer for Iterative 6D Pose Estimation

push event densechen/AReLU

densechen

commit sha 4aebc28024a723cb0e569378ecc028bb47b65695

clean code

view details

push time in a day

issue comment kuangliu/pytorch-cifar

ResNet

@ww-zwj Thanks!

iwanggp

comment created time in 3 days

issue comment kuangliu/pytorch-cifar

ResNet

I'd like to ask: does the accuracy you get from training match the accuracy given in the author's README? The accuracy I get is consistently 1-1.5 percentage points higher.

iwanggp

comment created time in 3 days

issue comment kuangliu/pytorch-cifar

Using Scheduler

I also noticed that. I think the main difference is that resuming from a checkpoint continues training from the best state of the model, while the scheduler continues training from the current state, which may not be the best state. However, I don't know whether it matters or not.
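For concreteness, a minimal self-contained sketch of the difference (not kuangliu/pytorch-cifar's actual code; all names are illustrative):

```python
# "Resume" restores the best saved weights; a scheduler only keeps decaying
# the learning rate from whatever the current (possibly worse) weights are.
import torch
import torch.nn as nn

net = nn.Linear(4, 2)  # stand-in for the real network
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

# Snapshot whenever validation accuracy improves.
best_state = {k: v.clone() for k, v in net.state_dict().items()}

# Scheduler path: keep training from the current state.
optimizer.step()
scheduler.step()

# Resume path: restore the best snapshot and continue from there.
net.load_state_dict(best_state)
```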

HeekangPark

comment created time in 3 days

started WarBean/zhihuzhuanlan

started time in 3 days

issue opened yanx27/Pointnet_Pointnet2_pytorch

code bug

This line in train_cls.py should be included inside the loop:

mean_correct = []
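A hedged, self-contained illustration of why (the variable name follows train_cls.py; the numbers here are made up): if the accumulator lives outside the epoch loop, each epoch's reported accuracy is averaged over every batch seen so far, not just the current epoch.

```python
per_epoch_batch_acc = {0: [0.2, 0.3], 1: [0.8, 0.9]}  # fake per-batch accuracies

mean_correct = []  # bug: created once, so it accumulates across epochs
for epoch in range(2):
    # mean_correct = []  # fix: reset here, inside the loop, as suggested above
    mean_correct.extend(per_epoch_batch_acc[epoch])
    print(epoch, sum(mean_correct) / len(mean_correct))
# Without the fix, epoch 1 reports (0.2+0.3+0.8+0.9)/4 = 0.55 instead of 0.85.
```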

created time in 5 days

issue comment tomgoldstein/loss-landscape

Using the code for models trained for different tasks

Hi @Urinx @euler16, is the pretrained model necessary for this code? If we do not provide a pretrained model, can this code train the network first?

euler16

comment created time in 7 days

issue comment tomgoldstein/loss-landscape

--model_file in plot_surface.py should be specified

Hi @zdhNarsil, if we do not specify the model file, will the program train the model automatically, or just use the initial values?

zdhNarsil

comment created time in 7 days

started tomgoldstein/loss-landscape

started time in 7 days

started DLR-RM/rl-baselines3-zoo

started time in 9 days

started DLR-RM/stable-baselines3

started time in 13 days

started hill-a/stable-baselines

started time in 13 days

started shaneshixiang/rllabplusplus

started time in 17 days

started rll/rllab

started time in 17 days

started Breakend/DeepReinforcementLearningThatMatters

started time in 17 days

started alexvbogdan/DeepCalib

started time in a month

started dmlc/dgl

started time in a month

issue comment ScanNet/ScanNet

Align Color to Depth (640x480)

@mangdian If you just use RGB-D as input, simply scale the images to the size you want; that should be OK. But if you want to convert the depth to XYZ, you should first do the conversion on the original depth image, obtain the three-channel XYZ map, and then scale that. Note that if you scale the image directly, the intrinsics change.
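A generic back-projection sketch of the order of operations described above (not ScanNet's code; the intrinsics below are placeholder values):

```python
import numpy as np

def depth_to_xyz(depth, fx, fy, cx, cy):
    """Back-project a (H, W) depth map into a (H, W, 3) XYZ map (camera frame)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1)

depth = np.random.rand(480, 640).astype(np.float32)  # fake depth image
xyz = depth_to_xyz(depth, fx=577.6, fy=578.4, cx=319.5, cy=239.5)
# Scale/crop `xyz` afterwards; resizing the raw depth image first would
# also require rescaling fx, fy, cx, cy.
```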

densechen

comment created time in a month

started aboulch/ConvPoint

started time in a month

started charlesCXK/PyTorch_Semantic_Segmentation

started time in a month

issue opened microsoft/vscode

The Sort Imports operation hangs or takes a very long time on Python scripts

Version: 1.48.1
Commit: 3dd905126b34dcd4de81fa624eb3a8cbe7485f13
Date: 2020-08-19T17:09:41.484Z
Electron: 7.3.2
Chrome: 78.0.3904.130
Node.js: 12.8.1
V8: 7.8.279.23-electron.0
OS: Darwin x64 19.6.0


created time in a month

started yanx27/Pointnet_Pointnet2_pytorch

started time in a month

started yanx27/PointASNL

started time in a month

issue comment densechen/CASS

Are the GT poses of NOCS-REAL275 accurate?

The authors have not given an answer to this question.

Liuchongpei

comment created time in a month

started hszhao/PointWeb

started time in a month

issue comment densechen/CASS

Are the GT poses of NOCS-REAL275 accurate?

@pengsida My work is concurrent with 6PACK, so 6PACK's work was not yet available at the time. Regarding accuracy, you can verify it yourself by visualizing the poses. On the synthetic dataset, the rendering corresponds exactly to the pose, so it can basically be treated as error-free. As for the real dataset, everyone already knows how that turns out.

Liuchongpei

comment created time in a month

started QingyongHu/SoTA-Point-Cloud

started time in a month

issue comment densechen/CASS

Are the GT poses of NOCS-REAL275 accurate?

@pengsida

"For the 6PACK poses, I have already processed them; I just need to additionally generate the scale and apply a transform to the pose. I'd like to know how the RT you used for training was computed."

This is actually one of the shortcomings of the NOCS dataset: it does not directly give us the GT information, but instead released the code for computing the GT. So we had to use that code and recompute the poses ourselves. However, the NOCS method itself does not need pose supervision; the NOCS map alone is enough. That is probably also why the dataset does not directly include poses.

Liuchongpei

comment created time in a month

issue comment densechen/CASS

Are the GT poses of NOCS-REAL275 accurate?

@pengsida

"NOCS's Umeyama algorithm also outputs a scale, used to resize the 3D bounding box; 6PACK does not seem to provide this parameter."
Scale is not one of the 6DoF parameters. NOCS needs the scale because the NOCS map is a representation in a normalized space, so the scale parameter is required to compute IoU. 6PACK does not have this constraint.

"The rotation matrix computed by NOCS needs a transpose; I applied the transpose, but the projected bounding box became even less accurate."
NOCS's poses are indeed transposed. I suggest testing with the bbox-drawing code provided by NOCS itself to see exactly how the bbox is transformed. Also, try drawing on the synthetic data first: the synthetic poses are extremely accurate, so if you use them correctly there will be no deviation.

"NOCS transforms the pose from the camera world to the computer vision frame."
I no longer remember the details clearly. For questions about the NOCS code, you can open an issue on their GitHub and ask the author directly.

As for the 6PACK poses, you should be able to compute the results with the code 6PACK provides. Moreover, 6PACK's data seems to be exactly what was released on the NOCS GitHub.
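To make the transpose question concrete, here is a generic pinhole-projection sketch for sanity-checking a pose (not NOCS's code; all names and intrinsics are hypothetical):

```python
import numpy as np

def project_bbox(corners, R, t, s, K):
    """corners: (8, 3) canonical box; R: (3, 3); t: (3,); s: scalar scale; K: (3, 3)."""
    pts_cam = s * corners @ R.T + t  # if the overlay looks wrong, try R instead of R.T
    uv = pts_cam @ K.T
    return uv[:, :2] / uv[:, 2:3]    # perspective divide

corners = np.array([[x, y, z] for x in (-.5, .5) for y in (-.5, .5) for z in (-.5, .5)])
K = np.array([[577.5, 0, 319.5], [0, 577.5, 239.5], [0, 0, 1.0]])
print(project_bbox(corners, np.eye(3), np.array([0.0, 0.0, 1.0]), 0.2, K))
```

Drawing the projected corners over the image quickly shows whether the GT pose (and its transpose convention) is being used correctly.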

Liuchongpei

comment created time in a month

issue comment densechen/CASS

Are the GT poses of NOCS-REAL275 accurate?

@Liuchongpei Regarding this GT error: I actually wrote to the NOCS authors about it earlier, and also discussed it with the 6PACK authors, as well as with the authors of a recent work from the National University of Singapore (Shape Prior Deformation for Categorical 6D Object Pose and Size Estimation). The fairly unanimous view is that although the GT poses do contain errors, they are still tolerable. As for your point that many deviations exceed 10 degrees: if you check carefully, such objects probably do not account for a very large proportion. If you feel they really have a big impact on your metrics, you can manually adjust them one by one. Also, regarding the dataset itself, you can open an issue on the NOCS GitHub in the hope that the authors will release re-annotated results later (probably not, though; similar questions have all been ignored before).

Liuchongpei

comment created time in a month

started Yochengliu/awesome-point-cloud-analysis

started time in a month

push event densechen/Bomberman

densechen

commit sha c1343b20d47b951800660607d08bd322e76b1512

fix bug

view details

push time in a month

issue comment densechen/CASS

Are the GT poses of NOCS-REAL275 accurate?

Hmm, as I recall, only the test poses were adjusted, not the training poses. Still, this pose inaccuracy is generally considered tolerable, so everyone has just been making do with it.

Liuchongpei

comment created time in a month

issue closed densechen/CASS

Are the GT poses of NOCS-REAL275 accurate?

Hello, when I run the NOCS detection program it generates many GT pose visualization images (named "*bbox_gt.png"). From these images it is obvious that the GT poses of some objects are wrong. Have you run into this situation? [two screenshots attached]

closed time in a month

Liuchongpei

issue comment densechen/CASS

Are the GT poses of NOCS-REAL275 accurate?

That is right. In the NOCS real dataset, the annotated poses contain some errors in both the test and the training splits. This problem is widely acknowledged. If you want a more accurate GT, you can adjust it manually yourself, or directly download the adjusted poses provided with the 6PACK code (https://github.com/j96w/6-PACK).

Liuchongpei

comment created time in a month

public event

push event densechen/densechen.github.io

densechen

commit sha 801306566039c87e072d0e4de9b1590d09177562

fix

view details

push time in a month

push event densechen/densechen.github.io

densechen

commit sha ce2e9be4896e1c9bb3ec15aac0b3d334cdb00a75

add project

view details

densechen

commit sha eb8be9fdda97b54d605ce678c23d9bff6e7371f8

add new project

view details

push time in a month

create branch densechen/DirIP

branch : master

created branch time in a month

created repository densechen/DirIP

DirIP: Differentiable Renderer for Iterative 6D Pose Estimation

created time in a month

started bamsumit/slayerPytorch

started time in a month

push event densechen/densechen.github.io

densechen

commit sha 66008c4ec870597206f144b8b72f3a14e87a3a8c

update

view details

push time in a month

issue comment pytorch/pytorch

Multiprocessing-distributed ERROR

@JiaRu2016 I think the simplest way to solve this is to not modify your code while training.

densechen

comment created time in 2 months

started jeffdelmerico/pointcloud_tutorial

started time in 2 months

started junhyukoh/self-imitation-learning

started time in 2 months

started supersaiyanmode/VideoLabeller

started time in 2 months

started ikostrikov/pytorch-a2c-ppo-acktr-gail

started time in 2 months

started rwightman/pytorch-pommerman-rl

started time in 2 months

started BorealisAI/pommerman-baseline

started time in 2 months

started MultiAgentLearning/playground

started time in 2 months

issue opened pytorch/pytorch

The CUDA version of torch.det is much slower than the CPU version. Why?

I tested the torch.det function with the following script:

```python
import torch
import time

t = torch.randn(4, 784, 4, 3, 3)
tic_start = time.time()
torch.det(t)
print("cpu: ", time.time() - tic_start)

t = t.cuda()
tic_start = time.time()
torch.det(t)
print("cuda: ", time.time() - tic_start)
```

and get the following output:

cpu: 0.01205134391784668
cuda: 1.553579330444336

I am using a TITAN X GPU. Why is the det calculation so much slower on CUDA than on the CPU?
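For reference, a hedged re-timing sketch: CUDA kernels launch asynchronously, and the first CUDA call also pays a one-time initialization cost, so wall-clock timing without synchronization may mostly measure launch and startup overhead rather than the kernel itself:

```python
import time
import torch

t = torch.randn(4, 784, 4, 3, 3).cuda()
torch.det(t)                  # warm-up: absorbs CUDA context/kernel init cost
torch.cuda.synchronize()

tic = time.time()
torch.det(t)
torch.cuda.synchronize()      # wait until the kernel has actually finished
print("cuda (synchronized):", time.time() - tic)
```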

created time in 2 months

started chrischoy/fully-differentiable-deep-ndf-tf

started time in 2 months

issue comment facebookresearch/pytorch3d

The render result becomes unexpected if the models in a batch are not the same!!!

@gkioxari I also ran the code you provided.

```python
R, T = look_at_view_transform(dist=0.3, elev=89, azim=-178)  # for master_chef_can

cameras = OpenGLPerspectiveCameras(device=device, R=R, T=T)

raster_settings = RasterizationSettings(
    image_size=512,
    blur_radius=0.0,
    faces_per_pixel=1,
)

lights = PointLights(device=device, location=[[0.0, 0.0, 0.1]])

renderer = MeshRenderer(
    rasterizer=MeshRasterizer(cameras=cameras,
                              raster_settings=raster_settings),
    shader=TexturedSoftPhongShader(device=device,
                                   cameras=cameras,
                                   lights=lights))
images = renderer(mesh)
filename = "crackerbox.png"
rgb = images[1, ..., :3].detach().squeeze().cpu()
Image.fromarray((rgb.numpy() * 255).astype(np.uint8)).save(filename)
```

Note that I only changed the next-to-last line from `rgb = images[0, ..., :3].detach().squeeze().cpu()` to `rgb = images[1, ..., :3].detach().squeeze().cpu()`, i.e. I only switched the batch index to the last element, and I get a wrong rendered image: [screenshot attached]

I am using a TITAN X on Ubuntu 16.04.

densechen

comment created time in 2 months

started jchibane/if-net

started time in 2 months

issue comment facebookresearch/pytorch3d

The render result becomes unexpected if the models in a batch are not the same!!!

@gkioxari @nikhilaravi Thanks for your time. I have written a script and provided some testing data to reproduce the result. You can refer here to download the testing zip.

densechen

comment created time in 2 months

push event densechen/AReLU

densechen

commit sha 9ee0eadb0b6bfb0dd322572c610841a0a8c5d51b

fixed error

view details

push time in 2 months

issue opened facebookresearch/pytorch3d

The render result becomes unexpected if the models in a batch are not the same!!!

I use batch rendering for different models with different poses. I found that if the models in a batch are different, the rendered color image goes wrong, but the silhouette is OK (not sure about the depth image). The color image is fetched by:

```python
phong_raster_settings = RasterizationSettings(image_size=image_size,
                                              blur_radius=0.0,
                                              faces_per_pixel=1)
phong_renderer = MeshRendererDepth(
    rasterizer=MeshRasterizer(cameras=cameras,
                              raster_settings=phong_raster_settings),
    shader=TexturedSoftPhongShader(device=device)).to(device)
```

and the silhouette is fetched by:

```python
# To blend the 100 faces we set a few parameters which control the opacity
# and the sharpness of edges. Refer to blending.py for more details.
blend_params = BlendParams(sigma=1e-4, gamma=1e-4)

# Define the settings for rasterization and shading. Here we set the output
# image to be of size 640x640. To form the blended image we use 100 faces for
# each pixel. Refer to rasterize_meshes.py for an explanation of this parameter.
silhouette_raster_settings = RasterizationSettings(
    image_size=image_size,  # longer side or scaled longer side
    # blur_radius=np.log(1. / 1e-4 - 1.) * blend_params.sigma,
    blur_radius=0.0,
    # The nearest faces_per_pixel points along the z-axis.
    faces_per_pixel=1)

# Create a silhouette mesh renderer by composing a rasterizer and a shader
silhouete_renderer = MeshRenderer(
    rasterizer=MeshRasterizer(cameras=cameras,
                              raster_settings=silhouette_raster_settings),
    shader=SoftSilhouetteShader(blend_params=blend_params)).to(device)
```

Can anyone tell me whether this is a feature or a bug?

The wrong result looks like this: [screenshot attached] (left is expected, right is rendered by pytorch3d.)

created time in 2 months

issue closed facebookresearch/pytorch3d

How can I fix the black holes in the rendered image?

I am using the following parameters for rendering:

```python
# We will also create a phong renderer. This is simpler and only needs to
# render one face per pixel.
phong_raster_settings = RasterizationSettings(image_size=image_size,
                                              blur_radius=0.0,
                                              faces_per_pixel=1)

# We can add a point light in front of the object.
# lights = PointLights(device=device, location=[[0.0, 0.0, 3.0]])
phong_renderer = MeshRendererDepth(
    rasterizer=MeshRasterizer(cameras=cameras,
                              raster_settings=phong_raster_settings),
    shader=TexturedSoftPhongShader(device=device)).to(device)
```

The mesh used for rendering looks like this (rendered by MeshLab): [screenshot attached]

However, the result rendered by pytorch3d looks like this: [screenshot attached] There are many black holes in the image. What can I do to get a correct rendered image?

closed time in 2 months

densechen

issue comment facebookresearch/pytorch3d

How can I fix the black holes in the rendered image?

It seems to be a texture map error.

densechen

comment created time in 2 months

issue opened facebookresearch/pytorch3d

How can

If you do not know the root cause of the problem / bug, and wish someone to help you, please post according to this template:

🐛 Bugs / Unexpected behaviors

<!-- A clear and concise description of the issue -->

NOTE: Please look at the existing list of Issues tagged with the label 'bug'. Only open a new issue if this bug has not already been reported. If an issue already exists, please comment there instead.

Instructions To Reproduce the Issue:

Please include the following (depending on what the issue is):

  1. Any changes you made (git diff) or code you wrote
<put diff or code here>
  2. The exact command(s) you ran:
  3. What you observed (including the full logs):
<put logs here>

Please also simplify the steps as much as possible so they do not require additional resources to run, such as a private dataset.

created time in 2 months

issue comment pytorch/pytorch

Multiprocessing-distributed ERROR

Almost one year later, I now know why. If we are using multiple GPUs to train the model, it seems that we have to start multiple processes, one per GPU, and each process has to re-run the script. If we change the code during this time, the other processes may load the modified code, which causes this problem.
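A minimal sketch of the mechanism, assuming the common torch.multiprocessing "spawn" setup (illustrative, not the original training script):

```python
import torch.multiprocessing as mp

def main_worker(rank, world_size):
    # Each spawned child re-imports this file when it starts. If the file on
    # disk has been edited since the parent launched, children started later
    # run the *modified* code while earlier processes run the old version.
    print(f"worker {rank}/{world_size} loaded this version of the script")

if __name__ == "__main__":
    world_size = 2  # hypothetical GPU count
    mp.spawn(main_worker, args=(world_size,), nprocs=world_size)
```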

densechen

comment created time in 2 months

started shaohua0116/MMAML-Classification

started time in 2 months

issue comment densechen/CASS

Questions about the NOCS experimental results

@Liuchongpei Yes. When we evaluated the algorithms, we also ran into many problems. For example, our framework does not perform segmentation, so we could not directly evaluate the detection results and the pose estimation results together the way NOCS does. In addition, the evaluation code provided by NOCS itself is rather complex and hard to understand; to make sure our comparison metric was consistent with NOCS's, we had to re-evaluate the NOCS results with our own code. As for whether our evaluation code is exactly the same as 6PACK's, I have not compared them yet. What is certain, however, is that the GT 6PACK uses when computing results differs slightly from ours: our GT has not been corrected.

Liuchongpei

comment created time in 2 months

issue closed densechen/CASS

Questions about the NOCS experimental results

Hello, may I ask why, in your paper, NOCS under the 32-bins setting scores somewhat higher on most metrics than the results reported in the original paper?

closed time in 2 months

Liuchongpei

issue comment densechen/CASS

Questions about the NOCS experimental results

@Liuchongpei I cannot give you a sufficiently precise answer to this question, because I have not studied 6PACK's evaluation code carefully either, so I do not want to jump to conclusions. What we can guarantee is that when we evaluated our algorithm's results and NOCS's results in the paper, we used the same evaluation code. We have released our test results; if you are unsure, you can re-evaluate our results directly with 6PACK's code. Or, if you want to evaluate your own results, we have also published the evaluation code we used, so you can test your results with it. In addition, the NOCS evaluation code actually contains some configuration parameters, such as use_match; please pay attention to these settings when evaluating.

Liuchongpei

comment created time in 2 months

issue comment fxia22/pointnet.pytorch

Why is it necessary to add the identity matrix in STN network

In my experience, in most cases we only need a slight delta transform of the original features, so adding the identity makes the output x a delta transform matrix relative to the original pose, which means the values of x are all near zero. If you didn't add it, the values of x would have to look like [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]: some values near one and the others near zero, which is not efficient for the network to learn.
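A minimal sketch of the idea (not the exact fxia22/pointnet.pytorch code): the regression head predicts a small residual, and adding the flattened identity turns that residual into a full 3x3 transform near the identity.

```python
import torch
import torch.nn as nn

class TinySTNHead(nn.Module):
    def __init__(self, in_features=256):
        super().__init__()
        self.fc = nn.Linear(in_features, 9)   # predicts the 3x3 delta, flattened

    def forward(self, feat):                  # feat: (B, in_features)
        delta = self.fc(feat)                 # values near zero at initialization
        iden = torch.eye(3, device=feat.device).flatten()
        return (delta + iden).view(-1, 3, 3)  # a transform near the identity

head = TinySTNHead()
print(head(torch.randn(2, 256)).shape)  # torch.Size([2, 3, 3])
```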

ShichaoJin

comment created time in 2 months

started czq142857/IM-NET-pytorch

started time in 2 months

started google/ldif

started time in 2 months

started Tonsty/polygonizer

started time in 2 months

started jcranch/octrees

started time in 2 months

started inducer/boxtree

started time in 2 months

started microsoft/O-CNN

started time in 2 months

push event densechen/AReLU

densechen

commit sha b5975013d807867fc1d33390f15889481b636132

reconstruct code

view details

densechen

commit sha aafdaba3bb3bcd7a93ac3373ba0b24cacc8df713

reconstruct code

view details

push time in 2 months

issue opened debanjanmahata/icdm2017

Any update?

created time in 2 months

started PrimozLavric/MPUImplicits

started time in 2 months

started Keson96/ResNet_LibTorch

started time in 2 months

started Omegastick/pytorch-cpp-rl

started time in 2 months

started daviddoria/MPUReconstruction

started time in 2 months

started ChunyuanLI/spectral_descriptors

started time in 2 months

started huanglianghua/open-vot

started time in 2 months

issue closed facebookresearch/pytorch3d

How can I do a faster rendering?

Hi, when I use pytorch3d to render a 640x480 RGBD image, it takes almost 1 second on a TITAN X GPU. How can I speed up this process?

closed time in 2 months

densechen

issue comment facebookresearch/pytorch3d

How can I do a faster rendering?

@jcjohnson Thanks a lot. The advice is useful.

densechen

comment created time in 2 months

issue comment densechen/CASS

Will the training code be released?

Hello, there is currently no plan to release the training code.

Liuchongpei

comment created time in 2 months

issue comment densechen/CASS

Questions about the NOCS experimental results

Hello, here is the situation: since the comparison protocol in the original NOCS paper is rather complex, we emailed the authors, and they suggested we follow 6PACK's protocol. We therefore recomputed the metrics (slightly different from 6PACK's, mainly because the 6PACK authors found some labels of the NOCS real dataset inaccurate and recomputed them, whereas we used the original labels), which is why our numbers differ from NOCS's. If you want to follow up on NOCS's work, I suggest referring to 6PACK, which provides cleaned-up, better label data.

Liuchongpei

comment created time in 2 months

push event densechen/densechen.github.io

densechen

commit sha 7a3946c5de2615779a280c1900376e271c633edb

add new paper

view details

push time in 2 months

issue comment facebookresearch/pytorch3d

How can I do a faster rendering?

@nikhilaravi The model has about 2000 vertices and 2000 faces. We use the following settings to obtain a mask and an RGBD image:

```python
def define_camera(image_size=640, image_height=480, image_width=640,
                  fx=500, fy=500, cx=320, cy=240, device="cuda:0"):
    # define camera
    cameras = OpenGLRealPerspectiveCameras(
        focal_length=((fx, fy), ),  # Nx2
        principal_point=((cx, cy), ),  # Nx2
        # x0=cx - 6 - image_width / 2,
        x0=0,
        y0=0,
        w=image_size,
        h=image_size,
        znear=0.01,
        zfar=10.0,
        device=device)

    # To blend the 100 faces we set a few parameters which control the opacity
    # and the sharpness of edges. Refer to blending.py for more details.
    blend_params = BlendParams(sigma=1e-4, gamma=1e-4)

    # Define the settings for rasterization and shading. Here we set the output
    # image to be of size 640x640. To form the blended image we use 100 faces
    # for each pixel. Refer to rasterize_meshes.py for an explanation of this
    # parameter.
    silhouette_raster_settings = RasterizationSettings(
        image_size=image_size,  # longer side or scaled longer side
        # blur_radius=np.log(1. / 1e-4 - 1.) * blend_params.sigma,
        blur_radius=0.0,
        # The nearest faces_per_pixel points along the z-axis.
        faces_per_pixel=100,
        bin_size=0)

    # Create a silhouette mesh renderer by composing a rasterizer and a shader
    silhouete_renderer = MeshRenderer(
        rasterizer=MeshRasterizer(cameras=cameras,
                                  raster_settings=silhouette_raster_settings),
        shader=SoftSilhouetteShader(blend_params=blend_params)).to(device)

    # We will also create a phong renderer. This is simpler and only needs to
    # render one face per pixel.
    phong_raster_settings = RasterizationSettings(image_size=image_size,
                                                  blur_radius=0.0,
                                                  faces_per_pixel=1,
                                                  bin_size=0)

    # We can add a point light in front of the object.
    lights = PointLights(device=device, location=((2.0, 2.0, -2.0), ))
    phong_renderer = MeshRendererDepth(
        rasterizer=MeshRasterizer(cameras=cameras,
                                  raster_settings=phong_raster_settings),
        shader=TexturedSoftPhongShader(device=device,
                                       lights=lights)).to(device)

    return silhouete_renderer, phong_renderer
```
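(One possibly relevant detail, offered as an observation rather than as the advice given in the thread: in PyTorch3D, `bin_size=0` forces the naive per-pixel rasterizer, whereas leaving `bin_size` unset lets the heuristic coarse-to-fine rasterizer run, which is usually much faster at this resolution.)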

densechen

comment created time in 3 months

issue opened facebookresearch/pytorch3d

How can I do a faster rendering?

Hi, when I use pytorch3d to render a 640x480 RGBD image, it takes almost 1 second on a TITAN X GPU. How can I speed up this process?

created time in 3 months

push event densechen/AReLU

densechen

commit sha 785ff7e6dd2dc2a233aca9bd0f508b5dc27aa77e

add link to tensorflow version

view details

push time in 3 months

started densechen/AReLU

started time in 3 months

push event densechen/densechen.github.io

densechen

commit sha 2c78f11f12f80a8bd864ac780ebb5b836cf4fd45

replace pdf with paper

view details

push time in 3 months

push event densechen/densechen.github.io

densechen

commit sha b0e61b0b74eb7553665512efdbd04a7d80935c84

clean

view details

push time in 3 months

push event densechen/densechen.github.io

densechen

commit sha 4c8911c0007b5713749190bf883bf5abf063021a

rewrite

view details

push time in 3 months

push event densechen/densechen.github.io

densechen

commit sha d6c9fe9862cff388bcd9b96c83fadd4e75fd00b0

add new papers

view details

push time in 3 months

started ClementPinard/FlowNetPytorch

started time in 3 months

started NVIDIA/flownet2-pytorch

started time in 3 months

started liyi14/mx-DeepIM

started time in 3 months

started nkalavak/awesome-object-pose

started time in 3 months
