Siyuan Qiao (joe-siyuan-qiao)
Johns Hopkins University, Baltimore, MD
http://www.cs.jhu.edu/~syqiao/
Ph.D. student in Computer Science

joe-siyuan-qiao/DetectoRS 1047

DetectoRS: Detecting Objects with Recursive Feature Pyramid and Switchable Atrous Convolution

joe-siyuan-qiao/WeightStandardization 500

Standardizing weights to accelerate micro-batch training

google-research/deeplab2 485

DeepLab2 is a TensorFlow library for deep labeling, aiming to provide a unified and state-of-the-art TensorFlow codebase for dense pixel labeling tasks.

joe-siyuan-qiao/FewShot-CVPR 103

Few-Shot Image Recognition by Predicting Parameters from Activations

joe-siyuan-qiao/NeuralRejuvenation-CVPR19 44

Neural Rejuvenation: Improving Deep Network Training by Enhancing Computational Resource Utilization at CVPR'19

joe-siyuan-qiao/pytorch-classification 20

Classification with PyTorch.

Chenglin-Yang/LESA 16

Locally Enhanced Self-Attention: Rethinking Self-Attention as Local and Context Terms

joe-siyuan-qiao/SupermarketPlugin-AutoShuffleWindow 12

Unreal Engine Plugin to Automatically Shuffle Products on Shelves

issue comment google-research/deeplab2

Cannot do evaluation on vip-deeplab

@HarborYuan The output includes temporally consistent predictions for two frames. To get the predictions for the entire sequence, please propagate the IDs using predictions on the overlapping frames.
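A minimal NumPy sketch of that ID propagation, assuming the stitching matches instance masks between the two predictions of the overlapping frame by IoU; the threshold and matching scheme below are illustrative, not the repo's implementation.

import numpy as np

def propagate_ids(prev_overlap, cur_overlap, iou_threshold=0.5):
    """Match the instance IDs of two predictions of the same overlapping frame.

    prev_overlap / cur_overlap: instance-ID maps (H x W int arrays) of the
    overlapping frame, predicted by the previous and the current clip.
    Returns a dict {current_id: previous_id} for masks whose IoU exceeds
    the (illustrative) threshold; matched IDs can then be rewritten in the
    current clip so they stay consistent across the whole sequence.
    """
    remap = {}
    for cur_id in np.unique(cur_overlap):
        cur_mask = cur_overlap == cur_id
        best_iou, best_prev = 0.0, None
        for prev_id in np.unique(prev_overlap):
            prev_mask = prev_overlap == prev_id
            union = np.logical_or(cur_mask, prev_mask).sum()
            iou = np.logical_and(cur_mask, prev_mask).sum() / union if union else 0.0
            if iou > best_iou:
                best_iou, best_prev = iou, prev_id
        if best_prev is not None and best_iou >= iou_threshold:
            remap[int(cur_id)] = int(best_prev)
    return remap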

HarborYuan

comment created 21 days ago

started zacjiang/GMA

started 21 days ago

started HRNet/HRFormer

started 22 days ago

issue comment google-research/deeplab2

Cannot do evaluation on vip-deeplab

@lxtGH Please save the predictions and use the offline evaluation code to compute DSTQ.
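As a rough illustration of that save-then-evaluate workflow (the directory layout and file names below are assumptions, not the format the offline DSTQ evaluation code actually expects):

import os
import numpy as np

def save_frame_predictions(out_dir, sequence, frame_idx, panoptic, depth):
    # Dump one frame's panoptic-ID map and depth map so an offline script
    # can compute DSTQ later. Layout and naming here are purely illustrative.
    seq_dir = os.path.join(out_dir, sequence)
    os.makedirs(seq_dir, exist_ok=True)
    np.save(os.path.join(seq_dir, "%06d_panoptic.npy" % frame_idx),
            np.asarray(panoptic, dtype=np.int32))
    np.save(os.path.join(seq_dir, "%06d_depth.npy" % frame_idx),
            np.asarray(depth, dtype=np.float32))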

HarborYuan

comment created 24 days ago

issue comment google-research/deeplab2

Cannot do evaluation on vip-deeplab

@lxtGH I think the base learning rate is also different. You will also need to change that part.
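For reference, one common heuristic when the batch size cannot match the paper's setting is to scale the base learning rate linearly with the batch size; this is only a rule of thumb, not necessarily what the paper or the repo uses.

def scaled_base_lr(paper_lr, paper_batch_size, my_batch_size):
    # Linear-scaling rule of thumb: keep lr / batch_size roughly constant.
    return paper_lr * my_batch_size / paper_batch_size

# Example: a base LR of 1e-4 at batch size 32 becomes 2.5e-5 at batch size 8.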

HarborYuan

comment created 25 days ago

issue comment google-research/deeplab2

Cannot do evaluation on vip-deeplab

@lxtGH @HarborYuan

Good to see the evaluation works on your end.

As mentioned earlier in this thread, please refer to the config for the learning rate and batch size originally used in the paper. Hope this helps.

Thanks!

HarborYuan

comment created a month ago

issue comment google-research/deeplab2

Cannot do evaluation on vip-deeplab

@HarborYuan @lxtGH

Here's another tip: based on this solution, the error might come from tf.where. That solution proposes doing the transpose manually. Maybe you can try something similar for the tf.where calls in post-processor/panoptic_deeplab.py.

Hope this helps.
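A toy TensorFlow sketch of the manual-transpose idea (not the repo's code): the same masking expressed once with direct broadcasting and once with explicit transposes around tf.where, so a failing call can be rewritten in the second form.

import tensorflow as tf

# Toy example, not the repo's code: the same masking written two ways.
scores = tf.random.uniform([3, 4, 5])      # e.g. channels-first [C, H, W]
mask = tf.random.uniform([4, 5]) > 0.5     # per-pixel condition [H, W]

# Variant 1: let tf.where broadcast the condition over the channel axis.
direct = tf.where(mask[tf.newaxis, ...], scores, tf.zeros_like(scores))

# Variant 2: transpose to channels-last, apply tf.where, transpose back.
scores_hwc = tf.transpose(scores, [1, 2, 0])                  # [H, W, C]
masked_hwc = tf.where(mask[..., tf.newaxis], scores_hwc,
                      tf.zeros_like(scores_hwc))
manual = tf.transpose(masked_hwc, [2, 0, 1])                  # [C, H, W]

tf.debugging.assert_near(direct, manual)  # both variants agree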

HarborYuan

comment created a month ago

issue comment google-research/deeplab2

Cannot do evaluation on vip-deeplab

Hello @HarborYuan and @lxtGH

We tried again to reproduce the error, but the evaluation ran fine on our end. I think we need to narrow down the error step by step. Could you please first try TF 2.5 + CPU evaluation on your end? If that works, move to TF 2.6 and/or GPU to see which part is failing. Hope this helps you locate the error.

Thanks!
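One way to force the suggested CPU-only run without touching the evaluation script is to hide the GPUs from TensorFlow before any ops are built (setting CUDA_VISIBLE_DEVICES to an empty string before launching works as well).

import tensorflow as tf

# Hide all GPUs from TensorFlow before the evaluation builds any ops,
# so the run is CPU-only.
tf.config.set_visible_devices([], "GPU")

print("TF version:", tf.__version__)                          # expect 2.5.x
print("Visible GPUs:", tf.config.get_visible_devices("GPU"))  # expect []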

HarborYuan

comment created a month ago

issue comment google-research/deeplab2

Cannot do evaluation on vip-deeplab

Hello @HarborYuan

Yes, the batch size is the global batch size, not the size per device.

We didn't see similar errors on our end. According to the output, it is likely that something is wrong with the post-processor that ViP-DeepLab uses. Could you please try setting merge_semantic_and_instance_with_tf_op back to false (the default setting) and see whether the error remains?

Thanks.
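A hypothetical helper for flipping that flag, assuming it appears literally as "merge_semantic_and_instance_with_tf_op: true" in a textproto experiment config; editing the file by hand works just as well.

from pathlib import Path

def disable_tf_op_merge(config_path):
    # Hypothetical helper: flip the flag back to false in a textproto
    # config; assumes the line appears literally with the value "true".
    text = Path(config_path).read_text()
    Path(config_path).write_text(text.replace(
        "merge_semantic_and_instance_with_tf_op: true",
        "merge_semantic_and_instance_with_tf_op: false"))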

HarborYuan

comment created a month ago

issue comment google-research/deeplab2

Cannot do evaluation on vip-deeplab

Hello @HarborYuan

Thanks for reaching out. Could you share more details about the error you observed? For example, the TF version, the script you used for evaluation, the config file, and whether any changes were made to the repo.

Please refer to the config for the learning rate and batch size originally used in the paper. The number of GPUs depends on the devices you have access to, and it will also limit the batch size for training. As long as the model fits in memory, the number of GPUs shouldn't affect the results.

Hope this helps.

HarborYuan

comment created a month ago

started cleardusk/3DDFA_V2

started a month ago

started vitoralbiero/img2pose

started a month ago

started deepinsight/insightface

started a month ago

started 1adrianb/face-alignment

started a month ago

started BenWhetton/keras-surgeon

started 2 months ago

started YadiraF/face3d

started 2 months ago

started cleardusk/3DDFA

started 2 months ago

started YadiraF/PRNet

started 2 months ago

started YadiraF/DECA

started 2 months ago

issue comment joe-siyuan-qiao/ViP-DeepLab

About the SemanticKITTI-DVPS label names.

Yes, that's correct.

HarborYuan

comment created 3 months ago

issue comment joe-siyuan-qiao/ViP-DeepLab

About the SemanticKITTI-DVPS label names.

Hi @HarborYuan

Thanks for the question. The SemanticKITTI label names can be found here:

https://github.com/PRBonn/semantic-kitti-api/blob/master/config/semantic-kitti-all.yaml#L144

The difference is that in SemKITTI-DVPS we make 255 the unlabeled ID and shift the remaining IDs down by 1, i.e., 0: car, 1: bicycle, 2: motorcycle, and so on.

Hope it helps.
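A small NumPy sketch of that mapping, assuming the input already uses the learning-mapped SemanticKITTI IDs (0: unlabeled, 1: car, 2: bicycle, ...):

import numpy as np

def semkitti_to_dvps(labels):
    # Shift the learning-mapped SemanticKITTI IDs down by one
    # (1: car -> 0, 2: bicycle -> 1, ...) and map 0 (unlabeled) to 255.
    dvps = np.asarray(labels, dtype=np.int32) - 1
    dvps[np.asarray(labels) == 0] = 255
    return dvps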

HarborYuan

comment created 3 months ago