Amit (amitsrivastava78), Huawei, Bangalore

amitsrivastava78/MobileFaceNet_Tensorflow

Tensorflow implementation for [MobileFaceNet]

amitsrivastava78/tensorflow

An Open Source Machine Learning Framework for Everyone

issue opened swordcheng/FCSR-GAN

Details about running and training Models

Hi, I went through your paper and it is quite impressive. Could you please provide details about training and running the model (a pre-trained model if possible), so that I can evaluate the paper independently?

Regards Amit

created time in 3 months

issue opened Star-Clouds/CenterFace

Need Training Code and DataSet

Hi, your paper looks exciting, but no training code or dataset is available. Could you please provide them so that we can evaluate the paper independently?

Regards Amit

created time in 3 months

issue opened becauseofAI/MobileFace

Training framework & DataSet

@becauseofAI, I went through your code and the results of the model are very impressive. We would like to ask whether you also have the training framework and the dataset available, as we would like to integrate them, train the model, and then check its accuracy and inference time.

Regards Amit

created time in 3 months

issue comment wuhuikai/FastFCN

Regarding the computation time and memory footprint of JPU based models

[attached image: the computation time and memory measurements we gathered]

The data we gathered (shown above) does not match the results claimed in the paper.

Regards Amit

amitsrivastava78

comment created time in 3 months

issue comment wuhuikai/FastFCN

Regarding the computation time and memory footprint of JPU based models

@wuhuikai, could you please explain your comment in more detail?

The output stride of models w/o JPU is 8, while the output stride of models w/ JPU is 32.

How does this reduce the model size if the JPU is added on top of the existing model?
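Here is the rough arithmetic I tried for my own understanding; the 512x512 input size is just an assumed example, not a number taken from the paper:

```python
# Back-of-the-envelope comparison of backbone feature-map sizes
# (assumed 512x512 input; the resolution is an example, not from the paper).
h = w = 512
for stride in (8, 32):
    fh, fw = h // stride, w // stride
    print(f"output stride {stride:2d}: {fh}x{fw} feature map, "
          f"{fh * fw} spatial positions")
```

If I read your comment correctly, every convolution in the later backbone stages touches 16x fewer spatial positions at output stride 32 than at 8, which would be why the extra JPU parameters do not translate into extra runtime. Is that the right way to think about it?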

Regards Amit

amitsrivastava78

comment created time in 3 months

issue opened wuhuikai/FastFCN

Regarding the computation time and memory footprint of JPU based models

Hi @wuhuikai, first of all I would like to congratulate you on the excellent paper. I have gone through the paper and the source code, and I have a few queries:

  1. As per the paper and the results you have published on GitHub, there seems to be an improvement in both mIoU and FPS compared to the model without the JPU. Can you please explain in detail how this happens? As is evident from the JPU diagram, the JPU unit is added on top of the existing model, which increases the number of parameters and hence the memory and computation cost, so I would like to understand how exactly the FPS goes up.

  2. On page 2 of the paper, listing the advantages of the JPU, you mention the following:

the computation time and memory footprint of the whole segmentation framework can be reduced by a factor of more than 3 and meanwhile achieves better performance

Since the extra JPU unit, which itself contains dilated convolutions, adds parameters, the parameter count goes up, so how should we understand the memory-footprint part of the statement above? (See the rough per-layer estimate below.)
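For my own understanding I tried the following rough estimate of a single 3x3 convolution at the two output strides; the 512x512 input resolution and the 2048-channel width are assumptions on my side, not values taken from your code:

```python
# Rough per-layer cost of one 3x3 convolution at output stride 8 vs 32
# (assumed 512x512 input and 2048 channels; not measured from FastFCN).
def conv3x3_cost(h, w, c_in, c_out, k=3):
    macs = h * w * c_in * c_out * k * k   # multiply-accumulates
    activations = h * w * c_out           # output activation elements
    return macs, activations

for stride in (8, 32):
    h = w = 512 // stride
    macs, acts = conv3x3_cost(h, w, 2048, 2048)
    print(f"output stride {stride:2d}: {macs / 1e9:7.1f} GMACs, "
          f"{acts / 1e6:5.1f} M activation elements")
```

My (possibly wrong) reading is that the parameter count does go up slightly, but the per-layer compute and activation memory shrink by roughly 16x for the layers that now run at stride 32, and that is where the factor-of-3 saving for the whole framework would come from. Please correct me if I am misreading the claim.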


Regards Amit

created time in 3 months

issue opened cddlyf/GCANet

Pretrained Model output on real images not giving Good Results

Hi @cddlyf, we are using your pretrained model on the RTTS (Real-world Task-Driven Testing Set) split of https://sites.google.com/view/reside-dehaze-datasets/reside-v0, but the PSNR is only about 11 dB, whereas on the SOTS (Synthetic Objective Testing Set) split it is above 30 dB.
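For reference, this is how we compute PSNR for the numbers above (a minimal sketch assuming 8-bit images; please tell me if you compute it differently):

```python
# Minimal PSNR computation used for the numbers above (assumes 8-bit images).
import numpy as np

def psnr(reference, restored, max_val=255.0):
    ref = reference.astype(np.float64)
    out = restored.astype(np.float64)
    mse = np.mean((ref - out) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```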

Could you please let me know how I can improve the PSNR of this model on real-world images?

Regards Amit


created time in 4 months

issue comment cddlyf/GCANet

About Training Process

@cddlyf, thanks for sending the training code. I will train the model on the new dataset and post the results here.

Regards Amit

yxxxxxxxx

comment created time in 4 months

issue comment cddlyf/GCANet

About Training Process

Hi, I have read your paper and it is excellent work. Could you please send me the training code? My email is amit.rinku78@gmail.com.

yxxxxxxxx

comment created time in 4 months

issue comment submission2019/cnn-quantization

Advantage of the 4-bit Quantization

@submission2019, thanks for the reply regarding the MobileNet part. Yes, we are facing the same low-accuracy issue with MobileNetV2: the Top-1 accuracy with 4-bit quantization of mobilenet_v2 comes to ~49% for us. Could you please describe the measures you took and the exact steps for reaching ~70%?
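Just to make sure we are comparing the same thing, below is a minimal sketch of the symmetric uniform 4-bit fake-quantization we use as a sanity-check baseline on our side; it is an assumption of mine and not necessarily the clipping scheme used in this repository:

```python
# Symmetric uniform 4-bit fake-quantization used as a sanity-check baseline
# on our side (not the scheme from this repository).
import numpy as np

def fake_quantize(w, num_bits=4):
    qmax = 2 ** (num_bits - 1) - 1                      # 7 for signed 4-bit
    max_abs = float(np.max(np.abs(w)))
    scale = max_abs / qmax if max_abs > 0 else 1.0      # per-tensor scale
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)   # integer levels
    return q * scale                                    # dequantized weights

w = np.random.randn(64, 3, 3, 3).astype(np.float32)
print("max |w - fake_quantize(w)|:", np.max(np.abs(w - fake_quantize(w))))
```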

Regards Amit

amitsrivastava78

comment created time in 5 months
