CoinCheung/pytorch-loss 541

label-smooth, amsoftmax, focal-loss, triplet-loss. Maybe useful

CoinCheung/BiSeNet 465

Add bisenetv2. My implementation of BiSeNet

CoinCheung/DeepLab-v3-plus-cityscapes 108

mIOU=80.02 on cityscapes. My implementation of deeplabv3+ (also known as 'Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation'), trained on the cityscapes dataset.

CoinCheung/triplet-reid-pytorch 69

My implementation of the paper [In Defense of the Triplet Loss for Person Re-Identification]

CoinCheung/SphereReID 35

My implementation of paper: SphereReID: Deep Hypersphere Manifold Embedding for Person Re-Identification

CoinCheung/Deeplab-Large-FOV 25

My Implementation of the deeplab_v1 (known as deeplab large fov)

CoinCheung/SFT-ReID 19

My implementation of Spectral-Feature-Transformation-ReID, link to the paper: https://arxiv.org/abs/1811.11405

CoinCheung/fixmatch-pytorch 17

90%+ with 40 labels. Please see the readme for details.

CoinCheung/porn_identification 3

Image processing implemented with a combination of C and Python 3, based on socket communication

CoinCheung/Segmentatron 2

My implementation of some segmentation algorithms

push event CoinCheung/pytorch-loss

coincheung

commit sha cb95d2216c553e60a9ee6c65f889a0117c3dbbc7

tiny modify

push time in 2 days

push event CoinCheung/classification-baseline

coincheung

commit sha 687ecaa1606608a7ae00722209cd27ed650eae81

refactor effnet

push time in 6 days

issue opened lucidrains/lambda-networks

question about hybrid lambdaResnet

Hi,

In the paper, there is this paragraph:

When working with hybrid LambdaNetworks, we use a single lambda layer in c4 for LambdaResNet50, 3 lambda layers for LambdaResNet101, 6 lambda layers for LambdaResNet-152/200/270/350 and 8 lambda layers for LambdaResNet-420.

I have several questions about constructing the hybrid lambdaResnet:

  1. Do we only need to replace the 3x3 conv with a lambda layer in the C4 stage, rather than in both C4 and C5 (as in the ablation study)?
  2. When there is more than one lambda layer, such as in the LambdaResNet101 case, are we replacing the 3x3 conv with 3 lambda layers? And in the resnet50 case, do we replace the 3x3 conv with 1 lambda layer?

created time in 6 days

push event CoinCheung/pytorch-loss

coincheung

commit sha 474fe4aa2c0809b9522b452f05a27bf3a12e4073

refine focal loss

push time in 9 days

issue opened HIT-SCIR/ltp

Does ltp support outputting a weight for each token along with word segmentation?

Hi,

I would like to get an importance score for each token after a sentence has been segmented. Does ltp provide an api for this?

created time in 9 days

push event CoinCheung/classification-baseline

coincheung

commit sha 798b27d4e53e6050fd5d6342d882519e25c90faa

before refactor

coincheung

commit sha e2ccd0660424b65943663a39217fcd3e281b1195

refactor

push time in 9 days

issue opened hankcs/HanLP

POS tagging results do not match the tokens

I am running the example from the readme:

import hanlp

text = "我的希望是希望和平"


tokenizer = hanlp.load('LARGE_ALBERT_BASE')
tagger = hanlp.load(hanlp.pretrained.pos.CTB9_POS_ALBERT_BASE)
ner_recog = hanlp.load(hanlp.pretrained.ner.MSRA_NER_BERT_BASE_ZH)

tokens = tokenizer(text)
tag = tagger(tokens)
ner = ner_recog(list(text))

tok_tag = [(a, b) for a, b in zip(tokens, tag)]

print(tok_tag)

The output is:

[('我', 'PN'), ('的', 'DEG'), ('希望', 'NN'), ('是', 'NN'), ('希望', 'VC'), ('和平', 'NN')]

It seems the input is a list of tokens, but the output looks as if tagging and parsing were done character by character?

created time in 9 days

issue opened thunlp/THULAC-Python

Does this have NER functionality?

If so, what does the api look like?

created time in 11 days

push event CoinCheung/pytorch-loss

coincheung

commit sha 6bcfb76434c82b5ca65b1a293c022becc2ff49eb

improve lovasz to make it faster

push time in 13 days

push event CoinCheung/pytorch-loss

coincheung

commit sha b0882eb9b38b1f96a1aa76742e7100142988bbf3

add lovasz softmax loss

push time in 13 days

issue closed CoinCheung/BiSeNet

TypeError: pad() missing 1 required positional argument: 'mode'

/home/nvidia/anaconda3/envs/pytorch-2/bin/python /home/nvidia/yuyi/BiSeNet-master/tools/train.py
/home/nvidia/anaconda3/envs/pytorch-2/lib/python3.6/site-packages/torch/cuda/__init__.py:125: UserWarning: GeForce RTX 2080 with CUDA capability sm_75 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 compute_37. If you want to use the GeForce RTX 2080 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
  warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
Traceback (most recent call last):
  File "/home/nvidia/yuyi/BiSeNet-master/tools/train.py", line 225, in <module>
    main()
  File "/home/nvidia/yuyi/BiSeNet-master/tools/train.py", line 221, in main
    train()
  File "/home/nvidia/yuyi/BiSeNet-master/tools/train.py", line 164, in train
    for it, (im, lb) in enumerate(dl):
  File "/home/nvidia/anaconda3/envs/pytorch-2/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
    data = self._next_data()
  File "/home/nvidia/anaconda3/envs/pytorch-2/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 989, in _next_data
    return self._process_data(data)
  File "/home/nvidia/anaconda3/envs/pytorch-2/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1014, in _process_data
    data.reraise()
  File "/home/nvidia/anaconda3/envs/pytorch-2/lib/python3.6/site-packages/torch/_utils.py", line 395, in reraise
    raise self.exc_type(msg)
TypeError: Caught TypeError in DataLoader worker process 0.

Original Traceback (most recent call last):
  File "/home/nvidia/anaconda3/envs/pytorch-2/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/nvidia/anaconda3/envs/pytorch-2/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/nvidia/anaconda3/envs/pytorch-2/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/nvidia/yuyi/BiSeNet-master/lib/base_dataset.py", line 48, in __getitem__
    im_lb = self.trans_func(im_lb)
  File "/home/nvidia/yuyi/BiSeNet-master/lib/base_dataset.py", line 71, in __call__
    im_lb = self.trans_func(im_lb)
  File "/home/nvidia/yuyi/BiSeNet-master/lib/transform_cv2.py", line 150, in __call__
    im_lb = comp(im_lb)
  File "/home/nvidia/yuyi/BiSeNet-master/lib/transform_cv2.py", line 42, in __call__
    im = np.pad(im, ((pad_h, pad_h), (pad_w, pad_w), (0, 0)))
TypeError: pad() missing 1 required positional argument: 'mode'

help.... thanks a lot

closed time in 13 days

wind19970924

issue comment CoinCheung/BiSeNet

TypeError: pad() missing 1 required positional argument: 'mode'

Closing this since you might have figured it out.

wind19970924

comment created time in 13 days

issue closed CoinCheung/BiSeNet

License

Would you add a license please? Thanks

closed time in 13 days

aaronrmm

issue comment CoinCheung/BiSeNet

License

Hi, there is a license, but I had moved it into a sub-directory. I have dragged it back to the root folder now.

aaronrmm

comment created time in 13 days

push event CoinCheung/BiSeNet

root

commit sha 90de8f9bdb6281f700087a5384996b18a4c6dae7

move license out

push time in 13 days

push event CoinCheung/classification-baseline

coincheung

commit sha c2b59f73d9b48c8181ad46179121d6321f380e31

more resnet models

push time in 14 days

issue comment CoinCheung/BiSeNet

TypeError: pad() missing 1 required positional argument: 'mode'

What is your numpy version? I am using 1.17.
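
A minimal sketch of the likely cause, assuming an older numpy: before numpy 1.17 the `mode` argument of `np.pad` had no default, which produces exactly the "pad() missing 1 required positional argument: 'mode'" error in this issue. Passing `mode` explicitly works on every version:

```python
import numpy as np

# Hedged sketch: before numpy 1.17, np.pad had no default for `mode`.
# Passing mode='constant' explicitly is compatible with all versions.
im = np.zeros((4, 4, 3), dtype=np.float32)
pad_h, pad_w = 2, 1
padded = np.pad(im, ((pad_h, pad_h), (pad_w, pad_w), (0, 0)), mode='constant')
print(padded.shape)  # (8, 6, 3)
```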

wind19970924

comment created time in 16 days

public event

issue closed CoinCheung/BiSeNet

problem about training log

Hi, thank you for the contribution, I have learned a lot from your code. Here is my question: during training, the log outputs two results every 100 iterations, such as below. Do you know why?

iter: 100/151000, lr: 0.006295, eta: 1 day, 3:01:43, time: 65.13, loss: 5.2069, loss_pre: 0.6938, loss_aux0: 1.2169, loss_aux1: 0.9999, loss_aux2: 1.1010, loss_aux3: 1.1953
iter: 100/151000, lr: 0.006295, eta: 1 day, 3:01:46, time: 65.13, loss: 4.3021, loss_pre: 0.5993, loss_aux0: 1.0043, loss_aux1: 0.7578, loss_aux2: 0.9266, loss_aux3: 1.0140
iter: 200/151000, lr: 0.007924, eta: 1 day, 2:02:22, time: 59.82, loss: 5.5728, loss_pre: 1.1768, loss_aux0: 1.2945, loss_aux1: 1.1485, loss_aux2: 1.0133, loss_aux3: 0.9397
iter: 200/151000, lr: 0.007924, eta: 1 day, 2:02:24, time: 59.82, loss: 6.6900, loss_pre: 0.8763, loss_aux0: 1.6241, loss_aux1: 1.4571, loss_aux2: 1.5158, loss_aux3: 1.2166
iter: 300/151000, lr: 0.009976, eta: 1 day, 1:37:55, time: 59.35, loss: 8.0209, loss_pre: 2.0541, loss_aux0: 1.8272, loss_aux1: 1.5216, loss_aux2: 1.2999, loss_aux3: 1.3181
iter: 300/151000, lr: 0.009976, eta: 1 day, 1:37:55, time: 59.36, loss: 8.0241, loss_pre: 2.3563, loss_aux0: 1.5317, loss_aux1: 1.4104, loss_aux2: 1.3514, loss_aux3: 1.3743
iter: 400/151000, lr: 0.012559, eta: 1 day, 1:27:22, time: 59.71, loss: 7.5488, loss_pre: 1.4626, loss_aux0: 2.1464, loss_aux1: 1.4746, loss_aux2: 1.2468, loss_aux3: 1.2185
iter: 400/151000, lr: 0.012559, eta: 1 day, 1:27:30, time: 59.73, loss: 6.4421, loss_pre: 1.6951, loss_aux0: 1.6171, loss_aux1: 1.1467, loss_aux2: 1.0038, loss_aux3: 0.9794

closed time in a month

yongguanjiangshan

issue comment CoinCheung/BiSeNet

problem about training log

Maybe you have solved this problem; I am closing this.

yongguanjiangshan

comment created time in a month

issue closed CoinCheung/BiSeNet

why inference so slow (bisenetv2)?

According to the paper, for a 2048x1024 input, bisenetv2 can run inference at 156 FPS on a GTX 1080Ti. But the inference speed of this repo is only 33 FPS under the same conditions, even with the booster branch removed.

closed time in a month

vraivon

issue comment CoinCheung/BiSeNet

why inference so slow (bisenetv2)?

Hi,

I updated the speed specification and described how I tested it. I am closing this, but you can still discuss here.

vraivon

comment created time in a month

issue closed CoinCheung/BiSeNet

inference time

Hi, what's your inference time of BiSeNetV2 ?

closed time in a month

songqi-github

issue comment CoinCheung/BiSeNet

inference time

Hi,

On my Tesla T4 gpu with fp16 mode, the bisenetv2 fps is 50, and with fp32 mode, the fps is 16...

I tested it with input size of 1024x2048.

The T4 is much slower than a 2080ti. I do not have a 2080ti now, but I have uploaded the method and code showing how I tested this. If you have other gpus, you can test it on your own platform.

I am closing this; you can still leave a message or open new issues if you want more discussion.
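
For readers who want to reproduce such numbers, the usual timing pattern looks like the sketch below (my own generic illustration, not the repo's benchmark script; `measure_fps` and its arguments are made-up names). On GPU you additionally need torch.cuda.synchronize() before reading the clock, since CUDA kernel launches are asynchronous.

```python
import time

def measure_fps(run_once, n_warmup=10, n_iters=100):
    # Warm up first so one-time costs (allocation, autotuning) are excluded.
    for _ in range(n_warmup):
        run_once()
    # On GPU, call torch.cuda.synchronize() here and again after the loop,
    # otherwise you only time the (asynchronous) kernel launches.
    t0 = time.perf_counter()
    for _ in range(n_iters):
        run_once()
    elapsed = time.perf_counter() - t0
    return n_iters / elapsed

fps = measure_fps(lambda: sum(range(10000)))
print(f"{fps:.1f} iterations/s")
```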

songqi-github

comment created time in a month

issue closed CoinCheung/BiSeNet

Failed to load onnx model into tensorRT

Hi, thanks a lot for your nice works!

Currently, I am attempting to train a BiSeNet model using my own data. The training code is taken directly from your repo. It works quite well during training and the evaluation result also seems fine. After that, I converted the torch model to an onnx model, which was then converted to tensorRT. However, I got some errors during model conversion.

  • torch to onnx

When it comes to torch.onnx.export, the conversion process runs into errors if the default opset_version = 9 is used. But if I set opset_version to 11, no error shows up visibly. Although I've gotten an onnx model, I am just wondering whether this also happened to you? If not, I would worry about the side effects of doing it this way. Perhaps it is the reason for the error I got during the onnx-to-tensorRT conversion.

  • onnx to tensorRT

When it comes to onnxToTRTModel, an error came up as below:

ERROR: onnx2trt_utils.hpp:277 In function convert_axis: [8] Assertion failed: axis >=0 && axis < nbDims

I have no idea how to fix it. Could you please describe the details of the model conversion when you do this? It would be great if you could share them online.

Environment in use:

  • pytorch 1.1 during training

  • pytorch 1.3 for onnx model generation [because it failed using torch1.1]

  • onnx IR version 0.0.4

  • tensorRT 5.1.5, CUDA9.0

closed time in a month

zldodo

issue comment CoinCheung/BiSeNet

Failed to load onnx model into tensorRT

Hi,

I added a demo on how to export the model to onnx and compile it with tensorrt. You can see if it helps you.

I am closing this. You can open a new issue if you still have problems with this.

zldodo

comment created time in a month

push event CoinCheung/BiSeNet

root

commit sha 12cd095c4824028f44b6af4617d440c2f2cd7578

add tensorrt

push time in a month

push event CoinCheung/BiSeNet

root

commit sha e786eae93fe03e09b4f94fe47e204367a8ba6b9b

add tensorrt

push time in a month

push event CoinCheung/pytorch-loss

coincheung

commit sha c657c853b08f9ba7ef42f4747c9999eb88e424fe

add dy-conv2d and coord-conv2d

push time in a month

issue opened aksnzhy/xlearn

What is the format of training data?

Hi,

I see the example dataset is here in the demo/classification/titanic/titanic_train.txt, the first 5 lines are:

0       1       0       1       0       0       0       1       0       1       0       0       1       -0.56136323207  -0.502445171436
1       1       0       0       1       1       0       0       1       0       1       0       0       0.613181832266  0.786845293588
1       0       0       1       0       0       0       1       1       0       0       0       1       -0.267726965986 -0.488854257585
1       1       0       0       1       0       0       1       1       0       1       0       0       0.392954632703  0.420730236069
0       0       0       1       0       0       0       1       0       1       0       0       1       0.392954632703  -0.486337421687

I cannot tell which fields are the training features and which is the label in each line. Would you please tell me which is the label and which is the feature vector?

created time in a month

issue comment pytorch/pytorch

ONNX support for AdaptiveMax/AvgPool ?

Has this problem really been solved? When I use tensorrt 7.0.0 to compile the onnx exported from pytorch with nn.AvgPool2d, I still meet some problems if I use opset 11, but opset 10 works fine.

jaidmin

comment created time in a month

issue comment CoinCheung/pytorch-loss

use on multi-label 3D medical image segmentation task?

Hi,

Focal loss would be fine, since it is a point-wise loss and each sample is treated as a binary classification problem. As for the other losses, I am afraid they are not designed for 3d operations, but you might try to reformulate your problem and see if the 3d loss can be expressed in a 2d form. Good luck!!
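
To make the "point-wise" argument concrete, here is a hedged numpy sketch (illustrative only, not the repo's implementation): a per-element binary loss is shape-agnostic, so a 3d volume gives the same value whether you treat it as 3d or flatten it to 2d first.

```python
import numpy as np

def pointwise_bce(logits, labels, eps=1e-12):
    # Per-element binary cross entropy; shape-agnostic by design.
    probs = 1.0 / (1.0 + np.exp(-logits))
    return -(labels * np.log(probs + eps) + (1 - labels) * np.log(1 - probs + eps))

# A toy (batch, D, H, W) segmentation volume with binary voxel labels.
rng = np.random.default_rng(0)
vol_logits = rng.normal(size=(2, 4, 8, 8))
vol_labels = (rng.random(size=(2, 4, 8, 8)) > 0.5).astype(np.float64)

loss_3d = pointwise_bce(vol_logits, vol_labels).mean()
# Flattening the spatial dims to a 2d problem gives the identical result.
loss_2d = pointwise_bce(vol_logits.reshape(2, -1), vol_labels.reshape(2, -1)).mean()
print(loss_3d, loss_2d)
```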

xwjBupt

comment created time in a month

issue comment CoinCheung/BiSeNet

How to use single GPU training for refactored code

Are you using my original script? The tools/train.py has only 225 lines of code, so how can the error message say that the error lies on line 233?

cherrysnowy

comment created time in a month

issue comment CoinCheung/BiSeNet

How to use single GPU training for refactored code

Would you please post the whole error message?

cherrysnowy

comment created time in a month

issue comment CoinCheung/BiSeNet

How to use single GPU training for refactored code

Just do this:

$ export CUDA_VISIBLE_DEVICES=0
$ python -m torch.distributed.launch --nproc_per_node=1 tools/train.py --model bisenetv2 # or bisenetv1
cherrysnowy

comment created time in a month

issue comment pytorch/pytorch

RFC: Add torch.deterministic flag to force deterministic algorithms

Does it mean that we could use torch.manual_seed(111) to make everything deterministic, including the interpolation operation?

colesbury

comment created time in a month

issue closed CoinCheung/BiSeNet

How can I run on cpu??

When I run on cpu, I get:

dist.init_process_group(
AttributeError: module 'torch.distributed' has no attribute 'init_process_group'

closed time in a month

wangjing60755

issue comment CoinCheung/BiSeNet

How can I run on cpu??

So you want to train your model on cpu, am I right?

Training on cpu is not supported, since so far the speed of training on cpu is very slow... You can try removing the code associated with distributed and fp16 training and see if it works...

wangjing60755

comment created time in a month

issue comment CoinCheung/BiSeNet

How can I run on cpu??

Please post the details of how you tried to run inference if you still have problems. I cannot determine what the problem is from two lines of error message.

wangjing60755

comment created time in a month

issue comment CoinCheung/BiSeNet

How can I run on cpu??

Have you ever referred to tools/demo.py for inference?

wangjing60755

comment created time in a month

issue comment CoinCheung/BiSeNet

How can I run on cpu??

Are you going to train your model on cpu? That would be very slow.

wangjing60755

comment created time in a month

issue comment CoinCheung/pytorch-loss

BCEWithLogitsLoss combines a Sigmoid layer and BCELoss in one single class, so why use torch.sigmoid again

Because this is focal loss, which adds a coefficient to the standard bce loss. The added coefficient is based on the sigmoid probability of the input, so we still need to compute it. Please refer to the paper for the details of the focal loss.
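
As a hedged illustration of that point (plain numpy, not the repo's PyTorch/CUDA code): the bce part could be computed stably from logits alone, but the modulating coefficient (1 - p_t)**gamma needs the sigmoid probability, so sigmoid must still be evaluated.

```python
import numpy as np

def focal_loss_with_logits(logits, labels, alpha=0.25, gamma=2.0):
    probs = 1.0 / (1.0 + np.exp(-logits))        # sigmoid is still needed here
    p_t = np.where(labels == 1, probs, 1.0 - probs)
    alpha_t = np.where(labels == 1, alpha, 1.0 - alpha)
    bce = -np.log(np.clip(p_t, 1e-12, None))     # the standard bce part
    return np.mean(alpha_t * (1.0 - p_t) ** gamma * bce)

# An easy, confidently-correct sample is down-weighted relative to a hard one.
easy = focal_loss_with_logits(np.array([4.0]), np.array([1.0]))
hard = focal_loss_with_logits(np.array([-4.0]), np.array([1.0]))
print(easy, hard)
```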

sunpeng981712364

comment created time in a month

issue opened zhang17173/event-element-extraction-based-on-judgments

How can I download the original dataset?

Hi,

Thanks for sharing this!! Would you please upload the original dataset that you ran the code with, so that I can reproduce the results?

Thanks a lot !!

created time in a month

issue comment CoinCheung/BiSeNet

evaluation on val

What do you mean? I posted the miou tested on the val set along with the pretrained model on cityscapes. You can try the pretrained model with the command in the README.md.

zzwen1

comment created time in a month

issue closed CoinCheung/BiSeNet

about the module "inplace_abn"

I trained the model with only one GPU.

When I train the code with "CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 train.py", there are some errors:

/home/min/JHW/code/modules/src/inplace_abn_cuda.cu(318): error: no instance of overloaded function "thrust::transform_if" matches the argument list
            argument types are: (<error-type>, thrust::device_ptr<float>, thrust::device_ptr<float>, thrust::device_ptr<float>, lambda [](const float &)->float, lambda [](const float &)->__nv_bool)
          detected during instantiation of "void elu_backward_impl(T *, T *, int64_t) [with T=float]" 
(330): here

12 errors detected in the compilation of "/tmp/tmpxft_00002758_00000000-7_inplace_abn_cuda.cpp1.ii".
ninja: build stopped: subcommand failed.

However, when I trained it with “CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=2 train.py”, the traceback is:

Traceback (most recent call last):
  File "train.py", line 6, in <module>
    from model import BiSeNet
  File "/home/min/JHW/code/model.py", line 10, in <module>
    from resnet import Resnet18
  File "/home/min/JHW/code/resnet.py", line 9, in <module>
    from modules.bn import InPlaceABNSync as BatchNorm2d
  File "/home/min/JHW/code/modules/__init__.py", line 1, in <module>
    from .bn import ABN, InPlaceABN, InPlaceABNSync
  File "/home/min/JHW/code/modules/bn.py", line 10, in <module>
    from .functions import *
  File "/home/min/JHW/code/modules/functions.py", line 18, in <module>
    extra_cuda_cflags=["--expt-extended-lambda"])
  File "/home/min/anaconda2/envs/py3/lib/python3.5/site-packages/torch/utils/cpp_extension.py", line 645, in load
    is_python_module)
  File "/home/min/anaconda2/envs/py3/lib/python3.5/site-packages/torch/utils/cpp_extension.py", line 825, in _jit_compile
    return _import_module_from_library(name, build_directory, is_python_module)
  File "/home/min/anaconda2/envs/py3/lib/python3.5/site-packages/torch/utils/cpp_extension.py", line 964, in _import_module_from_library
    file, path, description = imp.find_module(module_name, [path])
  File "/home/min/anaconda2/envs/py3/lib/python3.5/imp.py", line 297, in find_module
    raise ImportError(_ERR_MSG.format(name), name=name)
ImportError: No module named 'inplace_abn'

So I am not sure whether it is because of Ninja or the inplace_abn module. I guess it may be related to the versions of CUDA and cuDNN (I use 9.0 and 7.4.1).

Thank you for your help!

closed time in a month

jiangolder

issue comment CoinCheung/BiSeNet

about the module "inplace_abn"

Closing this, since we do not rely on inplace-abn now.

jiangolder

comment created time in a month

issue comment CoinCheung/BiSeNet

ImportError: No module named 'inplace_abn'

Hi,

I refactored the implementation, and we do not rely on inplace abn now. If you still need this, you can update to pytorch 1.6 and try the new code.

wk524629918

comment created time in a month

issue closed CoinCheung/BiSeNet

How to test on my own pictures with your pretrained model?

Hello, thanks for your work and your humor!

Now I need to get segmentation results on my own pictures (.bmp) using your pretrained model. Can you give me some advice? (My coding ability is very poor now.) Thanks very much!

closed time in a month

EchoAmor

issue comment CoinCheung/BiSeNet

How to test on my own pictures with your pretrained model?

Hi,

I hope you have figured out the solution. I refactored the codebase, and now you can run inference on a single picture with the demo.py script.

EchoAmor

comment created time in a month

issue closed CoinCheung/BiSeNet

Diss: cityscapes mIOU result!

Thanks for the author's practice! I have tested the pretrained model, but I get mIOU=74.6. If someone wants to test, you can download the code here: https://github.com/mcordts/cityscapesScripts

So why does your eval code get 78.45?

Do you consider the background class (black color in the label) when training? I don't think you have done this.

closed time in a month

ProWhalen

issue comment CoinCheung/BiSeNet

Diss: cityscpase mIOU result!

No more information on this topic? I am closing this.

ProWhalen

comment created time in a month

issue closed CoinCheung/BiSeNet

evaluation on val

Your online evaluation result is very high, but when I predict on the entire validation set and then calculate the iou, the result is very low. Why is this? Are you sure your evaluation code is correct?

closed time in a month

zzwen1

issue comment CoinCheung/BiSeNet

evaluation on val

I find no errors that harm the evaluation in the current script. Actually I used this script to test a dataset released by a competition, and the miou was very close to the result computed by the competition submission server.

If you find errors in the code, please point out the place where the errors lie. If you want me to help you with your own code developed on top of this repo, please provide enough details (what the current state is, what you expect to see, and ideally a piece of sample code), so that I can understand and reproduce the problem.

zzwen1

comment created time in a month

issue comment fisheepx/tencent-weibo-exporter

Hey, what is the current status of this?

I am using the default python3.7 from anaconda, with geckodriver placed under the anaconda directory. I did not set up a separate environment, just used the bundled python3.7. Then I ran logversion6 and got this error:

Starting to log in to Tencent Weibo...
Login timed out...
Opening my broadcasts timed out...
Starting to analyze page 1 of Tencent Weibo...
Traceback (most recent call last):
  File "tencent_weibo.py", line 154, in <module>
    weibo.start()
  File "tencent_weibo.py", line 136, in start
    self.get_stories()
  File "tencent_weibo.py", line 74, in get_stories
    talk_list = self.browser.find_element_by_id('talkList')
  File "D:\Users\hp\Anaconda3\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 360, in find_element_by_id
    return self.find_element(by=By.ID, value=id_)
  File "D:\Users\hp\Anaconda3\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 978, in find_element
    'value': value})['value']
  File "D:\Users\hp\Anaconda3\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
    self.error_handler.check_response(response)
  File "D:\Users\hp\Anaconda3\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: [id="talkList"]

What could be the cause of this?

CoinCheung

comment created time in 2 months

issue comment CoinCheung/BiSeNet

evaluation on val

Please be more specific: what are the steps to reproduce the problem?

zzwen1

comment created time in 2 months

issue comment CoinCheung/BiSeNet

problem about training log

How did you set up training? I did not observe this in my practice.

yongguanjiangshan

comment created time in 2 months

issue closed facebookresearch/pycls

Why is the learning rate different from the paper?

https://github.com/facebookresearch/pycls/blob/3747e12c13812043801f6be0c2b990f39947ea09/configs/dds_baselines/effnet/EN-B0_dds_8gpu.yaml#L14

Hi,

I noticed that in Appendix C of the regnet paper, the best learning rate and weight decay are 0.1 and 5e-5 for a batch size of 128. Why do you use a learning rate and weight decay of 0.4 and 1e-5 at a batch size of 256? Did I miss any details of the code?

closed time in 2 months

CoinCheung

issue comment facebookresearch/pycls

Why is the learning rate different from paper?

Thanks a lot !!! I am closing this.

CoinCheung

comment created time in 2 months

issue opened facebookresearch/pycls

Why is the learning rate different from the paper?

https://github.com/facebookresearch/pycls/blob/3747e12c13812043801f6be0c2b990f39947ea09/configs/dds_baselines/effnet/EN-B0_dds_8gpu.yaml#L14

Hi,

I noticed that in Appendix C of the regnet paper, the best learning rate and weight decay are 0.1 and 5e-5 for a batch size of 128. Why do you use a learning rate and weight decay of 0.4 and 1e-5 at a batch size of 256? Did I miss any details of the code?

created time in 2 months

issue comment CoinCheung/BiSeNet

evaluation on val

@dzyjjpy Your code does not render clearly in my browser; what are you trying to do with it?

zzwen1

comment created time in 2 months

issue comment fisheepx/tencent-weibo-exporter

Hey, what is the current status of this?

OK. And how do I use the login version on linux? The docs do not seem to say; they only mention downloading geckodriver and placing it on the C drive on windows, and say nothing about how to start the script.

CoinCheung

comment created time in 2 months

issue comment CoinCheung/BiSeNet

evaluation on val

I am not sure; what is the correct method to compute the miou, please?

zzwen1

comment created time in 2 months

issue opened fisheepx/tencent-weibo-exporter

Hey, what is the current status of this?

Hi, I came over from Zhihu. Can this still back things up normally? The readme seems to say it will automatically log the account out?

created time in 2 months

push event CoinCheung/pytorch-loss

coincheung

commit sha 77c13715d0bcb44b093431f05e7d7b8523040eab

spell error

push time in 2 months

push event CoinCheung/pytorch-loss

coincheung

commit sha 6df1f59a0f99cf76ae5cc98913e98ec6619b6b4b

fix sync problem of one hot

coincheung

commit sha 94ef6e9aab0cc27b08692a75a3f05c7dd131111f

fix a little problem with one hot

coincheung

commit sha ae7aec1bc40ec5d654fa44df1d3bcea2d4462eaf

add ohem loss and add a few lines in readme

push time in 2 months

issue closed CoinCheung/pytorch-loss

About the triplet loss

Hello, I have a question about computing gradients for the triplet loss (even though pytorch does automatic differentiation). Online I see the loss differentiated separately with respect to the anchor, positive, and negative embedding vectors, but I cannot see this from your loss function.

closed time in 2 months

AnnaXiong

issue comment CoinCheung/pytorch-loss

About the triplet loss

Oh, then I will close this for now.

AnnaXiong

comment created time in 2 months

issue comment CoinCheung/pytorch-loss

About the triplet loss

For the triplet loss I also use pytorch's automatic differentiation; I did not write the backward myself....

AnnaXiong

comment created time in 2 months

issue comment CoinCheung/BiSeNet

RuntimeError: copy_if failed to synchronize: an illegal memory access was encountered

Please note that the cityscapes training labels are mapped from the label image pixels according to the specification. See this: https://github.com/CoinCheung/BiSeNet/blob/aa3876b4b1f2c430e07678f8c15b96465681fca0/lib/base_dataset.py#L44
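
The idea behind that mapping can be sketched as follows; the dictionary below is a hypothetical toy subset of the real id table in base_dataset.py. Raw cityscapes ids run up to 33, while a 19-class loss expects train ids 0..18 plus an ignore value, so unmapped labels index out of range, which on GPU typically surfaces as an illegal memory access.

```python
import numpy as np

# Hypothetical toy subset of the cityscapes raw-id -> train-id table
# (road, sidewalk, building, car); the real table has 19 valid classes.
raw_to_train = {7: 0, 8: 1, 11: 2, 26: 13}
lut = np.full(256, 255, dtype=np.uint8)   # everything unmapped -> ignore (255)
for raw_id, train_id in raw_to_train.items():
    lut[raw_id] = train_id

label_png = np.array([[7, 8], [26, 4]], dtype=np.uint8)  # raw ids as loaded
train_label = lut[label_png]              # fancy indexing applies the map
print(train_label)
```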

ltshan

comment created time in 2 months

issue comment CoinCheung/BiSeNet

RuntimeError: copy_if failed to synchronize: an illegal memory access was encountered

How many categories are there in your own dataset? Are you using the dataset class designed for cityscapes, or did you implement a new dataset class?

ltshan

comment created time in 2 months

issue comment facebookresearch/FixRes

question about the principle of the method

Hi, thanks for replying !!!
So is the major difference between scratch training and finetuning that we use a smaller cropsize (both train/eval) to train from scratch, and then a larger cropsize to finetune with the backbone features frozen?

CoinCheung

comment created time in 2 months

issue comment facebookresearch/FixRes

question about the principle of the method

Hi,

There are still some details that I am not sure about.

First, in the training process, both training from scratch and finetuning use the standard augmentation defined in the torchvision library, while in the test process they use a self-defined Resize function, which resizes the longer side of the image to the assigned value when largest=True, and resizes only the height of the image to the assigned value when largest=False. Why always resize the height of the image to the assigned value?

Second, in the imnet_finetune/ folder, there is a transform.py which is actually used in the finetuning process. Do we need to replace it with transform_v2.py?

Additionally, I found that when finetuning the efficientnet, the last nn.Linear and nn.Conv2d are tuned, but the last bn layer is not set to requires_grad=True, while the state of the bn layer is set to train mode. What is the point of not tuning the weights of the last bn but still updating its moving-average statistics?

By the way, it seems that when finetuning the resnet-50 model, all the parameters are tuned, while finetuning the efficientnet only tunes the last few layers. Did I misunderstand the code?

CoinCheung

comment created time in 2 months

create branch CoinCheung/ytttttttty

branch : master

created branch time in 2 months

created repository CoinCheung/ytttttttty

created time in 2 months

create branch CoinCheung/ffffffff

branch : master

created branch time in 2 months

created repository CoinCheung/ffffffff

created time in 2 months

issue opened facebookresearch/FixRes

question about the principle of the method

Hi,

Thanks for releasing this great work!! After reading the paper and going through the code, I feel I have not fully understood the method...

Would you please tell me the general pipeline of using the proposed method? Take resnet50 as an instance: do I need to train resnet-50 with the standard method (train cropsize=224, test cropsize=224/256, achieving 76% acc), and then finetune it with a larger cropsize of 384 for a few epochs and test with the enlarged cropsize of 384 (resizing the short side to 384 * 256 / 224 before the crop)? Or do I need to train and test with the larger cropsize of 384 from scratch directly? Are there other things to pay attention to besides the training/test cropsizes?

created time in 2 months

issue comment pytorch/pytorch

Pybind11 cpp extensions broken with pytorch v1.5.0

Solved. I recompiled my pytorch and used the libtorch in the torch/ directory of the compiled source path. With that, the TORCH_USE_RTLD_GLOBAL=YES method works. It seems that pytorch and libtorch should be compiled under the same circumstances.

bcharlier

comment created time in 2 months

issue comment pytorch/pytorch

Pybind11 cpp extensions broken with pytorch v1.5.0

My error message is: ImportError: /root/repo/play/build/play.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC1ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE

bcharlier

comment created time in 2 months

issue comment pytorch/pytorch

Pybind11 cpp extensions broken with pytorch v1.5.0

Hi,

I met the same problem, and setting TORCH_USE_RTLD_GLOBAL=YES does not solve it. I am using pytorch 1.6 from conda and libtorch downloaded from the official download page. Any suggestions, please?

bcharlier

comment created time in 2 months

issue opened huanghuidmml/cail2019_track2

How is the rule-based matching done?

Hi, how is the rule-based method mentioned in the readme implemented? I could not find the related code in this repo. Could you share it? Thanks!

created time in 2 months

push event CoinCheung/pytorch-loss

coincheung

commit sha 677417ba096af4b27e0dc3b95877a6e19c9e6bc0

a bit fix of one hot

view details

push time in 2 months

push event CoinCheung/ImageNet-Loader

coincheung

commit sha e3b860c1597f2c1aa30df2479b717989256ce1bd

refactor and add pca-noise and add color jitter

view details

push time in 2 months

push event CoinCheung/pytorch-loss

coincheung

commit sha 4cbc382cc1acbc0de187371e9fa848811b383b60

a bit refactoring

view details

push time in 2 months

issue comment pytorch/pytorch

[C++/pytorch] data loading and working with complex data structures with the C++ frontend

For reference, this is the C++ extension I had in my slides at the devcon:

#include <opencv2/opencv.hpp>
#include <torch/extension.h>

at::Tensor compute(at::Tensor x, at::Tensor w) {
  cv::Mat input(x.size(0), x.size(1), CV_32FC1, x.data<float>());
  cv::Mat warp(3, 3, CV_32FC1, w.data<float>());

  cv::Mat output;
  cv::warpPerspective(input, output, warp, {64, 64});

  return torch::from_blob(output.ptr<float>(), {64, 64}).clone();
}

I think the order in which you want to convert is inverted, but maybe it's useful :)

Hi,

I used the setup.py method to compile the extension, and I got the error opencv2/opencv.hpp: No such file or directory. Would you please tell me how I should configure setup.py to use third-party libraries?
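For reference, this is a minimal sketch of what I am trying in setup.py. The include/library paths are assumptions for a default OpenCV install under /usr/local, and the module name `play` and source file `play.cpp` are just illustrative:

```python
# Sketch of a setup.py passing third-party include/library paths through
# CppExtension (which forwards them to setuptools.Extension). The OpenCV
# paths below are assumptions for a default /usr/local install.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name='play',  # illustrative module name
    ext_modules=[
        CppExtension(
            'play',
            sources=['play.cpp'],
            include_dirs=['/usr/local/include'],  # where opencv2/opencv.hpp lives
            library_dirs=['/usr/local/lib'],
            libraries=['opencv_core', 'opencv_imgproc'],
        ),
    ],
    cmdclass={'build_ext': BuildExtension},
)
```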

zeryx

comment created time in 2 months

issue comment qiufengyuyi/event_extraction

Could you upload the original dataset?

I got the wrong link, please ignore me ...

l294265421

comment created time in 2 months

issue comment qiufengyuyi/event_extraction

Could you upload the original dataset?

Does this dataset consist of 5000 annotated training samples plus 5000 unannotated test samples? The link says there are 10000 training samples, but the downloaded csv only has 5000 rows.

l294265421

comment created time in 2 months

push event CoinCheung/fixmatch-pytorch

coincheung

commit sha 9b5a31d194571fd7d8912032f12a5274d04be814

update a bit of ReadMe

view details

push time in 2 months

issue comment CoinCheung/BiSeNet

About hyper-parameters in OhemCELoss

hi,

From my personal observation, there is no obvious recipe for setting these hyper-parameters. You can simply try different values for different models and datasets, or fix these parameters at their default values and adjust the other parameters first.

Mayy1994

comment created time in 2 months

push event CoinCheung/pytorch-loss

coincheung

commit sha 1d19bfdbdb3ad93844ca96116d3895ce8d2496d8

use template pattern for one hot

view details

coincheung

commit sha f11b03bfcb97d8c85c849932d277ead7f32a5e5f

one hot

view details

coincheung

commit sha 4c5ee8b0e13046e5454708e6bd1e0c6b307489d0

optimizer soft-dice-loss to better use gpu

view details

coincheung

commit sha 4cc9dc88478dbfdc00cba78014f4fbbb07ca27c8

a bit fix

view details

push time in 2 months

issue opened panchunguang/ccks_baidu_entity_link

Hi, how is entity linking done with long texts?

Hi,

The readme.md says:

Traditional entity linking tasks mainly target long documents. Long documents have rich context information, which helps with entity recognition and disambiguation. By contrast, entity linking for short Chinese texts is very challenging. The whole entity linking process includes two subtasks: entity recognition and entity disambiguation. ...

What traditional method is used for entity linking on long texts? Could you tell me the name of the method, its general idea, or share a link?

created time in 2 months

issue comment CoinCheung/BiSeNet

How to use multi-output to get higher accuracy?

Hi,

Personally, I believe the auxiliary outputs all lose some accuracy, since their spatial area is small. The primary output feature has a downsample rate of 8, while the auxiliary output features have downsample rates of 16 and 32, so their outputs may be too coarse to offer much help. If you would like to have a try, you could add the auxiliary feature with a downsample rate of 16 and discard the 32x downsampled feature. Good luck.
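To illustrate how coarse the auxiliary features are, a quick sketch of the spatial sizes at those downsample rates (the 512x1024 input size is just an example):

```python
# Example spatial sizes for a 512x1024 input at the three downsample rates
# mentioned above (8 for the primary output, 16 and 32 for the aux outputs).
sizes = {name: (512 // rate, 1024 // rate)
         for name, rate in [("primary", 8), ("aux-16x", 16), ("aux-32x", 32)]}
print(sizes)  # {'primary': (64, 128), 'aux-16x': (32, 64), 'aux-32x': (16, 32)}
```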

dzyjjpy

comment created time in 2 months

issue comment CoinCheung/BiSeNet

How to use multi-output to get higher accuracy?

Hi,

You can try summing the logits of all the outputs before using softmax to compute the score, though I doubt this will bring a good result easily, since the auxiliary outputs are not rich in extracted-feature information. You can also try summing the softmax scores of all the outputs before using torch.argmax to compute the prediction. I have not tried this myself; you can run some experiments and see whether it works.
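A pure-Python sketch of the two combination schemes for a single pixel (toy logits for three classes, values made up; with real tensors you would use torch, but the arithmetic is the same):

```python
import math

def softmax(logits):
    # numerically stable softmax over a list of logits
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def argmax(xs):
    return xs.index(max(xs))

# Toy per-pixel class logits from the three heads (values are made up).
main, aux16, aux32 = [2.0, 1.0, 0.1], [0.5, 1.5, 0.2], [1.0, 0.3, 0.8]

# Option 1: sum the logits first, then softmax + argmax.
pred1 = argmax(softmax([a + b + c for a, b, c in zip(main, aux16, aux32)]))

# Option 2: softmax each head, sum the scores, then argmax.
scores = [sum(s) for s in zip(softmax(main), softmax(aux16), softmax(aux32))]
pred2 = argmax(scores)
```

Note the two options can disagree in general, since softmax is applied before or after the sum.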

dzyjjpy

comment created time in 2 months

issue comment CoinCheung/pytorch-loss

do buffered params in EMA need to be updated?

Hi,

From my observation, there are two ways to deal with buffers: process them with EMA along with the parameters, or copy them directly from the model you are training. In my experience, I saw little difference between the two methods. There can be some performance difference, but I did not observe a stable trend: sometimes applying EMA to the buffers works better and sometimes the direct copy does, and the test gap between them is not big.
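A minimal sketch of the two strategies for a single scalar buffer (the names and the alpha value are illustrative, not from any particular implementation):

```python
def ema_update(ema_val, model_val, alpha=0.999):
    # exponential moving average: new = alpha * old + (1 - alpha) * current
    return alpha * ema_val + (1.0 - alpha) * model_val

ema_buffer, model_buffer = 0.5, 1.0  # illustrative values

# Strategy 1: treat buffers like parameters and smooth them with EMA.
smoothed = ema_update(ema_buffer, model_buffer)  # 0.5005

# Strategy 2: copy buffers directly from the model being trained.
copied = model_buffer  # 1.0
```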

DietDietDiet

comment created time in 2 months

issue comment Xiefan-Guo/CCKS2019_subject_extraction

成绩复现

@CN-COTER Hi, which dataset did you use to reproduce the result? As far as I can see, the eval split of this dataset has no entities: the train split has "text", "event type" and "event subject" fields, while the eval split only has "text" and "event type" without the event subject. How was this score computed?

CN-COTER

comment created time in 2 months

issue closed CoinCheung/BiSeNet

unable to access the BiSeNetV2 pre-trained model using google drive

I am unable to access the BiSeNetV2 pre-trained model on Google Drive. Can you please grant access to li.ying@cidi.ai? Thank you!

closed time in 2 months

earlysleepearlyup

issue comment CoinCheung/BiSeNet

unable to access the BiSeNetV2 pre-trained model using google drive

I am closing this; please feel free to open new issues for further discussion.

earlysleepearlyup

comment created time in 2 months

issue comment CoinCheung/BiSeNet

Confused about applying colorjitter augment to mask.

Hi, would you please tell me which line shows that the color jitter is applied to both the image and the labels?

hot-dog

comment created time in 2 months

issue opened sassoftware/kernel-pca-sample-code

Do you know how to compute when number of samples is large ?

Hi,

I noticed that the kPCA method uses the dot-product matrix between samples, rather than between dimensions as in the naive PCA method. That means that as the number of samples grows, the matrix requires much more memory, which hinders applying the method to large datasets. Do you know how to handle this problem?
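A back-of-the-envelope sketch of why this matters (float64 entries assumed, 8 bytes each):

```python
# Memory needed for the dense n x n kernel (dot-product) matrix, in GiB.
def kernel_matrix_gib(n_samples, bytes_per_entry=8):
    return n_samples ** 2 * bytes_per_entry / 2 ** 30

print(round(kernel_matrix_gib(100_000), 1))  # ~74.5 GiB for 100k samples
```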

created time in 2 months
