If you are wondering where this site's data comes from, please visit https://api.github.com/users/cszn/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Kai Zhang (cszn) · CVL, ETH Zurich · Zurich, Switzerland · https://cszn.github.io/ · Image Restoration; Inverse Problems

cszn/KAIR 1006

Image Restoration Toolbox (PyTorch). Training and testing codes for DPIR, USRNet, DnCNN, FFDNet, SRMD, DPSR, BSRGAN, SwinIR

cszn/DnCNN 972

Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising (TIP, 2017)

cszn/DPSR 781

Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels (CVPR, 2019) (PyTorch)

cszn/USRNet 586

Deep Unfolding Network for Image Super-Resolution (CVPR, 2020) (PyTorch)

cszn/IRCNN 485

Learning Deep CNN Denoiser Prior for Image Restoration (CVPR, 2017) (Matlab)

cszn/BSRGAN 368

Designing a Practical Degradation Model for Deep Blind Image Super-Resolution (ICCV, 2021) (PyTorch) - We released the training code!

cszn/SRMD 345

Learning a Single Convolutional Super-Resolution Network for Multiple Degradations (CVPR, 2018) (Matlab)

cszn/FFDNet 300

FFDNet: Toward a Fast and Flexible Solution for CNN based Image Denoising (TIP, 2018)

cszn/DPIR 239

Plug-and-Play Image Restoration with Deep Denoiser Prior (IEEE TPAMI 2021) (PyTorch)

JingyunLiang/FKP 93

Official PyTorch code for Flow-based Kernel Prior with Application to Blind Super-Resolution (FKP, CVPR2021)

issue closedcszn/KAIR

Potential bug in main_train_dncnn.py and dataset_dncnn.py?

https://github.com/cszn/KAIR/blob/e7e2cc2940d4900f7aaf46824a8c16934d073871/main_train_dncnn.py#L112-L114 Note that dataset_opt refers to opt['datasets']['train'] and it will be used in

https://github.com/cszn/KAIR/blob/e7e2cc2940d4900f7aaf46824a8c16934d073871/data/dataset_dncnn.py#L19-L26

However, according to train_dncnn.json, https://github.com/cszn/KAIR/blob/e7e2cc2940d4900f7aaf46824a8c16934d073871/options/train_dncnn.json#L6-L20

n_channels, sigma and sigma_test are defined as opt['n_channels'], opt['sigma'] and opt['sigma_test'] instead of opt['train']['n_channels'], opt['train']['sigma'] and opt['train']['sigma_test'], which makes

self.n_channels = opt['n_channels'] if opt['n_channels'] else 3 
self.patch_size = opt['H_size'] if opt['H_size'] else 64 
self.sigma = opt['sigma'] if opt['sigma'] else 25 
self.sigma_test = opt['sigma_test'] if opt['sigma_test'] else self.sigma 

become

self.n_channels = 3 
self.patch_size = opt['H_size'] if opt['H_size'] else 64 
self.sigma = 25 
self.sigma_test = self.sigma 

Similar issues can also be found in main_train_drunet.py.
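Independent of the repository's actual patch (the issue comment below points at utils/utils_option.py), one way to avoid the silent fallback described above is to copy the top-level keys into each per-phase dataset dict before the datasets are constructed. A minimal sketch — the helper name and the toy option values are hypothetical, not KAIR's code:

```python
def propagate_global_keys(opt, keys=('n_channels', 'sigma', 'sigma_test')):
    """Copy selected top-level option values into every per-phase dataset
    dict, so dataset code reading e.g. dataset_opt['sigma'] sees the
    configured value instead of silently falling back to its default."""
    for phase_opt in opt.get('datasets', {}).values():
        for key in keys:
            if key in opt:
                phase_opt.setdefault(key, opt[key])
    return opt

# Toy option structure loosely mirroring train_dncnn.json
opt = {
    'n_channels': 1, 'sigma': 15, 'sigma_test': 15,
    'datasets': {
        'train': {'dataroot_H': 'trainsets/trainH'},
        'test': {'dataroot_H': 'testsets/set12'},
    },
}
opt = propagate_global_keys(opt)
print(opt['datasets']['train']['sigma'])      # → 15
print(opt['datasets']['test']['n_channels'])  # → 1
```

With this, `dataset_opt['sigma']` inside DatasetDnCNN resolves to the JSON value rather than the hard-coded 25.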

closed time in 5 hours

HolmesShuan

issue commentcszn/KAIR

Potential bug in main_train_dncnn.py and dataset_dncnn.py?

Fixed now!

https://github.com/cszn/KAIR/blob/c755c73a8123aaa06ad22420dbe9856d7e2b1f75/utils/utils_option.py#L58-L59

https://github.com/cszn/KAIR/blob/c755c73a8123aaa06ad22420dbe9856d7e2b1f75/utils/utils_option.py#L86 https://github.com/cszn/KAIR/blob/c755c73a8123aaa06ad22420dbe9856d7e2b1f75/options/train_dncnn.json#L25-L26

HolmesShuan

comment created time in 5 hours

push eventcszn/KAIR

Kai Zhang

commit sha c755c73a8123aaa06ad22420dbe9856d7e2b1f75

fix setting for sigma and sigma_test

view details

push time in 5 hours

startedjiaxi-jiang/FBCNN

started time in a day

issue commentcszn/KAIR

gan_type selection for SwinIR

https://github.com/cszn/KAIR/blob/e7e2cc2940d4900f7aaf46824a8c16934d073871/options/train_bsrgan_x4_gan.json#L88-L89

You can also use lsgan with weight 1 to get similar results.

richardburleigh

comment created time in a day

issue commentJingyunLiang/SwinIR

Can gan be finetuned on own dataset?

You should rename the file 003_realSR_BSRGAN_DFO_s64w8_SwinIR-M_x4_GAN.pth to 5000_G.pth
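As a concrete sketch of that rename, assuming a checkpoint directory layout like the one used in the SwinIR test command elsewhere in this feed (paths are illustrative; here a `touch` stands in for the real downloaded checkpoint):

```shell
# Illustrative layout: rename the released SwinIR checkpoint so KAIR's
# training scripts resume from it as iteration-5000 generator weights.
mkdir -p superresolution/swinir_sr_realworld_x4_gan/models
# Stand-in for the real downloaded file (the actual .pth comes from the SwinIR release)
touch superresolution/swinir_sr_realworld_x4_gan/models/003_realSR_BSRGAN_DFO_s64w8_SwinIR-M_x4_GAN.pth
mv superresolution/swinir_sr_realworld_x4_gan/models/003_realSR_BSRGAN_DFO_s64w8_SwinIR-M_x4_GAN.pth \
   superresolution/swinir_sr_realworld_x4_gan/models/5000_G.pth
```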

betterftr

comment created time in 4 days

issue closedcszn/KAIR

test srmd

I tested SRMD, but the resolution of the generated image has not changed.

closed time in 5 days

yudadabing

issue closedcszn/KAIR

Can't reproduce the same test results on BSD68 using my own USRNet training model.

Hi, I trained USRNet using the code in this repository recently. I haven't found any problems in the data or network during training, but I got worse results than yours in the paper. So I'd like to ask for some hints to train the model correctly. I was wondering whether the manual seed affects the results; if so, can you show me the manual-seed setting you used during training? Thanks for your help.

[screenshot: results table] My results are shown above; the blue ones are the results from the paper. I can get the same results using the pretrained model downloaded from Drive. The red ones are my results, which show a large gap from the ones in the paper.

closed time in 5 days

pigfather0315

issue commentcszn/KAIR

I don't have permission to access the model.any idea why?

Now you can download the models via https://github.com/cszn/KAIR/blob/master/main_download_pretrained_models.py

marzi9696

comment created time in 5 days

issue closedcszn/KAIR

DatasetUSRNet

20-09-25 03:21:23.915 : <epoch:  5, iter:   5,000, lr:1.000e-04> G_loss: 3.441e-02 
20-09-25 03:21:23.916 : Saving the model.
Traceback (most recent call last):
  File "E:/Work/KAIR/main_train_msrresnet_psnr.py", line 219, in <module>
    main()
  File "E:/Work/KAIR/main_train_msrresnet_psnr.py", line 178, in main
    for test_data in test_loader:
  File "D:\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 363, in __next__
    data = self._next_data()
  File "D:\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 989, in _next_data
    return self._process_data(data)
  File "D:\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 1014, in _process_data
    data.reraise()
  File "D:\anaconda3\lib\site-packages\torch\_utils.py", line 395, in reraise
    raise self.exc_type(msg)
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "D:\anaconda3\lib\site-packages\torch\utils\data\_utils\worker.py", line 185, in _worker_loop
    data = fetcher.fetch(index)
  File "D:\anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "D:\anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "E:\Work\KAIR\data\dataset_usrnet.py", line 116, in __getitem__
    return {'L': img_L, 'H': img_H, 'k': k, 'sigma': noise_level, 'sf': self.sf, 'L_path': L_path, 'H_path': H_path}
AttributeError: 'DatasetUSRNet' object has no attribute 'sf'


Process finished with exit code 1

closed time in 5 days

zuenko

issue closedcszn/KAIR

Training question

Thanks for sharing.

If I want to train on my own dataset, do I need to conform to the format of DIV2K? I noticed that you have put up a few datasets, e.g. from Flickr2K etc. Please advise. Thank you.

closed time in 5 days

lchunleo

issue closedcszn/KAIR

DPSR Training Error

Thanks for @cszn's contribution; DPSR is an amazing piece of work. I tried to train on my own dataset with main_train_dpsr.py using a pretrained_netG. As pretrained_netG I used the DPSR repository's model, DPSRx4.pth, but I got a runtime error:

RuntimeError: Error(s) in loading state_dict for SRResNet: Missing key(s) in state_dict: "model.3.weight", "model.3.bias", "model.6.weight", "model.6.bias". Unexpected key(s) in state_dict: "model.2.weight", "model.2.bias", "model.5.weight", "model.5.bias".

How can I fix this? Hoping for your kind help. Thanks.

closed time in 5 days

richardminh

issue closedcszn/KAIR

util.tensor2uint: It takes a lot of time to convert GPU tensor to CPU tensor

Hi, I successfully ran the test code of FFDNet, but I find it takes a lot of time (4.5s) to convert the GPU tensor to a CPU tensor and then to a uint8 image. Can you help me solve this problem? Thanks.


closed time in 5 days

LHJ1098826475

issue closedcszn/KAIR

Training warning:UserWarning

As shown in the attached picture: does it matter, or is something wrong? One more thing: how can I improve the training speed?

closed time in 5 days

charlesfish712

push eventcszn/KAIR

Kai Zhang

commit sha 7d70f91bb7c03d8795a6bed29ee17b1c6b834e4e

Correct link

view details

push time in 5 days

issue commentcszn/KAIR

could not find MARK

345500_G.pth or 345000_G.pth? You should also try testing 345000_E.pth, which is supposed to have better performance.

betterftr

comment created time in 5 days

issue closedcszn/KAIR

could not find MARK

Can't test my model, please help.

python main_test_swinir.py --task real_sr --scale 4 --model_path /superresolution/swinir_sr_realworld_x4_gan/models/345500_G.pth --folder_lq testsets/asd

downloading model /superresolution/swinir_sr_realworld_x4_gan/models/345500_G.pth.pth
Traceback (most recent call last):
  File "main_test_swinir.py", line 253, in <module>
    main()
  File "main_test_swinir.py", line 42, in main
    model = define_model(args)
  File "main_test_swinir.py", line 175, in define_model
    pretrained_model = torch.load(args.model_path)
  File "D:\CONDA\envs\real\lib\site-packages\torch\serialization.py", line 595, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "D:\CONDA\envs\real\lib\site-packages\torch\serialization.py", line 764, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: could not find MARK

closed time in 5 days

betterftr

issue closedcszn/KAIR

G_loss is normal?

When I train main_train_psnr.py with
python -m torch.distributed.launch --nproc_per_node=4 --master_port=1234 main_train_psnr.py --opt options/train_bsrgan_x4_psnr.json --dist True

the G_loss fluctuates, going from 2.114e-02 to 1.114e-02 and then to 5.6e-02. Is this normal?

closed time in 5 days

wangqi-xxxx

issue commentcszn/KAIR

G_loss is normal?

It is normal since the degradation degrees of patches in different batches could be quite different.

wangqi-xxxx

comment created time in 5 days

issue commentcszn/KAIR

Training warning:UserWarning

It does not influence the training. The UserWarning will disappear if you replace https://github.com/cszn/KAIR/blob/a15fd792ebc08d2c9a22fef469c712ec77f42438/models/model_base.py#L62 with

scheduler.step()

Two basic ways to improve the training speed:

  • Train with DistributedDataParallel for large models:
python -m torch.distributed.launch --nproc_per_node=8 --master_port=1234 main_train_gan.py --opt options/train_msrresnet_gan.json --dist True
  • Improve the dataloader
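The scheduler change above matters because, since PyTorch 1.1, lr_scheduler.step() is expected to be called after optimizer.step() and without an epoch argument; the old order or the epoch argument is what triggers the UserWarning. A minimal self-contained sketch of the expected call order (the toy parameter and SGD/StepLR values are assumptions, not KAIR's settings):

```python
import torch

# Toy setup (assumed values): one parameter, SGD, halve the LR every 2 steps.
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.SGD([param], lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.5)

for _ in range(4):
    optimizer.zero_grad()
    loss = (param ** 2).sum()
    loss.backward()
    optimizer.step()   # optimizer first ...
    scheduler.step()   # ... then scheduler, with no epoch argument

# After 4 scheduler steps with step_size=2, gamma=0.5: lr = 0.1 * 0.5**2
print(round(optimizer.param_groups[0]['lr'], 6))  # → 0.025
```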
charlesfish712

comment created time in 7 days

created tagcszn/KAIR

tagv1.1

Image Restoration Toolbox (PyTorch). Training and testing codes for DPIR, USRNet, DnCNN, FFDNet, SRMD, DPSR, BSRGAN, SwinIR

created time in 11 days

release cszn/KAIR

v1.1

released time in 11 days

push eventcszn/cszn

Kai Zhang

commit sha 4e9beed531434b1d2c4af4b068799ce12ecc7a78

Update README.md

view details

push time in 11 days

push eventcszn/cszn.github.io

Kai Zhang

commit sha 2323a1989f922bc86b573b39c74c6f37868cd37a

Update index.html

view details

push time in 11 days

push eventcszn/KAIR

Kai Zhang

commit sha a15fd792ebc08d2c9a22fef469c712ec77f42438

Update README.md

view details

push time in 11 days

push eventcszn/KAIR

Kai Zhang

commit sha 7b4b55fb13ddc924a86376e973a04bfa0602f353

Support downloading SwinIR pretrained models

view details

push time in 12 days

push eventcszn/KAIR

Kai Zhang

commit sha 6c38f5921d2c19290049689944a1b7d75e0a3e8d

Support downloading SwinIR pretrained models

view details

push time in 12 days

push eventcszn/KAIR

Kai Zhang

commit sha 715d18a7ab476629c682e5959d7ffedb79e19c52

Update README.md

view details

push time in 12 days

push eventcszn/KAIR

Kai Zhang

commit sha 83ad63e6109651d19c519e2a687c4aa5aaa12bc9

Add code to download pre-trained models

view details

push time in 12 days