If you are wondering where the data on this site comes from, please visit https://api.github.com/users/eps696/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

eps696/aphantasia 274

CLIP + FFT/DWT = text-to-image

eps696/stylegan2 123

StyleGAN2 for practice

eps696/stylegan2ada 109

StyleGAN2-ada for practice

eps696/stargan2 28

StarGAN2 for practice

eps696/dx11-vvvv 0

DirectX11 Rendering within vvvv

eps696/VVVV.HTMLTexture.DX11 0

Browser for DX11, based on the ChromiumFX library. A perfect fit for UI.

eps696/VVVV.Packs.OSC 0

OSC nodes that think like I do for VVVV (using existing OSC framework)

started thoppe/NansAreNumbers

started time in 2 days

issue comment eps696/stargan2

How to run with pretrained celeba HQ?

you can use the afhq model, since it was trained without high-pass filtering (--w_hpf==0)

hdxpmc

comment created time in 2 days

issue closed eps696/stargan2

How to run with pretrained celeba HQ?

Hello, where can I download the pretrained celebA HQ model to run with test.py?

closed time in 2 days

hdxpmc

issue comment eps696/stargan2

How to run with pretrained celeba HQ?

  1. you can put the model anywhere, specifying the path to it with the --model argument (see the example command below).
  2. at the moment, this repo does not support the high-pass filtering extension which was used for the original celebA HQ model.
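A minimal sketch of such a call; --model is the argument named above and test.py comes from the question itself, while the checkpoint path is only a placeholder:

  python test.py --model <path/to/afhq_checkpoint>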
hdxpmc

comment created time in 2 days

push event eps696/stargan2

Vadim Epstein

commit sha e40bbe9e7da69d358d3b6a7ab2fa429829784ab6

macos files fix

push time in 6 days

push event eps696/stylegan2

Vadim Epstein

commit sha e72c7ef73bbc2eb0eaf9cc906e34d801cdd13d15

macos files fix

push time in 6 days

push event eps696/stylegan2ada

Vadim Epstein

commit sha c5b194d5849bf63185a43b095ce80f0d78fc9d6d

macos files fix

push time in 6 days

started snap-research/articulated-animation

started time in 6 days

issue comment eps696/stylegan2ada

I am training on RGBA images. Do we have a pre-trained model (ffhq-256.pkl) that supports RGBA on this version of yours?

thanks for reporting. i got this error as well when trying to finetune such a converted model in this repo. the reason: fmaps (the base conv filters count) in the pytorch-ada version is halved for resolutions < 512. i've added an argument --fmaps_fix to train.py, which fixes this count to match the original stylegan2 models. please confirm if this fixed the problem on your side.
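A hedged sketch of such a finetuning call; --fmaps_fix is the flag described above, while the dataset and resume argument names and paths are placeholders that may not match this repo's actual train.py interface:

  # assumed argument names; check train.py --help for the real ones
  python train.py --data <rgba_dataset_dir> --resume <converted_ffhq-256.pkl> --fmaps_fix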

BishwaBS

comment created time in 8 days

push event eps696/stylegan2ada

Vadim Epstein

commit sha cd3504b4135acca6c62458c49d33f33f68cc7e07

fix for converted models

push time in 8 days

issue comment eps696/stylegan2ada

I am training on RGBA images. Do we have a pre-trained model (ffhq-256.pkl) that supports RGBA on this version of yours?

what exactly do you mean by "tried to use"? which command did you run?

BishwaBS

comment created time in 8 days

issue comment eps696/stylegan2ada

I am training on RGBA images. Do we have a pre-trained model (ffhq-256.pkl) that supports RGBA on this version of yours?

you can easily construct such a model yourself from the standard (RGB) ffhq-256.pkl or ffhq-1024.pkl model with the scripts from this repo (see the "Tweaking models" part). then it can be finetuned with any of these repos.
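A purely hypothetical sketch of that workflow; the script name and flags below are illustrative guesses rather than this repo's documented interface, so check the "Tweaking models" section of the README for the actual commands:

  # hypothetical: a model-tweaking script that adds an alpha channel to an RGB checkpoint
  python model_convert.py --source ffhq-256.pkl --alpha
  # the resulting RGBA checkpoint can then be finetuned on your own RGBA dataset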

BishwaBS

comment created time in 12 days

push event eps696/stylegan2ada

Vadim Epstein

commit sha 40e827c5ba889109f3d5534630a213d4bce26a09

fix

push time in 17 days

push event eps696/stylegan2

Vadim Epstein

commit sha 61fecbc093cd45deffc0e29c54f38c8e006e3698

fix

push time in 17 days

started PAIR-code/knowyourdata

started time in 18 days

issue closed eps696/aphantasia

Invalid Syntax when trying to run the first time

I followed all the instructions and still I'm getting this error -

aphantasia git:(master) python clip_fft.py -t "the text" --size 1280-720 
  File "clip_fft.py", line 112
    Ys = [torch.randn(*Y.shape).cuda() for Y in [Yl_in, *Yh_in]]

closed time in 21 days

GonrasK

issue comment eps696/aphantasia

Invalid Syntax when trying to run the first time

i presume the question is answered now, so i'm closing the issue. feel free to reopen it if needed (or start a new one in case of a different problem).

GonrasK

comment created time in 21 days

issue closed eps696/stargan2

hi, on which platform did you share this repo?

Do you share this repo on Facebook or Reddit? I want to know how people first discover this wonderful repo.

closed time in 21 days

shoutOutYangJie

issue comment eps696/stargan2

hi, on which platform did you share this repo?

i guess the question is addressed, so i'm closing the issue. feel free to reopen if needed.

shoutOutYangJie

comment created time in 21 days

push event eps696/aphantasia

Vadim Epstein

commit sha f124f373d8d267ed6dd09b53d9188c2402a07a99

fix

push time in a month

issue comment eps696/aphantasia

Invalid Syntax when trying to run the first time

no way.. in theory it would work, but it would be so prohibitively slow that it just doesn't make any sense. use colab if you're out of local resources.

GonrasK

comment created time in a month

push event eps696/aphantasia

Vadim Epstein

commit sha 4b7bccbe8ac02eac739f2d366fa5c070be7d978e

bug fix

push time in a month

issue comment eps696/aphantasia

Invalid Syntax when trying to run the first time

no it wasn't - you dropped the most important part about the syntax error.

what is your python version? the repo is supported on 3.7. it might also work on 3.5, but i can't check that. older versions are definitely out of scope.
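If the SyntaxError really points at that line, one likely culprit is the star-unpacking inside the list literal ([Yl_in, *Yh_in]), which is only valid on python 3.5+. A quick check, assuming the python on your PATH is the one used to run clip_fft.py:

  python --version
  # star-unpacking in a list literal (PEP 448); raises SyntaxError on python < 3.5
  python -c "a = [1, 2]; print([0, *a])"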

GonrasK

comment created time in a month

issue comment eps696/aphantasia

Invalid Syntax when trying to run the first time

  1. please always quote the full error log, from start to end. it's impossible to tell what's wrong from such a cut (it doesn't even include the error message).
  2. i cannot reproduce it here - the mentioned command runs without problems. besides, line 112 belongs to the function for DWT-based generation, which is used only if --dwt is explicitly set (not present in your command). so it seems something is corrupt or edited on your side. try downloading a fresh copy of the repo; if the problem persists, let's check your actions step by step (see the two commands below).
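For reference, a hedged pair of invocations; the prompt and size values are copied from the report above, --dwt is the flag mentioned in point 2, and everything else is left at its defaults:

  # FFT-based generation (the default path); line 112 is not reached in this mode
  python clip_fft.py -t "the text" --size 1280-720
  # DWT-based generation; only this mode runs the code around line 112
  python clip_fft.py -t "the text" --size 1280-720 --dwt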
GonrasK

comment created time in a month

issue comment eps696/aphantasia

DeepSpeed integration for training on local cheaper GPUs.

i do not think i'd invest more time in VQGAN methods: there are tons of various flavours of it on the internet. i've put my own implementation here just for the collection; no further development is planned for it.

Vbansal21

comment created time in a month

push event eps696/stargan2

Vadim Epstein

commit sha 582b9d3394ec91cf2cf06b7c22f29df4c8dd6e40

added error message

push time in a month

issue closed eps696/stargan2

Multi GPU ?

Have you managed to run your code with multi-GPU?

Thanks, Steve

closed time in a month

thusinh1969

push event eps696/stargan2

Vadim Epstein

commit sha 16e26e9fb6986d0818158433ead8ab12fe51c780

bug fix

push time in a month

issue comment eps696/aphantasia

DeepSpeed integration for training on local cheaper GPUs.

thank you, glad to hear you liked it. alas, i've got zero experience with deepspeed yet, so i can't help with it. i'll leave this issue open, in the hope it happens one day.

have to add tho, this method is not memory-hungry like the fat GANs mostly used elsewhere, so it should work as it is with any cuda-enabled GPU. i used it on a laptop with a mobile 1060 (definitely cheaper than a 1660ti). just decrease the samples parameter (kind of a batch count), and there you go.
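A hedged example of such a reduced-memory run; the flag is assumed to be spelled --samples here (check clip_fft.py for the actual name and default), and the prompt is just a placeholder:

  # a smaller samples value means a smaller per-step batch and lower GPU memory use
  python clip_fft.py -t "some text" --samples 100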

Vbansal21

comment created time in a month