If you are wondering where the data of this site comes from, please visit https://api.github.com/users/florian-hoenicke/events. GitMemory does not store any data, but only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Florian Hönicke florian-hoenicke

florian-hoenicke/beam 0

Apache Beam is a unified programming model for Batch and Streaming

florian-hoenicke/dashboard 0

Interactive UI for analyzing logs, designing flows and viewing Hub images

florian-hoenicke/dj_gan 0

Generates electronic music.

florian-hoenicke/docs 0

TensorFlow documentation

florian-hoenicke/espeak-ng 0

eSpeak fork providing significant changes and improvements to the upstream eSpeak project

florian-hoenicke/FaceEditor 0

Unsupervised learning to find facial features.

florian-hoenicke/fulfillment-weather-nodejs 0

Integrating an API with Dialogflow's Fulfillment

florian-hoenicke/jina 0

An easier way to build neural search in the cloud

florian-hoenicke/jina-hub 0

An open registry for hosting Jina executors via container images

florian-hoenicke/marytts 0

MARY TTS -- an open-source, multilingual text-to-speech synthesis system written in pure Java

started neuro-inc/neuro-flow

started time in 3 days

started apankrat/nullboard

started time in 11 days

started benibela/xidel

started time in 12 days

started adityatelange/hugo-PaperMod

started time in 12 days

started serhii-londar/open-source-mac-os-apps

started time in 13 days

started Nightonke/Gitee

started time in 13 days

started podlove/podlove-web-player

started time in 13 days

issue closed google-research/simclr

Error when pre-training unsupervised

I have created a custom tfds dataset with unlabeled images (no label info saved in the feature dict and without a train/test split). When trying to pre-train a model unsupervised (lineareval_while_pretraining=False), I get the following errors:

1. ValueError: Unknown split "validation". Should be one of ['train']. I can bypass this error by splitting the dataset into train and validation splits.

    I0207 17:06:26.491837 139792756389696 dataset_builder.py:360] Reusing dataset wsi2outcome (/home/rene/tensorflow_datasets/wsi2outcome/1.0.0)
    Traceback (most recent call last):
      File "run.py", line 666, in <module>
        app.run(main)
      File "/home/rene/anaconda3/envs/gerstung_wsi_prep/lib/python3.8/site-packages/absl/app.py", line 303, in run
        _run_main(main, args)
      File "/home/rene/anaconda3/envs/gerstung_wsi_prep/lib/python3.8/site-packages/absl/app.py", line 251, in _run_main
        sys.exit(main(argv))
      File "run.py", line 469, in main
        num_eval_examples = builder.info.splits[FLAGS.eval_split].num_examples
      File "/home/rene/anaconda3/envs/gerstung_wsi_prep/lib/python3.8/site-packages/tensorflow_datasets/core/splits.py", line 230, in __getitem__
        instructions = tfrecords_reader.make_file_instructions(
      File "/home/rene/anaconda3/envs/gerstung_wsi_prep/lib/python3.8/site-packages/tensorflow_datasets/core/tfrecords_reader.py", line 165, in make_file_instructions
        absolute_instructions = instruction.to_absolute(name2len)
      File "/home/rene/anaconda3/envs/gerstung_wsi_prep/lib/python3.8/site-packages/tensorflow_datasets/core/tfrecords_reader.py", line 667, in to_absolute
        return [_rel_to_abs_instr(rel_instr, name2len)
      File "/home/rene/anaconda3/envs/gerstung_wsi_prep/lib/python3.8/site-packages/tensorflow_datasets/core/tfrecords_reader.py", line 667, in <listcomp>
        return [_rel_to_abs_instr(rel_instr, name2len)
      File "/home/rene/anaconda3/envs/gerstung_wsi_prep/lib/python3.8/site-packages/tensorflow_datasets/core/tfrecords_reader.py", line 494, in _rel_to_abs_instr
        raise ValueError('Unknown split "{}". Should be one of {}.'.format(
    ValueError: Unknown split "validation". Should be one of ['train'].

I can bypass this error by re-creating the custom tfds dataset with a train/test split, but I don't quite understand why I need to specify a train/validation/test split in the unsupervised setting.

2. If I split the custom tfds dataset into test, training, and validation, I run into the error that labels are expected. As this is an unlabeled dataset (I have a smaller labeled one for fine-tuning later), I am not sure what to make of this.

    I0207 17:32:02.563346 139707779163968 dataset_builder.py:360] Reusing dataset wsi2outcome (/home/rene/tensorflow_datasets/wsi2outcome/1.0.0)
    Traceback (most recent call last):
      File "run.py", line 666, in <module>
        app.run(main)
      File "/home/rene/anaconda3/envs/gerstung_wsi_prep/lib/python3.8/site-packages/absl/app.py", line 303, in run
        _run_main(main, args)
      File "/home/rene/anaconda3/envs/gerstung_wsi_prep/lib/python3.8/site-packages/absl/app.py", line 251, in _run_main
        sys.exit(main(argv))
      File "run.py", line 470, in main
        num_classes = builder.info.features['label'].num_classes
      File "/home/rene/anaconda3/envs/gerstung_wsi_prep/lib/python3.8/site-packages/tensorflow_datasets/core/features/features_dict.py", line 142, in __getitem__
        return self._feature_dict[key]
    KeyError: 'label'

Could someone kindly assist me in resolving the errors? Thank you in advance.

I am running the following command: python run.py --train_mode=pretrain --train_batch_size=512 --train_epochs=10 --learning_rate=1.0 --weight_decay=1e-4 --lineareval_while_pretraining=False --temperature=0.5 --dataset=wsi2outcome --image_size=32 --resnet_depth=18 --use_blur=False --color_jitter_strength=0.5 --model_dir=/home/rene/ml/wsi2outcome/src/models/self_supervised/simclr2/model --use_tpu=False
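For reference, a minimal sketch of a workaround that satisfies the two lookups shown in the tracebacks (builder.info.splits['validation'] and builder.info.features['label']): give the custom builder a validation split and a dummy ClassLabel feature. The builder name, image shape, and example counts below are hypothetical, and whether pretraining then runs end to end still depends on how run.py consumes the label downstream.

import numpy as np
import tensorflow_datasets as tfds

class UnlabeledImages(tfds.core.GeneratorBasedBuilder):
  """Hypothetical unlabeled-image builder with a dummy label and a validation split."""

  VERSION = tfds.core.Version('1.0.0')

  def _info(self):
    return tfds.core.DatasetInfo(
        builder=self,
        features=tfds.features.FeaturesDict({
            'image': tfds.features.Image(shape=(32, 32, 3)),
            # Dummy label, present only so builder.info.features['label'] exists.
            'label': tfds.features.ClassLabel(num_classes=2),
        }),
    )

  def _split_generators(self, dl_manager):
    # Expose a validation split so builder.info.splits['validation'] exists.
    return {
        'train': self._generate_examples(num_examples=1000),
        'validation': self._generate_examples(num_examples=100),
    }

  def _generate_examples(self, num_examples):
    for i in range(num_examples):
      # Placeholder pixels; a real builder would read the actual image files here.
      image = np.zeros((32, 32, 3), dtype=np.uint8)
      yield i, {'image': image, 'label': 0}

# Usage: UnlabeledImages(data_dir='/home/rene/tensorflow_datasets').download_and_prepare()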

closed time in 14 days

Rebero

issue comment google-research/simclr

Train on consecutive video frames.

Thank you for the link!

nofreewill42

comment created time in 16 days

issue comment google-research/simclr

Train on consecutive video frames.

Sure, some work has explored extending SimCLR to video (e.g. https://arxiv.org/abs/2008.03800).

nofreewill42

comment created time in 17 days

issue opened google-research/simclr

Train on consecutive video frames.

Wouldn't it make sense to not just train on two different transformations of the same image, but on transformations of two close enough frames of the same video?

For faster video loading, and to require much less storage, something like this could be done: https://gist.github.com/nofreewill42/1ab604a463561b118702d39a51bbb623

But for optimal and still fast training, we could just cut the video into its individual frames, at the cost of an order of magnitude more storage.
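As a rough sketch of that idea (not from the original post), nearby frames could be paired as the two "views" before the usual SimCLR augmentations. Here `frames` is assumed to be an in-memory array of already-decoded frames, and `max_gap` is a hypothetical parameter controlling how far apart the paired frames may be.

import numpy as np
import tensorflow as tf

def paired_frame_dataset(frames, max_gap=4):
  """Build a dataset of (frame_i, frame_j) positive pairs from nearby video frames.

  `frames` is assumed to be a [num_frames, H, W, C] uint8 array; in a real
  pipeline both frames would then go through the usual SimCLR augmentations
  before the contrastive loss.
  """
  num_frames = len(frames)

  def sample_pair(i):
    # Pick a second frame at most `max_gap` steps ahead, clamped to the last frame.
    offset = tf.random.uniform([], 1, max_gap + 1, dtype=tf.int64)
    j = tf.minimum(i + offset, num_frames - 1)
    return tf.gather(frames, i), tf.gather(frames, j)

  return (tf.data.Dataset.range(num_frames)
          .shuffle(num_frames)
          .map(sample_pair, num_parallel_calls=tf.data.AUTOTUNE))

# Example: 100 dummy frames of size 64x64x3.
frames = np.zeros((100, 64, 64, 3), dtype=np.uint8)
ds = paired_frame_dataset(frames).batch(8)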

created time in 18 days

issue opened google-research/simclr

Error when pre-training unsupervised

I have created a custom tfds dataset with unlabeled images (no label info saved in the feature dict and without a train/test split). When trying to pre-train a model unsupervised (lineareval_while_pretraining=False), I get the following errors:

1. ValueError: Unknown split "validation". Should be one of ['train']. I can bypass this error by splitting the dataset into train and validation splits.

    I0207 17:06:26.491837 139792756389696 dataset_builder.py:360] Reusing dataset wsi2outcome (/home/rene/tensorflow_datasets/wsi2outcome/1.0.0)
    Traceback (most recent call last):
      File "run.py", line 666, in <module>
        app.run(main)
      File "/home/rene/anaconda3/envs/gerstung_wsi_prep/lib/python3.8/site-packages/absl/app.py", line 303, in run
        _run_main(main, args)
      File "/home/rene/anaconda3/envs/gerstung_wsi_prep/lib/python3.8/site-packages/absl/app.py", line 251, in _run_main
        sys.exit(main(argv))
      File "run.py", line 469, in main
        num_eval_examples = builder.info.splits[FLAGS.eval_split].num_examples
      File "/home/rene/anaconda3/envs/gerstung_wsi_prep/lib/python3.8/site-packages/tensorflow_datasets/core/splits.py", line 230, in __getitem__
        instructions = tfrecords_reader.make_file_instructions(
      File "/home/rene/anaconda3/envs/gerstung_wsi_prep/lib/python3.8/site-packages/tensorflow_datasets/core/tfrecords_reader.py", line 165, in make_file_instructions
        absolute_instructions = instruction.to_absolute(name2len)
      File "/home/rene/anaconda3/envs/gerstung_wsi_prep/lib/python3.8/site-packages/tensorflow_datasets/core/tfrecords_reader.py", line 667, in to_absolute
        return [_rel_to_abs_instr(rel_instr, name2len)
      File "/home/rene/anaconda3/envs/gerstung_wsi_prep/lib/python3.8/site-packages/tensorflow_datasets/core/tfrecords_reader.py", line 667, in <listcomp>
        return [_rel_to_abs_instr(rel_instr, name2len)
      File "/home/rene/anaconda3/envs/gerstung_wsi_prep/lib/python3.8/site-packages/tensorflow_datasets/core/tfrecords_reader.py", line 494, in _rel_to_abs_instr
        raise ValueError('Unknown split "{}". Should be one of {}.'.format(
    ValueError: Unknown split "validation". Should be one of ['train'].

I can bypass this error by re-creating the custom tfds dataset with a train/test split, but I don't quite understand why I need to specify a train/validation/test split in the unsupervised setting.

2. If I split the custom tfds dataset into test, training, and validation, I run into the error that labels are expected. As this is an unlabeled dataset (I have a smaller labeled one for fine-tuning later), I am not sure what to make of this.

    I0207 17:32:02.563346 139707779163968 dataset_builder.py:360] Reusing dataset wsi2outcome (/home/rene/tensorflow_datasets/wsi2outcome/1.0.0)
    Traceback (most recent call last):
      File "run.py", line 666, in <module>
        app.run(main)
      File "/home/rene/anaconda3/envs/gerstung_wsi_prep/lib/python3.8/site-packages/absl/app.py", line 303, in run
        _run_main(main, args)
      File "/home/rene/anaconda3/envs/gerstung_wsi_prep/lib/python3.8/site-packages/absl/app.py", line 251, in _run_main
        sys.exit(main(argv))
      File "run.py", line 470, in main
        num_classes = builder.info.features['label'].num_classes
      File "/home/rene/anaconda3/envs/gerstung_wsi_prep/lib/python3.8/site-packages/tensorflow_datasets/core/features/features_dict.py", line 142, in __getitem__
        return self._feature_dict[key]
    KeyError: 'label'

Could someone kindly assist me in resolving the errors? Thank you in advance.

I am running the following command: python run.py --train_mode=pretrain --train_batch_size=512 --train_epochs=10 --learning_rate=1.0 --weight_decay=1e-4 --lineareval_while_pretraining=False --temperature=0.5 --dataset=wsi2outcome --image_size=32 --resnet_depth=18 --use_blur=False --color_jitter_strength=0.5 --model_dir=/home/rene/ml/wsi2outcome/src/models/self_supervised/simclr2/model --use_tpu=False

created time in 20 days

started cylc/cylc-flow

started time in 23 days

started wolever/autorepr

started time in 24 days

issue closed google-research/simclr

Fine-tuned the pretrained weights

Hi! Thanks for your great work.

I had some trouble when I wanted to fine-tune the pretrained weights you provided. More precisely, if I call saved_model(image, trainable=True), I get a LookupError:

    finetuning_tf2.py:942 update  *
        grads = tape.gradient(loss_t, dense_layer_weights)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/backprop.py:1079 gradient  **
        unconnected_gradients=unconnected_gradients)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/imperative_grad.py:77 imperative_grad
        compat.as_str(unconnected_gradients.value))
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py:808 _backward_function
        return self._rewrite_forward_and_call_backward(call_op, *args)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py:723 _rewrite_forward_and_call_backward
        forward_function, backwards_function = self.forward_backward(len(doutputs))
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py:632 forward_backward
        forward, backward = self._construct_forward_backward(num_doutputs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py:680 _construct_forward_backward
        func_graph=backwards_graph)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py:998 func_graph_from_py_func
        func_outputs = python_func(*func_args, **func_kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py:670 _backprop_function
        src_graph=self._func_graph)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gradients_util.py:684 _GradientsHelper
        lambda: grad_fn(op, *out_grads))
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gradients_util.py:340 _MaybeCompile
        return grad_fn()  # Exit early
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gradients_util.py:684 <lambda>
        lambda: grad_fn(op, *out_grads))
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py:723 _rewrite_forward_and_call_backward
        forward_function, backwards_function = self.forward_backward(len(doutputs))
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py:632 forward_backward
        forward, backward = self._construct_forward_backward(num_doutputs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py:680 _construct_forward_backward
        func_graph=backwards_graph)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py:998 func_graph_from_py_func
        func_outputs = python_func(*func_args, **func_kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py:670 _backprop_function
        src_graph=self._func_graph)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gradients_util.py:638 _GradientsHelper
        (op.name, op.type))

    LookupError: No gradient defined for operation 'resnet/block_group4/bottleneck_block_15/batch_norm_relu_52/sync_batch_normalization_52/moments/IdentityN_1' (op type: IdentityN)

The model is a _UserObject, which is also inconvenient for some Keras model usages. The TensorFlow version is 2.5.0-dev20210114. I am a PyTorch user, so maybe I missed something important in TensorFlow. I would be very appreciative if you could give me some advice.

Thanks!

closed time in a month

summelon

issue comment google-research/simclr

Fine-tuned the pretrained weights

Got it. Thank you!

summelon

comment created time in a month

issue comment google-research/simclr

Fine-tuned the pretrained weights

I think that information is lost at this point. SavedModel saves the object hierarchy to track variables, but other information is ignored, so the restored model isn't actually a tf.keras.Model and won't have the summary function.
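(Not part of the original reply.) If the goal is mainly to drop the restored object into a Keras model and list its trainable variables, one hedged workaround is to wrap the _UserObject in a custom Keras layer. The output key 'final_avg_pool' below is an assumption about the exported signature and may differ per checkpoint; only the saved_model(images, trainable=...) call itself is taken from this thread.

import tensorflow as tf

class SimCLRBackbone(tf.keras.layers.Layer):
  """Wraps a restored SimCLR SavedModel (_UserObject) so it can sit inside a Keras model."""

  def __init__(self, restored, **kwargs):
    super().__init__(**kwargs)
    # Keeping a reference lets this layer track the restored object's variables.
    self.restored = restored

  def call(self, images, training=False):
    outputs = self.restored(images, trainable=training)
    # 'final_avg_pool' is an assumed output key; inspect outputs.keys() on the real checkpoint.
    return outputs['final_avg_pool']

# Usage sketch (checkpoint path and input size are assumptions):
# restored = tf.saved_model.load('gs://simclr-checkpoints-tf2/...')
# inputs = tf.keras.Input((224, 224, 3))
# features = SimCLRBackbone(restored)(inputs)
# model = tf.keras.Model(inputs, features)
# model.summary()  # works, though layer-level detail inside the SavedModel is still not recoverable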

summelon

comment created time in a month

issue closed google-research/simclr

Did you freeze any layer in the fine-tune step for the results reported in simclr-v2 paper?

Hi there. I wonder, for the results reported in the paper, did you fine-tune the whole network, meaning the flag fine_tune_after_block == -1? The paper says "For fine-tuning, by default we fine-tune from the first layer of the projection head for 1%/10% of labeled examples, but from the input of the projection head when 100% labels are present.", so I initially thought you only fine-tuned the weights after this layer (i.e. froze all the layers before it), but that seems to contradict the rest of the paper and the codebase. Please help clarify. Thanks!

closed time in a month

cp1123

issue closed google-research/simclr

python version

Hi, what's the Python version?

closed time in a month

gscahhh

issue comment google-research/simclr

Did you freeze any layer in the fine-tune step for the results reported in simclr-v2 paper?

Yes, the fine-tuning means fine-tuning the whole network. The description "fine-tune from the first layer of the projection head" means we keep the first layer of the projection head and add a linear head with softmax on top of that when fine-tuning.

Hope that makes it clear.
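(Illustrative only, not from the original reply.) A minimal Keras sketch of that setup, using randomly initialized stand-ins for the pretrained pieces; the dimensions, class count, and optimizer settings are assumptions, and in practice the backbone and first projection layer would be loaded from the SimCLR v2 checkpoint.

import tensorflow as tf

FEATURE_DIM = 2048   # assumption: ResNet-50 average-pooled feature width
PROJ_DIM = 2048      # assumption: width of the first projection-head layer
NUM_CLASSES = 1000   # assumption: ImageNet-style label count

# Stand-ins for the pretrained encoder and the first projection-head layer.
backbone = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 7, strides=2, padding='same', activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(FEATURE_DIM, activation='relu'),
], name='pretrained_encoder')
first_proj_layer = tf.keras.layers.Dense(PROJ_DIM, activation='relu',
                                         name='projection_head_layer_0')

inputs = tf.keras.Input((224, 224, 3))
hiddens = first_proj_layer(backbone(inputs))           # keep the first projection layer
logits = tf.keras.layers.Dense(NUM_CLASSES)(hiddens)   # new linear head (softmax applied in the loss)
model = tf.keras.Model(inputs, logits)

# "Fine-tune the whole network": nothing is frozen.
model.compile(
    optimizer=tf.keras.optimizers.SGD(0.01, momentum=0.9),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])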

cp1123

comment created time in a month

issue comment google-research/simclr

Data augmentation for supervised models

No, they are trained with standard ResNet settings (e.g. random crop augmentation, no color augmentation or Gaussian blur). With the extra augmentations and a longer training schedule, you could expect the supervised models to improve by roughly 0.5-2%.
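(Illustrative, not from the reply.) Roughly, "standard ResNet settings" means only a random crop with resize plus a random horizontal flip; a minimal sketch, with the resize margin and crop size as assumptions:

import tensorflow as tf

def standard_supervised_preprocess(image, size=224):
  # Random crop (after a slight upscale) and random horizontal flip only;
  # no color distortion and no Gaussian blur, unlike the SimCLR pretraining pipeline.
  image = tf.image.resize(image, (size + 32, size + 32))
  image = tf.image.random_crop(image, (size, size, 3))
  image = tf.image.random_flip_left_right(image)
  return image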

Dashi-1997

comment created time in a month

startedpmndrs/jotai

started time in a month

issue opened google-research/simclr

Data augmentation for supervised models

Hello, does the supervised comparison model you provided here use the same data augmentation methods (random crop with resize and random flip, random color distortion, and random Gaussian blur)?

created time in a month

push event google-research/simclr

Ting Chen

commit sha 6bf69ce127ae33e181e1a6c5777c84570cb5d147

fix a few small issues

push time in a month

issue comment google-research/simclr

Fine-tuned the pretrained weights

Cool! This hack fixed the error! Thank you! I guess the reason it worked before is that you set the loaded model as trainable=False.

By the way, there is another problem that still confuses me. How can I use the model imported from tf.saved_model.load() as a Keras model (i.e. call model.summary()) and check details like layer information or trainable parameters? Specifically, the model from your gs://simclr-checkpoints-tf2/.

summelon

comment created time in a month

issue comment google-research/simclr

Fine-tuned the pretrained weights

You are right, I can reproduce it now. I don't know how it was working before. It does seem like missing custom gradients are the problem. My guess is these are custom gradients needed for the all-reduce in distributed models; however, for a single device they should not be required. Maybe we need to re-export these.

One extreme hack to get this specific model working would be to run the following code:

import tensorflow as tf

# Pass-through gradient: return the incoming gradients unchanged for each IdentityN output.
def _IdNGrad(_, *grad):
  return grad
custom_grads = ['CustomGradient-11622', 'CustomGradient-11626', 'CustomGradient-11630', 'CustomGradient-11700', 'CustomGradient-11704', 'CustomGradient-11708', 'CustomGradient-11775', 'CustomGradient-11779', 'CustomGradient-11783', 'CustomGradient-11851', 'CustomGradient-11855', 'CustomGradient-11859', 'CustomGradient-11927', 'CustomGradient-11931', 'CustomGradient-11935', 'CustomGradient-12004', 'CustomGradient-12008', 'CustomGradient-12012', 'CustomGradient-12080', 'CustomGradient-12084', 'CustomGradient-12088', 'CustomGradient-12156', 'CustomGradient-12160', 'CustomGradient-12164', 'CustomGradient-12233', 'CustomGradient-12237', 'CustomGradient-12241', 'CustomGradient-12309', 'CustomGradient-12313', 'CustomGradient-12317', 'CustomGradient-12385', 'CustomGradient-12389', 'CustomGradient-12393', 'CustomGradient-12465', 'CustomGradient-12469', 'CustomGradient-12473', 'CustomGradient-12540', 'CustomGradient-12544', 'CustomGradient-12548', 'CustomGradient-12618', 'CustomGradient-12622', 'CustomGradient-12626', 'CustomGradient-12694', 'CustomGradient-12698', 'CustomGradient-12702', 'CustomGradient-12771', 'CustomGradient-12775', 'CustomGradient-12779', 'CustomGradient-12847', 'CustomGradient-12851', 'CustomGradient-12855', 'CustomGradient-12923', 'CustomGradient-12927', 'CustomGradient-12931', 'CustomGradient-13000', 'CustomGradient-13004', 'CustomGradient-13008', 'CustomGradient-13076', 'CustomGradient-13080', 'CustomGradient-13084', 'CustomGradient-13152', 'CustomGradient-13156', 'CustomGradient-13160', 'CustomGradient-13229', 'CustomGradient-13233', 'CustomGradient-13237', 'CustomGradient-13305', 'CustomGradient-13309', 'CustomGradient-13313', 'CustomGradient-13381', 'CustomGradient-13385', 'CustomGradient-13389', 'CustomGradient-13461', 'CustomGradient-13465', 'CustomGradient-13469', 'CustomGradient-13536', 'CustomGradient-13540', 'CustomGradient-13544', 'CustomGradient-13614', 'CustomGradient-13618', 'CustomGradient-13622', 'CustomGradient-13690', 'CustomGradient-13694', 'CustomGradient-13698', 'CustomGradient-13767', 'CustomGradient-13771', 'CustomGradient-13775', 'CustomGradient-13843', 'CustomGradient-13847', 'CustomGradient-13851', 'CustomGradient-13919', 'CustomGradient-13923', 'CustomGradient-13927', 'CustomGradient-13996', 'CustomGradient-14000', 'CustomGradient-14004', 'CustomGradient-14072', 'CustomGradient-14076', 'CustomGradient-14080', 'CustomGradient-14148', 'CustomGradient-14152', 'CustomGradient-14156', 'CustomGradient-14225', 'CustomGradient-14229', 'CustomGradient-14233', 'CustomGradient-14301', 'CustomGradient-14305', 'CustomGradient-14309', 'CustomGradient-14377', 'CustomGradient-14381', 'CustomGradient-14385', 'CustomGradient-14454', 'CustomGradient-14458', 'CustomGradient-14462', 'CustomGradient-14530', 'CustomGradient-14534', 'CustomGradient-14538', 'CustomGradient-14606', 'CustomGradient-14610', 'CustomGradient-14614', 'CustomGradient-14683', 'CustomGradient-14687', 'CustomGradient-14691', 'CustomGradient-14759', 'CustomGradient-14763', 'CustomGradient-14767', 'CustomGradient-14835', 'CustomGradient-14839', 'CustomGradient-14843', 'CustomGradient-14915', 'CustomGradient-14919', 'CustomGradient-14923', 'CustomGradient-14990', 'CustomGradient-14994', 'CustomGradient-14998', 'CustomGradient-15068', 'CustomGradient-15072', 'CustomGradient-15076', 'CustomGradient-15144', 'CustomGradient-15148', 'CustomGradient-15152', 'CustomGradient-15221', 'CustomGradient-15225', 'CustomGradient-15229', 'CustomGradient-15297', 'CustomGradient-15301', 'CustomGradient-15305', 
'CustomGradient-15373', 'CustomGradient-15377', 'CustomGradient-15381', 'CustomGradient-15450', 'CustomGradient-15454', 'CustomGradient-15458', 'CustomGradient-15526', 'CustomGradient-15530', 'CustomGradient-15534', 'CustomGradient-15602', 'CustomGradient-15606', 'CustomGradient-15610', 'CustomGradient-15685', 'CustomGradient-15689', 'CustomGradient-15693', 'CustomGradient-15750', 'CustomGradient-15754', 'CustomGradient-15758', 'CustomGradient-15815', 'CustomGradient-15819', 'CustomGradient-15823']
for name in custom_grads:
  try:
    tf.RegisterGradient(name)(_IdNGrad)
  except:
    pass  # this gradient name is already registered; ignore

I will try to dig deeper into this and maybe re-export the SavedModels so that they do not need custom gradients.

summelon

comment created time in a month

issue opened google-research/simclr

Did you freeze any layer in the fine-tune step for the results reported in simclr-v2 paper?

Hi there. I wonder, for the results reported in the paper, did you fine-tune the whole network, meaning the flag fine_tune_after_block == -1? The paper says "For fine-tuning, by default we fine-tune from the first layer of the projection head for 1%/10% of labeled examples, but from the input of the projection head when 100% labels are present.", so I initially thought you only fine-tuned the weights after this layer (i.e. froze all the layers before it), but that seems to contradict the rest of the paper and the codebase. Please help clarify. Thanks!

created time in a month

issue comment google-research/simclr

Fine-tuned the pretrained weights

@saxenasaurabh I have tried again, and the error still occurs. Would you like to check this colab?

Thanks for your help!

summelon

comment created time in a month

issue comment google-research/simclr

Fine-tuned the pretrained weights

I just tried your code and it seems to work fine. "No gradient defined for operation" indicates some problem with importing TensorFlow. Maybe try restarting your colab runtime?

summelon

comment created time in a month