
ganguli-lab/RetinalResources 36

Code for "A Unified Theory of Early Visual Representations from Retina to Cortex through Anatomically Constrained Deep CNNs", ICLR 2019

jlindsey15/ComputerChess 1

Computer Chess Player

flyconnectome/hemibrain_olf_data 0

Summary data on identified FIB hemibrain neurons including olfactory PNs

push event jlindsey15/FeedbackAndLocalPlasticity

Jack Lindsey

commit sha 4864cb3f562ee472d63184b3dadf69089c230ce6

Create README.md


push time in 3 days

create branch jlindsey15/FeedbackAndLocalPlasticity

branch : master

created branch time in 4 days

created repository jlindsey15/FeedbackAndLocalPlasticity

created time in 9 days

created repository jlindsey15/LearningToLearnWithFeedbackAndLocalPlasticity

created time in 9 days

push event jlindsey15/jlindsey15.github.io

John Lindsey

commit sha 0d2a044ada4ed08e2fed24a636e9feb373abfe7d

update


push time in a month

push event jlindsey15/jlindsey15.github.io

John Lindsey

commit sha 1a08a8d875e581612ed929657ba40314734f86d9

update


push time in a month

push event jlindsey15/jlindsey15.github.io

John Lindsey

commit sha 9585a81c052916d16b530326f378b46b9f59d28f

update


push time in a month

issue comment HobbitLong/SupContrast

SupContrast with Moco trick

Hi! Just wanted to second this: it would be great to have the code and a pre-trained model for the SupContrast network on ImageNet.

richcmwang

comment created time in 2 months

issue comment google-research/simclr

Checkpoints for baseline networks?

Got it, thanks -- to confirm, do you still use the cosine learning-rate schedule for the supervised models released here?

VSehwag

comment created time in 2 months
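The cosine schedule asked about in the comment above can be reproduced with the stock PyTorch scheduler. A minimal sketch, assuming plain SGD and `CosineAnnealingLR`; the epoch count and base learning rate below are placeholders, not values from the repo:

```python
# Assumption: plain SGD + CosineAnnealingLR; the repo's actual optimizer
# settings, epoch count, and base LR are not stated in the thread.
import torch

model = torch.nn.Linear(10, 10)  # stand-in for the real network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=90)

lrs = []
for epoch in range(90):
    optimizer.step()  # stand-in for a real training step
    lrs.append(optimizer.param_groups[0]["lr"])
    scheduler.step()
# the LR decays from 0.1 toward 0 along a half-cosine over the 90 epochs
```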

issue comment google-research/simclr

Checkpoints for baseline networks?

I think it can be mostly/entirely attributed to the implementation -- the gap between the two supervised networks is similar to the gap between SimCLR and MoCo. Also, sparsity is not the only metric where this happens -- dimensionality is another, as is invariance to data augmentations. My main interest is actually in how these models compare to neural responses in visual cortex, and there are systematic differences there as well.

I had considered the input normalization difference -- at one point I tried using the torchvision training code without any input normalization, but it didn't seem to make a difference. So I figure it is some other small detail, perhaps the learning rate schedule. If any other differences come to mind, let me know. For instance, was a weight decay of 10^-6 used for the supervised networks in this repo? I think the torchvision standard is 10^-4.

VSehwag

comment created time in 2 months
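The comment above names dimensionality as another metric that differs between the two codebases. One common effective-dimensionality measure for activations is the participation ratio of the covariance eigenspectrum; whether this is the measure actually used in the thread is an assumption, but a sketch looks like:

```python
# Assumption: "dimensionality" is illustrated here with the participation
# ratio of the activation covariance spectrum; the comment does not name
# the exact measure used.
import torch

def participation_ratio(acts: torch.Tensor) -> float:
    """Effective dimensionality of an (n_samples, n_features) activation matrix."""
    centered = acts - acts.mean(dim=0, keepdim=True)
    cov = centered.T @ centered / (acts.shape[0] - 1)
    eig = torch.linalg.eigvalsh(cov).clamp(min=0)  # real, non-negative spectrum
    return (eig.sum() ** 2 / (eig ** 2).sum()).item()

torch.manual_seed(0)
acts = torch.randn(1000, 64)  # isotropic stand-in for real activations
pr = participation_ratio(acts)  # close to the ambient dimension (64) here
```

The ratio equals the feature count for perfectly isotropic responses and approaches 1 when a single direction dominates, which makes it a convenient single number for comparing models.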

issue comment google-research/simclr

Checkpoints for baseline networks?

Hi! Another question along these lines. I am doing some analyses that involve comparing the representations learned by self-supervised networks (like SimCLR and MoCo) vs. supervised ones. But I notice there are systematic differences in various aspects of the representations of networks in this repo (SimCLR, SimCLR v2, and the provided supervised baselines) and those from PyTorch code (MoCo pretrained models, and pretrained supervised models from torchvision). For instance, the activations of models produced by this repo are consistently much sparser (even comparing the supervised network from this repo to the supervised network from torchvision). Do you have any idea how the training code in this repo might deviate from PyTorch defaults to produce such differences?

VSehwag

comment created time in 2 months

push event jlindsey15/jlindsey15.github.io

John Lindsey

commit sha a9103b1d989bc8e1226d7c63c05527bca0c1f6b1

update


push time in 3 months

push event jlindsey15/jlindsey15.github.io

John Lindsey

commit sha ef6c1c5227936351692ae554ed8c59a9bfce7cbd

update


push time in 3 months
