profile
Christopher Olah (colah) · San Francisco · colah.github.io
I want to understand things clearly and explain them well. @openai, formerly @brain-research.

colah/ImplicitCAD 822

A math-inspired CAD program in Haskell. CSG, bevels, and shells; 2D & 3D; gcode generation...

colah/Conv-Nets-Series 182

A series of blog posts on convolutional neural networks and their generalizations.

colah/HaskSymb 63

An Experiment in Haskell Symbolic Algebra

colah/NLP-RNNs-Representations-Post 35

A blog post using word embeddings and RNNs to explain representations.

colah/colah-essays 33

A place for me to keep essays/papers I'm working on...

colah/Motivated-Topology 23

A topology textbook with a hubristic title

colah/ByronTrialNotes 20

Notes from the trial of Byron Sonne

colah/data 13

package manager for datasets

colah/implicitcad.org 8

Website for ImplicitCAD

issue comment tensorflow/lucid

AssertionError in calling render_vis

It sounds like you are running into numerical stability issues while visualizing your model. This could have many causes -- for example, the objective you are visualizing could be exploding, or your model may have unstable gradients.

Please keep in mind that lucid is research code.

Uiuran

comment created time in 3 days

issue comment tensorflow/lucid

Does lucid work outside colab?

Thanks for reaching out @cristinasegalin and @Uiuran .

Lucid itself does not depend on colab, but code in the notebooks often relies on colab features to create visualizations of lucid's output. I personally use lucid outside of colab on a day to day basis, as do many of my colleagues.

In order to say more, I'd need you to report specific failures with debugging details.

Please keep in mind that Lucid is research code. It's maintained by researchers actively engaged in interpretability research. We share it with the community in the hope that it is helpful, but we don't have the capacity to provide detailed user support or debugging.

cristinasegalin

comment created time in 3 days

issue comment tensorflow/lucid

no model.layers when loaded inceptionv1

Hi Cristina,

Is it possible you're using an out of date version of lucid?

Chris

On Thu, Feb 13, 2020 at 11:17 AM Cristina Segalin notifications@github.com wrote:

Trying to run the infinite_patterns tutorial on lucid, but when loading InceptionV1 (and not inceptionv1_caffe, which also gives an error), there is no layers attribute on the model

AttributeError                            Traceback (most recent call last)
in
----> 1 model.layers

AttributeError: 'InceptionV1' object has no attribute 'layers'


cristinasegalin

comment created time in 11 days

issue closed tensorflow/lucid

[Question] DeepDream without VGG ?

Hi all, this is really a question about DeepDream; since I can't find a better place to ask it (maybe you can point me somewhere?), I'll ask it here -- after all, Lucid is the heir of DeepDream AFAIK.

Is it possible to do the same thing you do with DeepDream (i.e. objective maximization of hidden channels) without using VGG? I tried it with InceptionV4 but still without any success. I am getting a lot of dimensionality errors in the gradients, but when I try with VGG it works fine, so the bug is not in the code... Any example of how this could work? Thanks.

closed time in 22 days

Uiuran

issue comment tensorflow/lucid

[Question] DeepDream without VGG ?

DeepDream works fine with Inception -- just take the tutorial notebook on the front page and use objectives.deepdream("mixed4d").

Unfortunately, we're unable to help you debug your own implementation.
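
For reference, a minimal sketch of that suggestion, assuming the standard lucid tutorial setup (exact names may vary across lucid versions):

import lucid.modelzoo.vision_models as models
from lucid.optvis import objectives, render

model = models.InceptionV1()
model.load_graphdef()

# DeepDream-style objective: maximize the activations of an entire layer.
_ = render.render_vis(model, objectives.deepdream("mixed4d"))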

Uiuran

comment created time in 22 days

pull request comment tensorflow/lucid

New synthetic stimuli module.

... I think it's possible that these are now so slow to render that rendering them is the bottleneck on collecting activations. :(

colah

comment created time in 24 days

push event tensorflow/lucid

Chris Olah

commit sha 7d0f23348a72f21c44fa531ffe8bf336e2df9806

fix bug with fade_coef

view details

push time in 25 days

pull request comment tensorflow/lucid

New synthetic stimuli module.

(I listed Ludwig and both Nicks as reviewers to give them an FYI; I only need a review from one person.)

colah

comment created time in 25 days

PR opened tensorflow/lucid

New synthetic stimuli module.

New module for generating synthetic image stimuli. It presently only supports a (very general) rounded curves stimulus, but provides a more general framework.

This has two major changes from previous iterations (found in colab notebooks):

  • Creating a "sampling" function which gives one a great deal of flexibility as to how a stimulus is rendered. I think this is a natural division of responsibility, and gives end users a lot of control over their stimulus family.

  • Switching from a traditional curve stimulus to a more flexible "rounded corner" stimulus (based on the rmin function; see the sketch below).
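
As a rough illustration of the rmin-based rounding idea, here is a hypothetical sketch (not the module's actual code):

import numpy as np

def rmin(a, b, r):
    # Smooth minimum: behaves like min(a, b) away from the crossover,
    # but rounds the corner within radius r.
    h = np.clip(0.5 + 0.5 * (b - a) / r, 0.0, 1.0)
    return b * (1.0 - h) + a * h - r * h * (1.0 - h)

def rounded_corner_stimulus(size=128, r=0.3):
    # Two half-plane distance fields; rmin rounds their intersection.
    xs, ys = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size))
    d = rmin(xs, ys, r)
    # A "sampling" function could antialias here; thresholding keeps it simple.
    return (d > 0).astype(np.float32)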

+176 -0

0 comment

1 changed file

pr created time in 25 days

create branch tensorflow/lucid

branch : synthetic_stimuli

created branch time in 25 days

issue closed tensorflow/lucid

ChannelReducer breaks with new version of sklearn

AttributeError                            Traceback (most recent call last)
<ipython-input-12-060e126c4e4f> in <module>
----> 1 acts_ = ChannelReducer(6, "NMF").fit_transform(np.maximum(activations,0))

/opt/conda/lib/python3.7/site-packages/lucid/misc/channel_reducer.py in __init__(self, n_components, reduction_alg, **kwargs)
     52     for name in dir(sklearn.decomposition):
     53       obj = sklearn.decomposition.__getattribute__(name)
---> 54       if isinstance(obj, type) and issubclass(obj, sklearn.decomposition.base.BaseEstimator):
     55         algorithm_map[name] = obj
     56     if isinstance(reduction_alg, str):

AttributeError: module 'sklearn.decomposition' has no attribute 'base'

closed time in 25 days

mcleavey

issue comment tensorflow/lucid

ChannelReducer breaks with new version of sklearn

Yes! This is now resolved thanks to @bmiselis :)

mcleavey

comment created time in 25 days

delete branch tensorflow/lucid

delete branch : get_activations

delete time in a month

push event tensorflow/lucid

Chris Olah

commit sha f548e02f72252600f0ae4d15f5836a1ce4d9ed76

Add memory efficient model.get_activations()

view details

Chris Olah

commit sha b33ada38e0428853d86440b333ae6d977c3d42c1

Improvements based on Michael's comments

view details

Christopher Olah

commit sha d7296ec24f3d340955d9e88d64f92493027e087d

Merge pull request #224 from tensorflow/get_activations Add memory efficient model.get_activations()

view details

push time in a month

PR merged tensorflow/lucid

Add memory efficient model.get_activations()

The goal of this PR is to add a model.get_activations() function which can get activations for large n-dimensional families of lazily generated images. In the course of accomplishing this, I had to do a few other things:

(1) Add a number of utilities for using iterable workflows in n-dimensions. See lucid.misc.iter_nd_utils.

(2) Fix an annoying kludge with model.import_graph(). Lucid's convention for accessing the internals of an imported model is to use the T() accessor (inspired by $ in jQuery). But for weird historical reasons, only render.import_model returned T, not model.import_graph. To avoid needing to depend on render, I added this support to import_graph (where it really always should have been).

(3) Finally, add lazy-iteration-based get_activations and get_activations_iter (more flexible, but not exposed by default). get_activations() is added as a method on model to make it easily accessible; see the usage sketch below.
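
A hypothetical usage sketch of the workflow described in (3) -- make_stimulus, rotations, and scales are illustrative stand-ins, not lucid API:

# A lazily generated 2D family of images; they are never all in memory at once.
imgs = ((make_stimulus(rot, scale) for scale in scales) for rot in rotations)
acts = model.get_activations("mixed4a", imgs,
                             ind_shape=(len(rotations), len(scales)))
# acts has shape [len(rotations), len(scales), layer_channels]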

+312 -6

1 comment

4 changed files

colah

pr closed time in a month

Pull request review comment tensorflow/lucid

Add memory efficient model.get_activations()

(diff context; end of the reviewed hunk shown:)

+      # Get activations (middle of image)
+      acts = t_layer.eval({t_img: imgs})
+      if center_only:
+        acts = acts[:, acts.shape[1]//2, acts.shape[2]//2]

Ideally, I'd like to use lucid.misc.graph_analysis.infer_format() here so that it's automatic, but don't fully trust it yet. Will defer for now.

colah

comment created time in a month

Pull request review comment tensorflow/lucid

Add memory efficient model.get_activations()

(diff context from lucid.misc.iter_nd_utils; end of the reviewed hunk shown:)

+  arr = None
+  for ind, val in d.items():
+    if arr is None:

Done!

colah

comment created time in a month

Pull request review comment tensorflow/lucid

Add memory efficient model.get_activations()

(diff context from lucid.misc.iter_nd_utils; end of the reviewed hunk shown:)

+  inds = list(d.keys())

Done!

colah

comment created time in a month

Pull request review comment tensorflow/lucid

Add memory efficient model.get_activations()

(diff context from the Model.get_activations docstring; end of the reviewed hunk shown:)

+      dtype: determines dtype of returned data (defaults to model activation
+        dtype). Can be used to make function memory efficient.

Done!

colah

comment created time in a month

Pull request review comment tensorflow/lucid

Add memory efficient model.get_activations()

(diff context from Model.get_activations; end of the reviewed hunk shown:)

+    return get_activations(self, layer, examples, batch_size=batch_size,

Done!

colah

comment created time in a month

Pull request review comment tensorflow/lucid

Add memory efficient model.get_activations()

(diff context from Model.import_graph; end of the reviewed hunk shown:)

+    def T(layer):

The input case can't be fully supported here without some more changes and is kind of idiosyncratic. With that said, I did refactor T on the render side so that it just adds on functionality to the version from here.

colah

comment created time in a month

push event tensorflow/lucid

Chris Olah

commit sha b33ada38e0428853d86440b333ae6d977c3d42c1

Improvements based on Michael's comments

view details

push time in a month

MemberEvent

pull request comment tensorflow/lucid

Add memory efficient model.get_activations()

I think either @ludwigschubert or @michaelpetrov would be a good reviewer for this.

CC Nick Barry (uncertain of github handle but will share) since he was running into memory issues with a more naive implementation.

colah

comment created time in a month

PR opened tensorflow/lucid

Add memory efficient model.get_activations()

The goal of this PR is to add a model.get_activations() function which can get activations for large n-dimensional families of lazily generated images. In the course of accomplishing this, I had to do a few other things:

(1) Add a number of utilities for using iterable workflows in n-dimensions. See lucid.misc.iter_nd_utils.

(2) Fix an annoying kludge with model.import_graph(). Lucid's convention for accessing the internals of an imported model is to use the T() accessor (inspired by $ in jQuery). But for weird historical reasons, only render.import_model returned T, not model.import_graph. To avoid needing to depend on render, I added this support to import_graph (where it really always should have been); see the sketch below.

(3) Finally, add lazy-iteration-based get_activations and get_activations_iter (more flexible, but not exposed by default). get_activations() is added as a method on model to make it easily accessible.
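
A rough sketch of the T() accessor convention from (2), assuming a loaded lucid model object (illustrative, not the exact implementation):

import tensorflow as tf

with tf.Graph().as_default():
    t_img = tf.placeholder("float32", [None, 224, 224, 3])
    T = model.import_graph(t_img)   # after this PR, import_graph also returns T
    t_mixed4a = T("mixed4a")        # look up a layer's tensor, like $ in jQuery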

+307 -0

0 comment

3 changed files

pr created time in a month

create branch tensorflow/lucid

branch : get_activations

created branch time in a month

push event tensorflow/lucid

bmiselis

commit sha 1b421539a35cb018f1b7de9d49f11a9ebaa73643

Replaced sklearn.decomposition.base.BaseEstimator with sklearn.base.BaseEstimator.

view details

bmiselis

commit sha 66d107cc8a6fb1521876e13db787bead398958ff

Fix regarding @colah comments.

view details

Christopher Olah

commit sha 96e2fce7c01436c044a1a19041287902635ea07e

Merge pull request #221 from bmiselis/test-reducer-fix Fix failing ChannelReducer test

view details

push time in a month

PR merged tensorflow/lucid

Fix failing ChannelReducer test

It seems that sklearn.decomposition.base.BaseEstimator does not exist anymore. I've replaced it with sklearn.base.BaseEstimator and all the tests seem to pass now.

+7 -1

7 comments

1 changed file

bmiselis

pr closed time in a month

pull request comment tensorflow/lucid

Fix failing ChannelReducer test

This is great, @bmiselis! Thank you so much for the PR.

bmiselis

comment created time in a month

pull request comment tensorflow/lucid

Fix failing ChannelReducer test

Hi @bmiselis -- I hope you had a great time over the winter break!

I was vaguely concerned there might be further version issues with colab (eg. other pre-installed packages might break or conflict if we forced an upgrade), but I don't know much about python packaging. If you know more and think this is fine, I'm happy to go with that. :)

bmiselis

comment created time in a month

issue comment tensorflow/lucid

ChannelReducer breaks with new version of sklearn

Yep! Thanks to @mcleavey for letting me note this down while we were pairing. :)

Seems to be resolved by #221.

mcleavey

comment created time in a month

pull request comment distillpub/post--ctc

Fix padding issue in equation

Thanks for fixing this, Awni! Sorry for the slow follow-up.

The website will update next time it is deployed.

awni

comment created time in 2 months

push event distillpub/post--ctc

Awni Hannun

commit sha 31f344cd147a1eb616631ef286354491fd8fd50f

Fix padding issue in equation

view details

Christopher Olah

commit sha 0b75180236978f4f76a78a5aa85d5cb502abf9f0

Merge pull request #40 from awni/master Fix padding issue in equation

view details

push time in 2 months

PR merged distillpub/post--ctc

Fix padding issue in equation

Before:

(screenshot: the equation with the padding issue)

After:

(screenshot: the corrected equation)

+1 -1

2 comments

1 changed file

awni

pr closed time in 2 months

push event colah/colah.github.io

Chris Olah

commit sha 738dbf191da88f79bfce0bfa83fc54614236e8a3

reorganize and prune

view details

Chris Olah

commit sha bf5ab78aed88c1c84e1462f36a29045ee62fd05a

Merge branch 'master' of https://github.com/colah/colah.github.io

view details

push time in 2 months

pull request comment tensorflow/lucid

Fix failing ChannelReducer test

For backwards compatibility, would it make sense to check for the existence of one BaseEstimator and fall back to the other (sketched below)? Many colab instances still come with the old version of sklearn pre-installed.

CC @michaelpetrov for updating requirements. My main concern is how this will interact with other packages and with platforms like colab that pre-install particular versions of packages.
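
A minimal sketch of that fallback, assuming only the import location changed between sklearn versions:

try:
    from sklearn.decomposition.base import BaseEstimator  # older sklearn
except ImportError:
    from sklearn.base import BaseEstimator  # newer sklearn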

bmiselis

comment created time in 2 months

push event tensorflow/lucid

Christopher Olah

commit sha 246ef621be879f3a2b59b9cf596bbe54d2af7003

Update README.md

view details

push time in 2 months

issue comment tensorflow/lucid

How to import a model saved as a TFHub module?

I've never used tfhub, but you probably just do something like module(images)?

Unfortunately, this is more of a tfhub question than a lucid question.
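
For reference, the TF1-era tfhub pattern that suggestion gestures at looks roughly like this (the module URL is elided, and this is tfhub API, not lucid):

import tensorflow_hub as hub

module = hub.Module("https://tfhub.dev/...")  # hypothetical module URL
features = module(images)                     # images: a [batch, h, w, 3] tensor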

mencia

comment created time in 2 months

push event tensorflow/lucid

Thomas Tumiel

commit sha 3134acc55e6f63c7aa6ccc2a6c5e1322e345db85

fix a few typos

view details

Thomas Tumiel

commit sha 78d08eb8e192fb64546785eb7d0ea9e8aaf19c13

fix a few typos

view details

Thomas Tumiel

commit sha 9d77cdec18ce5b072f29f69b156d8acfbedb8ca3

Merge branch 'typos' of github.com:ttumiel/lucid into typos

view details

Christopher Olah

commit sha 79382c2d5c47e07e47e2894f49c561249720de33

Merge pull request #210 from ttumiel/typos Fix a few typos

view details

push time in 3 months

PR merged tensorflow/lucid

Fix a few typos

Just fixed a few typos in the docstrings

+21 -22

2 comments

4 changed files

ttumiel

pr closed time in 3 months

pull request comment tensorflow/lucid

Fix a few typos

Nice, thanks!

ttumiel

comment created time in 3 months

pull request comment tensorflow/lucid

graph_analysis: nicer json parsed graph structure

This change probably breaks some things.

colah

comment created time in 3 months

PR opened tensorflow/lucid

graph_analysis: nicer json parsed graph structure

A nicer parsed layout structure for @shancarter:

"layout": {
    "type": "Sequence",
    "children": [
      {
        "type": "Node",
        "name": "input:0"
      },
      {
        "type": "Node",
        "name": "conv2d0:0"
      },
      {
        "type": "Node",
        "name": "maxpool0:0"
      },
      {
        "type": "Node",
        "name": "conv2d1:0"
      },
      {
        "type": "Node",
        "name": "conv2d2:0"
      },
      {
        "type": "Node",
        "name": "maxpool1:0"
      },
      {
        "type": "Branch",
        "children": [
          {
            "type": "Node",
            "name": "mixed3a_1x1:0"
          },
          {
            "type": "Sequence",
            "children": [
              {
                "type": "Node",
                "name": "mixed3a_3x3_bottleneck:0"
              },
              {
                "type": "Node",
                "name": "mixed3a_3x3:0"
              }
            ]
          },
...
+35 -18

0 comment

1 changed file

pr created time in 3 months

create branch tensorflow/lucid

branch : colah-nicer-graph-json

created branch time in 3 months

issue comment tensorflow/lucid

model.save() - Froze 0 variables. Converted 0 variables to const ops.

You might want to look at Stefan Sietzen and Manuela Waldner's interactive work on Feature Visualization in the Browser.

Unfortunately, exporting a visualization graph like this isn't a supported use of lucid, and you're likely to run into a number of issues:

  • Model.save() is intended to export models (ie. the object you call model) and not visualization graphs. You're likely to run into places where the code makes assumptions that don't hold for the visualization graph.
  • The saved model is in a somewhat unusual format, with all weights burnt into the graph. We've never tested this for compatibility with tfjs.
  • If you save the visualization graph, it will contain many ops that aren't typically in models. As a result, tfjs may not be able to run the graph.

I would recommend using tfjs to run your model, and implementing style transfer using Stefan's code or from scratch in javascript.

camoconnell

comment created time in 3 months

issue comment tensorflow/lucid

model.save() - Froze 0 variables. Converted 0 variables to const ops.

Hm, I'm not sure that I'm following.

  • It looks like you're trying to save the feature visualization graph rather than a model graph?
  • It looks like you're calling model.save on a model instance, rather than the abstract graph (ie. Model.save)? (That isn't an intended use of save and we should make it impossible.)
  • I'm not sure what the goal of saving the model is, since it seems like you already have a working model instance?

Normally, you'd have one script construct and export your model, then import it in another script for visualization.

camoconnell

comment created time in 3 months

delete branch tensorflow/lucid

delete branch : parameter-editor-dtype-fix

delete time in 4 months

push event tensorflow/lucid

Jacob Hilton

commit sha f02ba4c826ad04e0cb2f863f3ed436a25e3c2566

parameter editor fixes

view details

Jacob Hilton

commit sha 170828ef06e9f8efc2beab9acf42ec4586a902ff

undo accidental docstring modifications

view details

Christopher Olah

commit sha aba63bdd3d2482ac51f925bcf664d6839bceb49c

Merge pull request #211 from tensorflow/parameter-editor-dtype-fix Parameter editor fixes

view details

push time in 4 months

PR merged tensorflow/lucid

Parameter editor fixes

Makes a few fixes to scratch/parameter_editor.py (illustrated below):

  • Model flag typo
  • Convert tensor shape to tuple so comparison works
  • Convert tensor value to original dtype
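
As a rough illustration of the second and third fixes (hypothetical names, not the actual diff):

import numpy as np

def apply_edit(t_var, new_value):
    # A tf.TensorShape never compares equal to a plain tuple, so convert first.
    assert tuple(t_var.shape.as_list()) == new_value.shape
    # Cast the edited value back to the tensor's original dtype before writing it.
    return np.asarray(new_value, dtype=t_var.dtype.as_numpy_dtype)
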
+4 -3

0 comment

1 changed file

jacobhilton

pr closed time in 4 months

push event tensorflow/lucid

dependabot[bot]

commit sha 9966580c8673a4dfdbe72eefe8c3eba9b5d4e309

Bump lodash from 4.17.10 to 4.17.15 in /lucid/scratch/js Bumps [lodash](https://github.com/lodash/lodash) from 4.17.10 to 4.17.15. - [Release notes](https://github.com/lodash/lodash/releases) - [Commits](https://github.com/lodash/lodash/compare/4.17.10...4.17.15) Signed-off-by: dependabot[bot] <support@github.com>

view details

Christopher Olah

commit sha 44e904c2e1df00108f8c38c3b099b98eec91d182

Merge pull request #207 from tensorflow/dependabot/npm_and_yarn/lucid/scratch/js/lodash-4.17.15 Bump lodash from 4.17.10 to 4.17.15 in /lucid/scratch/js

view details

push time in 4 months

PR merged tensorflow/lucid

Bump lodash from 4.17.10 to 4.17.15 in /lucid/scratch/js

Bumps lodash from 4.17.10 to 4.17.15.

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot ignore this [patch|minor|major] version will close this PR and stop Dependabot creating any more for this minor/major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
  • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
  • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
  • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the Security Alerts page.


+3 -3

1 comment

1 changed file

dependabot[bot]

pr closed time in 4 months

MemberEvent

issue comment tensorflow/lucid

MobilenetV3 for Model Zoo

Hello! It sounds like your parameterization may be too small for the neuron you are trying to visualize.

Try making your parameterization larger (eg. param_f = lambda: param.image(larger_number)).
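
A minimal sketch; the size and objective here are illustrative placeholders:

from lucid.optvis import param, render

param_f = lambda: param.image(256)  # eg. 256px instead of the default 128px
_ = render.render_vis(model, "mixed4a_pre_relu:476", param_f)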

tul-urte

comment created time in 4 months

issue comment distillpub/post--feature-visualization

Receptive Fields

Hi Hans! I totally agree that explicitly visualizing the receptive field is a valuable thing one could add. You can see it implicitly in a neuron feature visualization, but you could explore it more explicitly.

That said, I think it's outside the scope of this particular article.

hmeine

comment created time in 4 months

push event distillpub/post--differentiable-parameterizations

Christopher Berner

commit sha 0138a10b3ae9a54043c7ad7bbb3683f49a555f3b

Fix typo in Differentiable Image Parameterizations

view details

Christopher Olah

commit sha 127d36b401ba8d41134d9c0802ed2b0b51cc429f

Merge pull request #94 from cberner/patch-1 Fix typo in Differentiable Image Parameterizations

view details

push time in 4 months

pull request comment distillpub/post--differentiable-parameterizations

Fix typo in Differentiable Image Parameterizations

Aww, thank you Chris!

cberner

comment created time in 4 months

issue comment tensorflow/lucid

Cannot import name 'gfile' from 'tensorflow'

Lucid does not support TF 2.0 at present.

themolco

comment created time in 4 months

Pull request review comment tensorflow/lucid

Add support for saving/loading compressed data + pickle support

(diff context from the load() changes; end of the reviewed hunk shown:)

+      allow_unsafe_formats: set to True to allow saving unsafe formats (eg. pickles)

If I'm understanding correctly, the unsafe step of using unsafe formats is when you save them (eg. pickling doesn't fully capture a complicated python object). Once you're loading, the damage has already been done? If so, we might only need to have allow_unsafe_formats for save() and not need it here.

michaelpetrov

comment created time in 4 months

Pull request review comment tensorflow/lucid

Add support for saving/loading compressed data + pickle support

(diff context from _get_extension(); end of the reviewed hunk shown:)

+    if ext in decompressors:
+        decompressor_ext = ext
+        _, ext = os.path.splitext(path_without_ext)
+    else:
+        decompressor_ext = None

(no action needed) It seems like unsupported decompressors (eg. ".gz") will fall through as normal extensions. I don't see any great way to avoid this, but it may lead to unintuitive error messages.

michaelpetrov

comment created time in 4 months

Pull request review comment tensorflow/lucid

Add support for saving/loading compressed data + pickle support

(diff context from load_using_loader(); end of the reviewed hunk shown:)

             with read_handle(url, cache=cache) as handle:
-                result = loader(handle, **kwargs)
+                if decompressor:
+                    with decompressor(handle) as decompressor_handle:
+                        result = loader(decompressor_handle, **kwargs)
+                else:
+                    result = loader(handle, **kwargs)

(nitpicking, feel free to disregard) The logic here feels like it's getting a bit deep, and also repeated. I wonder if you could do something like the following to simplify things:

if decompressor is None:
    decompressor = contextlib.nullcontext

and drop all further logic about whether something is being decompressed.

michaelpetrov

comment created time in 4 months

Pull request review comment tensorflow/lucid

Various improvements

(diff context from render; end of the reviewed hunk shown:)

-def import_model(model, t_image, t_image_raw=None, scope="import"):
+def import_model(model, t_image, t_image_raw=None, scope="import", input_map=None):

(Doesn't need to be addressed in this PR, but it's pretty weird that we rely on this helper function so much, and that it lives in render. It might be worth improving model.import_graph() so that it can also return T, and then deprecating this.)

michaelpetrov

comment created time in 5 months

push event tensorflow/lucid

Christopher Olah

commit sha 8fb9374fbb9a443b49b303036264b4e4b76d0610

Update slack link

view details

push time in 5 months

push event distillpub/post--editorial-update-2018

Christopher Olah

commit sha 974551d1882105ced1b74bb26c6d2282c82cc4e3

Update slack link (in other file)

view details

push time in 5 months

push event distillpub/post--editorial-update-2018

Christopher Olah

commit sha 02f6c181485ea3d42bcd218c0729cf9c607d5752

Update slack link to slack.distill.pub

view details

push time in 5 months

issue comment tensorflow/lucid

Support for tf.SavedModel

Model.save() is still present: https://github.com/tensorflow/lucid/blob/master/lucid/modelzoo/vision_base.py#L299

Is it possible you're using a really old version of lucid?

ludwigschubert

comment created time in 5 months

push event tensorflow/lucid

Chris Olah

commit sha 5b059cdbf4a5a10b18ab58f51b82e3db3e01dbe2

fix imports for ParameterEditor

view details

push time in 5 months

push event tensorflow/lucid

Chris Olah

commit sha f85bb829014ab0df05e85a9f000854dd53cc8c00

Add flag to mark edited models in graph_def

view details

push time in 5 months

push event tensorflow/lucid

Chris Olah

commit sha 050217be03ba00071b0cf53aabc3c4cccb69cba6

Add ParameterEditor to scratch

view details

Chris Olah

commit sha 8e45ecc46a3a24588950d4818761e267beece0aa

Merge branch 'master' of https://github.com/tensorflow/lucid

view details

push time in 5 months

issue closed tensorflow/lucid

Defensive Programming of Model.save()

Model.save() is the new recommended way for people to import models into Lucid. While pairing with @Newmu the other day, we ran into a number of issues where we had made silly mistakes, but they took a bit of time to debug.

Since Model.save() is such an essential user facing function, I think it warrants heavy defensive programming, good error messages, and documentation to protect against this.

Some checks that would have been nice (sketched below):

  • Check for ":0" in names -- A ":" denotes a TensorFlow tensor, but we actually want ops.
  • Check if default session is None (more info).
  • Check if graph is empty
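
A minimal sketch of what such checks might look like (illustrative; not the code that was eventually merged):

import tensorflow as tf

def _sanity_check_save(node_names):
    for name in node_names:
        if ":" in name:
            raise ValueError(
                "'%s' looks like a Tensor name; Model.save() expects op names "
                "(drop the ':0' suffix)." % name)
    if tf.get_default_session() is None:
        raise RuntimeError("No default session; call Model.save() inside a tf.Session.")
    if not tf.get_default_graph().as_graph_def().node:
        raise RuntimeError("The default graph is empty; nothing to save.")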

closed time in 5 months

colah

push event tensorflow/lucid

Chris Olah

commit sha d812e9c0ccb9a1dfeee6d021be9f5771d14fe6d4

More defensive Model.save()

view details

Christopher Olah

commit sha dd035d4be5306271cbe7cf1ee9859c42d2c78ffd

Merge pull request #194 from tensorflow/defensive-model-save More defensive Model.save()

view details

push time in 5 months

PR merged tensorflow/lucid

More defensive Model.save()

Importing models is a common failure point for users. This PR implements suggestions from #188 to more defensively implement model saving and catch some common errors.

+10 -0

1 comment

2 changed files

colah

pr closed time in 5 months

PR opened tensorflow/lucid

More defensive Model.save()

Importing models is a common failure point for users. This PR implements suggestions from #188 to more defensively implement model saving and catch some common errors.

+10 -0

0 comment

2 changed files

pr created time in 5 months

create branch tensorflow/lucid

branch : defensive-model-save

created branch time in 5 months

create branch tensorflow/lucid

branch : colah-overlay-work

created branch time in 5 months

push event tensorflow/lucid

Christopher Olah

commit sha 0b818819ff86366e30b8ed0c6f8b234e652c1931

Update README.md

view details

push time in 5 months

push event tensorflow/lucid

Christopher Olah

commit sha f6ebe2618949a7c6a9a8e62e49df2f2f4a598ccf

Update README.md

view details

push time in 5 months

GollumEvent

issue comment tensorflow/lucid

Support for tf.keras

This is lucid's Model.save. I've written up instructions on using it here:

https://github.com/tensorflow/lucid/wiki/Importing-Models-into-Lucid

AakashKumarNain

comment created time in 5 months

GollumEvent
GollumEvent

issue comment tensorflow/lucid

illustration2vec model support

Hello! You will need to convert this model from Caffe to TensorFlow and then import it as a lucid model.

huaji0353

comment created time in 6 months
