Dan Moldovan (mdanatg) · @google · USA · Software Engineer

tensorflow/examples 2492

TensorFlow examples

google/tangent 2005

Source-to-Source Debuggable Derivatives in Pure Python

mdanatg/community 0

Stores documents used by the TensorFlow developer community

mdanatg/copybara 0

Copybara: A tool for transforming and moving code between repositories.

push event mdanatg/Astronet-Triage

Dan Moldovan

commit sha be16e54921136d09bace096d20d7e831de2774a8

Fix import.

view details

Dan Moldovan

commit sha 2309f0aa455346c87c6b605f566913dcd17d0fde

Remove unused configurations.

view details

Dan Moldovan

commit sha dfc6c2b4e70aafbdf1f4646bbfe347fc6cadfb6c

Minor updates

view details

push time in 7 hours

push event mdanatg/community

Dan Moldovan

commit sha e3224da4030f8c197a7c2c56beef8f141a0774b9

Update 20200211-tf-types.md

view details

push time in 8 hours

push event mdanatg/Astronet-Triage

Dan Moldovan

commit sha 6cddad6a6c517d12164a1b613f93403933494b61

Re-add the saving of config. Fix config loading.

view details

Dan Moldovan

commit sha bb85ba0d839c06d280076389e5df0e2f38d065f5

Make config loading consistent with saving.

view details

Dan Moldovan

commit sha 4672ab653faaf43ced3373aaa6be150b4da866f7

Make config loading consistent with saving.

view details

Dan Moldovan

commit sha 8c2400c02b201182047d66c0a6fe469ddb27df60

Add prediction setup.

view details

push time in a day

push event mdanatg/Astronet-Triage

Dan Moldovan

commit sha 5fb9a9fd5a7a249b2cee14e8d88ce887a8ae821c

Fix config loading.

view details

push time in a day

push event mdanatg/Astronet-Triage

Dan Moldovan

commit sha 6874d9edd3dff5193e4e5d38633a3dce7dc2acd3

Remove obsolete code.

view details

Dan Moldovan

commit sha 623a41277f7a85f35b5774ba7cacd27cf4d51df0

Remove obsolete code.

view details

Dan Moldovan

commit sha 75ceb3b2b162cb7de5bd9fe4c686a6175c4938f9

Fix broken import.

view details

Dan Moldovan

commit sha 835b3b45e7726ca39ecba34ba8aac70a855a1daf

Fix broken import.

view details

Dan Moldovan

commit sha b79959813a24a1a5c7af7f425b55fb8615383516

Add model saving.

view details

Dan Moldovan

commit sha 5e5aabcdddd0b73dd3b81e47860fa2cab16e0233

Minor fix.

view details

push time in a day

issue closed tensorflow/tensorflow

Tensorflow 2.0 tf.function internal error

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Colab, Ubuntu, Windows
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): 2.0-beta, 2.0-rc, 2.0-nightly (v1.12.1-9694-g006e2933 2.0.0-dev20190825)
  • Python version: 3.6
  • Bazel version (if compiling from source):
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version: 10, also without CUDA
  • GPU model and memory:

Describe the current behavior
Decorating a training loop that consumes a tf.data.Dataset with @tf.function causes an internal error in TensorFlow (an object is returned to Python with an error set, and I'm unable to determine the actual origin of the error). Without @tf.function, the code works fine.

Describe the expected behavior
The model gradients should be calculated correctly whether or not the code runs inside a @tf.function trace.

Code to reproduce the issue
Code is provided in the colab notebook available here.

Other info / logs
While working with a rather complex module that operates on irregular data (graphs), using @tf.function worsens performance in TF 2.0 due to bug #29075. While following the workaround described in that issue, I stumbled upon this one.

Because of these issues, TF 2.0 is not a good fit for DL on irregularly shaped data.

Relevant traceback:

    <ipython-input-8-e24395a4965a>:137 train_step  *
        gradients = tape.gradient(loss, model.trainable_variables)
    /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/backprop.py:1015 gradient
        unconnected_gradients=unconnected_gradients)
    /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/imperative_grad.py:76 imperative_grad
        compat.as_str(unconnected_gradients.value))
    /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/backprop.py:599 _aggregate_grads
        if len(gradients) == 1:

    SystemError: <built-in function len> returned a result with an error set

closed time in a day

mmv

issue comment tensorflow/tensorflow

Tensorflow 2.0 tf.function internal error

I checked the colab posted by @mmv against tf-nightly and it looks like #33497 resolved the issue.

mmv

comment created time in a day

Pull request review comment tensorflow/tensorflow

Add sorted builtin support for autograph

 def _py_all(iterable):
   return all(iterable)

+def sorted_(iterable, key=UNSPECIFIED, reverse=UNSPECIFIED):
+  if tensor_util.is_tensor(iterable):
+    return _tf_sorted(iterable, key, reverse)
+  return _py_sorted(iterable, key, reverse)
+
+
+def _tf_sorted(iterable, key, reverse):
+  """Overload of sorted_ for Tensor iterable."""
+  if reverse is UNSPECIFIED:
+    direction = 'ASCENDING'
+  else:
+    direction = 'DESCENDING'
+  if key is not UNSPECIFIED:
+    mapped = parallel_ops.vectorized_map(key, iterable)
+    with ops.control_dependencies(

Optional: If the rank is static, you could add an extra check for an early warning:

if mapped.shape.rank is not None and mapped.shape.rank != 1:
  raise ValueError('sort only supports 1D tensors')
lyonguyen8697

comment created time in a day

issue comment tensorflow/tensorflow

Using tf.function while enumerating a dataset causes an infinite loop

@tgsmith61591 This issue should be resolved - enumerate now works correctly with datasets in tf.function as of TF >= 1.15. Are you still experiencing issues?
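For reference, a minimal sketch of the pattern in question (assuming a recent TF 2.x build, where autograph maps Python's enumerate onto Dataset.enumerate):

import tensorflow as tf

ds = tf.data.Dataset.range(3)

@tf.function
def show(ds):
  for i, x in enumerate(ds):  # previously this could loop forever during tracing
    tf.print(i, x)

show(ds)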

BeWe11

comment created time in a day

issue comment tensorflow/tensorflow

[TF 2.0] 'Unknown graph' error when using tf.function decorator with pre-trained models

It looks like Model.predict is not compatible with tf.function. Note that the error is different in tf-nightly, and I confirmed that it's not caused by autograph by modifying the gist above (a workaround sketch follows after the traceback):

@tf.function(autograph=False)
def extract_feat(feat_extractor, _input):
    feat = feat_extractor.predict(_input, steps=1)
    return feat
---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
<ipython-input-1-2656cf5dcf1a> in <module>()
     23 
     24 if __name__ == '__main__':
---> 25     main()

16 frames
<ipython-input-1-2656cf5dcf1a> in main()
     18     _input = np.random.rand(1, 224, 224, 3) - 0.5 / 0.5
     19 
---> 20     feat = extract_feat(feature_extractor, _input)
     21     print(feat.shape)
     22 

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/def_function.py in __call__(self, *args, **kwds)
    574         xla_context.Exit()
    575     else:
--> 576       result = self._call(*args, **kwds)
    577 
    578     if tracing_count == self._get_tracing_count():

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/def_function.py in _call(self, *args, **kwds)
    621       # This is the first call of __call__, so we have to initialize.
    622       initializers = []
--> 623       self._initialize(args, kwds, add_initializers_to=initializers)
    624     finally:
    625       # At this point we know that the initialization is complete (or less

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
    503     self._concrete_stateful_fn = (
    504         self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
--> 505             *args, **kwds))
    506 
    507     def invalid_creator_scope(*unused_args, **unused_kwds):

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
   2438       args, kwargs = None, None
   2439     with self._lock:
-> 2440       graph_function, _, _ = self._maybe_define_function(args, kwargs)
   2441     return graph_function
   2442 

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py in _maybe_define_function(self, args, kwargs)
   2769 
   2770       self._function_cache.missed.add(call_context_key)
-> 2771       graph_function = self._create_graph_function(args, kwargs)
   2772       self._function_cache.primary[cache_key] = graph_function
   2773       return graph_function, args, kwargs

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   2659             arg_names=arg_names,
   2660             override_flat_arg_shapes=override_flat_arg_shapes,
-> 2661             capture_by_value=self._capture_by_value),
   2662         self._function_attributes,
   2663         # Tell the ConcreteFunction to clean up its graph once it goes out of

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
    979         _, original_func = tf_decorator.unwrap(python_func)
    980 
--> 981       func_outputs = python_func(*func_args, **func_kwargs)
    982 
    983       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/def_function.py in wrapped_fn(*args, **kwds)
    438         # __wrapped__ allows AutoGraph to swap in a converted function. We give
    439         # the function a weak reference to itself to avoid a reference cycle.
--> 440         return weak_wrapped_fn().__wrapped__(*args, **kwds)
    441     weak_wrapped_fn = weakref.ref(wrapped_fn)
    442 

<ipython-input-1-2656cf5dcf1a> in extract_feat(feat_extractor, _input)
      7 @tf.function(autograph=False)
      8 def extract_feat(feat_extractor, _input):
----> 9     feat = feat_extractor.predict(_input, steps=1)
     10     return feat
     11 

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
    917         max_queue_size=max_queue_size,
    918         workers=workers,
--> 919         use_multiprocessing=use_multiprocessing)
    920 
    921   def reset_metrics(self):

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py in predict(self, model, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing, **kwargs)
    494         model, ModeKeys.PREDICT, x=x, batch_size=batch_size, verbose=verbose,
    495         steps=steps, callbacks=callbacks, max_queue_size=max_queue_size,
--> 496         workers=workers, use_multiprocessing=use_multiprocessing, **kwargs)
    497 
    498 

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py in _model_iteration(self, model, mode, x, y, batch_size, verbose, sample_weight, steps, callbacks, max_queue_size, workers, use_multiprocessing, **kwargs)
    471               mode=mode,
    472               training_context=training_context,
--> 473               total_epochs=1)
    474           cbks.make_logs(model, epoch_logs, result, mode)
    475 

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py in run_one_epoch(model, iterator, execution_function, dataset_size, batch_size, strategy, steps_per_epoch, num_samples, mode, training_context, total_epochs)
    166       else:
    167         batch_outs = training_v2_utils._aggregate_predict_results(
--> 168             strategy, batch_outs, model)
    169 
    170       if step == 0:

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2_utils.py in _aggregate_predict_results(strategy, batch_outs, model)
    262     nested_outs = batch_outs[i * num_replicas:i * num_replicas + num_replicas]
    263     per_output_result = dist_utils.concat_along_batch_dimension(
--> 264         nest.flatten(nested_outs))
    265 
    266     if need_batch_index_gather:

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/distribute/distributed_training_utils.py in concat_along_batch_dimension(outputs)
   1200   if isinstance(outputs[0], ragged_tensor.RaggedTensor):
   1201     return ragged_concat_ops.concat(outputs, axis=0)
-> 1202   return np.concatenate(outputs)

<__array_function__ internals> in concatenate(*args, **kwargs)

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py in __array__(self)
    748   def __array__(self):
    749     raise NotImplementedError("Cannot convert a symbolic Tensor ({}) to a numpy"
--> 750                               " array.".format(self.name))
    751 
    752   def __len__(self):

NotImplementedError: Cannot convert a symbolic Tensor (StatefulPartitionedCall:0) to a numpy array.
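A common workaround is to call the model directly instead of using predict inside the tf.function. This is a sketch only, assuming feat_extractor is a callable Keras model:

@tf.function
def extract_feat(feat_extractor, _input):
    # Calling the model directly avoids predict's numpy-based result aggregation.
    return feat_extractor(_input, training=False)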
biendltb

comment created time in 4 days

issue comment tensorflow/tensorflow

Tensorflow 2.0 tf.function internal error

This looks like a bug related to GradientTape and Dataset. A few workarounds:

  1. Move the @tf.function to train_step (so that the dataset iteration is outside the tf.function); a sketch follows below. That also seems to speed things up, which might be another bug.

  2. Use tf.gradients instead of GradientTape. The downside is that tf.gradients doesn't work in eager mode:

def train_step(xs, ys):
    ys_hat = model(xs, training=True)
    loss = loss_fn(ys, ys_hat)
    gradients = tf.gradients(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    train_loss(loss)
    train_metric(ys.edges, ys_hat.edges)
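For completeness, a sketch of the first workaround (model, loss_fn, optimizer, train_loss, train_metric and dataset are assumed from the original code):

@tf.function
def train_step(xs, ys):
    with tf.GradientTape() as tape:
        ys_hat = model(xs, training=True)
        loss = loss_fn(ys, ys_hat)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    train_loss(loss)
    train_metric(ys.edges, ys_hat.edges)

for xs, ys in dataset:  # the dataset iteration stays outside the tf.function
    train_step(xs, ys)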
mmv

comment created time in 5 days

issue comment tensorflow/tensorflow

tf.function and tf.nest break for valid Mapping instances

For the instruction set warning, try specifying -c opt as well. I came across this page where the author seemed to successfully use these options: https://gist.github.com/Brainiarc7/6d6c3f23ea057775b72c52817759b25c

ethereon

comment created time in 6 days

issue comment tensorflow/tensorflow

tf.function and tf.nest break for valid Mapping instances

I think the error you see is expected - it's the bug described in the OP.

ethereon

comment created time in 6 days

issue comment tensorflow/tensorflow

You tried to call `count_params` on embedding, but the layer isn't built. You can build it manually via: `embedding.build(batch_input_shape)`.

BTW, I can't see the original count_params error anywhere anymore.

Goofy-G

comment created time in 6 days

issue comment tensorflow/tensorflow

You tried to call `count_params` on embedding, but the layer isn't built. You can build it manually via: `embedding.build(batch_input_shape)`.

@omalleyt12

The error in the gist indicates that inputs is not a 2-element tuple, as you'd expect, but a single Tensor. When you try to unpack it, Python attempts to iterate over that Tensor, which produces the error message. I'm not sure where that single Tensor is coming from, but you can see it by adding a print statement:

def call(self, inputs, training=None, mask=None):
  print(inputs)
  ...
  # Output: Tensor("inputs:0", shape=(None, None), dtype=int64)

@Goofy-G Keras expects your dataset to be batched, so you should call dataset = dataset.batch(1) even if you work with single examples.

But even after I added the batching, it seems that the second tensor is still not being passed to the model, which I think is a bug.
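For illustration, the batching fix looks like this (a sketch; the tensors and shapes are made up):

import tensorflow as tf

features = tf.zeros([8, 5], tf.int64)
labels = tf.ones([8], tf.int64)
dataset = tf.data.Dataset.from_tensor_slices((features, labels))
dataset = dataset.batch(1)  # batch even when feeding one example at a time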

Goofy-G

comment created time in 6 days

push event mdanatg/community

Dan Moldovan

commit sha e9720c1bda4cd3eaf25bc4566652f0c1579f400f

Update 20200211-tf-types.md

view details

push time in 6 days

issue comment tensorflow/tensorflow

tf.function and tf.nest break for valid Mapping instances

@namedtoaster You can safely ignore that warning, but otherwise you want a binary that is built with those features enabled. Try to see if you can rebuild it with these bazel options: --copt=-mavx2 --copt=-msse4 --copt=-mfma

ethereon

comment created time in 6 days

issue comment tensorflow/tensorflow

polyval gives TypeError when run inside tf.function with Tensor coeffs, but not when run eagerly

@Joey155 it might be interesting to see if polyval can be made to work with tensor inputs, by taking its source code and putting it in a tf.function. That said, there are a couple of bugs, and GradientTape doesn't give higher-order gradients for tf.while_loop, which would need to be fixed first.

mjwatkins2

comment created time in 6 days

pull request comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

Looks like there is a lint error - see the Details link next to the Ubuntu Sanity check above, search for do_pylint in the logs.

punndcoder28

comment created time in 7 days

push event mdanatg/community

Dan Moldovan

commit sha de72ac5a759b3ebd4d268fbfbb148dbafffc4d87

Update 20200211-tf-types.md

view details

push time in 7 days

push event mdanatg/community

Dan Moldovan

commit sha 4666892b1a80087d48c89c5d1b2d6d31c96797f6

Update 20200211-tf-types.md

view details

push time in 7 days

push event mdanatg/community

Dan Moldovan

commit sha e4fd7fad83760cca5f4c0f8df63d92548b74a67f

Update 20200211-tf-types.md

view details

push time in 7 days

push event mdanatg/community

Dan Moldovan

commit sha 0842a2fbf134e37e9bfcca9f144bc609b89b67ee

Update 20200211-tf-types.md

view details

push time in 7 days

push event mdanatg/community

Dan Moldovan

commit sha 71a5164810d878ae05892245e40d35117f4f6045

Update 20200211-tf-types.md

view details

push time in 7 days

push event mdanatg/community

Dan Moldovan

commit sha 001efcc9cf33f8bd22951e6255f92c1cbf680bc5

Update 20200211-tf-types.md

view details

push time in 7 days

push event mdanatg/community

Dan Moldovan

commit sha 455362657adb78f419135c325fa2c760a1f4dd53

Capitalization

view details

push time in 7 days

push event mdanatg/community

Dan Moldovan

commit sha 4e58ec3f2f0fcc62dedb4866b153690a2284ce25

Initial commit.

view details

push time in 7 days

push event mdanatg/community

Dan Moldovan

commit sha 039544096bcbdc7c07496d88d829454e09cc3e60

Clone draft template for new tf-types RFC.

view details

push time in 7 days

fork mdanatg/community

Stores documents used by the TensorFlow developer community

fork in 7 days


Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 def _sequence_like(instance, args):
         d[key] = result[key]
       return d
     else:
-      return instance_type((key, result[key]) for key in instance)
+      d = instance_type()
+      for key in instance:
+        d[key] = instance[key]
+      return d
+  elif _is_mapping(instance):
+    result = dict(zip(_sorted(instance), args))
+    instance_type = type(instance)
+    tf_logging.log_first_n(
+      tf_logging.WARN, "Mapping types may not work well with tf.nest. Prefer using"
+      "MutableMapping for {}".format(instance_type), 1
+    )
+    d = instance_type()

Here, we'd have to use the old constructor (because the mapping is immutable, can't be modified once constructed):

return instance_type((key, result[key]) for key in instance)

punndcoder28

comment created time in 7 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 def _sequence_like(instance, args):
         d[key] = result[key]
       return d
     else:
-      return instance_type((key, result[key]) for key in instance)
+      d = instance_type()
+      for key in instance:

This piece is identical to the one in the if branch above it. You can factor it out:

if instance_type == _collections.defaultdict:
  d = _collections.defaultdict(instance.default_factory)
else:
  d = instance_type()

for key in instance:
  ...
punndcoder28

comment created time in 7 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 def _sequence_like(instance, args):
   Returns:
     `args` with the type of `instance`.
   """
-  if _is_mapping(instance):
+  if _is_mutable_mapping(instance):

Yes. Preserve the original code, minus the special cases for defaultdict, and add the warning.

punndcoder28

comment created time in 7 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 def _sequence_like(instance, args):
     result = dict(zip(_sorted(instance), args))
     instance_type = type(instance)
     if instance_type == _collections.defaultdict:
-      d = _collections.defaultdict(instance.default_factory)
+      d = instance_type(_collections.defaultdict(instance.default_factory))
       for key in instance:
         d[key] = result[key]
       return d

I'm seeing just return instance_type((key, result[key]) for key in instance)

punndcoder28

comment created time in 7 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 def _sequence_like(instance, args):
   Returns:
     `args` with the type of `instance`.
   """
-  if _is_mapping(instance):
+  if _is_mutable_mapping(instance):

I'm a bit worried that we might break someone who relied on this behavior before. Let's keep the old branch for _is_mapping (removing the pieces specific to defaultdict), but add a warning to it:

tf_logging.log_first_n(
    tf_logging.WARN, "Mapping types may not work well with tf.nest. Prefer using MutableMapping for {}".format(instance_type), 1)
punndcoder28

comment created time in 8 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 def _sequence_like(instance, args):
     result = dict(zip(_sorted(instance), args))
     instance_type = type(instance)
     if instance_type == _collections.defaultdict:
-      d = _collections.defaultdict(instance.default_factory)
+      d = instance_type(_collections.defaultdict(instance.default_factory))
       for key in instance:
         d[key] = result[key]
       return d

Don't forget to update line 153: we want to rebuild the mapping with a for loop, not directly using the constructor.

punndcoder28

comment created time in 8 days

push event mdanatg/Astronet-Triage

Dan M

commit sha 42b04b4d6c521a745217d92521dc328e4d68bc49

Add data visualization utilities.

view details

push time in 8 days

push event mdanatg/Astronet-Triage

Dan M

commit sha 19303af14551b4e0ac1d2939ecc6f8d82369b603

Fix grievous bug in tuning. Invalidate all tuning results.

view details

Dan M

commit sha 153dc71083dea3093c716f8e7702ce2c51a7418b

Include the config name in the study ID.

view details

Dan M

commit sha 4200d1dba7530067c9352ead5ad8c6e27c134df0

Fix bug in arguments.

view details

Dan M

commit sha 8805de2e93ef68038325fd074b963b124e1ea086

Catch error and report infeasible trials.

view details

Dan M

commit sha 964e907c2a7115922c380a693a70852a41fa813e

Fix reporting bug, update notebook.

view details

Dan M

commit sha 745cacb3daa07f7676830ec9b08da9e86706ae0f

Update study name.

view details

push time in 9 days

pull request comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

There are probably a few other files that need changing. A search for IsMapping in all files shows two other instances, python/util/util_wrapper.cc and tools/def_file_filter/symbols_pybind.txt. You might also find the docs for pybind11 and bazel useful for understanding how things tie together.

punndcoder28

comment created time in 9 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 def _sequence_like(instance, args):
   Returns:
     `args` with the type of `instance`.
   """
-  if _is_mapping(instance):
+  if _is_mutable_mapping(instance):
     # Pack dictionaries in a deterministic order by sorting the keys.
     # Notice this means that we ignore the original order of `OrderedDict`
     # instances. This is intentional, to avoid potential bugs caused by mixing
     # ordered and plain dicts (e.g., flattening a dict but using a
     # corresponding `OrderedDict` to pack it back).
+    result = dict(zip(_sorted(instance), args))
+    instance_type = type(instance)
+    if instance_type == _collections.defaultdict:

Yes.

punndcoder28

comment created time in 9 days

push event mdanatg/Astronet-Triage

Dan M

commit sha 69b0bfe21e20f238c35b9f76dd191204dca55341

Fix bugs.

view details

push time in 10 days

push event mdanatg/Astronet-Triage

Dan M

commit sha c53b76260924abb51d60ffe1d0b1514f8f6f9873

Add a configuration with extra features.

view details

push time in 10 days

push event mdanatg/Astronet-Triage

Dan M

commit sha 3dc51f5ec94e05eff19b29945d4013c7cc2e862c

Add a few scratchpads.

view details

Dan M

commit sha 0a8045614152ab49c35fb99f98f67b04051762b2

Record params for best tuning runs, add a multiclass version.

view details

push time in 10 days

pull request comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

Do the namespace methods in tensorflow/python/util/util.h call the helper methods in tensorflow/python/util/util.cc?

Not exactly. The .h file contains declarations for the functions inside the .cc file. That is, it has the function signature with no body, and the .cc file repeats the function signature, adding an actual body. Here you might find more information about why that's being done: https://www.learncpp.com/cpp-tutorial/header-files/

I noticed the header file is missing from the PR - did you commit it?

punndcoder28

comment created time in 10 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 def _sequence_like(instance, args):
   Returns:
     `args` with the type of `instance`.
   """
-  if _is_mapping(instance):
+  if _is_mutable_mapping(instance):
     # Pack dictionaries in a deterministic order by sorting the keys.
     # Notice this means that we ignore the original order of `OrderedDict`
     # instances. This is intentional, to avoid potential bugs caused by mixing
     # ordered and plain dicts (e.g., flattening a dict but using a
     # corresponding `OrderedDict` to pack it back).
+    result = dict(zip(_sorted(instance), args))
+    instance_type = type(instance)
+    if instance_type == _collections.defaultdict:

You're missing the else branch here (and, for defaultdicts, the constructor needs to be called with the factory argument, as in the old code).

punndcoder28

comment created time in 10 days

issue comment tensorflow/tensorflow

polyval gives TypeError when run inside tf.function but not when run eagerly

tf.math.polyval only works with lists of tensors, but it doesn't verify its arguments before starting work, so it errors out internally. The fact that it works in eager mode is incidental.

So you'll need to split coeffs:

coeffs = tf.eye(5)
coeffs = tf.split(coeffs, 5)  # Convert coeffs to a list of tensors.
pv = tf.math.polyval(coeffs, x)

The op implementation could be improved in a couple of ways:

  • it should ensure coeffs is a list and raise an appropriate error message
  • it may be made to work with tensor coeffs, which should be fairly straightforward
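Putting it together, a self-contained sketch of the workaround (the coefficients and the scalar x are illustrative):

import tensorflow as tf

@tf.function
def eval_poly(x):
    coeffs = tf.eye(5)
    coeffs = tf.split(coeffs, 5)  # polyval requires a list of tensors
    return tf.math.polyval(coeffs, x)

print(eval_poly(tf.constant(2.0)))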
mjwatkins2

comment created time in 10 days

issue comment tensorflow/tensorflow

ValueError when trying to speed up model by decorating tf.function

The error is raised because tf.function doesn't like it when you create variables inside it. In this case, the Keras layers are attempting to create variables so you should create those outside the tf.function.

On the other hand, I don't know if this is the recommended way of creating layers. @fchollet @omalleyt12 can give better advice there.

Also copying @karmel for the performance question.

Here are a couple of ways to build your example model, which I tested myself. In both cases, tf.function should be applied automatically:

Functional style:

input_layer = tf.keras.layers.Input([32, 32, 3])
model = tf.keras.models.Sequential(
    [input_layer, Flatten(), tf.keras.layers.Dense(10)])

a = np.ones([1, 32, 32, 3], dtype=np.float32)
print(model(a))

Object-oriented style:

class KerasModel(tf.keras.models.Model):

  def __init__(self):
    super(KerasModel, self).__init__()
    self.flatten = Flatten()
    self.dense = tf.keras.layers.Dense(10)

  def call(self, input_layer):
    x = input_layer
    x = self.flatten(x)
    x = self.dense(x)
    return x

model = KerasModel()

a = np.ones([1, 32, 32, 3], dtype=np.float32)
print(model(a))

You can also customize the way the code is optimized by adding @tf.function manually. For example, the code below will use XLA (if available in your installation), which is usually faster:

class KerasModel(tf.keras.models.Model):

  def __init__(self):
    super(KerasModel, self).__init__()
    self.flatten = Flatten()
    self.dense = tf.keras.layers.Dense(10)

  @tf.function(experimental_compile=True)
  def call(self, input_layer):
    x = input_layer
    x = self.flatten(x)
    x = self.dense(x)
    return x

model = KerasModel()

a = np.ones([1, 32, 32, 3], dtype=np.float32)
print(model(a))
hgffly

comment created time in 11 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 def _sequence_like(instance, args):
   Returns:
     `args` with the type of `instance`.
   """
-  if _is_mapping(instance):
+  if _is_mutable_mapping(instance):
+    result = dict(zip(_sorted(instance), args))
+    instance_type = type(instance)
+    if instance_type == _collections.OrderedDict:
+      d = _collections.OrderedDict(instance.default_factory)
+      for key in instance:
+        d[key] = result[key]
+      return d
+    else:
+      return instance_type((key, result[key]) for key in instance)
+  elif _is_mapping(instance):

defaultdict should be captured by the branch above, so we should no longer need the special case below.

punndcoder28

comment created time in 11 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 def _sequence_like(instance, args):
   Returns:
     `args` with the type of `instance`.
   """
-  if _is_mapping(instance):
+  if _is_mutable_mapping(instance):
+    result = dict(zip(_sorted(instance), args))
+    instance_type = type(instance)
+    if instance_type == _collections.OrderedDict:
+      d = _collections.OrderedDict(instance.default_factory)
+      for key in instance:
+        d[key] = result[key]
+      return d
+    else:
+      return instance_type((key, result[key]) for key in instance)

Here you should use the robust method to reconstruct the dict:

d = instance_type()
for key in instance:
  d[key] = result[key]
return d
punndcoder28

comment created time in 11 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 def _sequence_like(instance, args):
   Returns:
     `args` with the type of `instance`.
   """
-  if _is_mapping(instance):
+  if _is_mutable_mapping(instance):
+    result = dict(zip(_sorted(instance), args))
+    instance_type = type(instance)
+    if instance_type == _collections.OrderedDict:

I think you meant defaultdict here?

punndcoder28

comment created time in 11 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 def _sequence_like(instance, args):
     # Pack a CompositeTensor's components according to a TypeSpec.
     assert len(args) == 1
     return instance._from_components(args[0])  # pylint: disable=protected-access
+  # elif _is_mutable_mapping(instance):

Sounds good! In that case, please write a # TODO instead. We usually avoid commented-out code.

punndcoder28

comment created time in 11 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 def _sequence_like(instance, args):
     # Pack a CompositeTensor's components according to a TypeSpec.
     assert len(args) == 1
     return instance._from_components(args[0])  # pylint: disable=protected-access
+  # elif _is_mutable_mapping(instance):
+  #   new_mapping = instance_type(instance)
+  #   new_mapping.update()

Resolving for now, per the other comment.

punndcoder28

comment created time in 11 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 def _sequence_like(instance, args):
     # Pack a CompositeTensor's components according to a TypeSpec.
     assert len(args) == 1
     return instance._from_components(args[0])  # pylint: disable=protected-access
+  # elif _is_mutable_mapping(instance):

Did you mean for this to be commented out? Also, you want this check to be done before _is_mapping, which is more general.

punndcoder28

comment created time in 11 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 def _sequence_like(instance, args):
     # Pack a CompositeTensor's components according to a TypeSpec.
     assert len(args) == 1
     return instance._from_components(args[0])  # pylint: disable=protected-access
+  # elif _is_mutable_mapping(instance):
+  #   new_mapping = instance_type(instance)
+  #   new_mapping.update()

The implementation should be consistent with that for _is_mapping, to correctly handle OrderedDict.

In particular, you want:

new_mapping = instance_type()
for key in sorted(instance):
  new_mapping[key] = result[key]
punndcoder28

comment created time in 11 days

issue comment tensorflow/tensorflow

tf.function and tf.nest break for valid Mapping instances

@namedtoaster see the guide: https://www.tensorflow.org/install/source; you may want to skip to the Docker section for a ready-made setup.

ethereon

comment created time in 11 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 int IsInstanceOfRegisteredType(PyObject* obj, const char* type_name) {
 // Returns 1 if `o` is considered a mapping for the purposes of Flatten().
 // Returns 0 otherwise.
 // Returns -1 if an error occurred.
-int IsMappingHelper(PyObject* o) {
+int IsNestCompatibleMappingHelper(PyObject* o) {

See how it was done for e.g. IsMapping. I think it's in the same file.

punndcoder28

comment created time in 13 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 int IsInstanceOfRegisteredType(PyObject* obj, const char* type_name) {
 // Returns 1 if `o` is considered a mapping for the purposes of Flatten().
 // Returns 0 otherwise.
 // Returns -1 if an error occurred.
-int IsMappingHelper(PyObject* o) {
+int IsNestCompatibleMappingHelper(PyObject* o) {

It seems to be generated by this target: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/BUILD#L649

punndcoder28

comment created time in 13 days

pull request comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

Great! Now you just need to update nest.py to use the new method.

punndcoder28

comment created time in 13 days

issue comment tensorflow/tensorflow

tf.function using higher GPU memory than normal python function

@kkimdev @sanjoy for more insight about the differences in peak memory use.

The difference is not entirely surprising, because tf.function and a plain Python function execute things in different ways.

abhigoyal2210

comment created time in 14 days

issue closed tensorflow/tensorflow

Infinite loop with generators wrapping a dataset in tf.function


System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d38 2.0.0
  • Python version: 3.7.4
  • Bazel version (if compiling from source):
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version: 10.0 / 7.6.3
  • GPU model and memory: TITAN Xp, 12196MiB


Describe the current behavior

My use case was to use tqdm to track progress on a training loop over a tf.data.Dataset:

@tf.function
def train_one_epoch(model, dataset):
    for x in tqdm(dataset):
        train_step(model, x)

However, when train_one_epoch is wrapped in a tf.function, AutoGraph gets stuck in an infinite loop.

Describe the expected behavior

The expected behavior would be that using tf.function results in the same behavior as eager mode.

The current issue is that AutoGraph doesn't recognize tqdm(dataset) as a tf.data.Dataset (which is normal). However, iterating infinitely over the dataset in AutoGraph is weird and shouldn't happen. Maybe it should raise an exception.

Maybe the easiest fix would be to prevent dataset.__iter__ from being called inside tf.function when it is not in a loop.
So for x in dataset would be fine, but for x in tqdm(dataset) wouldn't.

Code to reproduce the issue

import tensorflow as tf


class Iterable():
    def __init__(self, iterable):
        self.iterable = iterable

    def __iter__(self):
        for obj in self.iterable:
            yield obj

@tf.function
def f(dataset):
    for x in Iterable(dataset):
        print(x)

dataset = tf.data.Dataset.range(5)
f(dataset)

The minimal Iterable class can be replaced by tqdm (from tqdm import tqdm), and this should yield the same results.

Other info / logs

Without the @tf.function, the wrapped dataset Iterable(dataset) is iterated over in eager mode:

tf.Tensor(0, shape=(), dtype=int64)
tf.Tensor(1, shape=(), dtype=int64)
tf.Tensor(2, shape=(), dtype=int64)
tf.Tensor(3, shape=(), dtype=int64)
tf.Tensor(4, shape=(), dtype=int64)

With the @tf.function, AutoGraph doesn't recognize the wrapped Iterable(dataset) as a tf.data.Dataset, so it tries to iterate over it in Python to trace the graph. However, it looks like Iterable(dataset).__iter__ infinitely yields new elements named IteratorGetNext.

This results in an infinite iteration over the dataset:

Tensor("IteratorGetNext:0", shape=(), dtype=int64)
Tensor("IteratorGetNext_1:0", shape=(), dtype=int64)
Tensor("IteratorGetNext_2:0", shape=(), dtype=int64)
Tensor("IteratorGetNext_3:0", shape=(), dtype=int64)
Tensor("IteratorGetNext_4:0", shape=(), dtype=int64)
Tensor("IteratorGetNext_5:0", shape=(), dtype=int64)
Tensor("IteratorGetNext_6:0", shape=(), dtype=int64)
Tensor("IteratorGetNext_7:0", shape=(), dtype=int64)
...

closed time in 15 days

omoindrot

issue comment tensorflow/tensorflow

Infinite loop with generators wrapping a dataset in tf.function

There is now a verification in autograph that outputs a warning in situations such as this. We could add a verification that looks specifically for tqdm, but that won't work for the custom generator in the OP, so it would be of limited help.

Here is an alternative that might be useful for adding a progress bar to a dataset - it modifies a dataset to print tqdm messages whenever it is being iterated, using tf.py_function:

import io

import tensorflow as tf
import tqdm


def tf_tqdm(ds):

  # Suppress printing the initial status message - it creates extra newlines, for some reason.
  bar = tqdm.tqdm(file=io.StringIO())

  def advance_tqdm(e):
    def fn():
      bar.update(1)
      # Print the status update manually.
      print('\r', end='')
      print(repr(bar), end='')
    tf.py_function(fn, [], [])
    return e

  return ds.map(advance_tqdm)


@tf.function
def f(ds):
  for x in tf_tqdm(ds):
    pass

ds = tf.data.Dataset.from_tensor_slices(tf.range(3))
f(ds)

Another similar alternative is to use Dataset.from_generator and supply it with a custom generator that itself includes tqdm. This will work whenever you are constructing the dataset from a Python object (it will not work when using built-in datasets like TFRecordDataset, though):

data = tf.range(3)

def f():
  bar = tqdm.tqdm(file=io.StringIO())
  for i in data:
    bar.update(1)
    # Print the status update manually.
    print('\r', end='')
    print(repr(bar), end='')
    yield i

ds = tf.data.Dataset.from_generator(f, tf.int32)


@tf.function
def iterate():  # renamed from f to avoid shadowing the generator above
  for x in ds:
    pass

iterate()
omoindrot

comment created time in 15 days

push event mdanatg/Astronet-Triage

Dan Moldovan

commit sha 4f4da06523c4399d3403dab86d70bcfd6a2c43fd

Add a basic configuration for vetting mode.

view details

push time in 15 days

push event mdanatg/Astronet-Triage

Dan M

commit sha 1e493ab284b6cf1de4fb56ea422613acac25aa51

Add batch normalization to the tuning process, fix the prediction threshold to ensure it's consistent across train and eval while tuning.

view details

push time in 16 days

pull request comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

Ok, I recommend filing an issue about this - the instructions should get you to a functional setup, and perhaps others who ran into this will know a solution.

punndcoder28

comment created time in 19 days

pull request comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

That's odd. I suspect the libraries aren't configured properly. This line is particularly suspicious:

INFO: Reading rc options for 'build' from /Users/puneethk/Desktop/Projects/Open-Source/tensorflow/.tf_configure.bazelrc:
  'build' options: --host_force_python=PY2 --action_env PYTHON_BIN_PATH=/System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python --action_env PYTHON_LIB_PATH=/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/site-packages --python_path=/System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python --config=xla --action_env TF_CONFIGURE_IOS=0

It looks like, although you specified Python 3 in ./configure, somehow bazel is still picking up Python 2 binaries, and that's definitely wrong.

The build instructions have some special steps for MacOS: https://www.tensorflow.org/install/source - have you followed those?

punndcoder28

comment created time in 20 days

pull request comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

You need to provide more details (what exact commands you ran, whether you ran them at head or with your changes, the complete console output, etc.), otherwise I won't be able to help.

punndcoder28

comment created time in 20 days

pull request comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

If ./configure doesn't complain then you should have the correct version. The tests should now pass at head.

punndcoder28

comment created time in 21 days

pull request comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

That error is pretty obscure - not sure what's causing it. There should be a way to get more details about it. But first I'd double check the environment:

  • if you haven't already, go through the instructions to set up and configure the build (first two sections, until "Build the pip package")
  • run the tests at head, to make sure they pass without your changes (it's rare, but sometimes the build does break)
punndcoder28

comment created time in 22 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 int IsInstanceOfRegisteredType(PyObject* obj, const char* type_name) { // Returns 1 if `o` is considered a mapping for the purposes of Flatten(). // Returns 0 otherwise. // Returns -1 if an error occurred.-int IsMappingHelper(PyObject* o) {+int IsNestCompatibleMappingHelper(PyObject* o) {

Start by adding it around line 112, _is_nest_compatible_mapping = _pywrap_utils.IsNestCompatibleMapping; the tests will tell you when everything is wired properly.

punndcoder28

comment created time in 22 days

pull request comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

ERROR: Skipping '//tensorflow/python/util/...': no targets found beneath 'tensorflow/python/util'
ERROR: no targets found beneath 'tensorflow/python/util'

Sorry, I gave you the wrong command line. It seems that all tests are bundled up in the parent directory. Here's the proper command:

bazel test //tensorflow/python:util_nest_test

But you should run all the tests to be sure, once util_nest_test passes. It will take longer the first time, but subsequent runs should be faster once the results are cached:

bazel test //tensorflow/python/...
punndcoder28

comment created time in 22 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 int IsInstanceOfRegisteredType(PyObject* obj, const char* type_name) {
 // Returns 1 if `o` is considered a mapping for the purposes of Flatten().
 // Returns 0 otherwise.
 // Returns -1 if an error occurred.
-int IsMappingHelper(PyObject* o) {
+int IsNestCompatibleMappingHelper(PyObject* o) {

There are a few other C++ files that use this function, so you probably want to add a new one like this instead.

Then you'll need to update the name at line 122.

Lastly, you'll need to change line 145 so that it creates a new empty object and then calls update on it, which is part of the MutableMapping interface:

new_mapping = instance_type()
new_mapping.update(result)
return new_mapping
punndcoder28

comment created time in 23 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 Python sequence, tuple (including `namedtuple`), or dict that can contain
 further sequences, tuples, and dicts.

+Structures are atom, or tuple or dict or list of constructed atoms and/or

This is good. Consider combining it with the paragraph above, to something like this:

"A nested structure is a Python collection that can contain further collections as well as other objects called atoms. Note that numpy arrays are considered atoms. nest recognizes the following types of collections:

  • `abc.Collection`
  • `list`
  • `tuple`
  • `namedtuple`
  • `dict`
  • `OrderedDict`
  • `MutableMapping`
  • `attr.s`-decorated classes"
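To illustrate the atom/collection distinction, a quick sketch (the outputs assume current tf.nest behavior):

import numpy as np
import tensorflow as tf

print(tf.nest.flatten({'b': [2, 3], 'a': 1}))  # [1, 2, 3]; dict keys are sorted
print(tf.nest.flatten(np.array([1, 2])))       # [array([1, 2])]; numpy arrays are atoms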
punndcoder28

comment created time in 23 days

push event mdanatg/Astronet-Triage

Dan M

commit sha b3b11e8a3238ec7b306161974aebbfbc80fc7e25

Add basic tuning.

view details

push time in 24 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 def flatten(structure, expand_composites=False):
   running.

   Args:
-    structure: an arbitrarily nested structure or a scalar object. Note, numpy
-      arrays are considered scalars.
+    structure: an arbitrarily nested structure which can be a scalar, or
+      tuple or dict or list of constructed scalars and/or other tuples/lists, or
+      a scalar object. Note, numpy arrays are considered scalars.

I'm thinking about using the word "atoms" instead of "scalars", to avoid confusion with scalar tensors (these structures can contain anything).

punndcoder28

comment created time in 25 days

Pull request review comment tensorflow/tensorflow

Update the docstring of @tf.exports methods in tf.nest.

 def flatten(structure, expand_composites=False):
   running.

   Args:
-    structure: an arbitrarily nested structure or a scalar object. Note, numpy
-      arrays are considered scalars.
+    structure: an arbitrarily nested structure which can be a scalar, or
+      tuple or dict or list of constructed scalars and/or other tuples/lists, or
+      a scalar object. Note, numpy arrays are considered scalars.

Suggestion: "an arbitrarily nested structure of list, tuple, namedtuple and dicts. Note, numpy arrays are considered atoms and are not flattened."

Could you also update the text at line 246?

Alternatively, you can define what a structure is in the module overview and refer to that from everywhere else, to avoid the duplication.

punndcoder28

comment created time in 25 days

issue comment tensorflow/tensorflow

tf.function and tf.nest break for valid Mapping instances

I'd start with everything in one PR (docstring changes and small code changes), and if you end up changing lots of code, break it down into more pieces.

ethereon

comment created time in 25 days

issue comment tensorflow/tensorflow

tf.function and tf.nest break for valid Mapping instances

Sure, here are a few directions for improvement:

First, the documentation of all @tf_export methods inside tf.nest needs to be brought up to date and should clarify what a "structure" is. Some entries mention that you can have dicts, others do not. I think the module documentation should clarify that. The docstrings in there are used to generate the public docs.

Second, IsMappingHelper, which checks for supported types, should check for MutableMapping rather than just Mapping. It should probably be renamed to IsNestCompatibleMapping on this occasion. The reason is that we need to know how to construct new mappings of the same type, and the Mapping interface doesn't standardize that. See the Python reference for more details.

Third, the code of _sequence_like that attempts to reconstruct dict objects should catch exceptions and raise a more informative one (something like "could not reconstruct object of type X"). Also, the code of _get_defun_inputs should catch exceptions and add more information (something like "could not handle arguments of <function name>: <message of caught error>").
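A sketch of the error-wrapping idea (the function name and message are illustrative, not the actual TF internals):

def _reconstruct(instance, flat_values):
  instance_type = type(instance)
  try:
    return instance_type(
        (key, value) for key, value in zip(sorted(instance), flat_values))
  except Exception as e:
    raise TypeError(
        'could not reconstruct object of type {}: {}'.format(instance_type, e))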

I hope this helps! As general guidance, I recommend sending multiple small PRs instead of one big change, to make them easier to review.

ethereon

comment created time in 25 days

issue comment tensorflow/tensorflow

tf.function breaks for valid Mapping instances

It looks like tf.nest is overly permissive with Mapping subclasses, because the contract of Mapping does not specify an interface for constructors. Users may accept key-value pairs, **kwargs, or just about anything else, and they'd still have a valid Mapping. Moreover, detecting how to properly use that constructor is extremely difficult in Python.
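For example, here is a minimal sketch of a perfectly valid Mapping that cannot be rebuilt via type(instance)(items), because its constructor takes no items at all (hypothetical, for illustration):

from collections.abc import Mapping

class FrozenDict(Mapping):

  def __init__(self):  # no way to pass key-value pairs here
    self._data = {'a': 1}

  def __getitem__(self, key):
    return self._data[key]

  def __iter__(self):
    return iter(self._data)

  def __len__(self):
    return len(self._data)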

The correct solution is either to only support dict and OrderedDict (treating everything else as opaque objects), or to explicitly require a certain initialization contract.

At any rate, this limitation should be documented, and a best effort to detect errors could be made by catching any exception that the constructor raises and adding a hint about this.

ethereon

comment created time in a month

issue closed tensorflow/tensorflow

r2.0/2.1 Python 3.8 AutoGraph could not transform <bound method LinearRegressionTF.fit of <tensorflow.python.eager.function.TfMethodTarget object at 0x7f7a5ccc1fa0>> and will run it as-is

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes (attached file code_warning_py38.py.txt -> rename it as .py and execute it)
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below): This problem occurs with both r2.0 (built from head) and r2.1rc0
  • Python version: 3.8 virtual environment (The problem does not occur with Python 3.7.3)
  • Bazel version (if compiling from source): 0.26.1
  • GCC/Compiler version (if compiling from source): 7.4.0
  • CUDA/cuDNN version: CUDA 10 / cuDNN 7.6.5
  • GPU model and memory: Nvidia Geforce RTX 2080 Ti

Describe the current behavior
export AUTOGRAPH_VERBOSITY=10
Run the attached python script (renamed as .py): python code_warning_py38.py &> output.txt
Read the output in the attached output.txt

Describe the expected behavior
There is no warning message when the script is run with Python 3.7.3.

Code to reproduce the issue
See both files attached: code_warning_py38.py.txt, output.txt

closed time in a month

dbonner

issue comment tensorflow/tensorflow

r2.0/2.1 Python 3.8 AutoGraph could not transform <bound method LinearRegressionTF.fit of <tensorflow.python.eager.function.TfMethodTarget object at 0x7f7a5ccc1fa0>> and will run it as-is

This should now be fixed at head. With the pip package built from source, running code_warning_py38.py should no longer print a warning.

dbonner

comment created time in a month

issue comment tensorflow/tensorflow

AutoGraph unexpected indent in tf-nightly-gpu-2.1.0.dev20191103

Backslash continuations should now be properly supported; the fix is in tf-nightly and will be available in TF 2.2.

netw0rkf10w

comment created time in a month

issue closed tensorflow/tensorflow

AutoGraph unexpected indent in tf-nightly-gpu-2.1.0.dev20191103

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): tf-nightly-gpu-2.1.0.dev20191103
  • Python version: 3.5.2
  • CUDA/cuDNN version: 10.0/7.1
  • GPU model and memory: GTX 1080 Ti

Describe the current behavior

After upgrading to tf-nightly-gpu-2.1.0.dev20191103 from tensorflow-gpu-2.0.0, I obtained this error when running my code:

WARNING:tensorflow:AutoGraph could not transform <bound method CRFLayer.mean_field of <models.crf_layer.CRFLayer object at 0x7f6124237b00>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: unexpected indent (<unknown>, line 36)

I did export AUTOGRAPH_VERBOSITY=10 but did not observe any other types of output other than the above. It says unexpected indent so I guess something changed in the parsing of Python code when building the graph? The problem does not occur on tensorflow-gpu-2.0.0 (I installed tf-nightly-gpu-2.1.0.dev20191103 because I need to be able to load_weights(pretrained_weights, by_name=True, skip_mismatch=True), which is not available in 2.0.0. There is another bug with this function that I will report in a separate issue.)

Describe the expected behavior
Like in TF 2.0.0: no AutoGraph warning.

Code to reproduce the issue
Unfortunately my code has a lot of dependencies and I was unable to create a minimal reproducible example. But from the warning message I guess it's easy enough to check in the source code.

closed time in a month

netw0rkf10w

issue closed tensorflow/tensorflow

Autograph transformation warning


System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): 18.04
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): 2.1
  • Python version: 3.7
  • Bazel version (if compiling from source):
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version: 7.6
  • GPU model and memory: 2080ti

Describe the current behavior
I'm seeing the message "AutoGraph could not transform <...> and will run it as-is". I found no performance issue, because the model shows almost the same losses as the equivalent code implemented with PyTorch. I'm using gast version 0.2.2, so I doubt this is caused by gast.

Code to reproduce the issue

import os
import tensorflow as tf
from tensorflow.keras import layers
import tensorflow.keras as keras


os.environ['AUTOGRAPH_VERBOSITY'] = '10'

def shape_list(x):
    static = x.shape.as_list()
    dynamic = tf.shape(x)
    return [dynamic[i] if s is None else s for i, s in enumerate(static)]


class TFConv1D(layers.Layer):
    def __init__(self, input_dim, output_dim, init_std=0.02, use_bias=True, **kwargs):
        """ TFConv1D layer as defined by Radford et al. for OpenAI GPT (and also used in GPT-2)
            Basically works like a Linear layer but the weights are transposed
        """
        super(TFConv1D, self).__init__(**kwargs)
        self.nf = output_dim
        self.nx = input_dim
        self.initializer_range = init_std
        self.use_bias = use_bias
        self.weight = self.add_weight(
            "{}_weight".format(self.name),
            shape=[self.nx, self.nf],
            initializer=keras.initializers.TruncatedNormal(stddev=init_std))
        if self.use_bias:
            self.bias = self.add_weight(
                "{}_bias".format(self.name),
                shape=[1, self.nf],
                initializer=tf.zeros_initializer())

    def call(self, x):
        x = tf.matmul(x, self.weight)
        if self.use_bias:
            x += self.bias
        return x


class Adaptive_Softmax(layers.Layer):
    def __init__(self, vocab_size: int, hidden_dim: int, cutoffs: list, padding_index: int, init_std=0.02):
        super(Adaptive_Softmax, self).__init__()
        self.padding_index = padding_index
        self.vocab_size = vocab_size
        self.hidden_dim = hidden_dim
        self.n_clusters = len(cutoffs) + 1
        self.cutoffs = [0] + cutoffs + [vocab_size]
        self.cluster_logit = TFConv1D(hidden_dim, self.n_clusters)
        self.logits = self.add_weight(
            "{}_weight".format(self.name),
            shape=[hidden_dim, vocab_size],
            initializer=keras.initializers.TruncatedNormal(stddev=init_std))

        self.bias = self.add_weight(
            "{}_bias".format(self.name),
            shape=[1, vocab_size],
            initializer=tf.zeros_initializer())

    def call(self, x, y):
        x = x[:, :-1]
        b, l, h = shape_list(x)
        x = tf.reshape(x, [b * l, -1])
        y = tf.reshape(y, [-1])
        cl = self.cluster_logit(x)
        cluster_ll = tf.nn.log_softmax(cl, axis=1)
        nll = tf.zeros_like(y, dtype=x.dtype)
        tail_weight = self.logits

        for i in range(self.n_clusters):
            l, r = self.cutoffs[i], self.cutoffs[i + 1]
            mask = (y >= l) & (y < r)
            indices = tf.where(mask)
            target_i = tf.boolean_mask(y, mask) - l
            tail_logit = tf.matmul(tf.boolean_mask(x, mask), tail_weight[:, l:r]) + self.bias[:, l:r]
            tail_logprob_i = tf.nn.log_softmax(tail_logit, axis=1)  # [b,vocab]
            # word_nll[indices] = -logprob_i
            cur_ll = tf.gather_nd(cluster_ll, tf.concat([indices, tf.ones_like(indices) * i], 1)) + \
                     tf.gather_nd(tail_logprob_i,
                                  tf.stack([tf.range(tf.size(target_i, out_type=target_i.dtype)), target_i], 1))
            nll = tf.tensor_scatter_nd_update(nll, indices, -cur_ll)
        return nll

vocab_size = 51
hidden_dim = 100
cutoffs = [5,20]
padding_index = 50
x = tf.random.normal((800,51,100),dtype=tf.float32)
y = tf.random.uniform((800,50),maxval=50,dtype=tf.int64)

dataset = tf.data.Dataset.from_tensor_slices((x,y))
batchfier = dataset.batch(4)

model = Adaptive_Softmax(vocab_size,hidden_dim,cutoffs,padding_index)
optimizer = keras.optimizers.Adam()

@tf.function
def update_step(x, y):
    with tf.GradientTape() as tape:
        batch_loss = model(x,y)
    step_grad = tape.gradient(batch_loss, model.trainable_variables)
    optimizer.apply_gradients(zip(step_grad, model.trainable_variables))
    return batch_loss

for x,y in batchfier:
    update_step(x,y)

Other info / logs

[CUDA/cuDNN library loading and GPU device setup log lines omitted for brevity]

WARNING:tensorflow:AutoGraph could not transform <bound method Adaptive_Softmax.call of <__main__.Adaptive_Softmax object at 0x7ff7a6a7f320>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output.
Cause: expected exactly one node node, found [<gast.gast.FunctionDef object at 0x7ff79c3853c8>, <gast.gast.Return object at 0x7ff79c385c50>]
WARNING:tensorflow:AutoGraph could not transform <bound method Adaptive_Softmax.call of <__main__.Adaptive_Softmax object at 0x7ff7a6a7f320>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output.
Cause: expected exactly one node node, found [<gast.gast.FunctionDef object at 0x7ff79c212198>, <gast.gast.Return object at 0x7ff79c2121d0>]

Process finished with exit code 0

closed time in a month

bj1123

issue commenttensorflow/tensorflow

Autograph transformation warning

Thank you for checking. We'll track fixing this issue in #35765.

bj1123

comment created time in a month

issue closedtensorflow/autograph

WARNING:tensorflow:Entity <function _CopyToDeviceDataset.__init__.<locals>._init_func at 0x7fadaffa7d90> could not be transformed and will be executed as-is.

The full error message:

WARNING:tensorflow:Entity <function _CopyToDeviceDataset.__init__.<locals>._init_func at 0x7f8069decd90> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <function _CopyToDeviceDataset.__init__.<locals>._remote_init_func at 0x7f8068094268> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <function _CopyToDeviceDataset.__init__.<locals>._next_func at 0x7f80680949d8> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <function _CopyToDeviceDataset.__init__.<locals>._remote_next_func at 0x7f7df0609158> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <function _CopyToDeviceDataset.__init__.<locals>._finalize_func at 0x7f7df0609bf8> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <function _CopyToDeviceDataset.__init__.<locals>._remote_finalize_func at 0x7f7df0609ea0> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Bad argument number for Name: 3, expecting 4

closed time in a month

Bengt

issue commenttensorflow/autograph

WARNING:tensorflow:Entity <function _CopyToDeviceDataset.__init__.<locals>._init_func at 0x7fadaffa7d90> could not be transformed and will be executed as-is.

I believe all the issues we've talked about have been addressed, although some fixes will only land in 2.2. Feel free to reopen if we missed anything.

Bengt

comment created time in a month

issue commenttensorflow/tensorflow

Autograph failure with `\`

For a workaround until this is fixed, note that you can use parentheses to break expressions on multiple lines:

a = (
    1)

emailweixu

comment created time in a month

issue commenttensorflow/tensorflow

AutoGraph unexpected indent in tf-nightly-gpu-2.1.0.dev20191103

Thank you, and sorry for the delay. It looks like the warning is caused by the same bug as #35765. Removing the backslash continuations worked in my tests. Until that's fixed, any of these workarounds should silence the warning:

  • removing the backslash continuations
  • decorating the function with @tf.autograph.experimental.do_not_convert, since it doesn't contain data-dependent control flow (see the sketch below)

Note that you can use parentheses to break expressions over multiple lines, as an alternative to backslash continuations.
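For example, a minimal sketch of the decorator workaround (CRFLayer here is a hypothetical stand-in for the original class):

import tensorflow as tf

class CRFLayer(tf.keras.layers.Layer):
    # This method has no data-dependent control flow, so skipping
    # AutoGraph conversion does not change its behavior inside tf.function.
    @tf.autograph.experimental.do_not_convert
    def mean_field(self, x):
        return x * 2.0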

netw0rkf10w

comment created time in a month

issue commenttensorflow/tensorflow

Autograph transformation warning

This looks like #35765. Could you try removing the backslash continuation on this line:

            cur_ll = tf.gather_nd(cluster_ll, tf.concat([indices, tf.ones_like(indices) * i], 1)) + \
                     tf.gather_nd(tail_logprob_i,
                                  tf.stack([tf.range(tf.size(target_i, out_type=target_i.dtype)), target_i], 1))

Breaking the line using parentheses should work:

            cur_ll = (
                tf.gather_nd(cluster_ll, tf.concat([indices, tf.ones_like(indices) * i], 1)) +
                tf.gather_nd(tail_logprob_i,
                             tf.stack([tf.range(tf.size(target_i, out_type=target_i.dtype)), target_i], 1)))

bj1123

comment created time in a month

issue commentserge-sans-paille/gast

Name constructor required 3 arguments prior to 0.3, and 4 arguments after

I think the current state is OK; it's not bothersome once you switch to 0.3+.

mdanatg

comment created time in a month

issue closedserge-sans-paille/gast

Name constructor required 3 arguments prior to 0.3, and 4 arguments after

It looks like the new grammar added a 4th type_comment argument to Name nodes. Upgrading the code for this change would break existing users and force them to upgrade their gast installation. Is it possible to default this argument to None instead? Then the code would remain compatible across versions.
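For code that must support both versions, one stopgap is a small factory that tries the new signature first; a minimal sketch (make_name is a hypothetical helper, not part of gast):

import gast

def make_name(name_id, ctx, annotation=None):
    # gast 0.3+ added a fourth type_comment field to Name; older
    # versions accept exactly three arguments. The exact exception
    # raised on an arity mismatch varies by version, hence the broad catch.
    try:
        return gast.Name(name_id, ctx, annotation, None)  # gast >= 0.3
    except (TypeError, AssertionError):
        return gast.Name(name_id, ctx, annotation)  # gast < 0.3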

closed time in a month

mdanatg

issue commenttensorflow/tensorflow

r2.0/2.1 Python 3.8 AutoGraph could not transform <bound method LinearRegressionTF.fit of <tensorflow.python.eager.function.TfMethodTarget object at 0x7f7a5ccc1fa0>> and will run it as-is

Progress update: the astunparse upgrade is done; we're awaiting a new release of gast that fixes a bug with FunctionDef nodes in 3.8 (see https://github.com/serge-sans-paille/gast/pull/43).

dbonner

comment created time in a month

issue commenttensorflow/tensorflow

Autograph failure with `\`

Looks like a bug in the parser. dedent_block seems to be confused by the backslash continuation:

# Assumed import path: dedent_block lives in AutoGraph's pyct parser utilities.
from tensorflow.python.autograph.pyct.parser import dedent_block

s = r'''
    def f():
        a = \
            1
        return a
'''

print(dedent_block(s))

The output shows the mangled indentation (the continuation line and the return lose their relative indent):

def f():
    a = \
    1
return a
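For comparison, the standard library's textwrap.dedent handles the same string as expected, preserving each line's relative indent:

import textwrap

print(textwrap.dedent(s))

which prints:

def f():
    a = \
        1
    return a
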
emailweixu

comment created time in a month

Pull request review commenttensorflow/tensorflow

Fix version comparison for Python 3.10 and 4

 def write_docs(output_dir, 

Looks good now, thank you!

hugovk

comment created time in a month

Pull request review commenttensorflow/tensorflow

Fix version comparison for Python 3.10 and 4

 def write_docs(output_dir, 

Yes, sorry for the confusion. I guess I should have said to revert it.

hugovk

comment created time in a month

Pull request review commenttensorflow/tensorflow

Fix version comparison for Python 3.10 and 4

 def write_docs(output_dir, 

Still shows deleted, but maybe the UI is confusing me. Let's try to pull it and see if the tests are happy.

hugovk

comment created time in a month

Pull request review commenttensorflow/tensorflow

Fix version comparison for Python 3.10 and 4

 def write_docs(output_dir, 

I believe git reset should work. git revert + squash might work too.

hugovk

comment created time in a month

Pull request review commenttensorflow/tensorflow

Fix version comparison for Python 3.10 and 4

 def write_docs(output_dir, 

Thanks! It still shows in the PR, as deleted. We need to exclude any changes to it from the PR.

hugovk

comment created time in a month

Pull request review commenttensorflow/tensorflow

Fix version comparison for Python 3.10 and 4

 def write_docs(output_dir, 

Sorry for the delay. It looks like the docs tool is being moved to a separate package, and the owners prefer not to make changes until that's complete. Could you remove this file only?

hugovk

comment created time in a month
