

pull request comment tensorflow/tensorflow

Add __reduce_ex__ to Keras Model to enable copy.deepcopy and pickle

The current approach should work for subclassed models, as long as the model is unpickled in a CustomObjectScope or the class is decorated with tf.keras.utils.register_keras_serializable.

The current approach matches how things are saved in the HDF5 format (except that in HDF5, weights are saved as a dictionary mapping layer names to layer.get_weights() rather than in a single model.get_weights() list). We should be able to extend the pickled content to include traced tf.functions so that custom objects can be deserialized without being registered.
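As context for the registration route, here is a minimal sketch (the ScaleLayer class and the "MyPackage" package name are hypothetical) of how register_keras_serializable makes a custom class discoverable by name during deserialization, without a CustomObjectScope:

```python
import tensorflow as tf

# Hypothetical custom layer; registering it lets Keras look the class up
# by its registered name ("MyPackage>ScaleLayer") at deserialization time.
@tf.keras.utils.register_keras_serializable(package="MyPackage")
class ScaleLayer(tf.keras.layers.Layer):
    def __init__(self, scale=2.0, **kwargs):
        super().__init__(**kwargs)
        self.scale = scale

    def call(self, inputs):
        return inputs * self.scale

    def get_config(self):
        config = super().get_config()
        config["scale"] = self.scale
        return config

# Round-trip through Keras serialization without passing custom_objects.
serialized = tf.keras.layers.serialize(ScaleLayer(scale=3.0))
restored = tf.keras.layers.deserialize(serialized)
print(restored.scale)  # 3.0
```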

adriangb

comment created time in 10 days

issue comment tensorflow/tensorflow

Passing tf.keras.Model as tf.function argument does not create concrete function

Another option is to make step_model a method of MyModel (methods are special-cased, so the self argument doesn't have to be translated to an encodable argument):

import tensorflow as tf

class MyModel(tf.keras.Model):
    def call(self, inputs):
        return 2 * inputs

    @tf.function
    def step_model(self, inputs):
        return self(inputs)

inputs = tf.convert_to_tensor(1, dtype=tf.float32)
model = MyModel()

print(f"step_model() = {model.step_model(inputs)}")  # 2.0
print(f"step_model() concrete functions: {model.step_model._list_all_concrete_functions_for_serialization()}")  # [<tensorflow.python.eager.function.ConcreteFunction object at 0x7fd9cc231b00>]
jarednielsen

comment created time in a month

issue comment tensorflow/tensorflow

Keras `model_from_json` ignores distribution strategy

I'm not able to test the GPU utilization, but I believe it should be utilized.

dvbuntu

comment created time in 2 months

issue comment tensorflow/tensorflow

Keras `model_from_json` ignores distribution strategy

Hmm, this might be a bug in an earlier version of TensorFlow. I tried running it in Colab (which is currently on version 2.2.0-rc3), and the _distribution_strategy attribute is set correctly. Can you try updating and checking again?

dvbuntu

comment created time in 2 months

issue comment tensorflow/tensorflow

[tf-nightly] unable to load saved functional model

@QuantumNinja92 When loading checkpoints, you should call model.load_weights instead of tf.keras.models.load_model.
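A minimal sketch of that pattern (the toy model and the "ckpt" path are made up for illustration): restore a weights-only checkpoint into a model of the same architecture with load_weights rather than load_model.

```python
import numpy as np
import tensorflow as tf

# Build a model and save only its weights (a checkpoint, not a SavedModel).
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
model.save_weights("ckpt")

# A checkpoint stores weights only, so rebuild the architecture and call
# load_weights instead of tf.keras.models.load_model.
restored = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
restored.load_weights("ckpt")

x = np.ones((1, 4), dtype=np.float32)
same = np.allclose(model.predict(x), restored.predict(x))
print(same)  # True
```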

Cospel

comment created time in 2 months

issue comment tensorflow/tensorflow

predict result with SavedModel are not same in Python Api and Java Api

Hmm, it's hard to say from the results alone. Is there some randomization being applied in the signature (e.g. dropout)? Are you able to post code that produced the SavedModel?

IamHimon

comment created time in 2 months

issue comment tensorflow/tensorflow

Keras `model_from_json` ignores distribution strategy

Hi Anjali, can you take a look? The model appears to have a distribution strategy (m2._distribution_strategy = mirrored strategy), and the variables are mirrored, so I'm not sure why the model isn't utilizing the GPU.

dvbuntu

comment created time in 2 months

issue closed tensorflow/tensorflow

When the result of tf.saved_model.load goes out of scope, it invalidates models that are still in-scope

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, a trivial change to a stock example
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 x64
  • TensorFlow installed from (source or binary): binary via pip
  • TensorFlow version (use command below): tensorflow-gpu 2.0.0
  • Python version: 3.6.6
  • CUDA/cuDNN version: 10.0 / 10.0-windows10-x64-v7.6.0.64
  • GPU model and memory: GeForce GTX 1050 Ti (laptop)

Describe the current behavior

Loading a TF2 model built with the Keras API, and then predicting with random data follows this minimal recipe:

import numpy as np
import tensorflow as tf

loaded = tf.saved_model.load('some/model')
infer = loaded.signatures['serving_default']
data = tf.constant(np.asarray(np.random.randn(my_shape), dtype=np.float32))
preds = infer(data)

However, if we don't keep a reference to loaded around or lose it some time later...

import numpy as np
import tensorflow as tf

infer = tf.saved_model.load('some/model').signatures['serving_default']
data = tf.constant(np.asarray(np.random.randn(my_shape), dtype=np.float32))
preds = infer(data)

... it fails with this utterly confusing error about variables being potentially uninitialized (for which google searches lead down various unrelated tf 1.x related rabbit holes):

2019-11-13 15:37:09.423218: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Failed precondition: Error while reading resource variable batch_normalization_150/moving_mean_12513 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/batch_normalization_150/moving_mean_12513/class tensorflow::Var does not exist.
	 [[{{node StatefulPartitionedCall/model_65/model_60/batch_normalization_150/FusedBatchNormV3/ReadVariableOp}}]]
Traceback (most recent call last):
  File "D:/DEVEL/experiments/test_load.py", line 17, in <module>
    preds = infer(data)
  File "C:\Users\chris\Miniconda3\envs\np2019-11-06e2\lib\site-packages\tensorflow_core\python\eager\function.py", line 1081, in __call__
    return self._call_impl(args, kwargs)
  File "C:\Users\chris\Miniconda3\envs\np2019-11-06e2\lib\site-packages\tensorflow_core\python\eager\function.py", line 1121, in _call_impl
    return self._call_flat(args, self.captured_inputs, cancellation_manager)
  File "C:\Users\chris\Miniconda3\envs\np2019-11-06e2\lib\site-packages\tensorflow_core\python\saved_model\load.py", line 99, in _call_flat
    cancellation_manager)
  File "C:\Users\chris\Miniconda3\envs\np2019-11-06e2\lib\site-packages\tensorflow_core\python\eager\function.py", line 1224, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager)
  File "C:\Users\chris\Miniconda3\envs\np2019-11-06e2\lib\site-packages\tensorflow_core\python\eager\function.py", line 511, in call
    ctx=ctx)
  File "C:\Users\chris\Miniconda3\envs\np2019-11-06e2\lib\site-packages\tensorflow_core\python\eager\execute.py", line 67, in quick_execute
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.FailedPreconditionError:  Error while reading resource variable batch_normalization_150/moving_mean_12513 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/batch_normalization_150/moving_mean_12513/class tensorflow::Var does not exist.
	 [[{{node StatefulPartitionedCall/model_65/model_60/batch_normalization_150/FusedBatchNormV3/ReadVariableOp}}]] [Op:__inference_signature_wrapper_5266]

Function call stack:
signature_wrapper

Describe the expected behavior

The infer object holds a reference to the data that it depends on so that it doesn't get garbage-collected.

Code to reproduce the issue

If you run the TF2.0 Saved Model tutorial colab notebook (https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/saved_model.ipynb) up to and including the cell

!saved_model_cli show --dir /tmp/mobilenet/1 --tag_set serve --signature_def serving_default

but then you execute the following cell instead of the one that would create the variable loaded:

infer = tf.saved_model.load("/tmp/mobilenet/1/").signatures["serving_default"]
print(infer.structured_outputs)
labeling = infer(tf.constant(x))[pretrained_model.output_names[0]]
decoded = imagenet_labels[np.argsort(labeling)[0,::-1][:5]+1]
print("Result after saving and loading:\n", decoded)

... then it fails with:

---------------------------------------------------------------------------
FailedPreconditionError                   Traceback (most recent call last)
<ipython-input-8-f42985f0118d> in <module>()
----> 1 labeling = infer(tf.constant(x))[pretrained_model.output_names[0]]
      2 
      3 decoded = imagenet_labels[np.argsort(labeling)[0,::-1][:5]+1]
      4 
      5 print("Result after saving and loading:\n", decoded)

6 frames
/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)

FailedPreconditionError:  Error while reading resource variable conv_pw_7_bn/moving_variance_33153 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/conv_pw_7_bn/moving_variance_33153/N10tensorflow3VarE does not exist.
	 [[{{node StatefulPartitionedCall/mobilenet_1.00_224/conv_pw_7_bn/FusedBatchNormV3/ReadVariableOp_1}}]] [Op:__inference_signature_wrapper_30328]

Function call stack:
signature_wrapper

closed time in 2 months

chkothe

issue comment tensorflow/tensorflow

When the result of tf.saved_model.load goes out of scope, it invalidates models that are still in-scope

Functions only keep weak references to variables, so if the loaded object is garbage collected, the variables will be deleted as well. Make sure the loaded object remains in memory (using @jrbuhl93's approach as one such workaround).

chkothe

comment created time in 2 months

issue comment tensorflow/tensorflow

Using SavedModels with low-level API in TF 2.x

The suggested fix should work -- can you submit a PR (with a test)? Thank you!

ongun-kanat

comment created time in 2 months

issue comment tensorflow/tensorflow

Not able to load a tf.keras model

There's an issue when reshape is called with an empty list as the shape. I can't think of a workaround, but I'm looking into a way to fix this.

sonu1-p

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

Restructure Keras Scikit-Learn wrappers to better implement Scikit-Learn API

 function install_pip_deps {
   ${SUDO_CMD} ${PIP_CMD} install six==1.12.0
   ${SUDO_CMD} ${PIP_CMD} install grpcio
   ${SUDO_CMD} ${PIP_CMD} install portpicker
-  ${SUDO_CMD} ${PIP_CMD} install scipy
-  ${SUDO_CMD} ${PIP_CMD} install scikit-learn
+  ${SUDO_CMD} ${PIP_CMD} install scipy==1.2.3
+  ${SUDO_CMD} ${PIP_CMD} install scikit-learn==0.20.4

@gunan Do you know if this is the right approach to change the scikit versions?

adriangb

comment created time in 2 months

pull request comment tensorflow/tensorflow

Restructure Keras Scikit-Learn wrappers to better implement Scikit-Learn API

Sure thing, thanks!

adriangb

comment created time in 2 months

issue closed tensorflow/tensorflow

load_model(filename) fails on weight ordering if sublayer .trainable is modified after init

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS, Darwin-19.3.0-x86_64-i386-64bit, mac version: ('10.15.3', ('', '', ''), 'x86_64')
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): v1.12.1-26991-g5a6ae102f0 2.2.0-dev20200311
  • Python version: 3.7.6

Describe the current behavior

Creating a layer, then setting layer.trainable = False (after creation), then building a model with that layer, saving it with model.save(filename), and loading it with tf.keras.models.load_model(filename, custom_objects=...) fails because the weight ordering for the layer is inconsistent.

Describe the expected behavior

I expect to be able to load a model I saved, despite having changed the trainable attribute on some layer before compiling and saving. (The trainable attribute is documented, with no warning that it must not be mutated, nor is there any runtime warning about this.)

Standalone code to reproduce the issue

import tensorflow as tf
from tensorflow import keras


class LayerWithSublayers(keras.layers.Layer):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.embedding = keras.layers.Embedding(3, 2)
        self.dense = keras.layers.Dense(4)

    def call(self, inputs, **kwargs):
        return self.dense(self.embedding(inputs))


input_ids = keras.Input(shape=(4,), dtype=tf.int32, name='input_ids')
output_layer = LayerWithSublayers()
output_layer.embedding.trainable = False
output = output_layer(input_ids)

model = keras.Model(inputs=[input_ids], outputs=[output])
model.compile(loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy'])

model_file_name = 'foo.h5'
model.save(filepath=model_file_name)

loaded_model = keras.models.load_model(model_file_name, custom_objects={'LayerWithSublayers': LayerWithSublayers})

If I change this to pass trainable=False when creating the keras.layers.Embedding, then the model loads just fine. So the problem is that the change of trainable is not reflected when the model is serialized to HDF5 format.

Other info / logs

Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

Running the above script outputs this traceback:

Traceback (most recent call last):
  File "keras_save_load_h5_nontrainable.py", line 26, in <module>
    loaded_model = keras.models.load_model(model_file_name, custom_objects={'LayerWithSublayers': LayerWithSublayers})
  File "/Users/gbr/.pyenv/versions/NLNIGHTLY/lib/python3.7/site-packages/tensorflow/python/keras/saving/save.py", line 184, in load_model
    return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
  File "/Users/gbr/.pyenv/versions/NLNIGHTLY/lib/python3.7/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 173, in load_model_from_hdf5
    load_weights_from_hdf5_group(f['model_weights'], model.layers)
  File "/Users/gbr/.pyenv/versions/NLNIGHTLY/lib/python3.7/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 704, in load_weights_from_hdf5_group
    K.batch_set_value(weight_value_tuples)
  File "/Users/gbr/.pyenv/versions/NLNIGHTLY/lib/python3.7/site-packages/tensorflow/python/keras/backend.py", line 3402, in batch_set_value
    x.assign(np.asarray(value, dtype=dtype(x)))
  File "/Users/gbr/.pyenv/versions/NLNIGHTLY/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 842, in assign
    self._shape.assert_is_compatible_with(value_tensor.shape)
  File "/Users/gbr/.pyenv/versions/NLNIGHTLY/lib/python3.7/site-packages/tensorflow/python/framework/tensor_shape.py", line 1117, in assert_is_compatible_with
    raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (3, 2) and (2, 4) are incompatible

closed time in 2 months

gthb

issue comment tensorflow/tensorflow

load_model(filename) fails on weight ordering if sublayer .trainable is modified after init

Restoring the trainable value can be tricky.

Say that LayerWithSublayers defines the trainable status of the embedding layer in the class definition, and is saved with the embedding.trainable set to True:

class LayerWithSublayers(keras.layers.Layer):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.embedding = keras.layers.Embedding(3, 2)
        self.embedding.trainable = True

Then the definition of the class is changed so that embedding.trainable is False:

class LayerWithSublayers(keras.layers.Layer):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.embedding = keras.layers.Embedding(3, 2)
        self.embedding.trainable = False

When loading the model with the new definition, should the embedding layer retain its original trainable status, or should it use the trainable status set in the LayerWithSublayers class?

The current behavior is that we use whatever is returned by the layer's get_config/from_config (i.e. the latter of the two options). If you want LayerWithSublayers to retain the trainable status of one of its internal layers, then consider adding that to the layer's get_config method.

e.g.

class LayerWithSublayers(keras.layers.Layer):
    def __init__(self, train_embeddings=True, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.embedding = keras.layers.Embedding(3, 2)
        self.embedding.trainable = train_embeddings

    def get_config(self):
        config = super().get_config()
        config['train_embeddings'] = self.embedding.trainable
        return config
gthb

comment created time in 2 months

issue closed tensorflow/tensorflow

Incorrect number of Total Parameters when Loading a Saved Model with Trainable = False

Resume_Classification_Model.zip

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): N/A, as it can be reproduced in Google Colab
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
  • TensorFlow installed from (source or binary): pip
  • TensorFlow version (use command below): 2.1
  • Python version: Colab
  • Bazel version (if compiling from source): N/A
  • GCC/Compiler version (if compiling from source): N/A
  • CUDA/cuDNN version: N/A
  • GPU model and memory: N/A

This issue is similar to 29535 but now occurring in Tensorflow Version 2.x.

Describe the current behavior: The total number of parameters, when we load the saved model with trainable = False, is double the actual total.

Describe the expected behavior: The total number of parameters should be the same whether we use trainable = True or trainable = False.

Standalone code to reproduce the issue: Please find the attached Gist.

Please find the Model, "Resume_Classification_Model.zip", attached.

closed time in 2 months

rakeshmothukuru1

issue comment tensorflow/tensorflow

Incorrect number of Total Parameters when Loading a Saved Model with Trainable = False

I think this is working as intended. compile locks down the trainable weights (I believe because the train ops are generated during compile), so you should recompile to get the correct number of parameters. (Also, the model summary notes that you should call compile after changing the trainable value.)
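A small sketch of that workflow (the toy model here is made up): flip trainable, then recompile before reading the parameter counts.

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
model.compile(optimizer="sgd", loss="mse")

model.trainable = False
# compile() snapshots which weights are trainable for training purposes,
# so recompile after flipping `trainable` to keep the counts consistent:
model.compile(optimizer="sgd", loss="mse")

print(len(model.trainable_weights))  # 0
model.summary()  # now reports all parameters as non-trainable
```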

rakeshmothukuru1

comment created time in 2 months

pull request comment tensorflow/tensorflow

Restructure Keras Scikit-Learn wrappers to better implement Scikit-Learn API

@fchollet can you review the API changes made to the Scikit Learn wrapper?

adriangb

comment created time in 2 months

issue comment tensorflow/tensorflow

Keras load weights fails to load model from directory containing [[

Hi @racinmat, this does seem like a bug in our checkpoint loading code. We're not able to fix this right away, so I'll mark this as open for contributions if anyone wants to help resolve this.

racinmat

comment created time in 2 months

issue closed tensorflow/tensorflow

[1.15]Discrepancy between documentation & behaviour in tf.keras.Model.Save

URL(s) with the issue:

In the 1.15 changelog:

https://github.com/tensorflow/tensorflow/releases/tag/v1.15.0

tf.keras.model.save_model and model.save now defaults to saving a TensorFlow SavedModel.

In the 1.15 docstring: https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/keras/Model#save

  • filepath: String, path to SavedModel or H5 file to save the model.
  • overwrite: Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.
  • include_optimizer: If True, save optimizer's state together.
  • save_format: Either 'tf' or 'h5', indicating whether to save the model to Tensorflow SavedModel or HDF5. The default is currently 'h5', but will switch to 'tf' in TensorFlow 2.0. The 'tf' option is currently disabled (use tf.keras.experimental.export_saved_model instead).

Description of issue (what needs changing):

  • The changelog states that tf.keras.Model is saved using the tf format by default
  • The docstring states that the default save format in TF 1.x is HDF5 and that tf is disabled
  • The "tf" save format is NOT disabled but can be passed as a parameter (https://github.com/tensorflow/tensorflow/blob/r1.15/tensorflow/python/keras/saving/save.py#L92). We can still save using the tf format with tf.keras.Model.save(). However, the resulting model cannot be loaded back.

Usage example

This is not a critical issue but this can be confusing to users reading the changelog and reading the docstring, and seeing that tf behaviour is enabled by default.

This will lead users to:

  • Being confused between behaviours...
  • Thinking they need to update their codebases to switch to 1.15
  • Seeing that tf format doesn't work in tf1.15

Saving works but reloading does not, which confirms that the tf save format doesn't fully work:

i = tf.keras.layers.Input(shape=(10,))
x = tf.keras.layers.Dense(2)(i)
o = tf.keras.layers.Activation("softmax")(x)
m = tf.keras.Model(inputs=i, outputs=o)
m.save('test_model_tf', save_format="tf")
m2 = tf.keras.models.load_model("test_model_tf")
m2.summary()
Layer (type)                 Output Shape              Param #   
=================================================================
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-48-d0af62cb113d> in <module>
----> 1 m2.summary()

~/opt/miniconda3/envs/py36-tf1.15/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py in summary(self, line_length, positions, print_fn)
   1459                               line_length=line_length,
   1460                               positions=positions,
-> 1461                               print_fn=print_fn)
   1462 
   1463   def _validate_graph_inputs_and_outputs(self):

~/opt/miniconda3/envs/py36-tf1.15/lib/python3.6/site-packages/tensorflow_core/python/keras/utils/layer_utils.py in print_summary(model, line_length, positions, print_fn)
    224   for i in range(len(layers)):
    225     if sequential_like:
--> 226       print_layer_summary(layers[i])
    227     else:
    228       print_layer_summary_with_connections(layers[i])

~/opt/miniconda3/envs/py36-tf1.15/lib/python3.6/site-packages/tensorflow_core/python/keras/utils/layer_utils.py in print_layer_summary(layer)
    182     name = layer.name
    183     cls_name = layer.__class__.__name__
--> 184     fields = [name + ' (' + cls_name + ')', output_shape, layer.count_params()]
    185     print_row(fields, positions)
    186 

~/opt/miniconda3/envs/py36-tf1.15/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py in count_params(self)
   1632                          ', but the layer isn\'t built. '
   1633                          'You can build it manually via: `' + self.name +
-> 1634                          '.build(batch_input_shape)`.')
   1635     return int(sum(np.prod(w.shape.as_list()) for w in self.weights))
   1636 

ValueError: You tried to call `count_params` on input_1, but the layer isn't built. You can build it manually via: `input_1.build(batch_input_shape)`.

This works,

i = tf.keras.layers.Input(shape=(10,))
x = tf.keras.layers.Dense(2)(i)
o = tf.keras.layers.Activation("softmax")(x)
m = tf.keras.Model(inputs=i, outputs=o)
m.save('test_model_hdf5.hdf5')
m2 = tf.keras.models.load_model("test_model_hdf5.hdf5")
m2.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_4 (InputLayer)         [(None, 10)]              0         
_________________________________________________________________
dense_3 (Dense)              (None, 2)                 22        
_________________________________________________________________
activation_3 (Activation)    (None, 2)                 0         
=================================================================
Total params: 22
Trainable params: 22
Non-trainable params: 0
_________________________________________________________________

closed time in 2 months

fchouteau

issue comment tensorflow/tensorflow

[1.15]Discrepancy between documentation & behaviour in tf.keras.Model.Save

Thanks for reporting this, the loading bug ("layers is not built") should be fixed in the most recent version of TensorFlow.

And yes the release notes were inaccurate... the default format didn't actually change to "tf" until TF 2.

fchouteau

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

Restructure Keras Scikit-Learn wrappers to better implement Scikit-Learn API

 class BaseWrapper(object):
   `batch_size` or `epochs` as well as the model parameters.
   """
+  # basic legal parameter set, based on functions that will normally be called
+  _legal_params_fns = [
+      Sequential.evaluate,
+      Sequential.fit,
+      Sequential.predict,
+      Sequential.predict_classes,
+      Model.evaluate,
+      Model.fit,
+      Model.predict,
+  ]
+
+  model = None
+
   def __init__(self, build_fn=None, **sk_params):
     self.build_fn = build_fn
-    self.sk_params = sk_params
-    self.check_params(sk_params)
 
-  def check_params(self, params):
-    """Checks for user typos in `params`.
+    # the sklearn API requires that all __init__ parameters be saved as an instance
+    # attribute of the same name
+    for name, val in sk_params.items():
+      setattr(self, name, val)
+
+    # collect all __init__ params for this base class as well as
+    # all child classes
+    init_params = []
+    # reverse the MRO, we want the 1st one to overwrite the nth
+    for init in reversed(self.__class__.__mro__):
+      for p in inspect.signature(init).parameters.values():
+        if p.kind not in ARGS_KWARGS_IDENTIFIERS:
+          init_params.append(p.name)
+
+    # add parameters from sk_params
+    self._init_params = set((*init_params, *sk_params.keys()))

While external TensorFlow requires Python >= 3.5, TensorFlow is also used by Google-internal code, some of which is still running on Python 2.

adriangb

comment created time in 2 months

issue comment tensorflow/tensorflow

Incorrect number of Total Parameters when Loading a Saved Model with Trainable = False

Can you clarify what the workflow is?

What do you mean by loading the SavedModel with trainable=False?

rakeshmothukuru1

comment created time in 2 months

issue closed tensorflow/tensorflow

init_from_checkpoint support loading different variables from multiple checkpoints


For example say I created a model with encoder + decoder, could init_from_checkpoint support loading encoder from A-checkpoint, and loading decoder from B-checkpoint?

init_from_checkpoint(init_checkpoint_0, assignment_map_0)
init_from_checkpoint(init_checkpoint_1, assignment_map_1)

System information

  • TensorFlow version (you are using): version 1.12

Describe the feature and the current behavior/state.

Will this change the current api? How? probably not

Who will benefit with this feature? developers who need more flexible graph loading, who need more flexible transfer leaning tricks

Any Other info. pip install on both windows and linux

closed time in 2 months

congchan

issue comment tensorflow/tensorflow

init_from_checkpoint support loading different variables from multiple checkpoints

You should already be able to use multiple init_from_checkpoint calls to different checkpoints.

In the API documentation, there are several examples of how to use init_from_checkpoint. You can either initialize all variables, or specific variables.
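A rough sketch of the two-checkpoint pattern using TF 1.x-style APIs (the variable names and checkpoint paths are invented for this demo; the first two blocks only exist to create checkpoints to load from):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Create two checkpoints to load from.
with tf.Graph().as_default():
    tf.get_variable("encoder/w", initializer=[1.0, 2.0])
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        tf.train.Saver().save(sess, "/tmp/ckpt_a/model")

with tf.Graph().as_default():
    tf.get_variable("decoder/w", initializer=[3.0, 4.0])
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        tf.train.Saver().save(sess, "/tmp/ckpt_b/model")

# One init_from_checkpoint call per source checkpoint: the encoder comes
# from checkpoint A and the decoder from checkpoint B.
with tf.Graph().as_default():
    enc = tf.get_variable("encoder/w", shape=[2])
    dec = tf.get_variable("decoder/w", shape=[2])
    tf.train.init_from_checkpoint("/tmp/ckpt_a/model", {"encoder/w": enc})
    tf.train.init_from_checkpoint("/tmp/ckpt_b/model", {"decoder/w": dec})
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        enc_val, dec_val = sess.run([enc, dec])

print(enc_val, dec_val)  # [1. 2.] [3. 4.]
```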

congchan

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

Restructure Keras Scikit-Learn wrappers to better implement Scikit-Learn API

 class BaseWrapper(object):
   `batch_size` or `epochs` as well as the model parameters.
   """
+  # basic legal parameter set, based on functions that will normally be called
+  _legal_params_fns = [
+      Sequential.evaluate,
+      Sequential.fit,
+      Sequential.predict,
+      Sequential.predict_classes,
+      Model.evaluate,
+      Model.fit,
+      Model.predict,
+  ]
+
+  model = None
+
   def __init__(self, build_fn=None, **sk_params):
     self.build_fn = build_fn
-    self.sk_params = sk_params
-    self.check_params(sk_params)
 
-  def check_params(self, params):
-    """Checks for user typos in `params`.
+    # the sklearn API requires that all __init__ parameters be saved as an instance
+    # attribute of the same name
+    for name, val in sk_params.items():
+      setattr(self, name, val)
+
+    # collect all __init__ params for this base class as well as
+    # all child classes
+    init_params = []
+    # reverse the MRO, we want the 1st one to overwrite the nth
+    for init in reversed(self.__class__.__mro__):
+      for p in inspect.signature(init).parameters.values():
+        if p.kind not in ARGS_KWARGS_IDENTIFIERS:
+          init_params.append(p.name)
+
+    # add parameters from sk_params
+    self._init_params = set((*init_params, *sk_params.keys()))

(Cause of presubmit errors)

We still support python 2, so can you rewrite this?

adriangb

comment created time in 2 months

issue comment tensorflow/tensorflow

set_shape is not loaded from saved model

Why not just set the correct shape in tf.keras.Input? I understand that this is a toy example, but is there a valid use case for this?
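For illustration, a toy model with the full shape declared up front (the 28x28x1 shape is arbitrary); a shape declared this way is baked into the traced signatures and survives SavedModel round-trips without set_shape:

```python
import tensorflow as tf

# Declaring the full input shape in tf.keras.Input means no later
# set_shape call is needed.
inp = tf.keras.Input(shape=(28, 28, 1))
out = tf.keras.layers.Flatten()(inp)
model = tf.keras.Model(inp, out)
print(model.input_shape)  # (None, 28, 28, 1)
```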

StefReck

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

Added doc and examples for tf.keras.losses.get

 def deserialize(name, custom_objects=None):
 
 @keras_export('keras.losses.get')
 def get(identifier):
-  """Retrieves a Keras loss function.
+  """Retrieves a Keras loss as a `function`/`Loss` class instance.
+
+  You can get loss as a `function` using parameter `identifier` as a string
+  of the loss function as shown in below example.
+  >>> loss = tf.keras.losses.get("categorical_crossentropy")
+  >>> type(loss)
+  function
+
+  You can get loss as a `Loss` class instance using parameter `identifier`
+  as a string of the class name of the loss.
+  >>> loss = tf.keras.losses.get("CategoricalCrossentropy")
+  >>> type(loss)
+  tensorflow.python.keras.losses.CategoricalCrossentropy

nit: I'd combine these examples e.g.

The identifier may be the string name of a loss function or `Loss` class.

>>...
ashutosh1919

comment created time in 3 months

Pull request review comment tensorflow/tensorflow

Added doc and examples for tf.keras.losses.get

 def deserialize(name, custom_objects=None):
 
 @keras_export('keras.losses.get')
 def get(identifier):
-  """Retrieves a Keras loss function.
+  """Retrieves a Keras loss as a `function`/`Loss` class instance.
+
+  You can get loss as a `function` using parameter `identifier` as a string
+  of the loss function as shown in below example.
+  >>> loss = tf.keras.losses.get("categorical_crossentropy")
+  >>> type(loss)
+  function
+
+  You can get loss as a `Loss` class instance using parameter `identifier`
+  as a string of the class name of the loss.
+  >>> loss = tf.keras.losses.get("CategoricalCrossentropy")
+  >>> type(loss)
+  tensorflow.python.keras.losses.CategoricalCrossentropy
+
+  You can also specify `config` of the loss to this function by passing dict
+  containing `class_name` and `config` as an identifier.

Also note that the class name must map to a Loss class

ashutosh1919

comment created time in 3 months

issue comment tensorflow/tensorflow

Error while trying to serilize Image Captioning keras model '_UserObject' object is not callable'

This should be fixed in the latest version of tensorflow -- can you update and check?

veonua

comment created time in 3 months

issue comment tensorflow/tensorflow

How to deserialize from a dict with tf.keras.losses.get

I can explain why this doesn't work -- when you pass a dict, the class name should be the actual name of a class, and `categorical_crossentropy` is a function, not a class. I think the documentation should be improved here.

@pavithrasv Should we modify losses.get to return partial functions, if the class name is the name of a function?

jpatts

comment created time in 3 months
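To make that dispatch concrete, here is a hypothetical pure-Python sketch of the lookup rules described above. The registry and stand-in loss objects are toy placeholders, not the real Keras implementation; the `functools.partial` branch illustrates the proposal for honoring a `config` when the identifier names a function rather than a class.

```python
import functools

def categorical_crossentropy(y_true, y_pred, from_logits=False):
    """Stand-in for the real loss function (hypothetical)."""
    return (y_true, y_pred, from_logits)

class CategoricalCrossentropy:
    """Stand-in for the real Loss class (hypothetical)."""
    def __init__(self, from_logits=False):
        self.from_logits = from_logits

# Toy registry in place of Keras' internal name-to-object lookup.
_REGISTRY = {
    "categorical_crossentropy": categorical_crossentropy,
    "CategoricalCrossentropy": CategoricalCrossentropy,
}

def get(identifier):
    # A string maps directly to whatever is registered under that name.
    if isinstance(identifier, str):
        return _REGISTRY[identifier]
    # A dict identifier instantiates a class from its config. When the
    # registered object is a plain function, returning a partial is one
    # way `get` could also support function names, per the proposal above.
    if isinstance(identifier, dict):
        obj = _REGISTRY[identifier["class_name"]]
        config = identifier.get("config", {})
        if isinstance(obj, type):
            return obj(**config)
        return functools.partial(obj, **config)
    raise ValueError("Could not interpret identifier: %r" % (identifier,))
```

Under this sketch, a dict identifier works for both `"CategoricalCrossentropy"` (a class, instantiated from config) and `"categorical_crossentropy"` (a function, wrapped in a partial), which is the behavior being discussed.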

pull request comment tensorflow/tensorflow

Restructure Keras Scikit-Learn wrappers to better implement Scikit-Learn API

Thanks for fixing the bugs! I'm pretty sure the Windows test failures are unrelated. Running the tests again.

adriangb

comment created time in 3 months

push event k-w-w/tensorflow

Thomas O'Malley

commit sha d0b92c1904fc868de089c8ea0e08b0cdb9465790

Fix MultiWorkerMirroredStrategy validation in Model.fit PiperOrigin-RevId: 299150128 Change-Id: Ie0ef99dcbd1afc91ad1a0e19c56638c5e48a7865

view details

A. Unique TensorFlower

commit sha 248313a950395fa552e1a5e65dcbe21dabe501a4

Eliminate tf.where call in updating running mean and average for batch norm in Keras. This unlocks the improvements promised by fusing updates into the CPU and GPU kernels. Results for ResNet50 /w batch size 32 in eager mode on GTX 1080: Before: 85 images/s (fp32), 81 images/s (fp16) After: 101 images/s (fp32), 99 images/s (fp16) PiperOrigin-RevId: 298927568 Change-Id: I2941bff5d19c7fdccad78bbb9c5df2fdcd2fc36a

view details
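The batch-norm commit above removes a conditional `tf.where` from the moving-statistics update so the update can be fused into the CPU/GPU kernels. The arithmetic being fused is the standard exponential moving average; a minimal illustrative sketch (scalar stand-in for what the real kernel applies to the moving mean and variance tensors):

```python
def update_moving_stat(moving, batch_stat, momentum=0.99):
    """Unconditional EMA update used by batch norm:
    moving <- momentum * moving + (1 - momentum) * batch_stat

    Illustrative only; the real fused kernel performs this in place
    on the moving-mean and moving-variance tensors.
    """
    return momentum * moving + (1.0 - momentum) * batch_stat

# One step with momentum 0.9 moves the statistic 10% of the way
# toward the current batch statistic.
updated = update_moving_stat(0.0, 1.0, momentum=0.9)
```

Because the update is the same expression every step, no data-dependent `tf.where` branch is needed, which is what allows the fusion and the speedups quoted in the commit message.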

Penporn Koanantakool

commit sha 5824d3487d85662773a0981c3466b6fd8dfc8b0c

Remove --config=mkl_open_source_only because it has compilation errors and has not been used. PiperOrigin-RevId: 299177856 Change-Id: Ib581b41f7bc51ff27aba992656eee8fd7a5d6bc5

view details

Goldie Gadde

commit sha cb730442cef9f21c0d2a0934e8f35fa07aa32f13

Merge pull request #37373 from geetachavan1/cherrypicks_JHPZR [r2.2:Cherrypick] Eliminate tf.where call in updating running mean and average for batch norm in Keras. This unlocks the improvements promised by fusing updates into the CPU and GPU kernels.

view details

Bixia Zheng

commit sha 9bdf872734818712373163a7e5408bf5938f96b5

Diable a test that fails on open source build to unblock the release. PiperOrigin-RevId: 299389711 Change-Id: If4b4f18c6141101b7a7d3034a3e77a0eaabc8810

view details

Thomas O'Malley

commit sha 678dfa6780c2ef83126c448ac7be802d04c7fa0c

Ignore training=None when passed during Functional API construction. PiperOrigin-RevId: 299283031 Change-Id: I9a0654496fb403a304efca2d79f9cb1a56a2b3e5

view details

A. Unique TensorFlower

commit sha b9c639a983dec0b6000da57bc4f386b90e474c1b

Save gzipped trace events json by default. PiperOrigin-RevId: 298465129 Change-Id: I79bb7265d3e4ded84effd3218ecf62716df997c6

view details

A. Unique TensorFlower

commit sha f56b9c0a25dcfc5d209d4b5cffc9065e5ba1f7be

Fix OSS windows paths for profiler. PiperOrigin-RevId: 298760340 Change-Id: I1ab94dd92fa80ec8b9268cbb422b78d5dc68b9cb

view details

A. Unique TensorFlower

commit sha ec6018383820c1bbd24689b8c657bb370ead8e77

Clean up profiler when exception happens. PiperOrigin-RevId: 298788827 Change-Id: Ia420712cd0edf004aa4135cf45409d88528ed136

view details

A. Unique TensorFlower

commit sha d31ee0f2335a9ea8ef66d82425bf1a04ab1cd978

Populate more error messages to the users. PiperOrigin-RevId: 299445320 Change-Id: Ia0d02c25ac3254444f7293d60c912989ad77ee3c

view details

Goldie Gadde

commit sha df693a4afb7ec2e1e118c5a0a0dcc86b60b3e3b6

Merge pull request #37408 from qiuminxu/cherrypicks_PPKOA r2.2: cherry-pick request: fix profile traces too large and windows builds.

view details

Goldie Gadde

commit sha 216c878c8693f7675c39a1969d7ce095f4b9788c

Merge pull request #37367 from omalleyt12/cherrypicks_OUAJ4 Fix MultiWorkerMirroredStrategy validation in Model.fit

view details

Goldie Gadde

commit sha b67d58d5bf77ada330ad54622c200d8b9c26cc74

Merge pull request #37403 from omalleyt12/cherrypicks_L42U7 Ignore training=None when passed during Functional API construction.

view details

Goldie Gadde

commit sha 6a02e6e56dde68658bf5e55bcc1eb6470e775310

Merge pull request #37375 from penpornk/cherrypicks_399FU r2.2 cherry-pick request: Remove --config=mkl_open_source_only

view details

Goldie Gadde

commit sha b720399a6efd37259b2dc51912e217f49b4efa60

Merge pull request #37402 from geetachavan1/cherrypicks_XMMHF [r2.2:Cherrypick] Diable a test that fails on open source build to unblock the release.

view details

Kathy Wu

commit sha ba545bc578a0c1489f8348ee08bb9c073e9cc3e1

Merge branch 'r2.2' of https://github.com/tensorflow/tensorflow into cherrypicks_SXE8X

view details

push time in 3 months

Pull request review comment tensorflow/tensorflow

Restructure Keras Scikit-Learn wrappers to better implement Scikit-Learn API

 class BaseWrapper(object):   `batch_size` or `epochs` as well as the model parameters.   """ -  def __init__(self, build_fn=None, **sk_params):-    self.build_fn = build_fn-    self.sk_params = sk_params-    self.check_params(sk_params)--  def check_params(self, params):-    """Checks for user typos in `params`.--    Arguments:-        params: dictionary; the parameters to be checked--    Raises:-        ValueError: if any member of `params` is not a valid argument.-    """+    # basic legal parameter set, based on functions that will normally be called     legal_params_fns = [-        Sequential.fit, Sequential.predict, Sequential.predict_classes,-        Sequential.evaluate+        Sequential.evaluate,+        Sequential.fit,+        Sequential.predict,+        Sequential.predict_classes,+        Model.evaluate,+        Model.fit,+        Model.predict,     ]-    if self.build_fn is None:-      legal_params_fns.append(self.__call__)-    elif (not isinstance(self.build_fn, types.FunctionType) and-          not isinstance(self.build_fn, types.MethodType)):-      legal_params_fns.append(self.build_fn.__call__)-    else:-      legal_params_fns.append(self.build_fn)--    for params_name in params:-      for fn in legal_params_fns:-        if has_arg(fn, params_name):-          break-      else:-        if params_name != 'nb_epoch':-          raise ValueError('{} is not a legal parameter'.format(params_name))--  def get_params(self, **params):  # pylint: disable=unused-argument-    """Gets parameters for this estimator.--    Arguments:-        **params: ignored (exists for API compatibility).--    Returns:-        Dictionary of parameter names mapped to their values.-    """-    res = copy.deepcopy(self.sk_params)-    res.update({'build_fn': self.build_fn})-    return res--  def set_params(self, **params):-    """Sets the parameters of this estimator.--    Arguments:-        **params: Dictionary of parameter names mapped to their values.--    Returns:-        self-    
"""-    self.check_params(params)-    self.sk_params.update(params)-    return self--  def fit(self, x, y, **kwargs):-    """Constructs a new model with `build_fn` & fit the model to `(x, y)`.--    Arguments:-        x : array-like, shape `(n_samples, n_features)`-            Training samples where `n_samples` is the number of samples-            and `n_features` is the number of features.-        y : array-like, shape `(n_samples,)` or `(n_samples, n_outputs)`-            True labels for `x`.-        **kwargs: dictionary arguments-            Legal arguments are the arguments of `Sequential.fit`--    Returns:-        history : object-            details about the training history at each epoch.-    """-    if self.build_fn is None:-      self.model = self.__call__(**self.filter_sk_params(self.__call__))-    elif (not isinstance(self.build_fn, types.FunctionType) and-          not isinstance(self.build_fn, types.MethodType)):-      self.model = self.build_fn(-          **self.filter_sk_params(self.build_fn.__call__))-    else:-      self.model = self.build_fn(**self.filter_sk_params(self.build_fn))--    if (losses.is_categorical_crossentropy(self.model.loss) and-        len(y.shape) != 2):-      y = to_categorical(y)--    fit_args = copy.deepcopy(self.filter_sk_params(Sequential.fit))-    fit_args.update(kwargs)--    history = self.model.fit(x, y, **fit_args)--    return history--  def filter_sk_params(self, fn, override=None):-    """Filters `sk_params` and returns those in `fn`'s arguments.--    Arguments:-        fn : arbitrary function-        override: dictionary, values to override `sk_params`--    Returns:-        res : dictionary containing variables-            in both `sk_params` and `fn`'s arguments.-    """-    override = override or {}-    res = {}-    for name, value in self.sk_params.items():-      if has_arg(fn, name):-        res.update({name: value})-    res.update(override)-    return 
res---@keras_export('keras.wrappers.scikit_learn.KerasClassifier')++    __call___ = None+    model = None++    def __init__(self, build_fn=None, **sk_params):+        self.build_fn = build_fn++        # the sklearn API requires that all __init__ parameters be saved as an instance+        # attribute of the same name+        for name, val in sk_params.items():+            setattr(self, name, val)++        # collect all __init__ params for this base class as well as+        # all child classes+        init_params = []+        args_kwargs_identifiers = (+            inspect.Parameter.VAR_KEYWORD,+            inspect.Parameter.VAR_POSITIONAL,+        )+        for init in self.__class__.__mro__:+            for p in inspect.signature(init).parameters.values():+                if p.kind not in args_kwargs_identifiers:+                    init_params.append(p.name)++        # add parameters from sk_params+        self.init_params = {*init_params, *sk_params.keys()}++        # check that all __init__ parameters were assigned (as per sklearn API)+        for param in self.init_params:+            if not hasattr(self, param):+                raise RuntimeError("Parameter %s was not assigned")++        # determine what type of build_fn to use+        self.check_build_fn(build_fn)++        # check that all parameters correspond to a fit or model param+        self.check_params(self.get_params(deep=False))++    @staticmethod  # this is necessary for pickling to work+    def _clone_prebuilt_model(build_fn):+        """Clones and compiles a pre-built model when build_fn is an existing+           Keras model.++        Arguments:+            build_fn : instance of Keras Model.++        Returns: copy of the input model with no training.+        """+        model = clone_model(build_fn)+        # clone_model does not compy over compilation parameters, do those manually+        model.compile(optimizer=build_fn.optimizer, loss=build_fn.loss)+        return model++    def 
check_build_fn(self, build_fn):+        """Checks `build_fn`.++        Arguments:+            build_fn : method or callable class as defined in __init__++        Raises:+            ValueError: if `build_fn` is not valid.+        """+        # Note: it is not trivial to differenatiate betw++        if build_fn is None:+            # no build_fn, use this class' __call__method+            if not hasattr(self, "__call__"):+                raise ValueError(+                    "If not using the `build_fn` param, "+                    "you must implement `__call__`"+                )+        elif isinstance(build_fn, Model):+            # pre-built Keras model+            self.__call__ = self._clone_prebuilt_model+        elif inspect.isfunction(build_fn):+            if hasattr(self, "__call__"):+                raise ValueError(+                    "This class cannot implement `__call__` if"+                    "using the `build_fn` parameter"+                )+            # a callable method/function+            self.__call__ = build_fn+        elif (+            callable(build_fn)+            and hasattr(build_fn, "__class__")+            and hasattr(build_fn.__class__, "__call__")+            and inspect.isfunction(build_fn.__class__.__call__)+        ):+            if hasattr(self, "__call__"):+                raise ValueError(+                    "This class cannot implement `__call__` if"+                    "using the `build_fn` parameter"+                )+            # an instance of a class implementing __call__+            self.__call__ = build_fn.__call__+        else:+            raise ValueError("`build_fn` must be a callable or None")+        # append legal parameters+        self.legal_params_fns.append(self.__call__)++    def check_params(self, params):+        """Checks for user typos in `params`.+           To disable, override this method in child class.++        Arguments:+            params: dictionary; the parameters to be checked++        
Raises:+            ValueError: if any member of `params` is not a valid argument.+        """+        for param_name in params:+            for fn in self.legal_params_fns:+                if has_arg(fn, param_name) or param_name in self.init_params:+                    break+            else:+                raise ValueError("{} is not a legal parameter".format(param_name))++    def _build_keras_model(self, X, y, sample_weight, **kwargs):+        """Call this method from fit to build the Keras model.+           This method will then process all arguments and call the model building+           function with appropriate arguments.++        Arguments:+            X : array-like, shape `(n_samples, n_features)`+                Training samples where `n_samples` is the number of samples+                and `n_features` is the number of features.+            y : array-like, shape `(n_samples,)` or `(n_samples, n_outputs)`+                True labels for `X`.+            sample_weight : array-like of shape (n_samples,)+                Sample weights. The Keras Model must support this.+            **kwargs: dictionary arguments+                Legal arguments are the arguments `build_fn`.+        Returns:+            self : object+                a reference to the instance that can be chain called+                (ex: instance.fit(X,y).transform(X) )+        Raises:+            ValuError : In case sample_weight != None and the Keras model's `fit`+                        method does not support that parameter.+        """+        # dynamically build model, i.e. 
self.__call__ builds a Keras model++        # get model arguments+        model_args = self.filter_params(self.__call__)++        # add `sample_weight` param+        # while it is not usually needed to build the model, some Keras models+        # require knowledge of the type of sample_weight to be built.+        sample_weight_arg = self.filter_params(+            self.__call__, params_to_check={"sample_weight": sample_weight}+        )++        # check if the model building function requires X and/or y to be passed+        X_y_args = self.filter_params(self.__call__, params_to_check={"X": X, "y": y})++        # filter kwargs+        kwargs = self.filter_params(self.__call__, params_to_check=kwargs)++        # combine all arguments+        build_args = {**model_args, **X_y_args, **sample_weight_arg, **kwargs}++        # build model+        model = self.__call__(**build_args)++        # append legal parameter names from model+        for known_keras_fn in KNOWN_KERAS_FN_NAMES:+            if hasattr(model, known_keras_fn):+                self.legal_params_fns.append(getattr(model, known_keras_fn))++        return model++    def _fit_keras_model(self, X, y, sample_weight, **kwargs):+        """Call this method from fit to fit the Keras model.+           This method will then process all arguments and call the Keras+           model's `fit` method with approriate arguments.++        Arguments:+            X : array-like, shape `(n_samples, n_features)`+                Training samples where `n_samples` is the number of samples+                and `n_features` is the number of features.+            y : array-like, shape `(n_samples,)` or `(n_samples, n_outputs)`+                True labels for `X`.+            sample_weight : array-like of shape (n_samples,)+                Sample weights. 
The Keras Model must support this.+            **kwargs: dictionary arguments+                Legal arguments are the arguments of the keras model's `fit` method.+        Returns:+            self : object+                a reference to the instance that can be chain called+                (ex: instance.fit(X,y).transform(X) )+        Raises:+            ValuError : In case sample_weight != None and the Keras model's `fit`+                        method does not support that parameter.+        """+        # add `sample_weight` param, required to be explicit by some sklearn functions+        # that use inspect.signature on the `score` method+        if sample_weight is not None:+            # avoid pesky Keras warnings if sample_weight is not used+            kwargs.update({"sample_weight": sample_weight})++        # filter kwargs down to those accepted by self.model.fit+        kwargs = self.filter_params(self.model.fit, params_to_check=kwargs)++        if sample_weight is not None and "sample_weight" not in kwargs:+            raise ValueError(+                "Parameter `sample_weight` is unsupported by Keras model %s"+                % self.model+            )++        # get model.fit's arguments (allows arbitrary model use)+        fit_args = self.filter_params(self.model.fit)++        # fit model and save history+        fit_args = {**fit_args, **kwargs}  # order implies kwargs overwrites fit_args+        self.history = self.model.fit(x=X, y=y, **fit_args)++        # return self to allow fit_transform and such to work+        return self++    def filter_params(self, fn, params_to_check=None):+        """Filters all instance attributes (parameters) and+        returns those in `fn`'s arguments.++        Arguments:+            fn : arbitrary function+            params_to_check : dictionary, parameters to check.+                Defaults to checking all attributes of this estimator.++        Returns:+            res : dictionary containing variables+              
  in both self and `fn`'s arguments.+        """+        res = {}+        for name, value in (params_to_check or self.__dict__).items():+            if has_arg(fn, name):+                res.update({name: value})+        return res++    def get_params(self, deep=True):+        """+        Get parameters for this estimator.++        This method mimics sklearn.base.BaseEstimator.get_params++        Arguments:+            deep : bool, default=True+                If True, will return the parameters for this estimator and+                contained subobjects that are estimators.++        Returns:+            params : mapping of string to any+                Parameter names mapped to their values.+        """+        out = dict()+        for key in self.init_params:+            value = getattr(self, key)+            if deep and hasattr(value, "get_params"):+                deep_items = value.get_params().items()+                out.update((key + "__" + k, val) for k, val in deep_items)+            out[key] = value+        return out++    def set_params(self, **params):+        """+        Set the parameters of this estimator.+        The method works on simple estimators as well as on nested objects+        (such as pipelines). 
The latter have parameters of the form+        ``<component>__<parameter>`` so that it's possible to update each+        component of a nested object.++        This method mimics sklearn.base.BaseEstimator.set_params+        +        Arguments:+            **params : dict+                Estimator parameters.+        Returns:+            self : object+                Estimator instance.+        """+        if not params:+            # Simple optimization to gain speed+            return self+        valid_params = self.get_params(deep=True)++        nested_params = defaultdict(dict)  # grouped by prefix+        for key, value in params.items():+            key, delim, sub_key = key.partition("__")+            if key not in valid_params:+                raise ValueError(+                    "Invalid parameter %s for estimator %s. "+                    "Check the list of available parameters "+                    "with `estimator.get_params().keys()`." % (key, self)+                )+            if delim:+                nested_params[key][sub_key] = value+            else:+                setattr(self, key, value)+                valid_params[key] = value++        for key, sub_params in nested_params.items():+            valid_params[key].set_params(**sub_params)++        return self++    def __getstate__(self):+        """Used by various scikit-learn methods to clone estimators. 
Also used+           for pickling.+           Because some objects (mainly Keras `Model` instances) are not pickleable,+           it is necessary to iterate through all attributes and clone the+           unpicklables manually.++        Returns:+            state : dictionary containing a copy of all attributes of this +                    estimator with Keras Model instances being saved as +                    HDF5 binary objects.+        """++        def __pack_obj(obj):+            if hasattr(obj, "save"):  # for models+                as_bytes = io.BytesIO()+                with h5py.File(as_bytes, mode="w") as file:+                    obj.save(file)

This wrapper looks pretty good and should work for non-subclassed models. To support subclassed models, you should save both the class name and config (the results from generic_utils.serialize_keras_object).

Calling keras.layers.deserialize will load subclassed models if they were registered with tf.keras.utils.register_keras_serializable.

adriangb

comment created time in 3 months
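The serialize/deserialize round trip suggested in the review above can be sketched with a toy registry standing in for Keras' custom-object registry (names here are hypothetical, not the Keras API; `serialize` mirrors what `generic_utils.serialize_keras_object` returns for a registered class):

```python
# Toy stand-in for Keras' custom-object registry.
_CUSTOM_OBJECTS = {}

def register_serializable(cls):
    """Decorator mimicking tf.keras.utils.register_keras_serializable."""
    _CUSTOM_OBJECTS[cls.__name__] = cls
    return cls

def serialize(obj):
    # Analogous to serialize_keras_object: class name plus config.
    return {"class_name": type(obj).__name__, "config": obj.get_config()}

def deserialize(spec):
    # Analogous to keras.layers.deserialize with custom objects registered:
    # look up the class by name and rebuild it from its config.
    cls = _CUSTOM_OBJECTS[spec["class_name"]]
    return cls.from_config(spec["config"])

@register_serializable
class MySubclassedModel:
    """Hypothetical subclassed model exposing the get_config contract."""
    def __init__(self, units=8):
        self.units = units

    def get_config(self):
        return {"units": self.units}

    @classmethod
    def from_config(cls, config):
        return cls(**config)
```

The key point for the wrapper is that storing `{"class_name", "config"}` instead of a bare config is what lets deserialization recover the right subclass, provided the class was registered.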

pull request comment tensorflow/tensorflow

Undo changes to the input spec when RNN.unroll is True.

The import/copybara test failure is still there, so I'll create a completely new branch.

k-w-w

comment created time in 3 months

PR closed tensorflow/tensorflow

Reviewers
Undo changes to the input spec when RNN.unroll is True. cla: yes ready to pull size:S

The changes from fc7116dd0d087e707adacf6d1f4c6f3e76d83b8c that made the input spec more stringent do not apply to all unrolled recurrent layers. I'll have to revisit b/148491963 later.

PiperOrigin-RevId: 298492355 Change-Id: I442ad2a23576cae8fa2fad02a27a199f13b159c1

+6 -4

0 comment

2 changed files

k-w-w

pr closed time in 3 months
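For context on what this PR relaxes: an input spec declares the rank (and optionally shape) a layer accepts, and incompatible inputs are rejected at call time. A hypothetical minimal stand-in for that check (not the actual Keras `InputSpec` code):

```python
class InputSpec:
    """Minimal illustrative stand-in for keras.layers.InputSpec."""
    def __init__(self, ndim=None):
        self.ndim = ndim

def assert_input_compatibility(spec, input_shape, layer_name):
    # Reject inputs whose rank does not match the spec.
    if spec.ndim is not None and len(input_shape) != spec.ndim:
        raise ValueError(
            "Input to %s expected ndim=%d, got shape %s"
            % (layer_name, spec.ndim, input_shape))

# An RNN layer typically expects rank-3 input (batch, time, features):
spec = InputSpec(ndim=3)
assert_input_compatibility(spec, (32, 10, 8), "lstm")  # passes
```

Making such a spec more stringent (e.g. pinning the time dimension) is what broke some unrolled RNN layers, hence the revert.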

PR opened tensorflow/tensorflow

Undo changes to the input spec when RNN.unroll is True. cla: yes

The changes from fc7116d that made the input spec more stringent do not apply to all unrolled recurrent layers. I'll have to revisit b/148491963 later.

PiperOrigin-RevId: 298492355 Change-Id: I442ad2a23576cae8fa2fad02a27a199f13b159c1

+6 -4

1 comment

2 changed files

pr created time in 3 months

PR opened tensorflow/tensorflow

Reviewers
Undo changes to the input spec when RNN.unroll is True. cla: yes

The changes from fc7116d that made the input spec more stringent do not apply to all unrolled recurrent layers. I'll have to revisit b/148491963 later.

PiperOrigin-RevId: 298492355 Change-Id: I442ad2a23576cae8fa2fad02a27a199f13b159c1

+6 -4

0 comment

2 changed files

pr created time in 3 months

create branch k-w-w/tensorflow

branch : cherrypicks_SXE8X

created branch time in 3 months

create branch k-w-w/tensorflow

branch : cherrypicks_V0Q0E

created branch time in 3 months

PR opened tensorflow/tensorflow

Reviewers
Undo changes to the input spec when RNN.unroll is True.

The changes from fc7116dd0d087e707adacf6d1f4c6f3e76d83b8c that made the input spec more stringent do not apply to all unrolled recurrent layers. I'll have to revisit b/148491963 later.

PiperOrigin-RevId: 298492355 Change-Id: I442ad2a23576cae8fa2fad02a27a199f13b159c1

+6 -4

0 comment

2 changed files

pr created time in 3 months
