Training fails when a multi-output Keras model has one output without a loss function

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes, see minimal example.
  • OS Platform and Distribution: Ubuntu 18.04.3 LTS
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
  • TensorFlow installed from (source or binary): binary (specifically, tensorflow/tensorflow:nightly-py3 Docker image)
  • TensorFlow version (use command below): 2.1.0-dev20191216
  • Python version: 3.6.9
  • Bazel version (if compiling from source): N/A
  • GCC/Compiler version (if compiling from source): N/A
  • CUDA/cuDNN version: N/A
  • GPU model and memory: N/A

Describe the current behavior

Calling .fit on a multi-output Keras model that was compiled with no loss function (loss=None) for one of its outputs raises a ValueError when the training data is a tf.data.Dataset supplying targets for both outputs.

Describe the expected behavior

Training should proceed, minimising only the losses defined for the other output(s).

Code to reproduce the issue

import numpy as np
import tensorflow as tf
import tensorflow.keras as keras

# Two independent inputs, each feeding its own Dense output.
input_a = keras.layers.Input(shape=(10,), name="input_a")
input_b = keras.layers.Input(shape=(20,), name="input_b")
output_a = keras.layers.Dense(1, name="output_a")(input_a)
output_b = keras.layers.Dense(1, name="output_b")(input_b)
model = keras.Model(inputs=[input_a, input_b], outputs=[output_a, output_b])

# Only output_b gets a loss; output_a is deliberately left without one.
model.compile(optimizer="sgd", loss={"output_a": None, "output_b": "mse"})

# Dummy training data.
n = 128
input_a = np.ones((n, 10))
input_b = np.ones((n, 20))
output_a = np.ones((n, 1))
output_b = np.ones((n, 1))

# The dataset supplies targets for both outputs, which triggers the error.
dataset = tf.data.Dataset.from_tensor_slices(
    ((input_a, input_b), (output_a, output_b))
).batch(64)

model.fit(dataset)

Raises:

ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), for inputs ['output_b'] but instead got the following list of 2 arrays: [<tf.Tensor 'args_2:0' shape=(None, 1) dtype=float64>, <tf.Tensor 'args_3:0' shape=(None, 1) dtype=float64>]...

Answer from pavithrasv

@tomwphillips this is by design. You do not need to feed target data for an output that has no loss function during training; you can pass a dictionary containing just the other output, like {'output_b': ...}.

Target data is only used for computing the loss, so in this case it is not required for output_a.
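As a minimal sketch of the suggested workaround, reusing the model and NumPy arrays from the reproduction above, the dataset can be built with a target dictionary keyed only by the output that has a loss:

# Supply targets only for the output that has a loss function ("output_b").
# Assumes `model`, `input_a`, `input_b` and `output_b` from the example above.
dataset = tf.data.Dataset.from_tensor_slices(
    ((input_a, input_b), {"output_b": output_b})
).batch(64)

model.fit(dataset)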
