Dataset iterating different behavior in TF 2.1 and 2.2
I was not sure how to report this issue, as it might be a bug or just expected behavior. There is a difference between TF 2.1 and 2.2. This is a code snippet to reproduce my issue:
```python
import math
import numpy as np
import tensorflow as tf

# simple dataset with zeros
batch_size = 32
features = np.zeros((10000, 60, 2))
labels = np.zeros((10000, 1))
train_data = tf.data.Dataset.from_tensor_slices((features, labels)).batch(batch_size)
train_steps = int(math.ceil(features.shape[0] / batch_size))

# simple model with Dense layers
inputs = tf.keras.Input(shape=(features.shape[1], features.shape[2]))
x = tf.keras.layers.Dense(32, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(1, activation="relu")(x)
model = tf.keras.Model(inputs, outputs, name="example_model")

# model fitting
model.compile(loss="mse", optimizer="adam", metrics=["mse"])
model.fit(train_data, epochs=100, steps_per_epoch=train_steps)
```
When I run this code in TF 2.1 it produces this error: https://pastebin.com/4M43SE44
After the first epoch, there are warnings about the end of the sequence, i.e. that my input ran out of data. And finally, as you can see in the pasted output, it raises

```
ValueError: Empty training data.
```
When I change the dataset-creation line to

```python
train_data = tf.data.Dataset.from_tensor_slices((features, labels)).batch(batch_size).repeat()
```

then everything works as expected.
This is the behavior I would expect. (Note the `steps_per_epoch` argument: I want to control this myself. Of course, when I leave `steps_per_epoch` set to `None`, it works under TF 2.1 as well, since Keras then iterates over the whole dataset every epoch.)
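For context, the arithmetic behind the exhaustion can be sketched in plain Python (no TensorFlow needed; the numbers match the snippet above, and `batches` is just a stand-in generator, not part of the real pipeline). A batched, non-repeating dataset yields each batch exactly once, so after `train_steps` steps the iterator is empty and a second epoch has nothing to consume, whereas `.repeat()` makes the iterator effectively endless:

```python
import math

num_samples, batch_size = 10000, 32
train_steps = math.ceil(num_samples / batch_size)

# stand-in for a batched, non-repeating dataset: yields each batch size once
def batches(n, bs):
    for start in range(0, n, bs):
        yield min(bs, n - start)  # last batch may be a partial remainder

sizes = list(batches(num_samples, batch_size))
print(train_steps)   # 313 steps requested per epoch
print(len(sizes))    # the dataset holds exactly 313 batches in total
print(sizes[-1])     # final partial batch: 10000 - 312*32 = 16 samples
```

So with `epochs=100` and `steps_per_epoch=313`, fit asks for 31300 batches from an iterator that can only ever produce 313 unless it repeats.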
When I run the same code with TF 2.2 (no `repeat()`, `train_steps` specified), it works without any issue. Is this behavior intentional? Why does it work in TF 2.2 and not in 2.1? Could anyone elaborate on this issue?
Answer (tomerk):
@omalleyt12, is this one of the known regressions that have fixes for 2.3 that didn't quite make it into 2.2? Or is this something to add to the list?