The following code gives a log ending with
Epoch 19/20
1/1 [==============================] - 0s 473ms/step - loss: 1.4018 - accuracy: 0.8750 - val_loss: 1.8656 - val_accuracy: 0.8900
Epoch 20/20
1/1 [==============================] - 0s 444ms/step - loss: 0.5904 - accuracy: 0.8750 - val_loss: 2.1255 - val_accuracy: 0.8700
get_dataset: validation
Found 1000 files belonging to 2 classes.
Using 100 files for validation.
4/4 [==============================] - 1s 81ms/step
eval acc: 0.81
My question is:
Why is the val_accuracy after the last epoch (0.87) different from the eval acc (0.81) after the fit?
In my code, I try to use the same dataset both for the per-epoch validation during fit and for the additional validation afterwards.
[Update 1, 2022-07-19:
Obviously, the two accuracy calculations don't really use the same data. How can I debug which data is actually used?
[Update 3, 2022-07-20: I have followed the data into TensorFlow. The last thing I see is that in Model.evaluate (during fit) and Model.predict the x.filenames are equal. I did not manage to debug much further, because soon the __inference_test_function_248219 and the __inference_predict_function_231438, respectively, are evaluated in quick_execute outside Python, and the arguments are tensors with dtype=resource, whose contents I cannot see.]
I have deliberately removed my class balancing code to keep my example small. I know that this makes the accuracies less useful, but I don't care about that for now.
Note that get_dataset('validation') is only called once at the beginning of the fit, not at each epoch.
I have now also set max_queue_size=0, use_multiprocessing=False, workers=0 (as seen here, found via this related SO question about TensorFlow 1), but this did not make the accuracies equal.
]
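One way to see which batches actually reach the compiled test/predict functions (a small sketch added for illustration; the tap helper name is assumed, not from the original post) is to wrap a dataset in a map() that tf.print()s the labels of every batch:

def tap(dataset, tag):
    # Log the labels of every batch that actually flows through this pipeline;
    # tf.print also executes inside the compiled test/predict functions.
    def log_batch(x, y):
        tf.print(tag, 'batch labels:', tf.reshape(y, [-1]), summarize=-1)
        return x, y
    return dataset.map(log_batch)

# e.g. validation_data=tap(get_dataset('validation'), 'fit-val')
#      model.predict(tap(val_dataset, 'predict'))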
Code:
import tensorflow as tf
from sklearn.metrics import accuracy_score
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.preprocessing import image_dataset_from_directory
inputs = tf.keras.Input(shape=(224, 224, 3))
base_model = tf.keras.applications.ResNet50(weights='imagenet', include_top=False)
base_output = base_model(inputs)
base_model.trainable = False
out = Flatten(name='flat')(base_output)
out = Dense(1, activation='sigmoid')(out)
model = Model(inputs=inputs, outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
def get_dataset(subset):
    print('get_dataset:', subset)
    return image_dataset_from_directory(
        'data-nodup-1000',
        labels="inferred",
        label_mode='binary',
        color_mode="rgb",
        image_size=(224, 224),
        shuffle=True,
        seed=1,
        validation_split=0.1,
        subset=subset,
        crop_to_aspect_ratio=False,
    )

model.fit(
    get_dataset('training'),
    steps_per_epoch=1,
    epochs=20,
    validation_data=get_dataset('validation'),
    max_queue_size=0,
    use_multiprocessing=False,
    workers=0,
)
val_dataset = get_dataset('validation')
true_class = tf.concat([y for x, y in val_dataset], axis=0)
pred = model.predict(val_dataset)
pred_class = pred >= .5
print('eval acc:', accuracy_score(true_class, pred_class))
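For comparison, a cross-check added here for illustration (not part of the original code): model.evaluate consumes images and labels together in a single pass over val_dataset, so its accuracy cannot be desynchronized by a reshuffled re-iteration.

# Cross-check with the compiled 'accuracy' metric on the same dataset object:
loss, acc = model.evaluate(val_dataset)
print('evaluate acc:', acc)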
[Update 2, 2022-07-19:
I can also reproduce the behavior with the deprecated ImageDataGenerator, using
from tensorflow.keras.applications.resnet50 import preprocess_input
from keras_preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input,
    validation_split=0.1,
)

def get_dataset(subset):
    print('get_dataset:', subset)
    return datagen.flow_from_directory(
        'data-nodup-1000',
        class_mode='binary',
        target_size=(224, 224),
        shuffle=True,
        seed=1,
        subset=subset,
    )
and
true_class = val_dataset.labels
]
[Update 4, 2022-07-21: Note that deactivating shuffling of validation data by setting shuffle=(subset == 'training') makes the two validation accuracies equal. This is not a workaround, however, because the validation set then consists only of class 1, since flow_from_directory doesn't do stratification.
]
My environment:
I am using all up-to-date libraries, like tensorflow 2.9.1 and sklearn 1.1.1 (via pip-compile -U).
The folder data-nodup-1000 contains one subfolder with 113 files of class 0, and one subfolder with 887 files of class 1.
I have now found out that in TensorFlow 2.9.1 model.predict uses the second iteration of the dataset, which is shuffled differently than the first iteration!
It even uses the second iteration when I directly call model.predict(get_dataset('validation'))!
Therefore, the entries of true_class and pred do not match.
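A quick way to confirm this (a small check added for illustration, reusing get_dataset from the code above) is to materialize the labels from two consecutive passes over the same dataset object and compare the order:

val_dataset = get_dataset('validation')
labels_first_pass = tf.concat([y for x, y in val_dataset], axis=0)
labels_second_pass = tf.concat([y for x, y in val_dataset], axis=0)
# With shuffle=True, the two passes generally come back in a different order,
# so predictions from a later pass no longer line up with true_class.
print('passes identical:', bool(tf.reduce_all(labels_first_pass == labels_second_pass).numpy()))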
Switching to TensorFlow 2.10.0-rc3 and its tf.keras.utils.split_dataset makes the accuracies equal.
Here's the updated code:
import tensorflow as tf
from sklearn.metrics import accuracy_score
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.preprocessing import image_dataset_from_directory
inputs = tf.keras.Input(shape=(224, 224, 3))
base_model = tf.keras.applications.ResNet50(weights='imagenet', include_top=False)
base_output = base_model(inputs)
base_model.trainable = False
out = Flatten(name='flat')(base_output)
out = Dense(1, activation='sigmoid')(out)
model = Model(inputs=inputs, outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
dataset = image_dataset_from_directory(
    'data-synthetic',
    labels="inferred",
    label_mode='binary',
    color_mode="rgb",
    image_size=(224, 224),
    shuffle=True,
    seed=1,
    crop_to_aspect_ratio=False,
)
train_dataset, val_dataset = tf.keras.utils.split_dataset(dataset, right_size=0.1)

model.fit(
    train_dataset,
    steps_per_epoch=1,
    epochs=20,
    validation_data=val_dataset,
    max_queue_size=0,
    use_multiprocessing=False,
    workers=0,
)
true_class = tf.concat([y for x, y in val_dataset], axis=0)
pred = model.predict(val_dataset)
pred_class = pred >= .5
print('eval acc:', accuracy_score(true_class, pred_class))
which correctly yields:
Epoch 19/20
1/1 [==============================] - 0s 438ms/step - loss: 0.4426 - accuracy: 0.9062 - val_loss: 0.4658 - val_accuracy: 0.8800
Epoch 20/20
1/1 [==============================] - 0s 444ms/step - loss: 2.1619 - accuracy: 0.8438 - val_loss: 0.5886 - val_accuracy: 0.8900
4/4 [==============================] - 1s 87ms/step
eval acc: 0.89
There are a few points about your data that cause this:
First, your data is highly imbalanced (an 8-to-1 label ratio), which makes the model rather prone to overfitting and the CV estimate inaccurate.
Second, in the get_dataset function, shuffle is set to True, so every call to get_dataset() reshuffles your data; because (1) your validation set is very small and (2) your train/val split is not stratified over the labels, the validation metrics vary a lot due to this shuffling.
Suggestions to solve this:
Call get_dataset() only once each for the train and val datasets before fitting the model and save them as variables, and if there is no sequential order in your data, maybe set shuffle=False.
(Optional) If possible, make your dataset more balanced using techniques such as data augmentation or over-/under-sampling.
def get_dataset(subset):
    return image_dataset_from_directory(
        'data-nodup-1000',
        labels="inferred",
        label_mode='binary',
        color_mode="rgb",
        image_size=(224, 224),
        shuffle=False,
        seed=0,
        validation_split=0.1,
        subset=subset,
        crop_to_aspect_ratio=False,
    )

train_dataset = get_dataset('training')
val_dataset = get_dataset('validation')

model.fit(
    train_dataset,
    steps_per_epoch=1,
    epochs=20,
    validation_data=val_dataset,
)
true_class = tf.concat([y for x, y in val_dataset], axis=0)
pred = model.predict(val_dataset)
pred_class = pred >= .5
print('eval acc:', accuracy_score(true_class, pred_class))
I use the Keras model training API and observed differences when training the model with NumPy arrays (x_train and y_train) and with tf.data.Dataset.from_tensor_slices((x_train, y_train)). A minimal working example is shown below:
import numpy as np
import tensorflow as tf

tf.keras.utils.set_random_seed(0)

n_examples, n_dims = (100, 10)
raw_dataset = np.random.randn(n_examples, n_dims)

model = tf.keras.models.Sequential(
    [
        tf.keras.layers.Dense(
            1024, activation="relu", use_bias=True
        ),
        tf.keras.layers.Dense(
            1, activation="linear", use_bias=True
        ),
    ]
)
model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss="mse",
)

x_train = raw_dataset[:, :-1]
y_train = raw_dataset[:, -1]
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))

n_epochs = 10
batch_size = 16
use_dataset = True

if use_dataset:
    model.fit(
        dataset.batch(batch_size=batch_size),
        epochs=n_epochs,
    )
else:
    model.fit(
        x=x_train,
        y=y_train,
        batch_size=batch_size,
        epochs=n_epochs,
    )

print("Evaluation:")
model.evaluate(x_train, y_train)
model.evaluate(dataset.batch(batch_size=batch_size))
If I run this code with use_dataset = True, the final performance is:
Evaluation:
4/4 [==============================] - 0s 825us/step - loss: 0.4132
7/7 [==============================] - 0s 701us/step - loss: 0.4132
If I run it with use_dataset = False, I get:
Evaluation:
4/4 [==============================] - 0s 855us/step - loss: 0.4219
7/7 [==============================] - 0s 808us/step - loss: 0.4219
I expected that the two training loops would perform identically. Interestingly, the model performance is identical if I set batch_size = n_examples. The difference seems to be related to the way batches are handled internally. Why is this happening? Is it a bug or a feature?
The behavior is due to the default parameter shuffle=True in model.fit() and is not a bug. According to the docs regarding shuffle:
Boolean (whether to shuffle the training data before each epoch) or str (for 'batch'). This argument is ignored when x is a generator or an object of tf.data.Dataset. 'batch' is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.
So this parameter is ignored when a tf.data.Dataset is passed, and the data is not reshuffled after each epoch as in the other approach with arrays.
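Alternatively, the per-epoch shuffling can be reproduced on the tf.data side with an explicit shuffle() before batching; a small sketch reusing the variable names from the question (the shuffle orders still differ, so the two setups become statistically equivalent rather than bit-identical):

# Shuffle the examples on the tf.data side before batching;
# reshuffle_each_iteration=True re-shuffles before every epoch, mirroring
# fit()'s default shuffle=True behaviour for NumPy inputs.
shuffled_dataset = dataset.shuffle(buffer_size=n_examples, reshuffle_each_iteration=True)
model.fit(shuffled_dataset.batch(batch_size=batch_size), epochs=n_epochs)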
Here is the code to get the same results for both methods:
import numpy as np
import tensorflow as tf

tf.keras.utils.set_random_seed(0)

n_examples, n_dims = (100, 10)
raw_dataset = np.random.randn(n_examples, n_dims)

model = tf.keras.models.Sequential(
    [
        tf.keras.layers.Dense(
            1024, activation="relu", use_bias=True
        ),
        tf.keras.layers.Dense(
            1, activation="linear", use_bias=True
        ),
    ]
)
model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss="mse",
)

x_train = raw_dataset[:, :-1]
y_train = raw_dataset[:, -1]
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))

n_epochs = 10
batch_size = 16
use_dataset = False

if use_dataset:
    model.fit(
        dataset.batch(batch_size=batch_size),
        epochs=n_epochs,
    )
else:
    model.fit(
        x=x_train,
        y=y_train,
        batch_size=batch_size,
        shuffle=False,
        epochs=n_epochs,
    )

print("Evaluation:")
model.evaluate(x_train, y_train)
model.evaluate(dataset.batch(batch_size=batch_size))
I want to use an AutoEncoder model with the Keras functional API. I also want to use tf.data.Dataset as an input pipeline for the model. However, there is a limitation that I can pass the dataset to keras model.fit only as a tuple (inputs, targets), according to the docs:
Input data. It could be: A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).
So here is the question: can I pass the tf.data.Dataset without repeating the inputs, i.e. as (inputs, None) rather than (inputs, inputs)? And if I can't, will the repeated inputs double the GPU memory for my model?
You can use map() to return your input twice:
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Conv2DTranspose, Reshape
from functools import partial

(xtrain, _), (xtest, _) = tf.keras.datasets.mnist.load_data()

ds = tf.data.Dataset.from_tensor_slices(
    tf.expand_dims(tf.concat([xtrain, xtest], axis=0), axis=-1))
ds = ds.take(int(1e4)).batch(4).map(lambda x: (x/255, x/255))

custom_convolution = partial(Conv2D, kernel_size=(3, 3),
                             strides=(1, 1),
                             activation='relu',
                             padding='same')
custom_pooling = partial(MaxPool2D, pool_size=(2, 2))

conv_encoder = Sequential([
    custom_convolution(filters=16, input_shape=(28, 28, 1)),
    custom_pooling(),
    custom_convolution(filters=32),
    custom_pooling(),
    custom_convolution(filters=64),
    custom_pooling()
])
# conv_encoder(next(iter(ds))[0].numpy().astype(float)).shape

custom_transpose = partial(Conv2DTranspose,
                           padding='same',
                           kernel_size=(3, 3),
                           activation='relu',
                           strides=(2, 2))

conv_decoder = Sequential([
    custom_transpose(filters=32, input_shape=(3, 3, 64), padding='valid'),
    custom_transpose(filters=16),
    custom_transpose(filters=1, activation='sigmoid'),
    Reshape(target_shape=[28, 28, 1])
])

conv_autoencoder = Sequential([
    conv_encoder,
    conv_decoder
])
conv_autoencoder.compile(loss='binary_crossentropy', optimizer='adam')

history = conv_autoencoder.fit(ds)
2436/2500 [============================>.] - ETA: 0s - loss: 0.1282
2446/2500 [============================>.] - ETA: 0s - loss: 0.1280
2456/2500 [============================>.] - ETA: 0s - loss: 0.1279
2466/2500 [============================>.] - ETA: 0s - loss: 0.1278
2476/2500 [============================>.] - ETA: 0s - loss: 0.1277
2487/2500 [============================>.] - ETA: 0s - loss: 0.1275
2497/2500 [============================>.] - ETA: 0s - loss: 0.1274
2500/2500 [==============================] - 14s 6ms/step - loss: 0.1273
"the repeated inputs double the GPU memory for my model?"; Generally , the dataset pipeline run on CPU, not on GPU.
For your AutoEncoder model, if you want to use a dataset that only contains the examples without labels, you can use custom training :
def loss(model, x):
    y_ = model(x, training=True)  # use x as input
    return loss_object(y_true=x, y_pred=y_)  # use x as label (y_true)

with tf.GradientTape() as tape:
    loss_value = loss(model, inputs)
If it is necessary to use the fit() method, you can subclass keras.Model and override the train_step() method (see the linked guide; I did not verify this code):
class CustomModel(keras.Model):
    def train_step(self, data):
        x = data
        y = data  # the same data as label ++++
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, y_pred)
        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}
In TensorFlow 2.4, I've got a dataset that returns a tuple of one element, ie (inputs,), which is training just fine. The only caveat is of course that you cannot pass a loss or metrics to model.compile, but must instead use the add_loss and add_metric APIs somewhere in the model.
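A minimal sketch of that add_loss pattern (toy shapes and names chosen for illustration, not taken from the answer above): the reconstruction loss is attached inside the model, so the dataset only has to yield inputs.

import tensorflow as tf

inputs = tf.keras.Input(shape=(784,))
encoded = tf.keras.layers.Dense(32, activation='relu')(inputs)
decoded = tf.keras.layers.Dense(784, activation='sigmoid')(encoded)
autoencoder = tf.keras.Model(inputs, decoded)
# Reconstruction loss added inside the model; no loss argument in compile().
autoencoder.add_loss(tf.reduce_mean(tf.square(inputs - decoded)))
autoencoder.compile(optimizer='adam')

x = tf.random.uniform((256, 784))
ds = tf.data.Dataset.from_tensor_slices(x).batch(32)  # yields inputs only, no targets
autoencoder.fit(ds, epochs=1)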
Essence of this question:
I'd like to find a proper way to calculate the Fscore for the validation and training data after each epoch (not batch-wise)
For a binary classification task, I'd like to calculate the Fscore after each epoch using a simple keras model. But how to calculate the Fscore seems to be quite a debated topic.
I know keras works in batches, and one way to calculate the fscore for each batch would be https://stackoverflow.com/a/45305384/10053244 (Fscore-calculation: f1).
The batch-wise calculation can be quite confusing and I prefer to calculate the Fscore after each epoch. So just calling history.history['f1'] or history.history['val_f1'] does not do the trick, because it shows the batch-wise fscores.
I figured one way is to use the ModelCheckpoint callback (from keras.callbacks import ModelCheckpoint):
Saving the model weights after every epoch
Reloading the model and using model.evaluate or model.predict
Edit:
Using the tensorflow backend, I decided to track TruePositives, FalsePositives and FalseNegatives (as umbreon29 suggested).
But now comes the fun part: the results when reloading the model are different for the training data (TP, FP, FN are different) but not for the validation set!
So a simple model storing the weights to rebuild each model and recalculate the TP, FN, FP (and finally the Fscore) looks like:
from keras.metrics import TruePositives, TrueNegatives, FalseNegatives, FalsePositives

## simple keras model
sequence_input = Input(shape=(input_dim,), dtype='float32')
preds = Dense(1, activation='sigmoid', name='output')(sequence_input)
model = Model(sequence_input, preds)
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=[TruePositives(name='true_positives'),
                       TrueNegatives(name='true_negatives'),
                       FalseNegatives(name='false_negatives'),
                       FalsePositives(name='false_positives'),
                       f1])

# model checkpoints
filepath = "weights-improvement-{epoch:02d}-{val_f1:.2f}.hdf5"
checkpoint = ModelCheckpoint(os.path.join(savemodel, filepath), monitor='val_f1', verbose=1,
                             save_best_only=False, save_weights_only=True, mode='auto')
callbacks_list = [checkpoint]

history = model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=epoch, batch_size=batch,
                    callbacks=[callbacks_list])

## Saving TP, FN, FP to calculate the Fscore
tp.append(history.history['true_positives'])
fp.append(history.history['false_positives'])
fn.append(history.history['false_negatives'])
arr_train = np.stack((tp, fp, fn), axis=1)

## doing the same for tp_val, fp_val, fn_val
[...]
arr_val = np.stack((tp_val, fp_val, fn_val), axis=1)

## the following method just shows batch-wise fscores and shouldn't be used:
## f1_sc.append(history.history['f1'])
Reloading the model after each epoch to calculate the Fscores (the predict method with the sklearn fscore metric, from sklearn.metrics import f1_score, is equivalent to calculating the fscore from TP, FP, FN):
Fscore_val = []
fscorepredict_val_sklearn = []
Fscore_train = []
fscorepredict_train = []

## model_loads contains list of model-paths
for i in model_loads:
    ## rebuilding the model each time since only weights are stored
    sequence_input = Input(shape=(input_dim,), dtype='float32')
    preds = Dense(1, activation='sigmoid', name='output')(sequence_input)
    model = Model(sequence_input, preds)
    model.load_weights(i)

    # Compile model (required to make predictions)
    model.compile(loss='binary_crossentropy',
                  optimizer='adam',
                  metrics=[TruePositives(name='true_positives'),
                           TrueNegatives(name='true_negatives'),
                           FalseNegatives(name='false_negatives'),
                           FalsePositives(name='false_positives'),
                           f1
                           ])

    ### For Validation data
    ## using evaluate
    y_pred = model.evaluate(x_val, y_val, verbose=0)
    Fscore_val.append(y_pred)  ## contains (loss,tp,fp,fn, f1-batchwise)

    ## using predict
    y_pred = model.predict(x_val)
    val_preds = [1 if x > 0.5 else 0 for x in y_pred]
    cm = f1_score(y_val, val_preds)
    fscorepredict_val_sklearn.append(cm)  ## equivalent to Fscore calculated from Fscore_vals tp,fp, fn

    ### For the training data
    y_pred = model.evaluate(x_train, y_train, verbose=0)
    Fscore_train.append(y_pred)  ## also contains (loss,tp,fp,fn, f1-batchwise)

    y_pred = model.predict(x_train, verbose=0)  # gives probabilities
    train_preds = [1 if x > 0.5 else 0 for x in y_pred]
    cm = f1_score(y_train, train_preds)
    fscorepredict_train.append(cm)
Calculating the Fscore from the tp, fn, and fp in Fscore_val and comparing it to fscorepredict_val_sklearn is equivalent and identical to calculating it from arr_val.
However, the numbers of tp, fn, and fp differ when comparing Fscore_train and arr_train, so I also arrive at different Fscores. They should be the same, but they aren't. Is this a bug?
Which one should I trust? The fscorepredict_train values actually seem more trustworthy, since they start above the "always guessing class 1" Fscore (where recall = 1): fscorepredict_train[0] = 0.6784 vs f_hist[0] = 0.5736 vs always-guessing-class-1 Fscore = 0.6751.
[Note: Fscore_train[0] = [0.6853608025386962, 2220.0, 250.0, 111.0, 1993.0, 0.6730511784553528] (presumably loss, tp, tn, fn, fp, f1, following the compile order), leading to fscore = 0.6784, so the Fscore from Fscore_train = fscorepredict_train.]
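(For reference, the count-based Fscore used throughout this question boils down to the following small helper; this sketch is added for illustration and is not part of the original post. F1 is symmetric in FP and FN, so their order in the tuple does not change the result.)

def f1_from_counts(tp, fp, fn):
    # F1 = 2*TP / (2*TP + FP + FN); precision/recall written out for clarity.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. f1_from_counts(2220, 111, 1993) ~= 0.678, matching the note above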
I provide a custom callback that computes the score (in your case F1 from sklearn) on ALL the data at the end of each epoch (for train and optionally validation):
import numpy as np
import tensorflow as tf
from sklearn.metrics import f1_score

class F1History(tf.keras.callbacks.Callback):

    def __init__(self, train, validation=None):
        super(F1History, self).__init__()
        self.validation = validation
        self.train = train

    def on_epoch_end(self, epoch, logs={}):
        logs['F1_score_train'] = float('-inf')
        X_train, y_train = self.train[0], self.train[1]
        y_pred = (self.model.predict(X_train).ravel() > 0.5) + 0
        score = f1_score(y_train, y_pred)

        if (self.validation):
            logs['F1_score_val'] = float('-inf')
            X_valid, y_valid = self.validation[0], self.validation[1]
            y_val_pred = (self.model.predict(X_valid).ravel() > 0.5) + 0
            val_score = f1_score(y_valid, y_val_pred)
            logs['F1_score_train'] = np.round(score, 5)
            logs['F1_score_val'] = np.round(val_score, 5)
        else:
            logs['F1_score_train'] = np.round(score, 5)
Here is a dummy example:
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import EarlyStopping

x_train = np.random.uniform(0, 1, (30, 10))
y_train = np.random.randint(0, 2, (30))
x_val = np.random.uniform(0, 1, (20, 10))
y_val = np.random.randint(0, 2, (20))

sequence_input = Input(shape=(10,), dtype='float32')
preds = Dense(1, activation='sigmoid', name='output')(sequence_input)
model = Model(sequence_input, preds)

es = EarlyStopping(patience=3, verbose=1, min_delta=0.001, monitor='F1_score_val', mode='max', restore_best_weights=True)
model.compile(loss='binary_crossentropy', optimizer='adam')
model.fit(x_train, y_train, epochs=10,
          callbacks=[F1History(train=(x_train, y_train), validation=(x_val, y_val)), es])
The output prints:
Epoch 1/10
1/1 [==============================] - 0s 78ms/step - loss: 0.7453 - F1_score_train: 0.3478 - F1_score_val: 0.4762
Epoch 2/10
1/1 [==============================] - 0s 57ms/step - loss: 0.7448 - F1_score_train: 0.3478 - F1_score_val: 0.4762
Epoch 3/10
1/1 [==============================] - 0s 58ms/step - loss: 0.7444 - F1_score_train: 0.3478 - F1_score_val: 0.4762
Epoch 4/10
1/1 [==============================] - ETA: 0s - loss: 0.7439Restoring model weights from the end of the best epoch.
1/1 [==============================] - 0s 70ms/step - loss: 0.7439 - F1_score_train: 0.3478 - F1_score_val: 0.4762
I have TF 2.2 and it works without problems; I hope this helps.
I am trying the training and evaluation example on the TensorFlow website.
Specifically, this part:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
y_train = y_train.astype('float32')
y_test = y_test.astype('float32')

def get_uncompiled_model():
    inputs = keras.Input(shape=(784,), name='digits')
    x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.Dense(64, activation='relu', name='dense_2')(x)
    outputs = layers.Dense(10, activation='softmax', name='predictions')(x)
    model = keras.Model(inputs=inputs, outputs=outputs)
    return model

def get_compiled_model():
    model = get_uncompiled_model()
    model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
                  loss='sparse_categorical_crossentropy',
                  metrics=['sparse_categorical_accuracy'])
    return model

sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.

# Create a Dataset that includes sample weights
# (3rd element in the return tuple).
train_dataset = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train, sample_weight))

# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)

model = get_compiled_model()
model.fit(train_dataset, epochs=3)
It appears that if I add the batch normalization layer (this line: x = layers.BatchNormalization()(x)) I get the following error:
InvalidArgumentError: The second input must be a scalar, but it has shape [64]
[[{{node batch_normalization_2/cond/ReadVariableOp/Switch}}]]
Any ideas?
The same code works for me.
The only lines I changed are:
model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3)
to model.compile(optimizer=keras.optimizers.RMSprop(lr=1e-3)
(which is version specific)
Then
model.fit(train_dataset, epochs=3) to model.fit(train_dataset, epochs=3, steps_per_epoch=30)
Reason: when using iterators as input to a model, you should specify the steps_per_epoch argument.
If you just want to use sample weights, you don't have to use a tf.data.Dataset; you can simply run:
model.fit(x=x_train, y=y_train, sample_weight=sample_weight, batch_size=64, epochs=3)
and it works for me (when I change learning_rate to lr as #ASHu2 mentioned).
It gets 97% accuracy after 3 epochs:
...
57408/60000 [===========================>..] - ETA: 0s - loss: 0.1010 - sparse_categorical_accuracy: 0.9709
58816/60000 [============================>.] - ETA: 0s - loss: 0.1011 - sparse_categorical_accuracy: 0.9708
60000/60000 [==============================] - 2s 37us/sample - loss: 0.1007 - sparse_categorical_accuracy: 0.9709
I used TF 1.14.0 on windows.
The problem was solved when I updated tensorflow from version 1.14.1 to 2.0.0-rc1.