I use the Keras model training API and have observed a difference when training the model with NumPy arrays (x_train and y_train) versus with tf.data.Dataset.from_tensor_slices((x_train, y_train)). A minimal working example is shown below:
import numpy as np
import tensorflow as tf
tf.keras.utils.set_random_seed(0)
n_examples, n_dims = (100, 10)
raw_dataset = np.random.randn(n_examples, n_dims)
model = tf.keras.models.Sequential(
    [
        tf.keras.layers.Dense(1024, activation="relu", use_bias=True),
        tf.keras.layers.Dense(1, activation="linear", use_bias=True),
    ]
)
model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss="mse",
)
x_train = raw_dataset[:, :-1]
y_train = raw_dataset[:, -1]
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
n_epochs = 10
batch_size = 16
use_dataset = True
if use_dataset:
    model.fit(
        dataset.batch(batch_size=batch_size),
        epochs=n_epochs,
    )
else:
    model.fit(
        x=x_train,
        y=y_train,
        batch_size=batch_size,
        epochs=n_epochs,
    )
print("Evaluation:")
model.evaluate(x_train, y_train)
model.evaluate(dataset.batch(batch_size=batch_size))
If I run this code with use_dataset = True, the final performance is:
Evaluation:
4/4 [==============================] - 0s 825us/step - loss: 0.4132
7/7 [==============================] - 0s 701us/step - loss: 0.4132
If I run it with use_dataset = False, I get:
Evaluation:
4/4 [==============================] - 0s 855us/step - loss: 0.4219
7/7 [==============================] - 0s 808us/step - loss: 0.4219
I expected that the two training loops would perform identically. Interestingly, the model performance is identical if I set batch_size = n_examples. The difference seems to be related to the way batches are handled internally. Why is this happening? Is it a bug or a feature?
The behavior is due to the default argument shuffle=True in model.fit() and is not a bug. According to the docs regarding shuffle:
Boolean (whether to shuffle the training data before each epoch) or str (for 'batch'). This argument is ignored when x is a generator or an object of tf.data.Dataset. 'batch' is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.
So this parameter is ignored when a tf.data.Dataset is passed, and the data is not reshuffled before each epoch as it is with the NumPy-array approach.
Here is the code to get the same results for both methods:
import numpy as np
import tensorflow as tf
tf.keras.utils.set_random_seed(0)
n_examples, n_dims = (100, 10)
raw_dataset = np.random.randn(n_examples, n_dims)
model = tf.keras.models.Sequential(
    [
        tf.keras.layers.Dense(1024, activation="relu", use_bias=True),
        tf.keras.layers.Dense(1, activation="linear", use_bias=True),
    ]
)
model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss="mse",
)
x_train = raw_dataset[:, :-1]
y_train = raw_dataset[:, -1]
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
n_epochs = 10
batch_size = 16
use_dataset = False
if use_dataset:
    model.fit(
        dataset.batch(batch_size=batch_size),
        epochs=n_epochs,
    )
else:
    model.fit(
        x=x_train,
        y=y_train,
        batch_size=batch_size,
        shuffle=False,
        epochs=n_epochs,
    )
print("Evaluation:")
model.evaluate(x_train, y_train)
model.evaluate(dataset.batch(batch_size=batch_size))
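Alternatively (this is a sketch, not part of the original fix), you can keep shuffle=True on the array path and instead make the tf.data pipeline reshuffle each epoch, so both approaches train on reshuffled data; the losses will still differ slightly because the two shuffling orders are not identical:
# Sketch: shuffle the Dataset itself so it is reshuffled on every iteration
# (i.e. every epoch), mirroring what model.fit does with NumPy arrays.
shuffled_dataset = (
    tf.data.Dataset.from_tensor_slices((x_train, y_train))
    .shuffle(buffer_size=n_examples, reshuffle_each_iteration=True)
    .batch(batch_size)
)
model.fit(shuffled_dataset, epochs=n_epochs)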
I want to get mean absolute error (MAE) for each split of data using 5-fold cross validation. I have built a custom model using Xception.
Hence, to try this, I coded the following:
# Data Generators:
train_gen = flow_from_dataframe(core_idg, train_df,
                                path_col = 'path',
                                y_col = 'boneage_zscore',
                                target_size = IMG_SIZE,
                                color_mode = 'rgb',
                                batch_size = 32,
                                shuffle = True)
X_train, Y_train = next(train_gen)
#-----------------------------------------------------------------------
# Custom Model initiation:
base_model = Xception(input_shape = X_train.shape[1:], include_top = False, weights = 'imagenet')
base_model.trainable = True
model = Sequential()
model.add(base_model)
model.add(GlobalMaxPooling2D())
model.add(Flatten())
model.add(Dense(16, activation = 'relu'))
model.add(Dense(1, activation = 'linear'))
def mae_months(in_gt, in_pred):
    return mean_absolute_error(boneage_div * in_gt, boneage_div * in_pred)
# Compile model
adam = Adam(learning_rate = 0.0005)
model.compile(loss = 'mse', optimizer = adam, metrics = [mae_months])
#-----------------------------------------------------------------------
# KFold
n_splits = 5
kf = KFold(n_splits = n_splits, shuffle = True, random_state = 42)
I coded up to the KFold part, but now I am stuck on how to proceed with the cross-validation step to get the MAE for each data split.
A post here suggests a for loop over the KFold splits, but does that only apply when a model such as DecisionTreeRegressor() is used, rather than a custom model using Xception like mine?
UPDATE
After referring to the suggestion below, I applied the following code after using KFold:
# Data Generators:
train_gen = flow_from_dataframe(core_idg, train_df,
                                path_col = 'path',
                                y_col = 'boneage_zscore',
                                target_size = IMG_SIZE,
                                color_mode = 'rgb',
                                batch_size = 1024,
                                shuffle = True)
...
...
...
mae_list = []
n_splits = 5
kf = KFold(n_splits = n_splits, shuffle = True, random_state = 42)
split = kf.split(X_train, Y_train) # X_train, Y_train = next(train_gen) from above
for train, test in split:
    x_train, x_test, y_train, y_test = X_train[train], X_train[test], Y_train[train], Y_train[test]
    history = model.fit(x_train, y_train, validation_data = (x_test, y_test), batch_size = 16)
    pred = model.predict(x_test, batch_size = 8)
    err = mean_absolute_error(y_test, pred)
    mae_list.append(err)
I first set the batch size of train_gen to 1024 and then ran the code above; however, I get the following error:
52/52 [==============================] - 16s 200ms/step - loss: 0.9926 - mae_months: 31.5353 - val_loss: 4.4153 - val_mae_months: 81.5463
52/52 [==============================] - 9s 172ms/step - loss: 0.4185 - mae_months: 21.4242 - val_loss: 0.7401 - val_mae_months: 29.3815
52/52 [==============================] - 9s 172ms/step - loss: 0.2930 - mae_months: 17.3729 - val_loss: 0.5628 - val_mae_months: 23.9055
9/52 [====>.........................] - ETA: 7s - loss: 0.2355 - mae_months: 16.7444
ResourceExhaustedError Traceback (most recent call last)
Input In [11], in <cell line: 9>()
10 x_train, x_test, y_train, y_test = X_train[train], X_train[test], Y_train[train], Y_train[test]
11 # model = boneage_model()
12 # history = model.fit(train_gen, validation_data = (x_test, y_test))
---> 13 history = model.fit(x_train, y_train, validation_data = (x_test, y_test), batch_size = 16)
14 pred = model.predict(x_test, batch_size = 8)
15 err = mean_absolute_error(y_test, pred)
ResourceExhaustedError: Graph execution error:
....
....
....
Node: 'gradient_tape/sequential/xception/block14_sepconv2/separable_conv2d/Conv2DBackpropFilter'
OOM when allocating tensor with shape[2048,1536,1,1] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node gradient_tape/sequential/xception/block14_sepconv2/separable_conv2d/Conv2DBackpropFilter}}]]
The memory allocation looks like this from the prompt (hopefully this makes sense):
total_region_allocated_bytes_: 5769199616
memory_limit_: 5769199616
available bytes: 0
curr_region_allocation_bytes_: 8589934592
Stats:
Limit: 5769199616
InUse: 5762760448
MaxInUse: 5769190400
NumAllocs: 192519
MaxAllocSize: 2470510592
Reserved: 0
PeakReserved: 0
LargestFreeBlock: 0
Is it because my GPU cannot handle this batch size?
UPDATE 2
I have decreased the batch_size of train_gen to 32 and removed the batch_size argument from the fit() and predict() calls. Is this the right way to determine the MAE for each data split?
Code:
# Data Generators:
train_gen = flow_from_dataframe(core_idg, train_df,
                                path_col = 'path',
                                y_col = 'boneage_zscore',
                                target_size = IMG_SIZE,
                                color_mode = 'rgb',
                                batch_size = 32,
                                shuffle = True)
X_train, Y_train = next(train_gen)
...
...
...
mae_list = []
n_splits = 5
kf = KFold(n_splits = n_splits, shuffle = True, random_state = 42)
split = kf.split(X_train, Y_train) # X_train, Y_train = next(train_gen) from above
for train, test in split:
    x_train, x_test, y_train, y_test = X_train[train], X_train[test], Y_train[train], Y_train[test]
    history = model.fit(x_train, y_train, validation_data = (x_test, y_test))
    pred = model.predict(x_test)
    err = mean_absolute_error(y_test, pred)
    mae_list.append(err)
UPDATE 3
According to the suggestions from the comments:
Edited the batch_size of the train_gen to 64.
Added valid_gen to use X_valid and y_valid as validation data of the fit() method.
Used x_test for the predict() method.
Added a method for limiting GPU memory growth.
Code:
# Checking the GPU availability
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
...
...
...
# Data Generators:
train_gen = flow_from_dataframe(core_idg, train_df,
                                path_col = 'path',
                                y_col = 'boneage_zscore',
                                target_size = IMG_SIZE,
                                color_mode = 'rgb',
                                batch_size = 64,
                                shuffle = True)
X_train, Y_train = next(train_gen)
valid_gen = flow_from_dataframe(core_valid, valid_df,
                                path_col = 'path',
                                y_col = 'boneage_zscore',
                                target_size = IMG_SIZE,
                                color_mode = 'rgb',
                                batch_size = 64,
                                shuffle = True)
X_valid, y_valid = next(valid_gen)
# Getting MAE for each data split using 5-fold (KFold)
cv_mae = []
n_splits = 5
kf = KFold(n_splits = n_splits, shuffle = True, random_state = 42)
split = kf.split(X_train, Y_train)
for train, test in split:
    x_train, x_test, y_train, y_test = X_train[train], X_train[test], Y_train[train], Y_train[test]
    history = model.fit(x_train, y_train, validation_data = (X_valid, y_valid))
    pred = model.predict(x_test)
    err = mean_absolute_error(y_test, pred)
    cv_mae.append(err)
cv_mae
The output:
2/2 [==============================] - 8s 2s/step - loss: 3.6179 - mae_months: 66.8136 - val_loss: 2.1544 - val_mae_months: 47.2171
2/2 [==============================] - 1s 394ms/step - loss: 1.0826 - mae_months: 36.3370 - val_loss: 1.6431 - val_mae_months: 40.9770
2/2 [==============================] - 1s 344ms/step - loss: 0.6129 - mae_months: 23.0258 - val_loss: 1.8911 - val_mae_months: 45.6456
2/2 [==============================] - 1s 360ms/step - loss: 0.4500 - mae_months: 22.6450 - val_loss: 1.3592 - val_mae_months: 36.7073
2/2 [==============================] - 1s 1s/step - loss: 0.4222 - mae_months: 20.2543 - val_loss: 1.1010 - val_mae_months: 32.8488
[<tf.Tensor: shape=(13,), dtype=float32, numpy=
array([1.4442804, 1.3981661, 1.5037801, 2.2199252, 1.7645894, 1.4836203,
1.7916738, 1.3967942, 1.4069557, 2.516875 , 1.4077926, 1.4342965,
1.9279695], dtype=float32)>,
<tf.Tensor: shape=(13,), dtype=float32, numpy=
array([1.8153722, 1.9236553, 1.3917867, 1.5313213, 1.387209 , 1.3831038,
1.4519565, 1.4680854, 1.7810788, 2.5733376, 1.4269204, 1.3751 ,
1.446231 ], dtype=float32)>,
<tf.Tensor: shape=(13,), dtype=float32, numpy=
array([1.6616 , 1.6529323, 1.9181525, 2.536807 , 1.6306267, 2.856683 ,
2.113724 , 1.5543866, 1.9128528, 3.218016 , 1.4112593, 1.4043481,
3.229338 ], dtype=float32)>,
<tf.Tensor: shape=(13,), dtype=float32, numpy=
array([2.1295295, 1.8527019, 1.9779519, 3.1390932, 1.5525225, 2.0811615,
1.6279813, 1.87973 , 1.5029857, 1.6502519, 2.3677726, 1.8570358,
1.7251074], dtype=float32)>,
<tf.Tensor: shape=(12,), dtype=float32, numpy=
array([1.3926607, 1.7088655, 1.7379242, 3.5756006, 1.5988973, 1.3926607,
1.4928951, 1.4665956, 1.3926607, 1.4575896, 3.146022 , 1.3926607],
dtype=float32)>]
Does this mean that I have the MAEs for the 5 data splits (the parts of the output that say numpy=array([...]))?
Ideally, you'd generate the train and test splits together from the KFold split, but it doesn't matter as long as you use the same seed. KFold.split() just returns indices for selecting the train and test elements, so you need to apply those indices to the original dataset.
Answer based on OP comment and question:
from sklearn.model_selection import StratifiedKFold as kfold
x, y = # images, labels
cvscores = []
kf = kfold(n_splits = n_splits, shuffle = True, random_state = 42)
split = kf.split(x, y)
for train, test in split:
    x_train, x_test, y_train, y_test = x[train], x[test], y[train], y[test]
    model = # do model stuff
    _ = model.fit()
    result = model.evaluate()
    # depending on how you want to handle the results
    cvscores.append(result)
# do stuff with cvscores
I'm not sure if that would work with an object from flow_from_dataframe(), because that wouldn't be an array or array-like, although you should be able to get the arrays from within it (see the sketch below).
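As a rough illustration of "getting the arrays within", here is a minimal sketch that assumes train_gen is the iterator returned by flow_from_dataframe (len() gives the number of batches, next() yields one batch):
import numpy as np

# Sketch: drain every batch from the generator into plain NumPy arrays,
# so that the index arrays returned by kf.split() can be applied directly.
x_batches, y_batches = [], []
for _ in range(len(train_gen)):
    xb, yb = next(train_gen)
    x_batches.append(xb)
    y_batches.append(yb)
X_all = np.concatenate(x_batches, axis=0)
Y_all = np.concatenate(y_batches, axis=0)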
The following code gives a log ending with
Epoch 19/20
1/1 [==============================] - 0s 473ms/step - loss: 1.4018 - accuracy: 0.8750 - val_loss: 1.8656 - val_accuracy: 0.8900
Epoch 20/20
1/1 [==============================] - 0s 444ms/step - loss: 0.5904 - accuracy: 0.8750 - val_loss: 2.1255 - val_accuracy: 0.8700
get_dataset: validation
Found 1000 files belonging to 2 classes.
Using 100 files for validation.
4/4 [==============================] - 1s 81ms/step
eval acc: 0.81
My question is:
Why is the val_accuracy after the last epoch (0.87) different from the eval acc (0.81) after the fit?
In my code, I try to use the same dataset for the validation of each epoch during fit and the additional validation afterwards.
[Update 1, 2022-07-19:
Obviously, the two accuracy calculations don't really use the same data. How can I debug which data is actually used?
[Update 3, 2022-07-20: I have followed the data into TensorFlow. The last thing I see is that in Model.evaluate (during fit) and Model.predict the x.filenames are equal. I did not manage to debug much further, because soon, in quick_execute, the __inference_test_function_248219 and the __inference_predict_function_231438, respectively, are evaluated outside Python, and the arguments are tensors with dtype=resource, whose contents I cannot see.]
I have deliberately removed my class balancing code to keep my example small. I know that this makes the accuracies less useful, but I don't care about that for now.
Note that get_dataset('validation') is only called once at the beginning of the fit, not at each epoch.
I have now also set max_queue_size=0, use_multiprocessing=False, workers=0 (as seen here, found via this related SO question about TensorFlow 1), but this did not make the accuracies equal.
]
Code:
import tensorflow as tf
from sklearn.metrics import accuracy_score
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.preprocessing import image_dataset_from_directory
inputs = tf.keras.Input(shape=(224, 224, 3))
base_model = tf.keras.applications.ResNet50(weights='imagenet', include_top=False)
base_output = base_model(inputs)
base_model.trainable = False
out = Flatten(name='flat')(base_output)
out = Dense(1, activation='sigmoid')(out)
model = Model(inputs=inputs, outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
def get_dataset(subset):
    print('get_dataset:', subset)
    return image_dataset_from_directory(
        'data-nodup-1000',
        labels="inferred",
        label_mode='binary',
        color_mode="rgb",
        image_size=(224, 224),
        shuffle=True,
        seed=1,
        validation_split=0.1,
        subset=subset,
        crop_to_aspect_ratio=False,
    )
model.fit(
    get_dataset('training'),
    steps_per_epoch=1,
    epochs=20,
    validation_data=get_dataset('validation'),
    max_queue_size=0,
    use_multiprocessing=False,
    workers=0,
)
val_dataset = get_dataset('validation')
true_class = tf.concat([y for x, y in val_dataset], axis=0)
pred = model.predict(val_dataset)
pred_class = pred >= .5
print('eval acc:', accuracy_score(true_class, pred_class))
[Update 2, 2022-07-19:
I can also reproduce the behavior with the deprecated ImageDataGenerator, using
from tensorflow.keras.applications.resnet50 import preprocess_input
from keras_preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input,
    validation_split=0.1,
)

def get_dataset(subset):
    print('get_dataset:', subset)
    return datagen.flow_from_directory(
        'data-nodup-1000',
        class_mode='binary',
        target_size=(224, 224),
        shuffle=True,
        seed=1,
        subset=subset,
    )
and
true_class = val_dataset.labels
]
[Update 4, 2022-07-21: Note that deactivating shuffling of the validation data by setting shuffle=(subset == 'training') makes the two validation accuracies equal. This is not a viable workaround, however, because the validation set then consists only of class 1, since flow_from_directory doesn't do stratification.
]
My environment:
I am using all up-to-date libraries, like tensorflow 2.9.1 and sklearn 1.1.1 (via pip-compile -U).
The folder data-nodup-1000 contains one subfolder with 113 files of class 0, and one subfolder with 887 files of class 1.
I have now found out that in TensorFlow 2.9.1 model.predict uses the second iteration of the dataset, which is shuffled differently than the first iteration!
It even uses the second iteration when I directly call model.predict(get_dataset('validation'))!
Therefore, the entries of true_class and pred do not match.
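A quick check that shows this (a sketch using only the labels): iterate the shuffled validation dataset twice and compare the label order between the two passes.
# Sketch: with per-iteration reshuffling, the label order of the first pass
# does not match the second pass, so labels collected on pass 1 cannot be
# compared element-wise with predictions computed on pass 2.
labels_pass_1 = tf.concat([y for x, y in val_dataset], axis=0)
labels_pass_2 = tf.concat([y for x, y in val_dataset], axis=0)
print(bool(tf.reduce_all(labels_pass_1 == labels_pass_2)))  # False when reshuffled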
Switching to TensorFlow 2.10.0-rc3 and its tf.keras.utils.split_dataset makes the accuracies equal.
Here's the updated code:
import tensorflow as tf
from sklearn.metrics import accuracy_score
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.preprocessing import image_dataset_from_directory
inputs = tf.keras.Input(shape=(224, 224, 3))
base_model = tf.keras.applications.ResNet50(weights='imagenet', include_top=False)
base_output = base_model(inputs)
base_model.trainable = False
out = Flatten(name='flat')(base_output)
out = Dense(1, activation='sigmoid')(out)
model = Model(inputs=inputs, outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
dataset = image_dataset_from_directory(
    'data-synthetic',
    labels="inferred",
    label_mode='binary',
    color_mode="rgb",
    image_size=(224, 224),
    shuffle=True,
    seed=1,
    crop_to_aspect_ratio=False,
)
train_dataset, val_dataset = tf.keras.utils.split_dataset(dataset, right_size=0.1)
model.fit(
    train_dataset,
    steps_per_epoch=1,
    epochs=20,
    validation_data=val_dataset,
    max_queue_size=0,
    use_multiprocessing=False,
    workers=0,
)
true_class = tf.concat([y for x, y in val_dataset], axis=0)
pred = model.predict(val_dataset)
pred_class = pred >= .5
print('eval acc:', accuracy_score(true_class, pred_class))
which correctly yields:
Epoch 19/20
1/1 [==============================] - 0s 438ms/step - loss: 0.4426 - accuracy: 0.9062 - val_loss: 0.4658 - val_accuracy: 0.8800
Epoch 20/20
1/1 [==============================] - 0s 444ms/step - loss: 2.1619 - accuracy: 0.8438 - val_loss: 0.5886 - val_accuracy: 0.8900
4/4 [==============================] - 1s 87ms/step
eval acc: 0.89
There are a few points about your data that cause this:
First, your data is highly imbalanced (an 8-to-1 label ratio), which makes the model prone to overfitting and the CV estimate inaccurate.
Second, in the get_dataset function, shuffle is set to True, so every time you call get_dataset() it shuffles your data. Because (1) your validation set is very small and (2) your train/val split is not stratified over your labels, the validation metrics vary a lot due to this shuffling.
Suggestions to solve this:
Call get_dataset() only once each for the train and val datasets before fitting the model and save them as variables. If there is no sequential order in your data, maybe set shuffle=False.
(optional) If possible, make your dataset more balanced with techniques such as data augmentation, over-/under-sampling, etc.
def get_dataset(subset):
    return image_dataset_from_directory(
        'data-nodup-1000',
        labels="inferred",
        label_mode='binary',
        color_mode="rgb",
        image_size=(224, 224),
        shuffle=False,
        seed=0,
        validation_split=0.1,
        subset=subset,
        crop_to_aspect_ratio=False,
    )
train_dataset = get_dataset('training')
val_dataset = get_dataset('validation')
model.fit(
    train_dataset,
    steps_per_epoch=1,
    epochs=20,
    validation_data=val_dataset,
)
true_class = tf.concat([y for x, y in val_dataset], axis=0)
pred = model.predict(val_dataset)
pred_class = pred >= .5
print('eval acc:', accuracy_score(true_class, pred_class))
I have a dataset with multiple fields, but only two are relevant for my machine learning implementation. The rest should not be considered for predictions, but might reveal interesting correlations.
Is there a way to return prediction results when calling model.evaluate?
For example:
[loss, accuracy, predicted_results] = model.evaluate(input, results)
AFAIK, we can't get predictions on x using model.evaluate; it simply returns the loss and metric values (source). But for your need, you can write a custom class and define the necessary calls such as .evaluate and .predict. Let's define a simple model to demonstrate.
Train and Run
import tensorflow as tf
import numpy as np
img = tf.random.normal([20, 32], 0, 1, tf.float32)
tar = np.random.randint(2, size=(20, 1))
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(10, input_dim=32,
                                kernel_initializer='normal', activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='adam', metrics=['accuracy'])
model.fit(img, tar, epochs=2, verbose=2)
Epoch 1/2
1/1 - 1s - loss: 0.7083 - accuracy: 0.5000
Epoch 2/2
1/1 - 0s - loss: 0.6983 - accuracy: 0.5000
Now, for your request, we can do something as follows:
class Custom_Evaluate:
    def __init__(self, model):
        self.model = model
    def eval_predict(self, x, y):
        loss, acc = self.model.evaluate(x, y)
        pred = self.model.predict(x)
        return loss, acc, pred
custom_evaluate = Custom_Evaluate(model)
loss, acc, pred = custom_evaluate.eval_predict(img, tar)
print(loss, acc)
print(pred)
0.6886215806007385 0.6499999761581421
[[0.5457604 ]
[0.6126752 ]
[0.53668976]
[0.40323135]
[0.37159938]
[0.5520069 ]
[0.4959099 ]
[0.5363802 ]
[0.5033434 ]
[0.65680957]
[0.6863682 ]
[0.44409862]
[0.4672098 ]
[0.49656072]
[0.620726 ]
[0.47991502]
[0.58834356]
[0.5245693 ]
[0.5359181 ]
[0.4575624 ]]
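If you only need the metrics by name next to separately computed predictions, a simpler sketch works as well (assuming a TF 2.x version where evaluate accepts return_dict=True):
# Sketch: evaluate returns a dict of metric values with return_dict=True,
# and predict is called separately for the model outputs.
metrics = model.evaluate(img, tar, return_dict=True, verbose=0)
pred = model.predict(img, verbose=0)
print(metrics)    # e.g. {'loss': ..., 'accuracy': ...}
print(pred[:3])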
data = np.random.random((10000, 150))
labels = np.random.randint(10, size=(10000, 1))
labels = to_categorical(labels, num_classes=10)
model = Sequential()
model.add(Dense(units=32, activation='relu', input_shape=(150,)))
model.add(Dense(units=10, activation='softmax'))
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(data, labels, epochs=30, validation_split=0.2)
I created 10000 random samples to train my net, but it uses only a few of them (250/10000).
Example of the 1st epoch:
Epoch 1/30
250/250 [==============================] - 0s 2ms/step - loss: 2.1110 - accuracy: 0.2389 - val_loss: 2.2142 - val_accuracy: 0.1800
Your data is split into training and validation subsets (validation_split=0.2).
Training subset has size 8000 and validation 2000.
Training goes in batches, each batch has size 32 samples by default.
So one epoch takes 8000/32 = 250 batches, as shown in the progress bar.
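In other words, the 250 shown by the progress bar counts batches, not samples. A small sketch of the arithmetic (using the default batch_size=32 and the validation_split=0.2 from the question):
import math

# Sketch: number of batches (steps) per epoch for the setup above.
n_samples, val_split, batch_size = 10000, 0.2, 32
n_train = int(n_samples * (1 - val_split))           # 8000 training samples
steps_per_epoch = math.ceil(n_train / batch_size)    # 250 batches per epoch
print(n_train, steps_per_epoch)                      # 8000 250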
Try code like the following example:
# imports needed for this snippet
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(32, activation='relu', input_dim=100))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# Generate dummy data
import numpy as np
data = np.random.random((1000, 100))
labels = np.random.randint(10, size=(1000, 1))
# Convert labels to categorical one-hot encoding
one_hot_labels = keras.utils.to_categorical(labels, num_classes=10)
# Train the model, iterating on the data in batches of 32 samples
model.fit(data, one_hot_labels, epochs=10, batch_size=32)
I am trying the training and evaluation example on the TensorFlow website.
Specifically, this part:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
y_train = y_train.astype('float32')
y_test = y_test.astype('float32')
def get_uncompiled_model():
    inputs = keras.Input(shape=(784,), name='digits')
    x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.Dense(64, activation='relu', name='dense_2')(x)
    outputs = layers.Dense(10, activation='softmax', name='predictions')(x)
    model = keras.Model(inputs=inputs, outputs=outputs)
    return model

def get_compiled_model():
    model = get_uncompiled_model()
    model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
                  loss='sparse_categorical_crossentropy',
                  metrics=['sparse_categorical_accuracy'])
    return model
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.
# Create a Dataset that includes sample weights
# (3rd element in the return tuple).
train_dataset = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train, sample_weight))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model = get_compiled_model()
model.fit(train_dataset, epochs=3)
It appears that if I add the batch normalization layer (this line: x = layers.BatchNormalization()(x)) I get the following error:
InvalidArgumentError: The second input must be a scalar, but it has shape [64]
[[{{node batch_normalization_2/cond/ReadVariableOp/Switch}}]]
Any ideas?
The same code works for me.
The only lines I changed are:
model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3)
to
model.compile(optimizer=keras.optimizers.RMSprop(lr=1e-3)
(which is version specific), and
model.fit(train_dataset, epochs=3)
to
model.fit(train_dataset, epochs=3, steps_per_epoch=30)
Reason: when using iterators as input to a model, you should specify the steps_per_epoch argument.
If you just want to use sample weights, you don't have to use a tf.data.Dataset; you can simply run:
model.fit(x=x_train, y=y_train, sample_weight=sample_weight, batch_size=64, epochs=3)
and it works for me (when I change learning_rate to lr as #ASHu2 mentioned).
It gets 97% accuracy after 3 epochs:
...
57408/60000 [===========================>..] - ETA: 0s - loss: 0.1010 - sparse_categorical_accuracy: 0.9709
58816/60000 [============================>.] - ETA: 0s - loss: 0.1011 - sparse_categorical_accuracy: 0.9708
60000/60000 [==============================] - 2s 37us/sample - loss: 0.1007 - sparse_categorical_accuracy: 0.9709
I used TF 1.14.0 on windows.
The problem was solved when I updated tensorflow from version 1.14.1 to 2.0.0-rc1.
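If you want to confirm which version is actually loaded before and after the upgrade, a trivial check:
import tensorflow as tf
print(tf.__version__)  # e.g. 2.0.0-rc1 after the upgrade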