How is flow_from_directory implemented? - python

My main question is: does it iterate over every sample in the directory on every epoch? I have a directory with 6 classes, with almost the same number of samples in each class. When I trained the model with batch_size=16 it didn't work at all: it predicted only 1 class correctly. With batch_size=128 it could predict 3 classes with high accuracy, while the other 3 never appeared in the test predictions. Why? Is every batch for each of the steps_per_epoch generated independently, so that the generator only "remembers" the samples of that one batch? That would mean it does not track previously used samples and builds each new random batch with the possibility of reusing already-seen samples while missing others. If so, it could miss the samples of a whole class, and the only way to overcome that would be to increase batch_size until a single batch covers them. I can't increase batch_size beyond 128 because there is not enough memory on my GPU.
So what should I do?
Here is my code for the ImageDataGenerator:
train_d = ImageDataGenerator(rescale=1. / 255, shear_range=0.2, zoom_range=0.1, validation_split=0.2,
                             rotation_range=10.,
                             width_shift_range=0.1,
                             height_shift_range=0.1)
train_s = train_d.flow_from_directory('./images/', target_size=(width, height),
                                      class_mode='categorical',
                                      batch_size=32, subset='training')
validation_s = train_d.flow_from_directory('./images/', target_size=(width, height), class_mode='categorical',
                                           subset='validation')
And here is the code for fit_generator:
classifier.fit_generator(train_s, epochs=20, steps_per_epoch=100, validation_data=validation_s,
                         validation_steps=20, class_weight=class_weights)

Yes, it iterates over every sample in each folder every epoch. That is the definition of an epoch: a complete pass over the whole dataset.
steps_per_epoch should be set to len(dataset) / batch_size. The only issue arises when the batch size does not exactly divide the number of samples; in that case you round steps_per_epoch up, and the last batch is smaller than batch_size.
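A minimal sketch of that rounding in plain Python (the function name is illustrative, not a Keras API):

```python
import math

def compute_steps_per_epoch(num_samples, batch_size):
    # Round up so the smaller final batch is still consumed
    # when batch_size does not divide num_samples exactly.
    return math.ceil(num_samples / batch_size)

# 1000 samples with batch_size=32 -> 31 full batches plus one batch of 8
print(compute_steps_per_epoch(1000, 32))  # -> 32
print(compute_steps_per_epoch(1024, 32))  # -> 32 (exact division)
```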

Related

WARNING:tensorflow:Your input ran out of data; interrupting training

Dataset problem in last train step
WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least steps_per_epoch * epochs batches (in this case, 2000 batches). You may need to use the repeat() function when building your dataset.
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
training_set = train_datagen.flow_from_directory('dataset/training_set',
                                                 target_size=(64, 64),
                                                 batch_size=32,
                                                 class_mode='binary')
test_set = test_datagen.flow_from_directory('dataset/test_set',
                                            target_size=(64, 64),
                                            batch_size=32,
                                            class_mode='binary')
classifer.fit_generator(training_set,
                        steps_per_epoch=(8000),
                        epochs=25,`enter code here`
                        validation_data=test_set,
                        validation_steps=2000)
You have this code:
classifer.fit_generator(training_set,
                        steps_per_epoch=(8000),
                        epochs=25,`enter code here`
                        validation_data=test_set,
                        validation_steps=2000)
The entry `enter code here` doesn't belong in model.fit_generator. Also, .fit_generator is deprecated; just use .fit. You do not need to specify steps_per_epoch or validation_steps in .fit: it will calculate them internally. However, if you do wish to specify them, use:
steps_per_epoch = total images in train set // batch_size
For validation_steps you can use similar code. However, if you want to go through the validation set exactly once per epoch, then use this:
length = total number of images in test set
valid_batch_size = sorted([int(length/n) for n in range(1, length + 1) if length % n == 0 and length/n <= 80], reverse=True)[0]
validation_steps = int(length / valid_batch_size)
Use valid_batch_size as the batch size in your test_datagen. What the code does is determine a batch size and step count such that
valid_batch_size * validation_steps = total number of images in the test set.
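As a standalone sketch of what that search does, using a hypothetical test-set length of 1116 images:

```python
# Hypothetical test-set size; any positive length works.
length = 1116

# Collect every divisor of `length` that is at most 80 (the quotient test
# length/n <= 80 keeps only the small divisors), then take the largest.
valid_batch_size = sorted(
    [int(length / n) for n in range(1, length + 1)
     if length % n == 0 and length / n <= 80],
    reverse=True)[0]
validation_steps = int(length / valid_batch_size)

print(valid_batch_size, validation_steps)  # 62 18  -> 62 * 18 == 1116
```

With these values the generator covers every validation image exactly once per epoch, with no partial batch left over.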

Keras DataGenerator with a validation set smaller than batch size make no validation

I wrote a DataGenerator and initialized a validation_generator. If the batch size specified for training is larger than the size of the validation set, no validation loss/accuracy is calculated.
If the validation set is larger, everything works fine. Specifying validation_steps does not help.
# Create data generators
training_generator = DataGenerator(partition['train'], embedding_model, **params)
validation_generator = DataGenerator(partition['validation'], embedding_model, **params)

# create LSTM
model = get_LSTM_v1(seq_length, input_dim, hot_enc_dim)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# train LSTM
history = model.fit_generator(
    generator=training_generator,
    validation_data=validation_generator,
    epochs=n_epochs,
    use_multiprocessing=True,
    workers=cpu_cores
)
DataGenerator may need to be modified so that it returns a partial batch when the validation set is smaller than the batch size.
Most of the time, the number of batches reported by a generator corresponds to the floor of the number of samples divided by the batch size. That returns zero when the batch size is bigger than the set, so no validation batch is ever produced.
You could also work around this by repeating the data so that there is enough for a full batch when needed.
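One way to sketch that fix, using a minimal stand-in class (the asker's actual DataGenerator is not shown, so the names here are hypothetical):

```python
import math

class PartialBatchGenerator:
    """Minimal stand-in for a keras.utils.Sequence-style generator that
    yields a partial final batch instead of silently dropping it."""

    def __init__(self, data, batch_size):
        self.data = data
        self.batch_size = batch_size

    def __len__(self):
        # ceil, not floor: a set smaller than batch_size still yields 1 batch
        return math.ceil(len(self.data) / self.batch_size)

    def __getitem__(self, idx):
        # Slicing past the end just truncates, giving the partial batch.
        return self.data[idx * self.batch_size:(idx + 1) * self.batch_size]

# 10 validation samples, batch size 32: floor division would report 0 batches.
gen = PartialBatchGenerator(list(range(10)), batch_size=32)
print(len(gen))      # 1 -> validation runs
print(len(gen[0]))   # 10 -> one partial batch containing every sample
```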

Prevent predict_generator from shuffling batches

I am trying to check the performance of my model on the validation dataset, so I am using predict_generator to return predictions from my validation_generator. However, I am not able to match the predictions with the true labels returned by validation_generator.classes, since the order of my predictions is mixed up.
This is how I initialize my generator:
BATCH_SIZE = 64
data_generator = ImageDataGenerator(rescale=1./255,
                                    validation_split=0.20)
train_generator = data_generator.flow_from_directory(main_path, target_size=(IMAGE_HEIGHT, IMAGE_SIZE), shuffle=False, seed=13,
                                                     class_mode='categorical', batch_size=BATCH_SIZE, subset="training")
validation_generator = data_generator.flow_from_directory(main_path, target_size=(IMAGE_HEIGHT, IMAGE_SIZE), shuffle=False, seed=13,
                                                          class_mode='categorical', batch_size=BATCH_SIZE, subset="validation")
#Found 4473 images belonging to 3 classes.
#Found 1116 images belonging to 3 classes.
Now I am using the predict_generator like so:
validation_steps_per_epoch = np.math.ceil(validation_generator.samples / validation_generator.batch_size)
predictions = model.predict_generator(validation_generator, steps=validation_steps_per_epoch)
I realize that there is a mismatch between my validation-data size (1116) and the number of samples the prediction steps cover (validation_steps_per_epoch * batch_size = 18 * 64 = 1152). Since these two don't match, I find that the output of model.predict_generator(...) is different each time I run it.
Is there any way to fix this besides changing batch_size to 1 to make sure that the generator steps through all samples?
I found someone with a similar issue here: keras predict_generator is shuffling its output when using a keras.utils.Sequence. However, their solution does not fix my problem, since I am not writing any custom functions.
There is no randomization or shuffling going on. What happens is that, since the batch size of the validation generator does not exactly divide the number of samples, the leftover samples spill over into the next time the generator is called, which throws off the alignment.
What you can do is set a batch size for the validation generator that exactly divides the number of validation samples, or set the batch size to one.
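A sketch of the first option: search for the largest batch size, up to the original 64, that divides the 1116 validation samples exactly (the helper name is hypothetical):

```python
def exact_batch_size(num_samples, cap=64):
    """Largest batch size <= cap that divides num_samples with no remainder."""
    return max(b for b in range(1, cap + 1) if num_samples % b == 0)

bs = exact_batch_size(1116)
print(bs, 1116 // bs)  # 62 18 -> 18 steps of 62 cover all 1116 samples once
```

Passing that value as batch_size to the validation flow_from_directory call makes steps * batch_size equal the sample count, so predictions line up with validation_generator.classes.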

How to properly set steps_per_epoch and validation_steps in Keras?

I've trained several models in Keras. I have 39,592 samples in my training set and 9,899 in my validation set. I used a batch size of 2.
As I was examining my code, it occurred to me that my generators may have been missing some batches of data.
This is the code for my generator:
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
val_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(224, 224),
    batch_size=batch_size,
    class_mode='categorical')
validation_generator = val_datagen.flow_from_directory(
    val_dir,
    target_size=(224, 224),
    batch_size=batch_size,
    class_mode='categorical')
I searched around to see how my generators behave, and found this answer:
what if steps_per_epoch does not fit into numbers of samples?
I calculated my steps_per_epoch and validation_steps this way:
steps_per_epoch = int(number_of_train_samples / batch_size)
val_steps = int(number_of_val_samples / batch_size)
Using the code in this link with my own batch size and number of samples, I got these results:
"missing the last batch" for train_generator and "weird behavior" for val_generator.
I'm afraid that I will have to retrain my models. What values should I choose for steps_per_epoch and validation_steps? Is there a way to use exact values for these variables (other than setting batch_size to 1 or removing some of the samples)? I have several other models with different numbers of samples, and I think they've all been missing some batches. Any help would be much appreciated.
Two related questions:
1) Regarding the models I have already trained, are they reliable and properly trained?
2) What would happen if I set these variables using the following values:
steps_per_epoch = np.ceil(number_of_train_samples / batch_size)
val_steps = np.ceil(number_of_val_samples / batch_size)
Will my model see some of the images more than once per epoch during training and validation? Or is this the solution to my question?
Since a Keras data generator is meant to loop infinitely, steps_per_epoch indicates how many times you will fetch a new batch from the generator during a single epoch. Therefore, if you simply take steps_per_epoch = int(number_of_train_samples / batch_size), your last, smaller-than-batch_size batch is discarded. In your case, however, losing at most one image per training epoch is not a big deal, and the same goes for the validation step. To sum up: your models were trained [almost :)] correctly, because the number of lost elements is minor.
According to the ImageDataGenerator implementation (https://keras.io/preprocessing/image/#imagedatagenerator-class), if your number of steps is larger than expected, then after reaching the last sample you will receive new batches from the beginning, because your data is looped over. In your case, with steps_per_epoch = np.ceil(number_of_train_samples / batch_size) you would receive one additional batch per epoch, which contains repeated images.
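The looping behaviour can be sketched with modular indexing (a simplified stand-in: the real Keras generator yields a shorter final batch before restarting, but the effect of over-counting steps is the same):

```python
from itertools import count, islice

def infinite_batches(samples, batch_size):
    # Keras-style generators loop forever; the index wraps modulo the dataset,
    # so asking for too many steps re-serves samples from the beginning.
    n = len(samples)
    for step in count():
        start = (step * batch_size) % n
        yield [samples[(start + i) % n] for i in range(batch_size)]

batches = infinite_batches(list(range(5)), batch_size=2)
print(list(islice(batches, 3)))  # [[0, 1], [2, 3], [4, 0]] -- wraps around
```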
In addition to Greeser's answer:
To avoid losing training samples, you can calculate your steps with this function:
def cal_steps(num_images, batch_size):
    # calculates steps for generator
    steps = num_images // batch_size
    # adds 1 to the generator steps if the steps multiplied by
    # the batch size is less than the total training samples
    return steps + 1 if (steps * batch_size) < num_images else steps
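Applied to the sample counts from the question (39,592 training and 9,899 validation images with a batch size of 2), the function gives:

```python
def cal_steps(num_images, batch_size):
    # same logic as above: add one step when a partial batch remains
    steps = num_images // batch_size
    return steps + 1 if (steps * batch_size) < num_images else steps

print(cal_steps(39592, 2))  # 19796: divides exactly, no extra step needed
print(cal_steps(9899, 2))   # 4950: one extra step picks up the last image
```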

What does nb_epoch in a neural network stand for?

I'm currently beginning to discover the Keras library for deep learning. It seems that in the training phase a certain number of epochs is chosen, but I don't know on which assumptions this choice is based.
In the MNIST example the number of epochs chosen is 4:
model.fit(X_train, Y_train,
          batch_size=128, nb_epoch=4,
          show_accuracy=True, verbose=1,
          validation_data=(X_test, Y_test))
Could someone tell me why and how do we choose a correct number of epochs ?
Starting with Keras 2.0, the nb_epoch argument has been renamed to epochs everywhere.
Neural networks are trained iteratively, making multiple passes over the entire dataset. Each pass over the entire dataset is referred to as an epoch.
There are two possible ways to choose an optimal number of epochs:
1) Set epochs to a large number and stop training when validation accuracy or loss stops improving: so-called early stopping.
from keras.callbacks import EarlyStopping

early_stopping = EarlyStopping(monitor='val_loss', patience=4, mode='auto')
model.fit(X_train, Y_train,
          batch_size=128, epochs=500,
          show_accuracy=True, verbose=1,
          validation_data=(X_test, Y_test), callbacks=[early_stopping])
2) Treat the number of epochs as a hyperparameter and select the best value from a set of trials (runs) over a grid of epoch values.
It seems you might be using an old version of Keras. nb_epoch refers to the number of epochs and has been replaced by epochs; if you look here you will see that it has been deprecated.
One epoch means that you have trained on the whole dataset (all records) once. If you have 384 records, one epoch means your model has been trained on all 384 of them.
Batch size is the amount of data your model uses in a single iteration. A batch size of 128 means your model takes 128 records at once and does a single forward pass and backward pass (backpropagation); this is called one iteration.
To break it down with this example: in the first iteration your model takes 128 records [the 1st batch] out of the whole 384 and does a forward and backward pass. For the second batch it takes records 129 to 256 and does another iteration. Then the 3rd batch, records 257 to 384, gives the 3rd iteration. At that point we say it has completed one epoch.
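The arithmetic from that walkthrough, as a quick sketch:

```python
import math

records, batch_size, epochs = 384, 128, 4
iterations_per_epoch = math.ceil(records / batch_size)
total_iterations = iterations_per_epoch * epochs

print(iterations_per_epoch)  # 3 iterations make up one epoch
print(total_iterations)      # 12 iterations over 4 epochs
```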
The number of epochs tells the model how many times to repeat all of those steps before stopping.
There is no single correct way to choose the number of epochs; it is found by experimenting. Usually, when the model stops learning (the loss is not going down anymore) you decrease the learning rate; if the loss still doesn't go down after that, and the results look more or less as you expected, you stop at the epoch where the model stopped learning.
I hope it helps
In neural networks, an epoch is equivalent to training the network on each data point once.
The number of epochs, nb_epoch, is hence how many times you re-use your data during training.
