Has the TensorFlow training output style changed? - python

I have been working on my deep learning model for a while. Today, when I started training, I noticed that only a fraction of my dataset appears to be used, and the number shown per epoch changes with the batch size.
print(mixture_train_shaped.shape)
print(clear_train_shaped.shape)

model.fit(mixture_train_shaped, clear_train_shaped,
          validation_split=0.2,
          epochs=40,
          batch_size=32,
          shuffle=True,
          verbose=1)
When I run this code, this is what I see.
(51226, 129, 8, 4)
(51226, 129, 1, 1)
Epoch 1/40
1281/1281 [===========]
Epoch 2/40
1281/1281 [===========]
In my previous training outputs, the model would use the entire set in one epoch. In the example above, though, the training set has 40,980 samples and each epoch shows only 40,980/32 = 1281, as if every epoch were training on just a fraction of the data.
Train on 47 samples, validate on 6 samples
Epoch 1/5000
47/47 [==========]
I haven't changed the code. Is every epoch still using the entire training set or has it changed?

In previous versions, the progress bar showed the training set size (number of samples). With this update, it shows the number of batches (steps) instead, but there is no change in how many samples are actually trained in each epoch.
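To make the arithmetic concrete, here is a small sketch using the shapes from the question (the split size is only an assumption based on validation_split=0.2): the figure on the left of the progress bar is now the step (batch) count, i.e. the training sample count divided by the batch size, rounded up.

import math

total_samples = 51226
train_samples = int(total_samples * 0.8)   # validation_split=0.2 leaves 40,980 samples for training
batch_size = 32

steps_per_epoch = math.ceil(train_samples / batch_size)
print(steps_per_epoch)                     # 1281 -- matches the "1281/1281" in the progress bar

So 1281 steps of 32 samples each still cover the full 40,980 training samples every epoch.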

Related

Progress bar not shown during training, Python

I'm trying to train a 2D U-Net for a segmentation task.
I execute this line of code:
model.fit(training_generator, epochs=params["nEpoches"],
          validation_data=validation_generator, verbose=1,
          use_multiprocessing=True, workers=6,
          callbacks=[callbacks_list, csv_logger])
Where
training_generator = instance of DataGenerator(x_training, y_train_flat, **params), with the image and mask arrays as parameters of this class.
epochs = 2
validation_generator = instance of DataGenerator(x_validation, y_validation_flat, **params), with the validation data.
checkPoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=False, mode='min', period=1)
callbacks_list = checkPoint
With the verbose=1 parameter I think I should see a progress bar showing the training status for each epoch, but the only thing I see is Epoch 1/2, without any bar. So I can't tell whether the training process is going on or whether it's stuck somewhere.
According to the TensorFlow documentation:
steps_per_epoch:-
Integer or None. Total number of steps (batches of samples) before
declaring one epoch finished and starting the next epoch. When
training with input tensors such as TensorFlow data tensors, the
default None is equal to the number of samples in your dataset divided
by the batch size, or 1 if that cannot be determined. If x is a
tf.data dataset, and 'steps_per_epoch' is None, the epoch will run
until the input dataset is exhausted. When passing an infinitely
repeating dataset, you must specify the steps_per_epoch argument.
validation_steps:-
Only relevant if validation_data is provided and is a tf.data dataset.
Total number of steps (batches of samples) to draw before stopping
when performing validation at the end of every epoch. If
'validation_steps' is None, validation will run until the
validation_data dataset is exhausted. In the case of an infinitely
repeated dataset, it will run into an infinite loop. If
'validation_steps' is specified and only part of the dataset will be
consumed, the evaluation will start from the beginning of the dataset
at each epoch. This ensures that the same validation samples are used
every time.
In your case, training is actually going on; as rightly mentioned by @Kaveh, the model does not know how many steps one epoch should have and runs into an infinite loop. Check your batch size, and adding steps_per_epoch and validation_steps to model.fit() as shown below should resolve your issue.
model.fit(training_generator,
          steps_per_epoch=len(training_generator) // training_generator.batch_size,
          epochs=params["nEpoches"],
          validation_data=validation_generator,
          validation_steps=len(validation_generator) // validation_generator.batch_size,
          verbose=1,
          use_multiprocessing=True, workers=6,
          callbacks=[callbacks_list, csv_logger])
For more information, you can refer to the documentation here.
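As a side note, and only as an assumption about how DataGenerator is written: if it subclasses keras.utils.Sequence and its __len__ returns the number of batches, Keras can infer steps_per_epoch by itself, so the explicit division by batch_size may not be needed. A minimal sketch of such a generator:

import numpy as np
from tensorflow import keras

class DataGenerator(keras.utils.Sequence):
    # Hypothetical minimal generator: __len__ returns the number of batches per epoch,
    # which Keras uses for the progress bar when steps_per_epoch is None.
    def __init__(self, x, y, batch_size=8):
        self.x, self.y, self.batch_size = x, y, batch_size

    def __len__(self):
        return int(np.ceil(len(self.x) / self.batch_size))

    def __getitem__(self, idx):
        sl = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        return self.x[sl], self.y[sl]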

Load and continue running from the epoch at which they stopped

I'm training a huge model. Unfortunately, the runtime environment breaks off about halfway through and I have to restart training. I save the model after each epoch.
My question is: say I've trained 5 out of 10 epochs.
How do I load the model and indicate that I was at the 5th epoch, so that it only has to run the remaining 5 epochs? I know that I can load the model, but how can I tell it that I was at epoch 5 and it now only needs to go through 5 more epochs, since I wanted 10 in total?
cp_callback = [tf.keras.callbacks.ModelCheckpoint(
                   filepath='/saved/model.h5',
                   verbose=1,
                   save_weights_only=True,
                   save_freq='epoch'),
               tf.keras.callbacks.EarlyStopping(monitor='loss', patience=2)]
You can save the epoch number in a separate file (a pickle or JSON file).
import json

train_parameters = {'iter': iteration, 'batch_size': batch_size}

# saving
json.dump(train_parameters, open(output_path + "trainParameters.txt", 'w'))

# loading
train_parameters = json.load(open(path_to_saved_model + "trainParameters.txt"))

input = tf.random.uniform([8, 24], 0, 100, dtype=tf.int32)
model.compile(optimizer=optimizer, loss=training_loss, metrics=evaluation_accuracy)
hist = model.fit((input, input), input, epochs=1,
                 steps_per_epoch=1, verbose=0)
model.load_weights(path_to_saved_model + 'saved.h5')
But if you also need to resume the learning-rate schedule, save the optimizer state as well; that state contains the iteration number (the number of batches passed).
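To actually pick up where training stopped, one option is to reload the weights and pass initial_epoch to model.fit, so the epoch counter resumes instead of starting over. This is only a minimal sketch; the paths, the 'epoch' key and the x_train/y_train names are placeholders, not part of the original code:

import json
import tensorflow as tf

weights_path = '/saved/model.h5'                     # written by the ModelCheckpoint callback above
params_path = '/saved/trainParameters.txt'           # hypothetical bookkeeping file

train_parameters = json.load(open(params_path))
last_epoch = train_parameters['epoch']               # e.g. 5, the last completed epoch

model.load_weights(weights_path)                     # model must be built/compiled the same way as before
model.fit(x_train, y_train,
          epochs=10,                                 # total target number of epochs
          initial_epoch=last_epoch,                  # resume at epoch 6, so only 5 more epochs run
          callbacks=cp_callback)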

Keras fit_generator(datagen.flow(X_train)) function takes fewer pictures than X_train has?

I am using the Keras fit_generator(datagen.flow()) function to train my Inception model, and I am confused about the number of images it uses in every epoch. Can anyone please help me understand how this works? My code is below.
I am using this keras documentation.
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(rotation_range = 15, horizontal_flip = True)
# Fitting the model
history = inc_model.fit_generator(datagen.flow(X_train, train_labels, batch_size=10),
                                  epochs=20,
                                  validation_data=(X_test, test_labels),
                                  callbacks=None)
Now, the total number of images in X_train is 4676. However, every time I run this history line, I get
Epoch 1/20
936/936 [========================] - 167s 179ms/step - loss: 1.4236 - acc: 0.3853 - val_loss: 1.0858 - val_acc: 0.5641
Why is it not taking all of my X_train images?
Also, if I change batch_size from 10 to, say, 15, it shows an even smaller number, such as
Epoch 1/20
436/436
Thank you.
The 936 and 436 actually refer to batches of samples per epoch, not individual images. You set your batch size to 10 and 15, so in each case the model is trained on 936 × 10 and 436 × 15 samples per epoch. That is even more samples than your original training set contains, since ImageDataGenerator creates additional training instances by applying transformations to existing ones.
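If you want each epoch to correspond to exactly one pass over X_train, you can fix the step count yourself. A minimal sketch using the same variables as the question:

batch_size = 10
steps = len(X_train) // batch_size     # 4676 // 10 = 467 batches, roughly one pass over X_train

history = inc_model.fit_generator(
    datagen.flow(X_train, train_labels, batch_size=batch_size),
    steps_per_epoch=steps,
    epochs=20,
    validation_data=(X_test, test_labels))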

What does nb_epoch in a neural network stand for?

I'm currently beginning to discover the Keras library for deep learning. It seems that in the training phase a certain number of epochs is chosen, but I don't know what assumption this choice is based on.
In the MNIST example the number of epochs chosen is 4:
model.fit(X_train, Y_train,
          batch_size=128, nb_epoch=4,
          show_accuracy=True, verbose=1,
          validation_data=(X_test, Y_test))
Could someone tell me why, and how we choose a correct number of epochs?
Starting with Keras 2.0, the nb_epoch argument has been renamed to epochs everywhere.
Neural networks are trained iteratively, making multiple passes over the entire dataset. Each pass over the entire dataset is referred to as an epoch.
There are two possible ways to choose an optimum number of epochs:
1) Set epochs to a large number and stop training when validation accuracy or loss stops improving: so-called early stopping.
from keras.callbacks import EarlyStopping

early_stopping = EarlyStopping(monitor='val_loss', patience=4, mode='auto')
model.fit(X_train, Y_train,
          batch_size=128, epochs=500,
          show_accuracy=True, verbose=1,
          validation_data=(X_test, Y_test),
          callbacks=[early_stopping])
2) Consider the number of epochs as a hyperparameter and select the best value based on a set of trials (runs) over a grid of epoch values.
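As a rough sketch of option 2 (not from the original answer): retrain a fresh model for each candidate epoch count and keep the value with the best validation accuracy. build_model is a hypothetical helper that returns a freshly compiled model.

best_epochs, best_val_acc = None, 0.0
for n_epochs in [5, 10, 20, 40]:
    model = build_model()                      # hypothetical: returns a new compiled model
    history = model.fit(X_train, Y_train,
                        batch_size=128, epochs=n_epochs,
                        validation_data=(X_test, Y_test), verbose=0)
    val_acc = history.history['val_acc'][-1]   # 'val_accuracy' on newer Keras versions
    if val_acc > best_val_acc:
        best_epochs, best_val_acc = n_epochs, val_acc
print(best_epochs, best_val_acc)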
It seems you might be using an old version of Keras; nb_epoch refers to the number of epochs and has been replaced by epochs.
If you look here you will see that it has been deprecated.
One epoch means that you have trained on the whole dataset (all records) once. If you have 384 records, one epoch means that your model has been trained on all 384 records.
Batch size is the amount of data your model uses in a single iteration. In this case, a batch size of 128 means that your model takes 128 records at once and does a single forward pass and backward pass (backpropagation); this is called one iteration.
To break it down with this example: in the first iteration your model takes 128 records (the 1st batch) out of the whole 384 and does a forward pass and a backward pass.
In the second batch, it takes records 129 to 256 and does another iteration.
Then, with the 3rd batch, records 257 to 384, it performs the 3rd iteration.
At that point we say it has completed one epoch.
The number of epochs tells the model how many times to repeat this whole process before stopping.
There is no single correct way to choose the number of epochs; it is something you find by experimenting. Usually, when the model stops learning (the loss is not going down anymore) you decrease the learning rate; if the loss still doesn't go down after that and the results look more or less as you expected, you select the epoch at which the model stopped learning.
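For example, the reduce-then-stop recipe described above can be automated with standard Keras callbacks; this is only a sketch, with model, X_train, Y_train and the validation data as placeholders:

from keras.callbacks import ReduceLROnPlateau, EarlyStopping

callbacks = [
    # Halve the learning rate once val_loss has stopped improving for 3 epochs
    ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3),
    # Give up entirely after 6 further epochs without improvement
    EarlyStopping(monitor='val_loss', patience=6),
]

model.fit(X_train, Y_train,
          batch_size=128, epochs=500,
          validation_data=(X_test, Y_test),
          callbacks=callbacks)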
I hope it helps
In neural networks, an epoch is equivalent to training the network on each data point once.
The number of epochs, nb_epoch, is hence how many times you re-use your data during training.

How many iterations are needed to train TensorFlow on the entire MNIST data set (60,000 images)?

The MNIST set consists of 60,000 images for the training set. While training my TensorFlow model, I want to run the train step so that the model is trained on the entire training set. The deep learning example on the TensorFlow website uses 20,000 iterations with a batch size of 50 (totaling 1,000,000 training examples). When I try more than 30,000 iterations, my number predictions fail (it predicts 0 for all handwritten digits). My question is: how many iterations should I use with a batch size of 50 to train the TensorFlow model on the entire MNIST set?
self.mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

for i in range(FLAGS.training_steps):
    batch = self.mnist.train.next_batch(50)
    self.train_step.run(feed_dict={self.x: batch[0], self.y_: batch[1], self.keep_prob: 0.5})
    if (i + 1) % 1000 == 0:
        saver.save(self.sess, FLAGS.checkpoint_dir + 'model.ckpt', global_step=i)
With machine learning you tend to have serious cases of diminishing returns. For example, here is a list of test-set accuracies from one of my CNNs:
Epoch 0 current test set accuracy : 0.5399
Epoch 1 current test set accuracy : 0.7298
Epoch 2 current test set accuracy : 0.7987
Epoch 3 current test set accuracy : 0.8331
Epoch 4 current test set accuracy : 0.8544
Epoch 5 current test set accuracy : 0.8711
Epoch 6 current test set accuracy : 0.888
Epoch 7 current test set accuracy : 0.8969
Epoch 8 current test set accuracy : 0.9064
Epoch 9 current test set accuracy : 0.9148
Epoch 10 current test set accuracy : 0.9203
Epoch 11 current test set accuracy : 0.9233
Epoch 12 current test set accuracy : 0.929
Epoch 13 current test set accuracy : 0.9334
Epoch 14 current test set accuracy : 0.9358
Epoch 15 current test set accuracy : 0.9395
Epoch 16 current test set accuracy : 0.942
Epoch 17 current test set accuracy : 0.9436
Epoch 18 current test set accuracy : 0.9458
As you can see, the returns start to fall off after ~10 epochs*; however, this may vary based on your network and learning rate. How many epochs are worth running depends on how critical the task is and how much time you have, but I have found 20 to be a reasonable number.
*I have always used the word epoch to mean one entire run through a data set, though I am unsure how accurate that definition is; each epoch here is ~429 training steps with batches of size 128.
I think that depends on your stopping criteria. You can stop training when the loss doesn't improve, or you can have a validation data set and stop training when the validation accuracy no longer improves.
You can use something like nepoch_no_imprv and set it to, say, 3. That simply means that if there is no improvement of >1% within 3 consecutive epochs, the iterations stop.
nepoch_no_imprv = 0
best_score = 0  # track the best f1 seen so far
with tf.Session() as sess:
    sess.run(cls.init)
    if cls.config.reload == 'True':
        print(cls.config.reload)
        cls.logger.info("Reloading the latest trained model...")
        saver.restore(sess, cls.config.model_output)
    cls.add_summary(sess)
    for epoch in range(cls.config.nepochs):
        cls.logger.info("Epoch {:} out of {:}".format(epoch + 1, cls.config.nepochs))
        dev = train
        acc, f1 = cls.run_epoch(sess, train, dev, tags, epoch)
        cls.config.lr *= cls.config.lr_decay
        if f1 >= best_score:
            nepoch_no_imprv = 0
            if not os.path.exists(cls.config.model_output):
                os.makedirs(cls.config.model_output)
            saver.save(sess, cls.config.model_output)
            best_score = f1
            cls.logger.info("- new best score!")
        else:
            nepoch_no_imprv += 1
            if nepoch_no_imprv >= cls.config.nepoch_no_imprv:
                cls.logger.info("- early stopping {} epochs without improvement".format(
                    nepoch_no_imprv))
                break
Sequence Tagging (GitHub)
I have found that with MNIST, training on 3,833 images per epoch (validating on 56,167, because 60,000**0.75 is just over 3,833) tends to converge well before 500 epochs. By "converge," I mean that validation loss does not decrease for 50 consecutive epochs of training with batch size 16; see this repo for an example of using early stopping with tf.keras. This mattered a lot to me in this case because I was doing model search and did not have time to train a single model for very long.
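For reference, the stopping rule described above ("no decrease in validation loss for 50 consecutive epochs") maps directly onto a tf.keras EarlyStopping callback; this is only a minimal sketch, with the model and the data split left as placeholders:

import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',
    patience=50,                  # stop after 50 epochs without improvement in val_loss
    restore_best_weights=True)    # roll back to the best weights seen so far

model.fit(x_train, y_train,
          batch_size=16,
          epochs=500,
          validation_data=(x_val, y_val),
          callbacks=[early_stop])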
