Creating a custom data generator in Keras for fit_generator() - python

I am trying to train a CNN-LSTM that reads a sequence of 6 frames at a time into a CNN (VGG16 without the top layer) and passes the extracted features to an LSTM in Keras.
The issue is that, since I need to feed 6 frames at a time, I have to reshape every group of 6 frames and add a dimension. Also, since the labels are per frame, I need to build a second array holding the label of the first frame of every sequence, and then feed both to the model (code below).
The data is far too big for model.fit(), and even when trying it on a small part of the data I get weird, horrible results, so I am trying to use model.fit_generator to stream the input to the model. But since I cannot feed the data directly as loaded from the dataset (because of the reshaping described in the first paragraph), I am trying to write my own generator. However, things are not going well and I keep getting errors saying 'tuple' is not an iterator. Does anyone know how I can fix the code to make it work?
train_batches = ImageDataGenerator().flow_from_directory(
    train_path, target_size=(224, 224),
    classes=['Bark', 'Bitting', 'Engage', 'Hidden', 'Jump', 'Stand', 'Walk'],
    batch_size=18156, shuffle=False)
valid_batches = ImageDataGenerator().flow_from_directory(
    valid_path, target_size=(224, 224),
    classes=['Bark', 'Bitting', 'Engage', 'Hidden', 'Jump', 'Stand', 'Walk'],
    batch_size=6, shuffle=False)
test_batches = ImageDataGenerator().flow_from_directory(
    test_path, target_size=(224, 224),
    classes=['Bark', 'Bitting', 'Engage', 'Hidden', 'Jump', 'Stand', 'Walk'],
    batch_size=6, shuffle=False)
def train_gen():
    n_frames = 6
    n_samples = 6  # to decide
    H = W = 224
    C = 3
    imgs, labels = next(train_batches)
    y = np.empty((n_samples, 7))
    j = 0
    for i in range(n_samples):
        y[i] = labels[j]
        j += 6
    frame_sequence = imgs.reshape(n_samples, n_frames, H, W, C)
    return frame_sequence, y
def valid_gen():
    v_frames = 6
    v_samples = 1
    H = W = 224
    C = 3
    vimgs, vlabels = next(valid_batches)
    y2 = np.empty((v_samples, 7))
    k = 0
    for l in range(v_samples):
        y2[l] = vlabels[k]
        k += 6
    valid_sequence = vimgs.reshape(v_samples, v_frames, H, W, C)
    return valid_sequence, y2
def main():
    cnn = VGG16(weights='imagenet', include_top='False', pooling='avg')
    cnn.layers.pop()
    print(cnn.summary())
    cnn.trainable = False
    video_input = Input(shape=(None, 224, 224, 3), name='video_input')
    print(video_input.shape)
    encoded_frame_sequence = TimeDistributed(cnn)(video_input)  # the output will be a sequence of vectors
    encoded_video = LSTM(256)(encoded_frame_sequence)  # the output will be a vector
    output = Dense(7, activation='relu')(encoded_video)
    video_model = Model(inputs=[video_input], outputs=output)
    tr_data = train_gen()
    vd_data = valid_gen()
    print(video_model.summary())
    imgs, labels = next(train_batches)
    vimgs, vlabels = next(valid_batches)
    print("Training ...")
    video_model.compile(Adam(lr=.001), loss='categorical_crossentropy', metrics=['accuracy'])
    video_model.fit_generator(tr_data,
                              steps_per_epoch=1513,
                              validation_data=vd_data,
                              validation_steps=431,
                              epochs=1,
                              verbose=2)
Is there a mistake in the way I define the generator?

It seems the way I defined the generators was not correct. As a Keras maintainer explained to me, the definition has two issues:
Instead of return, we need to use yield.
We need a while True loop, so the generator keeps producing batches.
Note that there were a few other errors in the rest of the code that I dealt with separately, but since this question is about the generator, I am only posting the answer for that part (there are two generators, but they are identical except for their input):
def train_gen():
    n_frames = 6
    n_samples = 5  # to decide
    H = W = 224
    C = 3
    while True:
        imgs, labels = next(train_batches)
        y = np.empty((n_samples, 7))
        j = 0
        if len(labels) == n_frames * n_samples:
            frame_sequence = imgs.reshape(n_samples, n_frames, H, W, C)
            for i in range(n_samples):
                y[i] = labels[j]
                j += 6
            yield frame_sequence, y
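With this version, main() passes the generator objects themselves to fit_generator, instead of the precomputed tuples used before. A minimal sketch of the changed call, assuming valid_gen() was rewritten the same way (the step counts are copied from the question, not verified):
video_model.fit_generator(train_gen(),
                          steps_per_epoch=1513,
                          validation_data=valid_gen(),
                          validation_steps=431,
                          epochs=1,
                          verbose=2)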

I think you should implement a class for the data generator. I found this link, it might help you: A detailed example of how to use data generators with Keras
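For reference, a minimal sketch of such a class built on keras.utils.Sequence, assuming the same 6-frames-per-clip regrouping as in the question (the class name and the wrapped iterator are illustrative, not the asker's actual code):
import numpy as np
from keras.utils import Sequence

class FrameSequenceData(Sequence):
    """Regroups single-frame batches from a directory iterator into clips."""
    def __init__(self, image_iterator, n_frames=6):
        self.it = image_iterator  # e.g. the result of flow_from_directory
        self.n_frames = n_frames

    def __len__(self):
        return len(self.it)  # one clip-batch per underlying image batch

    def __getitem__(self, idx):
        imgs, labels = self.it[idx]
        n_samples = imgs.shape[0] // self.n_frames
        # (samples, frames, H, W, C): consecutive frames form one clip
        x = imgs[:n_samples * self.n_frames].reshape(
            (n_samples, self.n_frames) + imgs.shape[1:])
        # keep the label of the first frame of each clip
        y = labels[::self.n_frames][:n_samples]
        return x, y
Such an object can be passed straight to fit_generator (or fit in newer Keras versions), and unlike a plain Python generator it can be used safely with multiprocessing.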

Related

Keras Multi Input Network, using Images and structured data: How do I build the correct input data?

I am building a multi-input network using the Keras functional API, but I struggle to find and understand the right format for my input data through the network.
I have two main inputs:
One is an image, which goes through a fine-tuned ResNet50 CNN.
The second is a simple numpy array (X_train) containing metadata about the image (position and size of the image). This one goes through a simple dense network.
I load the images from a dataframe containing the metadata and the filepath to the corresponding image.
I use ImageDataGenerator and the flow_from_dataframe method to load my images :
datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
train_flow = datagen.flow_from_dataframe(
    dataframe=df_train,
    x_col="cropped_img_filepath",
    y_col="category",
    batch_size=batch_size,
    shuffle=False,
    class_mode="categorical",
    target_size=(224, 224)
)
I can train the two networks separately using their own data; no problems so far.
The outputs of the two distinct networks are then combined by a dense network into a 10-class probability vector:
# Create the input for the final dense network using the output of both the dense MLP and CNN
combinedInput = concatenate([cnn.output, mlp.output])
x = Dense(512, activation="relu")(combinedInput)
x = Dense(256, activation="relu")(x)
x = Dense(128, activation="relu")(x)
x = Dense(32, activation="relu")(x)
x = Dense(10, activation="softmax")(x)
model = Model(inputs=[cnn.input, mlp.input], outputs=x)

# Compile the model
opt = Adam(lr=1e-3, decay=1e-3 / 200)
model.compile(loss="categorical_crossentropy",
              metrics=['accuracy'],
              optimizer=opt)

# Train the model
model_history = model.fit(x=(train_flow, X_train),
                          y=y_train,
                          epochs=1,
                          batch_size=batch_size)
However, I cannot train the overall network; I get the following error:
ValueError: Failed to find data adapter that can handle input: (<class 'tuple'> containing values of types {"<class 'keras_preprocessing.image.dataframe_iterator.DataFrameIterator'>", "<class 'numpy.ndarray'>"}), <class 'pandas.core.series.Series'>
I understand I am not using the correct format for my input data.
I can train my CNN with train_flow and my dense network with X_train, so I was hoping this would work.
Do you have any idea of how to combine image data and a numpy array into a multi-input array?
Thank you for all the information you can give me!
I finally found how to do it, drawing inspiration from the approach @Nima Aghli proposed.
Here is how I did it:
First, instantiate the preprocessing function (for me, the one used for ResNet50):
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

def preprocess_function(x):
    if x.ndim == 3:
        x = x[np.newaxis, :, :, :]
    return preprocess_input(x)

# Initializing the datagen, using the above function:
datagen = ImageDataGenerator(preprocessing_function=preprocess_function)
Then define the custom data generator, which will yield batches coupling images and metadata, while making sure never to run out of data (so that you can train for any number of epochs):
def createGenerator(dff, verif=False, batch_size=BATCH_SIZE):
    # Shuffles the dataframe, and therefore the batches as well
    dff = dff.sample(frac=1)
    # shuffle=False is EXTREMELY important to keep images and coords in the same order
    flow = datagen.flow_from_dataframe(
        dataframe=dff,
        directory=None,
        x_col="cropped_img_filepath",
        y_col="category",
        batch_size=batch_size,
        shuffle=False,
        class_mode="categorical",
        target_size=(224, 224),
        seed=42
    )
    idx = 0
    while True:
        # Get the next batch of images
        X1 = flow.next()
        # idx to reach
        end = idx + X1[0].shape[0]
        # Get the matching batch of rows from the dataframe
        X2 = dff[["x", "y", "w", "h"]][idx:end].to_numpy()
        dff_verif = dff[idx:end]
        # Update idx for the next batch
        idx = end
        # Check if we are at the end of the dataframe
        if idx == len(dff):
            idx = 0
        # Yield the image, metadata & target batches
        if verif:
            yield [X1[0], X2], X1[1], dff_verif
        else:
            yield [X1[0], X2], X1[1]  # yield both images and metadata with their mutual label
I deliberately kept the comments, as they help in grasping all the operations that are performed.
The main point/problem is to keep drawing images from the whole dataframe without ever running short, while keeping the batches the same size.
Also, we have to be careful about the order of the images/metadata, so that the right info is connected to the right image in the returned arrays.
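For completeness, a rough sketch of how the generator can then be plugged into training; the epoch count and the steps-per-epoch arithmetic are assumptions on my part, not part of the original post:
train_generator = createGenerator(df_train)
steps = (len(df_train) + BATCH_SIZE - 1) // BATCH_SIZE  # cover the whole dataframe each epoch
model_history = model.fit(train_generator,
                          steps_per_epoch=steps,
                          epochs=10)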

Character-based Text Classification with Triplet Loss

I'm trying to implement a text classifier using triplet loss to classify different job descriptions into categories, based on this paper. But whatever I do, the classifier yields very bad results.
For the embedding I followed this tutorial, and the NN architecture is based on this article.
I create my encodings using:
max_char_length = 20
group_numbers = range(0, len(job_groups))
char_vocabulary = {'PAD': 0}
X_char = []
y_temp = []
i = 1
for group, number in zip(job_groups, group_numbers):
    for job in group:
        job_cleaned = some_cleaning_function(job)
        job_enc = []
        for c in job_cleaned:
            if c in char_vocabulary.keys():
                job_enc.append(char_vocabulary[c])
            else:
                char_vocabulary[c] = i
                job_enc.append(char_vocabulary[c])
                i += 1
        X_char.append(job_enc)
        y_temp.append(number)
X_char = pad_sequences(X_char, maxlen=max_char_length, truncating='post')
My Neural Network is set up the following way:
def create_base_model():
    char_in = Input(shape=(max_char_length,), name='Char_Input')
    char_enc = Embedding(input_dim=len(char_vocabulary) + 1, output_dim=20,
                         mask_zero=True, name='Char_Embedding')(char_in)
    x = Bidirectional(LSTM(64, return_sequences=True, recurrent_dropout=0.2, dropout=0.4))(char_enc)
    x = Bidirectional(LSTM(64, return_sequences=True, recurrent_dropout=0.2, dropout=0.4))(x)
    x = Bidirectional(LSTM(64, return_sequences=True, recurrent_dropout=0.2, dropout=0.4))(x)
    x = Bidirectional(LSTM(64, return_sequences=False, recurrent_dropout=0.2, dropout=0.4))(x)
    out = Dense(128, activation="softmax")(x)
    return Model(char_in, out)

def get_siamese_triplet_char():
    anchor_input_c = Input(shape=(max_char_length,), name='Char_Input_Anchor')
    pos_input_c = Input(shape=(max_char_length,), name='Char_Input_Positive')
    neg_input_c = Input(shape=(max_char_length,), name='Char_Input_Negative')
    base_model = create_base_model()
    encoded_anchor = base_model(anchor_input_c)
    encoded_positive = base_model(pos_input_c)
    encoded_negative = base_model(neg_input_c)
    inputs = [anchor_input_c, pos_input_c, neg_input_c]
    outputs = [encoded_anchor, encoded_positive, encoded_negative]
    siamese_triplet = Model(inputs, outputs)
    siamese_triplet.add_loss(triplet_loss(outputs))
    siamese_triplet.compile(loss=None, optimizer='adam')
    return siamese_triplet, base_model
The triplet loss is defined as follows:
def triplet_loss(inputs):
    anchor, positive, negative = inputs
    positive_distance = K.square(anchor - positive)
    negative_distance = K.square(anchor - negative)
    positive_distance = K.sqrt(K.sum(positive_distance, axis=-1, keepdims=True))
    negative_distance = K.sqrt(K.sum(negative_distance, axis=-1, keepdims=True))
    loss = positive_distance - negative_distance
    loss = K.maximum(0.0, 1 + loss)
    return K.mean(loss)
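In other words, this computes the standard margin-based triplet loss, loss = mean(max(0, 1 + ||a - p|| - ||a - n||)), with Euclidean distances and a margin of 1: the anchor is pulled toward the positive and pushed at least the margin further from the negative.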
The model is then trained with:
siamese_triplet_char.fit(
    x=[Anchor_chars_train,
       Positive_chars_train,
       Negative_chars_train],
    shuffle=True, batch_size=8, epochs=22, verbose=1)
My goal is to first train the network without label data in order to structure the embedding space of the different phrases, and second, add a classification layer and create the final classifier.
My general problem is that even though the first phase shows decreasing cost values, it overfits and the validation results jump around, and the second phase fails badly, as I'm not able to train the model to actually classify.
My questions are the following:
Could someone explain the embedding architecture? What is the output dimension referring to? The individual characters? Would that even make sense? Or is there a better way to encode the input data?
How can I add validation_data to a network that does not contain labeled data? I could use validation_split, but I would prefer passing specific data to validate, as my data is stratified.
Is there a reason why the classification does not work? Applying a simple K-Nearest-Neighbor algorithm achieves at best 0.5 accuracy! Is it because of the data? Or is there a systematic error in my setup?
All ideas and suggestions are really appreciated!

fit_generator in keras, loading everything into memory

I have original data (X_train, y_train), and I am transforming this data into something else. The original data is just images with labels. The transformed data consists of pairs of images for a Siamese network, which are far greater in number and would take around 30 GB of memory, so I can't run the pairing function on the whole original dataset. So I used Keras fit_generator, thinking it would load only one batch at a time.
I ran both model.fit and model.fit_generator on sample pairs, but I observed that both use the same amount of memory. So I guess there is some problem with how my code uses fit_generator. The relevant code is below. Can you guys please help me with this?
Code Below:
def create_pairs(X_train, y_train):
    tr_pairs = []
    tr_y = []
    y_train = np.array(y_train)
    digit_indices = [np.where(y_train == i)[0] for i in list(set(y_train))]
    for i in range(len(digit_indices)):
        n = len(digit_indices[i])
        for j in range(n):
            random_index = digit_indices[i][j]
            anchor_image = X_train[random_index]
            anchor_label = y_train[random_index]
            anchor_indices = [i for i, x in enumerate(y_train) if x == anchor_label]
            negate_indices = list(set(range(len(X_train))) - set(anchor_indices))
            for k in range(j + 1, n):
                support_index = digit_indices[i][k]
                support_image = X_train[support_index]
                tr_pairs += [[anchor_image, support_image]]
                negate_index = random.choice(negate_indices)
                negate_image = X_train[negate_index]
                tr_pairs += [[anchor_image, negate_image]]
                tr_y += [1, 0]
    return np.array(tr_pairs), np.array(tr_y)
def myGenerator():
    tr_pairs, tr_y = create_pairs(X_train, y_train)
    while 1:
        for i in range(110):  # 1875 * 32 = 60000 -> # of training samples
            if i % 125 == 0:
                print("i = " + str(i))
            yield [tr_pairs[i*32:(i+1)*32][:, 0], tr_pairs[i*32:(i+1)*32][:, 1]], tr_y[i*32:(i+1)*32]

model.fit_generator(myGenerator(), steps_per_epoch=110, epochs=2,
                    verbose=1, callbacks=None,
                    validation_data=([te_pairs[:, 0], te_pairs[:, 1]], te_y),
                    validation_steps=None, class_weight=None,
                    max_queue_size=10, workers=1, use_multiprocessing=False,
                    shuffle=True, initial_epoch=0)
myGenerator returns a generator.
However, you should notice that create_pairs loads the full dataset into memory. When you call tr_pairs, tr_y = create_pairs(X_train, y_train), the dataset is loaded, so the memory resources are consumed at that point.
myGenerator simply traverses a structure that is already in memory.
The solution would be to make create_pairs a generator itself.
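A minimal sketch of that idea, reusing the class-index bookkeeping from create_pairs but yielding one batch at a time instead of materializing every pair (the random sampling strategy here is illustrative, not equivalent to the original exhaustive pairing):
import random
import numpy as np

def pair_batch_generator(X_train, y_train, batch_size=32):
    y_train = np.array(y_train)
    digit_indices = [np.where(y_train == c)[0] for c in sorted(set(y_train))]
    while True:
        pairs, targets = [], []
        while len(pairs) < batch_size:
            # pick a class, then a positive partner and a negative from another class
            ci = random.randrange(len(digit_indices))
            a, p = np.random.choice(digit_indices[ci], 2)
            cj = random.choice([j for j in range(len(digit_indices)) if j != ci])
            n = np.random.choice(digit_indices[cj])
            pairs += [[X_train[a], X_train[p]], [X_train[a], X_train[n]]]
            targets += [1, 0]
        pairs = np.array(pairs[:batch_size])
        yield [pairs[:, 0], pairs[:, 1]], np.array(targets[:batch_size])
This keeps only one batch of pairs in memory at a time, at the cost of sampling pairs randomly rather than enumerating all of them.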
If the data is a numpy array, I can suggest using HDF5 files to read chunks of data from disk:
http://docs.h5py.org/en/latest/high/dataset.html#chunked-storage
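A rough example of that approach: write the pairs to an HDF5 file once (the file and dataset names here are made up for illustration), then let the generator slice batches from disk:
import h5py

# One-time conversion; create_pairs could also be made to append incrementally
tr_pairs, tr_y = create_pairs(X_train, y_train)
with h5py.File('pairs.h5', 'w') as f:
    f.create_dataset('pairs', data=tr_pairs, chunks=True)
    f.create_dataset('labels', data=tr_y, chunks=True)

def h5_generator(path='pairs.h5', batch_size=32):
    f = h5py.File(path, 'r')  # kept open for the lifetime of the generator
    pairs, labels = f['pairs'], f['labels']
    while True:
        for i in range(0, len(labels) - batch_size + 1, batch_size):
            x = pairs[i:i + batch_size]  # slicing reads only this chunk from disk
            yield [x[:, 0], x[:, 1]], labels[i:i + batch_size]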

predict_generator and class labels

I am using ImageDataGenerator to generate new augmented images and extract bottleneck features from a pretrained model, but most of the tutorials I see on Keras sample the same number of training samples as there are images in the directory.
train_generator = train_datagen.flow_from_directory(
    train_path,
    target_size=image_size,
    shuffle=False,
    class_mode='categorical',
    batch_size=1)

bottleneck_features_train = model.predict_generator(
    train_generator, 2 * nb_train_samples // batch_size)
Suppose I want 2 times more images from the above code; how can I get the desired class labels for the features extracted from the bottleneck layer, which are stored in the tuple train_generator?
Shouldn't the code in training_generator.py at line 422,
x, _ = generator_output
do something like this:
=> x, y = generator_output
and return the tuple [np.concatenate(out) for out in all_outs], y from predict_generator,
i.e. return the corresponding class labels along with the predicted features all_outs, since there is no way to get the corresponding labels without running the generator twice?
If you're using predict, normally you simply don't want Y, because Y will be the result of the prediction. (You're not training, so you don't need the true labels.)
But you can do it yourself:
bottleneck = []
labels = []
for i in range(2 * nb_train_samples // batch_size):
    x, y = next(train_generator)
    bottleneck.append(model.predict(x))
    labels.append(y)
bottleneck = np.concatenate(bottleneck)
labels = np.concatenate(labels)
If you want it with indexing (if your generator supports that):
#...
for epoch in range(2):
    for i in range(nb_train_samples // batch_size):
        x, y = train_generator[i]
        #...

Tensorflow and reading binary data properly

I am trying to properly read my own binary data into Tensorflow, based on the Fixed length records section of this tutorial and the read_cifar10 function here. Mind you, I am new to tensorflow, so my understanding may be off.
My Data
My files are binary with float32 samples. The first 32-bit value is the label, and the remaining 256 values are the data. I want to reshape the data at the end into a [2, 128] matrix.
My Code So Far:
import tensorflow as tf
import os

def read_data(filename_queue):
    item_type = tf.float32
    label_items = 1
    data_items = 256
    label_bytes = label_items * item_type.size
    data_bytes = data_items * item_type.size
    record_bytes = label_bytes + data_bytes
    reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)
    key, value = reader.read(filename_queue)
    record_data = tf.decode_raw(value, item_type)
    # labels = tf.cast(tf.strided_slice(record_data, [0], [label_items]), tf.int32)
    label = tf.strided_slice(record_data, [0], [label_items])
    data0 = tf.strided_slice(record_data, [label_items], [label_items + data_items])
    data = tf.reshape(data0, [2, data_items // 2])
    return data, label

if __name__ == '__main__':
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # Set GPU device
    datafiles = ['train_0000.dat', 'train_0001.dat']
    num_epochs = 2
    filename_queue = tf.train.string_input_producer(datafiles, num_epochs=num_epochs, shuffle=True)
    data, label = read_data(filename_queue)
    with tf.Session() as sess:
        init = tf.global_variables_initializer()
        sess.run(init)
        (x, y) = read_data(filename_queue)
        print(y.eval())
This code hangs at print(y.eval()), but I fear I have much bigger issues than that.
Question:
When I execute this, I get a data and label tensor returned. The problem is I don't quite understand how to actually read the data from the tensor. For example, I understand the autoencoder example here; however, it has a mnist.train.next_batch(batch_size) function that is called to read the next batch. Do I need to write that for my function, or is it handled by something internal to my read_data() function? If I need to write that function, what does it look like?
Are there any other obvious things I'm missing? My goal in using this method is to reduce I/O overhead and to avoid storing all of the data in memory, since my files are quite large.
Thanks in advance.
Yes. You are pretty much done. At this point you need to:
1) Write your neural network model, which is supposed to take your data and return a label.
2) Write your cost function C, which takes the network prediction and the true label and gives you a cost.
3) Choose an optimizer.
4) Put everything together:
opt = tf.train.AdamOptimizer(learning_rate=0.001)
datafiles = ['train_0000.dat', 'train_0001.dat']
num_epochs = 2
with tf.Session() as sess:
    init = tf.global_variables_initializer()
    sess.run(init)
    filename_queue = tf.train.string_input_producer(datafiles, num_epochs=num_epochs, shuffle=True)
    data, label = read_data(filename_queue)
    example_batch, label_batch = tf.train.shuffle_batch(
        [data, label], batch_size=128,
        capacity=2000, min_after_dequeue=1000)  # capacity args are required; values are illustrative
    y_pred = model(data)
    loss = C(label, y_pred)
After which you iterate and minimize the loss with:
opt.minimize(loss)
See also tf.train.string_input_producer behavior in a loop for related information.
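As a side note, the reason the question's code hangs at print(y.eval()) is that with queue-based input the queue runners must be started before anything can be dequeued. A minimal sketch of the missing plumbing (TF 1.x API):
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())  # num_epochs creates a local variable
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        x_val, y_val = sess.run([data, label])
        print(y_val)
    finally:
        coord.request_stop()
        coord.join(threads)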
