WiFi gesture recognition with a CNN (deep learning, machine learning) - python

I have a dataset that looks like this:
(7500, 200, 30, 3)
i.e. 7500 samples, each a tensor of shape (200, 30, 3), of CSI data (a kind of WiFi channel data used for gesture recognition). There are 150 different labels (gestures), and the aim is to classify them.
I used a CNN in Keras to classify, and I ran into huge overfitting:
from keras.layers import (Input, Conv2D, MaxPooling2D, MaxPool2D,
                          BatchNormalization, Flatten, Dense, Dropout)
from keras.models import Model

def create_DL_model():
    # input layer
    csi = Input(shape=(200, 30, 3))
    # first feature extractor
    x = Conv2D(64, kernel_size=3, activation='relu', name='layer1-01')(csi)
    x = BatchNormalization()(x)
    x = MaxPooling2D(pool_size=(2, 2), name='layer1-02')(x)
    x = Conv2D(64, kernel_size=3, activation='relu', name='layer1-03')(x)
    x = BatchNormalization()(x)
    x = MaxPooling2D(pool_size=(2, 2), name='layer1-04')(x)
    x = BatchNormalization()(x)
    x = Conv2D(64, kernel_size=3, activation='relu', name='layer1-05', padding='same')(x)
    x = Conv2D(32, kernel_size=3, activation='relu', name='layer1-06', padding='same')(x)
    x = Conv2D(64, (3, 3), padding='same', activation='relu', name='layer-01')(x)
    x = BatchNormalization()(x)
    x = MaxPool2D(pool_size=(2, 2), name='layer-02')(x)
    x = Conv2D(32, (3, 3), padding='same', activation='relu', name='layer-03')(x)
    x = BatchNormalization()(x)
    x = MaxPool2D(pool_size=(2, 2), name='layer-04')(x)
    x = Flatten()(x)
    x = Dense(16, activation='relu')(x)
    # note: in the original code this Dropout layer was created but never
    # applied to x, so it had no effect; it must be called on the tensor
    x = Dropout(0.50, seed=1)(x)
    probability = Dense(150, activation='softmax')(x)
    model = Model(inputs=csi, outputs=probability)
    model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
    return model
As you can see, I used dropout on the dense layer, early stopping, and batch normalization to fight the overfitting; as you can also see, the problem is still there.
After cross-validation I have accuracy around 70%. Some papers got 90% accuracy; given that we have 150 labels, 90% seems like a really great result, but they used meta-learning, which I could not use. Is there anything you can recommend?
Many thanks

The accuracy-vs-epoch graph signals the overfitting issue present in your model. This is due to the small number of training samples (7500 / 150 = 50 per class). One possible solution is applying data augmentation, which allows you to build a powerful image classifier using only very few training examples.
Data structure
Store your data according to the following structure:
data/
    train/
        class1/
            class1_img001.jpg
            class1_img002.jpg
        class2/
            class2_img001.jpg
            class2_img002.jpg
        ...
        class150/
            class150_img001.jpg
            class150_img002.jpg
            ...
    validation/
        class1/
            class1_img001.jpg
            class1_img002.jpg
        class2/
            class2_img001.jpg
            class2_img002.jpg
        ...
        class150/
            class150_img001.jpg
            class150_img002.jpg
            ...
You can do:
def create_DL_model(img_height, img_width, channel):
    # input layer
    csi = Input(shape=(img_height, img_width, channel))
    # first feature extractor
    x = Conv2D(64, kernel_size=3, activation='relu', name='layer1-01')(csi)
    x = BatchNormalization()(x)
    x = MaxPooling2D(pool_size=(2, 2), name='layer1-02')(x)
    x = Conv2D(64, kernel_size=3, activation='relu', name='layer1-03')(x)
    x = BatchNormalization()(x)
    x = MaxPooling2D(pool_size=(2, 2), name='layer1-04')(x)
    x = BatchNormalization()(x)
    x = Conv2D(64, kernel_size=3, activation='relu', name='layer1-05', padding='same')(x)
    x = Conv2D(32, kernel_size=3, activation='relu', name='layer1-06', padding='same')(x)
    x = Conv2D(64, (3, 3), padding='same', activation='relu', name='layer-01')(x)
    x = BatchNormalization()(x)
    x = MaxPool2D(pool_size=(2, 2), name='layer-02')(x)
    x = Conv2D(32, (3, 3), padding='same', activation='relu', name='layer-03')(x)
    x = BatchNormalization()(x)
    x = MaxPool2D(pool_size=(2, 2), name='layer-04')(x)
    x = Flatten()(x)
    x = Dense(16, activation='relu')(x)
    x = Dropout(0.50, seed=1)(x)  # apply the dropout layer to the graph
    probability = Dense(150, activation='softmax')(x)
    model = Model(inputs=csi, outputs=probability)
    return model
from keras.preprocessing.image import ImageDataGenerator

batch_size = 32
img_height = 200
img_width = 30
channel = 3

model = create_DL_model(img_height, img_width, channel)

# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')

# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1./255)

# this is a generator that will read pictures found in
# subfolders of 'data/train', and indefinitely generate
# batches of augmented image data
train_generator = train_datagen.flow_from_directory(
    'data/train',  # this is the target directory
    target_size=(img_height, img_width),  # all images will be resized
    batch_size=batch_size,
    class_mode='categorical')  # class_mode takes 'categorical', not the loss name

# this is a similar generator, for validation data
validation_generator = test_datagen.flow_from_directory(
    'data/validation',
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical')

model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
model.fit_generator(
    train_generator,
    steps_per_epoch=7500 // batch_size,
    epochs=50,
    validation_data=validation_generator,
    validation_steps=YOUR_VALIDATION_SIZE // batch_size)  # use YOUR_VALIDATION_SIZE as per your validation data
model.save('model-e50-b32.h5')  # always save your weights after or during training
The number of CNN layers can be decreased, while monitoring accuracy and loss, since we are only feeding 7500 training images.
The above code is not tested; please share your errors for further advice.
More about data augmentation and how to apply it is here.
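Note that your CSI samples live in a NumPy tensor rather than in image files on disk, and geometric transforms such as rotation or flipping may not be physically meaningful for time-by-subcarrier CSI data. As a hedged alternative sketch (the array names X_train and y_train are assumptions, not from the question), you can augment in memory with ImageDataGenerator.flow using mild shifts only:
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

# Sketch: augment the in-memory CSI tensors directly instead of reading
# image files from disk. X_train has shape (7500, 200, 30, 3) and
# y_train is one-hot encoded with shape (7500, 150).
datagen = ImageDataGenerator(width_shift_range=0.05,
                             height_shift_range=0.05)
train_flow = datagen.flow(X_train, y_train, batch_size=32)

model = create_DL_model(200, 30, 3)
model.compile(loss='categorical_crossentropy', optimizer='sgd',
              metrics=['accuracy'])
model.fit_generator(train_flow,
                    steps_per_epoch=len(X_train) // 32,
                    epochs=50)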

Related

CNN accuracy: 0.0000e+00 for multi-classification on images

I have the following code that produces my horrible accuracy dilemma. Has anyone else encountered this issue for a multi-classification task (49 different classes to classify)?
I am running ResNet50 on top of my CNN model with softmax as the last activation function; my loss is categorical_crossentropy and my optimizer is Adam.
What might I be doing wrong?
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Activation, MaxPooling2D, Flatten, Dense, Dropout

## Build CNN architecture
model1 = Sequential()
model1.add(Conv2D(32, (3,3), strides=1, input_shape = (720, 720, 3)))
model1.add(Activation('relu'))
model1.add(Conv2D(32, (3,3), strides=1, padding="same"))
model1.add(Activation('relu'))
model1.add(MaxPooling2D(pool_size=(2,2)))
model1.add(Conv2D(64, (3,3), strides=1, padding="same"))
model1.add(Activation('relu'))
model1.add(Conv2D(64, (3,3), strides=1, padding="same"))
model1.add(Activation('relu'))
model1.add(MaxPooling2D(pool_size=(2,2)))
model1.add(Flatten())
model1.add(Dense(200))
model1.add(Activation('relu'))
model1.add(Dense(200))
model1.add(Dropout(0.24))
model1.add(Activation('relu'))
model1.add(Dense(49, activation='softmax'))
model1.summary()
# Image data generator for on-the-fly image augmentation
directory = '/home/carlini-TF2/data/train/'
batch_size = 64
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=90.,
    shear_range=0.2,
    zoom_range=[0.8, 1.2],
    horizontal_flip=True,
    validation_split=0.2,
    preprocessing_function=tf.keras.applications.resnet50.preprocess_input)
train_generator = train_datagen.flow_from_directory(directory=directory,
                                                    subset='training',
                                                    target_size=(720, 720),
                                                    shuffle=True,
                                                    seed=42,
                                                    color_mode='rgb',
                                                    class_mode='categorical',
                                                    batch_size=batch_size)

valid_directory = '/home/carlini-TF2/data/test/'
valid_generator = train_datagen.flow_from_directory(directory=valid_directory,
                                                    target_size=(720, 720),
                                                    color_mode="rgb",
                                                    batch_size=batch_size,
                                                    class_mode="categorical",
                                                    subset='validation',
                                                    shuffle=True,
                                                    seed=42)

## Compile and train Neural Network
METRICS = [
    tf.keras.metrics.Accuracy(name='accuracy'),
    tf.keras.metrics.Precision(name='precision'),
    tf.keras.metrics.Recall(name='recall')]

# optimal optimizer FN | loss FN to work with accuracy metric
model1.compile(loss=tf.keras.losses.CategoricalCrossentropy(from_logits=False),
               optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
               metrics=METRICS)

# stop training when loss gets worse after consecutive epochs
callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)

# fit model with augmented training set and validation set | shuffle batch
history = model1.fit(train_generator,
                     validation_data=valid_generator,
                     steps_per_epoch=train_generator.n // batch_size,
                     validation_steps=valid_generator.n // batch_size,
                     shuffle=True, callbacks=[callback],
                     epochs=50)
The issue is that ResNet50 was being used only for data augmentation (via its preprocess_input function) and not in the CNN architecture itself. In order to reach a somewhat robust model, the following code is needed.
We can throw out the previous architecture and use a very simple head on top of ResNet50, since this gives conclusive results.
We must use the Functional API, since ResNet50 was built on it:
import numpy as np
import tensorflow as tf

data_bias = np.log(1802./4657)
initializer = tf.keras.initializers.Constant(data_bias)

resnet50_imagenet_model = tf.keras.applications.ResNet50(weights='imagenet', include_top=False, input_shape=(720, 720, 3))
resnet50_imagenet_model.trainable = False

# Flatten the output of the ResNet base
flattened = tf.keras.layers.Flatten()(resnet50_imagenet_model.output)

# Fully connected output layer with the 49 different labels
fc2 = tf.keras.layers.Dense(49, activation='softmax', bias_initializer=initializer, name="AddedDense2")(flattened)

model1 = tf.keras.models.Model(inputs=resnet50_imagenet_model.input, outputs=fc2)

Validation and training curve remains flat

I'm building a TensorFlow/Keras multiclassification model to identify 33 different species of birds. I have around 33,000 images; I've cleaned out the corrupt images and reduced the image size from ~4347x3260 down to 512x512 while retaining the aspect ratio.
I'm using a batch size of 32 and an input image size of 250x250. I've split the data into training/validation/test sets in an 80:10:10 ratio: 26,253 training images, 3,266 validation images and 3,311 test images.
I've set the seed for the training image generator, but the problem I've run into is that when I change the number of layers or the architecture, the model stops improving at roughly the same level, around 50% accuracy, and the validation/training curves remain flat, albeit a gap starts to open up as seen in the figures below, a sign of overfitting, though the gap more or less stays constant.
[Figures: updated validation/training accuracy and loss curves]
So I plotted the optimized learning-rate curve shown below, and it is flat. I chose 1e-4 as my base learning rate and 1e-2 as my max learning rate, and implemented a cyclical learning rate (CLR) as described by Smith. You can see the results below. It does improve the validation accuracy, to around 58% at present, and it hasn't really started to overfit yet; in fact all the curves are only slightly overfitting besides the original baseline. Maybe if I reduced the learning-rate interval, the plot would give a better view of the curve.
Would it be worth training for longer, or can someone advise on the next steps to take? Any advice is welcome.
I've added updates to the graphs.
batch_size = 32
epochs = 2000
IMG_HEIGHT = 250
IMG_WIDTH = 250
# count_data_items, data_dir, run_logdir, checkpoint_path, n_val,
# num_classes and CyclicLR are defined elsewhere in the asker's code
STEPS_PER_EPOCH = count_data_items(os.path.join(data_dir, 'train')) // batch_size

training_dir = 'train_test_val/train'
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=45,
    zoom_range=0.15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.15,
    horizontal_flip=True,
    vertical_flip=True,
    fill_mode="reflect"
)
print("The number of training images: ")
train_datagen = train_datagen.flow_from_directory(
    directory=training_dir,
    shuffle=True,
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    batch_size=batch_size,
    seed=123,
    class_mode='categorical'
)

test_dir = 'train_test_val/test'
print("The number of test images: ")
# All images will be rescaled by 1./255
test_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = test_datagen.flow_from_directory(
    directory=test_dir,
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    batch_size=batch_size,
    #seed=123,
    class_mode='categorical'
)

val_dir = 'train_test_val/val'
print("The number of validation images: ")
# All images will be rescaled by 1./255
val_datagen = ImageDataGenerator(rescale=1./255)
val_datagen = val_datagen.flow_from_directory(
    directory=val_dir,
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    batch_size=batch_size,
    #seed=123,
    class_mode='categorical'
)

clr_step_size = int(4 * STEPS_PER_EPOCH)
base_lr = 1e-4
max_lr = 1e-2
mode = 'triangular'
# The mode is set to triangular: that's equal to linear mode.

def get_callbacks():
    return [
        tf.keras.callbacks.TensorBoard(log_dir=run_logdir, histogram_freq=0, embeddings_freq=0, update_freq='epoch'),
        tf.keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=21, verbose=1),
        #tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=8, verbose=1),
        tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path, monitor='val_loss', save_best_only=True, verbose=1),
        CyclicLR(base_lr=base_lr, max_lr=max_lr, step_size=clr_step_size, mode=mode)
        #tf.keras.callbacks.CSVLogger(filename=csv_logger, separator=',', append=True)
    ]

def fit_model(model, n_epochs, initial_epoch=0, batch_size=batch_size):
    print("Fitting model on training data")
    history = model.fit(
        train_datagen,
        steps_per_epoch=STEPS_PER_EPOCH,
        epochs=n_epochs,
        validation_data=val_datagen,
        validation_steps=n_val // batch_size,
        callbacks=get_callbacks(),
        verbose=1,
        initial_epoch=initial_epoch
    )

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPooling2D(pool_size=(2,2)),
    tf.keras.layers.Conv2D(64, (3,3), padding='same', activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPooling2D(pool_size=(2,2)),
    tf.keras.layers.Conv2D(128, (3,3), padding='same', activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPooling2D(pool_size=(2,2)),
    tf.keras.layers.Conv2D(256, (3,3), padding='same', activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPooling2D(pool_size=(2,2)),
    tf.keras.layers.Conv2D(512, (3,3), padding='same', activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPooling2D(pool_size=(2,2)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(num_classes, activation='softmax')
])
model.compile(optimizer=optimizers.RMSprop(),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
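The CyclicLR callback used in get_callbacks comes from an external implementation. If it is unavailable, here is a hedged sketch of Smith's triangular policy as a plain Keras callback (the class name SimpleCyclicLR is illustrative):
import numpy as np
import tensorflow as tf

class SimpleCyclicLR(tf.keras.callbacks.Callback):
    # Triangular cyclical learning rate: the LR ramps linearly from
    # base_lr up to max_lr and back down over 2 * step_size batches.
    def __init__(self, base_lr, max_lr, step_size):
        super().__init__()
        self.base_lr, self.max_lr, self.step_size = base_lr, max_lr, step_size
        self.iteration = 0

    def on_train_batch_begin(self, batch, logs=None):
        cycle = np.floor(1 + self.iteration / (2 * self.step_size))
        x = np.abs(self.iteration / self.step_size - 2 * cycle + 1)
        lr = self.base_lr + (self.max_lr - self.base_lr) * max(0.0, 1 - x)
        tf.keras.backend.set_value(self.model.optimizer.lr, lr)
        self.iteration += 1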
Edit (20/03/2021): Applied an InceptionV3 transfer-learning model and the accuracy still stalls out at 50%.
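For reference, a hedged sketch of what such an InceptionV3 transfer-learning setup might look like (the head layers are assumptions, not the asker's exact code):
base = tf.keras.applications.InceptionV3(weights='imagenet',
                                         include_top=False,
                                         input_shape=(IMG_HEIGHT, IMG_WIDTH, 3),
                                         pooling='avg')
base.trainable = False  # freeze the pretrained convolutional base
model = tf.keras.models.Sequential([
    base,
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(num_classes, activation='softmax')
])
Note that InceptionV3 was pretrained on inputs scaled to the -1 to +1 range, so tf.keras.applications.inception_v3.preprocess_input would replace the rescale=1./255 in the generators above.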

Getting very poor accuracy on stanford_dogs dataset

I'm trying to train a model on the stanford_dogs dataset to classify 120 dog breeds, but my code is acting strange.
I downloaded the image data from http://vision.stanford.edu/aditya86/ImageNetDogs/images.tar
Then I ran the following code to split each breed's folder into training and testing folders:
dataset_dict = {}
source_path = 'C:/Users/visha/Downloads/stanford_dogs/dataset'
dir_root = os.getcwd()
dataset_folders = [x for x in os.listdir(os.path.join(dir_root, source_path))
                   if os.path.isdir(os.path.join(dir_root, source_path, x))]
# create_folder and split_data are the asker's own helper functions
for category in dataset_folders:
    dataset_dict[category] = {
        'source_path': os.path.join(dir_root, source_path, category),
        'train_path': create_folder(new_path='C:/Users/visha/Downloads/stanford_dogs/train',
                                    folder_type='train',
                                    data_class=category),
        'validation_path': create_folder(new_path='C:/Users/visha/Downloads/stanford_dogs/validation',
                                         folder_type='validation',
                                         data_class=category)}

for key in dataset_dict:
    print("Splitting Category {} ...".format(key))
    split_data(source_path=dataset_dict[key]['source_path'],
               train_path=dataset_dict[key]['train_path'],
               validation_path=dataset_dict[key]['validation_path'],
               split_size=0.7)
I fed the images through the network after some image augmentation, using softmax activation in the final layer and categorical_crossentropy loss.
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import RMSprop
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(120, activation='softmax')
])
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

TRAINING_DIR = 'C:/Users/visha/Downloads/stanford_dogs/train'
train_datagen = ImageDataGenerator(rescale=1./255, rotation_range=40,
                                   width_shift_range=0.2, height_shift_range=0.2,
                                   shear_range=0.2, zoom_range=0.2,
                                   horizontal_flip=True, fill_mode='nearest')
train_generator = train_datagen.flow_from_directory(TRAINING_DIR,
                                                    batch_size=10,
                                                    class_mode='categorical',
                                                    target_size=(150, 150))

VALIDATION_DIR = 'C:/Users/visha/Downloads/stanford_dogs/validation'
validation_datagen = ImageDataGenerator(rescale=1./255, rotation_range=40,
                                        width_shift_range=0.2, height_shift_range=0.2,
                                        shear_range=0.2, zoom_range=0.2,
                                        horizontal_flip=True, fill_mode='nearest')
validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,
                                                              batch_size=10,
                                                              class_mode='categorical',
                                                              target_size=(150, 150))

history = model.fit(train_generator,
                    epochs=10,
                    verbose=1,
                    validation_data=validation_generator)
But the code is not working as intended. The val_accuracy after 10 epochs is something like 4.756.
For validation data you should not do any image augmentation; just rescale. In the validation flow_from_directory, set shuffle=False. Be advised that the Stanford Dogs dataset is very difficult: to achieve a reasonable degree of accuracy you will need a much more complex model. I recommend you consider transfer learning using the MobileNet model. The code below shows how to do that.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adamax

base_model = tf.keras.applications.mobilenet.MobileNet(include_top=False,
                                                       input_shape=(150, 150, 3),
                                                       pooling='max',
                                                       weights='imagenet',
                                                       dropout=.4)
x = base_model.output
x = keras.layers.BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(x)
x = Dense(1024, activation='relu')(x)
x = Dropout(rate=.3, seed=123)(x)
output = Dense(120, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=output)
model.compile(Adamax(lr=.001), loss='categorical_crossentropy', metrics=['accuracy'])
I forgot to mention that MobileNet was trained on images with pixel values in the range -1 to +1. So in the ImageDataGenerator include
preprocessing_function=tf.keras.applications.mobilenet.preprocess_input
This scales the pixels for you, so you do not need
rescale=1./255
Note that rescale can only multiply pixel values, not shift them, so rescale alone cannot map images into the -1 to +1 range. If you prefer not to use preprocess_input, an equivalent is
preprocessing_function=lambda x: x/127.5 - 1
which will rescale the values to between -1 and +1.
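Putting this together, a hedged sketch of the generators with MobileNet preprocessing (the paths are taken from the question; the augmentation settings are assumptions):
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Both generators scale pixels to the -1 to +1 range that MobileNet
# expects; note there is no rescale argument here.
train_datagen = ImageDataGenerator(
    preprocessing_function=tf.keras.applications.mobilenet.preprocess_input,
    rotation_range=40, width_shift_range=0.2, height_shift_range=0.2,
    shear_range=0.2, zoom_range=0.2, horizontal_flip=True)
validation_datagen = ImageDataGenerator(
    preprocessing_function=tf.keras.applications.mobilenet.preprocess_input)

train_generator = train_datagen.flow_from_directory(
    'C:/Users/visha/Downloads/stanford_dogs/train',
    target_size=(150, 150), class_mode='categorical')
validation_generator = validation_datagen.flow_from_directory(
    'C:/Users/visha/Downloads/stanford_dogs/validation',
    target_size=(150, 150), class_mode='categorical', shuffle=False)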

Validation Accuracy Stuck, Accuracy low

I want to create a machine learning model with TensorFlow which detects flowers. I went out into nature and took pictures of 4 different species (~600 images per class; one class has 700).
I load these images with the TensorFlow ImageDataGenerator:
train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.15,
                                   brightness_range=[0.7, 1.4],
                                   fill_mode='nearest',
                                   vertical_flip=True,
                                   horizontal_flip=True,
                                   rotation_range=15,
                                   width_shift_range=0.1,
                                   height_shift_range=0.1,
                                   validation_split=0.2)

train_generator = train_datagen.flow_from_directory(
    pfad,
    target_size=(imageShape[0], imageShape[1]),
    batch_size=batchSize,
    class_mode='categorical',
    subset='training',
    seed=1,
    shuffle=False,
    #save_to_dir=r'G:\test'
)

validation_generator = train_datagen.flow_from_directory(
    pfad,
    target_size=(imageShape[0], imageShape[1]),
    batch_size=batchSize,
    shuffle=False,
    seed=1,
    class_mode='categorical',
    subset='validation')
Then I am creating a simple model looking like this:
model = tf.keras.Sequential([
    keras.layers.Conv2D(128, (3,3), activation='relu', input_shape=(imageShape[0], imageShape[1], 3)),
    keras.layers.MaxPooling2D(2,2),
    keras.layers.Dropout(0.5),
    keras.layers.Conv2D(256, (3,3), activation='relu'),
    keras.layers.MaxPooling2D(2,2),
    keras.layers.Conv2D(512, (3,3), activation='relu'),
    keras.layers.MaxPooling2D(2,2),
    keras.layers.Flatten(),
    keras.layers.Dense(280, activation='relu'),
    keras.layers.Dense(4, activation='softmax')
])

opt = tf.keras.optimizers.SGD(learning_rate=0.001, decay=1e-5)
model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])
Then I start the training process (on CPU):
history = model.fit(
    train_generator,
    steps_per_epoch=train_generator.samples // batchSize,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // batchSize,
    epochs=200, callbacks=[checkpoint, early, tensorboard], workers=-1)
The result should be that my validation accuracy improves, but it starts at 0.3375 and stays at that level for the whole training process. The validation loss (1.3737) decreases by only 0.001. The training accuracy starts at 0.15 but does increase.
Why is my validation accuracy stuck?
Am I using the right loss? Or did I build my model wrong? Is my TensorFlow train generator one-hot encoding the labels?
Thanks
I solved the problem by using RMSprop() without any parameters.
So I changed from:
opt = tf.keras.optimizers.SGD(learning_rate=0.001, decay=1e-5)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
to:
opt = tf.keras.optimizers.RMSprop()
model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])
This is a similar example, except that where you have 4 categorical classes, the example below is binary. You may want to change the loss to categorical cross-entropy, the class_mode from binary to categorical in the train and test generators, and the final dense layer activation to softmax. I am still able to use model.fit_generator().
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense, Dropout
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.preprocessing.image import ImageDataGenerator

image_dataGen = ImageDataGenerator(rotation_range=20,
                                   width_shift_range=0.2, height_shift_range=0.2, shear_range=0.1,
                                   zoom_range=0.1, fill_mode='nearest', horizontal_flip=True,
                                   vertical_flip=True, rescale=1/255)

train_images = image_dataGen.flow_from_directory(train_path, target_size=image_shape[:2],
                                                 color_mode='rgb', class_mode='binary')
test_images = image_dataGen.flow_from_directory(test_path, target_size=image_shape[:2],
                                                color_mode='rgb', class_mode='binary',
                                                shuffle=False)

model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(3,3), input_shape=image_shape, activation='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Conv2D(filters=48, kernel_size=(3,3), input_shape=image_shape, activation='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(units=128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', metrics=['accuracy'], optimizer='adam')

early_stop = EarlyStopping(monitor='val_loss', patience=2)  # assumed definition; the original omitted it
results = model.fit_generator(train_images, epochs=10, callbacks=[early_stop],
                              validation_data=test_images)
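For the asker's 4-class case, the hedged changes to this example would look like:
# Use class_mode='categorical' in both flow_from_directory calls, then:
model.add(Dense(units=4, activation='softmax'))  # replaces the single sigmoid unit
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')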
Maybe your learning rate is too high.
Try learning rate = 0.000001, and if that does not work, try another optimizer like Adam.
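A hedged sketch of that suggestion:
# Much lower learning rate for SGD, or switch to Adam entirely.
opt = tf.keras.optimizers.SGD(learning_rate=0.000001)
# or (the Adam learning rate here is an assumed default):
opt = tf.keras.optimizers.Adam(learning_rate=0.0001)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])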
Use model.fit_generator() instead of model.fit(). The points below could also be helpful.
In order to use .flow_from_directory, you must organize the images into sub-directories; this is an absolute requirement, otherwise the method won't work. Each directory should only contain images of one class, so one folder per class of images. Also, could you check that the paths for the training and test data are correct? They cannot point to the same location. I have used the ImageDataGenerator class for a classification problem myself. You could also try changing the optimizer to 'Adam'.
Structure needed:
Image Data Folder/
    Class 1/
        0.jpg
        1.jpg
        ...
    Class 2/
        0.jpg
        1.jpg
        ...
    ...
    Class n/

How to fix 'ValueError: Empty Training Data' error in tensorflow

I am new to TensorFlow and Keras. I am trying to train a model to identify different images of rock, paper and scissors. I am using an online tutorial for that, and they provided me with a Google Colab worksheet. When I train the model on Google Colab everything works fine, but if I try training the model on my machine, it gives me this error:
ValueError: Empty training data
I have tried changing the batch size and also tried changing the number of images in the dataset, but it doesn't help (and it shouldn't).
Here is my code:
###### ROCK PAPER SCISSORS #######
import os
import numpy as np
import cv2
import tensorflow as tf
import keras_preprocessing
from keras_preprocessing import image
from keras_preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
# import matplotlib.image as mpimg

# Provide the path to the directory of the classes
rock_dir = os.path.join('/media/visheshchanana/New Volume/Projects/datasets/RPS/rps/rock')
paper_dir = '/media/visheshchanana/New Volume/Projects/datasets/RPS/rps/paper'
scissors_dir = '/media/visheshchanana/New Volume/Projects/datasets/RPS/rps/scissors'

rock_files = os.listdir(rock_dir)
# print(rock_files[:10])
paper_files = os.listdir(paper_dir)
# print(paper_files[:10])
scissors_files = os.listdir(scissors_dir)
# print(scissors_files[:10])
# Use the augmentation tool to vary the images so that we can have a better classifier
TRAINING_DIR = "/media/visheshchanana/New Volume/Projects/datasets/RPS/rps"
training_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')

# Provide the path to the validation dataset
VALIDATION_DIR = "/media/visheshchanana/New Volume/Projects/datasets/RPS/RPS_validation"
validation_datagen = ImageDataGenerator(rescale=1./255)

train_generator = training_datagen.flow_from_directory(
    TRAINING_DIR,
    target_size=(150,150),
    class_mode='categorical'
)
validation_generator = validation_datagen.flow_from_directory(
    VALIDATION_DIR,
    target_size=(150,150),
    class_mode='categorical'
)
model = tf.keras.models.Sequential([
    # Note the input shape is the desired size of the image: 150x150 with 3 colour channels
    # This is the first convolution
    tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    # The second convolution
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    # The third convolution
    tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    # The fourth convolution
    tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    # Flatten the results to feed into a DNN
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),
    # 512 neuron hidden layer
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax')
])

model.summary()
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
history = model.fit_generator(train_generator, epochs=5, validation_data=validation_generator, verbose=1)
The dataset is the same as the one used in the Google Colab. I can't figure out the reason behind this error.
There might be other reasons for this error, but I realized that I had a batch size that was greater than my sample size.
I had the same problem: my model trains and gives this error (ValueError: Empty training data) at the end of the first epoch. I figured out that it was because there was no data in the validation path.
Check if steps_per_epoch is not set to 0 (e.g. because of integer division)
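For example, a hedged guard against the integer-division case:
# If the sample count is smaller than the batch size, samples // batch_size
# is 0 and Keras raises the empty-training-data error; clamp to at least 1.
steps_per_epoch = max(1, train_generator.samples // batch_size)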
