Face recognition with Keras - Python

Below is my face recognition model. I run into several issues when training it on my data. My dataset contains images of me. When I train it, the validation accuracy is 100%, yet its predictions are bad. What can I do to solve this problem?
from keras import layers
from keras import models

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Dropout(0.5))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Dropout(0.5))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
print(model.summary())

from keras import optimizers
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])

from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1./255)
val_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(150, 150),
    batch_size=20)
validation_generator = val_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=20)

history = model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    epochs=3,
    validation_data=validation_generator,
    validation_steps=50)

model.save('/home/monojit/Desktop/me3.h5')

How large is the dataset you are using? A small dataset might be the problem, or your model architecture if the model is not generalizing well. Also, look into image augmentation with ImageDataGenerator; see the blog post linked below.
If the purpose of this project is to achieve as high an accuracy as possible, without explicitly learning how CNNs and their different layers work, then I would suggest the following. Since you are working with images, you don't need to reinvent the wheel: take a pre-trained convolutional neural network and train that on your images. This will give you much higher accuracy in fewer epochs than an untrained network. A great blog post can be found here: Keras Cats vs Dogs.
That tutorial is cats vs. dogs, but you can use (almost, depending on your input images) the exact same code for your problem.
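A minimal sketch of that approach, combining a frozen pre-trained conv base with augmentation, assuming the same train_dir/validation_dir layout and binary labels as above (layer sizes and augmentation values are illustrative, not the tutorial's exact code):

# Sketch: VGG16 conv base as a frozen feature extractor, plus augmentation
# on the training data only (illustrative hyperparameters).
from keras import layers, models, optimizers
from keras.applications import VGG16
from keras.preprocessing.image import ImageDataGenerator

conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))
conv_base.trainable = False  # keep the pretrained weights fixed

model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=2e-5),
              metrics=['acc'])

# Augment only the training data; validation data is just rescaled.
train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=40,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
val_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_dir, target_size=(150, 150), batch_size=20, class_mode='binary')
validation_generator = val_datagen.flow_from_directory(
    validation_dir, target_size=(150, 150), batch_size=20, class_mode='binary')

history = model.fit_generator(train_generator,
                              steps_per_epoch=100,
                              epochs=30,
                              validation_data=validation_generator,
                              validation_steps=50)

Once the frozen base converges, a few of its top convolutional layers can optionally be unfrozen and fine-tuned with a very low learning rate.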

Python - Mask Detection Neural Network using Keras

So this is my first real try at neural networks with Keras. I've been trying to make a classifier which decides whether someone is wearing a mask. Below I provide my model. I achieved a training accuracy of about 87 percent and a validation accuracy of 85 percent, and it mostly performs well on my own images too. I created my own dataset with about 600 images for each of the two classes. However, I have some questions.
Is it correct? Any suggested improvements? As I said, this is my first neural network project, so I don't really know if I did anything wrong (which I most likely did).
How would I implement this to predict on a camera feed and not only on images? Would I just grab the frames from, for example, OpenCV and predict on those? Any review and/or help is highly appreciated.
This is my model:
from keras.layers import Dense, Input, Dropout, GlobalAveragePooling2D, Flatten, Conv2D, BatchNormalization, Activation, MaxPooling2D
from keras.models import Model, Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import RMSprop
from sklearn.model_selection import train_test_split

NAME = "mask-detection"
tensorboard = TensorBoard(log_dir="logs/{}".format(NAME))

train_datagen = ImageDataGenerator(rescale=1 / 255)
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    'data/train',
    target_size=(100, 100),
    batch_size=32,
    class_mode='categorical',
    color_mode="grayscale"
)
label_map = train_generator.class_indices

validation_generator = test_datagen.flow_from_directory(
    'data/test',
    target_size=(100, 100),
    batch_size=32,
    class_mode='categorical',
    color_mode="grayscale"
)

model = Sequential()
model.add(Conv2D(100, (3, 3), padding='same', input_shape=(100, 100, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(100, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(50, activation='relu'))
model.add(Dense(2, activation='softmax'))

model.compile(
    optimizer=RMSprop(lr=0.0001),
    loss="binary_crossentropy",
    metrics=["accuracy"])

model.fit(train_generator, epochs=20, callbacks=[tensorboard])
model.evaluate(validation_generator)
model.save('saved_model/model')
model.summary()
print(label_map)
And with this I predict the images:
import numpy as np
import tensorflow as tf  # needed for tf.keras.models.load_model
from keras.preprocessing import image

model = tf.keras.models.load_model('saved_model/model')
predictions = ["Mask", "No Mask"]

def predict_mask(img):
    x = image.load_img(img, color_mode="grayscale", target_size=(100, 100))
    x = image.img_to_array(x)
    x = np.expand_dims(x, axis=0)
    x = np.array(x).astype("float32") / 255
    x = x.reshape([1, 100, 100, 1])
    classes = model.predict(x)
    return predictions[np.argmax(classes)]

img = "test.png"
print(predict_mask(img))
There are a number of things you can do. If you want to keep your existing model, I recommend the Keras callbacks ReduceLROnPlateau and ModelCheckpoint. The first gives you an adjustable learning rate: set it up to monitor validation loss, and, as in the code below, if the validation loss fails to improve on an epoch the learning rate is reduced by 50%. This lets you start with a larger learning rate and have it reduced automatically in later epochs. The second saves the model with the lowest validation loss; typical usage is also shown in the code below. Documentation for these callbacks is here. After training, load that saved model to do predictions. If you want better results, I recommend you try transfer learning. Many models are available, with documentation here. I prefer the MobileNet model because it has only about 4 million trainable parameters versus roughly 140 million for VGG16 and is about as accurate in most cases. The code below shows typical use.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adamax

rlrp = tf.keras.callbacks.ReduceLROnPlateau(
    monitor='val_loss', factor=0.5, patience=1, verbose=0, mode='auto',
    min_delta=0.0001, cooldown=0, min_lr=0)
checkpoint_filepath = '/tmp/checkpoint'
mcp = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_filepath, monitor='val_loss', verbose=0,
    save_best_only=True, save_weights_only=False, mode='auto',
    save_freq='epoch', options=None)
callbacks = [rlrp, mcp]

# image_size: the input size used by your generators (e.g. 224); the
# generators must yield RGB images to match the 3-channel input below.
mobile = tf.keras.applications.mobilenet.MobileNet(
    include_top=False, input_shape=(image_size, image_size, 3),
    pooling='max', weights='imagenet',
    alpha=1, depth_multiplier=1, dropout=.5)
x = mobile.layers[-1].output
x = keras.layers.BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(x)
predictions = Dense(2, activation='softmax')(x)
model = Model(inputs=mobile.input, outputs=predictions)
for layer in model.layers:
    layer.trainable = True
model.compile(Adamax(lr=.05), loss='categorical_crossentropy', metrics=['accuracy'])
data = model.fit(x=train_generator, epochs=20, verbose=1,
                 callbacks=callbacks, validation_data=validation_generator,
                 shuffle=True)
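Regarding the second part of the question (predicting on a camera feed): yes, grabbing frames with OpenCV and preprocessing them exactly like single images works. A rough sketch, assuming OpenCV (cv2) is installed and the grayscale 100x100 model saved above:

# Rough sketch: classify webcam frames with the saved model (assumes OpenCV).
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model('saved_model/model')
labels = ["Mask", "No Mask"]

cap = cv2.VideoCapture(0)  # default camera
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # model was trained on grayscale
    gray = cv2.resize(gray, (100, 100)).astype("float32") / 255.0
    x = gray.reshape(1, 100, 100, 1)  # batch of one
    pred = labels[np.argmax(model.predict(x))]
    cv2.putText(frame, pred, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("mask detection", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()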

What Can I do to improve the 96% (f-score) in my CNN Keras?

I'm running a project with roughly 22,000 images (11,000 per class) using ResNet50 fine-tuning. This is my code:
from keras.applications.resnet50 import ResNet50
from keras.layers import Dense, Dropout, Flatten
from keras.models import Model
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator

base_model = ResNet50(weights='imagenet', include_top=True, input_shape=(224, 224, 3))

head_model = base_model.get_layer("conv5_block1_1_conv").output
head_model = Dropout(0.75)(head_model)
head_model = Flatten()(head_model)
head_model = Dense(1, activation="sigmoid")(head_model)

model = Model(inputs=base_model.input, outputs=head_model)
model.summary()

for layer in base_model.layers:
    layer.trainable = False

adam = Adam(lr=0.001)
model.compile(optimizer=adam, loss='binary_crossentropy', metrics=['accuracy'])

train_datagen = ImageDataGenerator()
train_generator = train_datagen.flow_from_directory(TRAIN_DIR,
                                                    target_size=(224, 224),
                                                    batch_size=50,
                                                    class_mode='binary')

model.fit_generator(train_generator, steps_per_epoch=100)
model.save("asd.h5")
With this model I reached an f-score of 96%. What methods can I apply to improve its accuracy? I have already tried including a colormap as preprocessing and adding Dense layers.
There are a lot of techniques:
You can change the structure of the model: add or remove some layers (and not only Dense layers), or use another pretrained model.
Change the optimizer. For example, besides Adam, another popular optimizer is RMSprop. You can also try tuning the optimizer's hyperparameters.
Preprocess and augment the data, e.g. with zoom, shear, flips and so on; a sketch is shown below.
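A minimal sketch of the augmentation idea from the last point, assuming the same TRAIN_DIR and binary setup as in the question (the augmentation values are illustrative, and the preprocessing should stay consistent with whatever the rest of the pipeline uses):

# Illustrative augmentation on the training generator only.
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rotation_range=20,
                                   zoom_range=0.2,
                                   shear_range=0.2,
                                   width_shift_range=0.1,
                                   height_shift_range=0.1,
                                   horizontal_flip=True)

train_generator = train_datagen.flow_from_directory(TRAIN_DIR,
                                                    target_size=(224, 224),
                                                    batch_size=50,
                                                    class_mode='binary')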

Bad training results when I am trying to find out defected and non-defected solar cells

I am trying to classify which solar cells are defected. I have a huge dataset of both defected and non-defected solar cells. Following a few suggestions from research papers, I have been using the VGG16 model for training, but even after 3 epochs it is showing 100% accuracy and I don't know why. Is there any other way to solve this problem, any other algorithm?
I am uploading some of the defected cells which I have in my dataset.
from keras.layers import Input, Lambda, Dense, Flatten
from keras.models import Model
from keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
import numpy as np
from glob import glob
import matplotlib.pyplot as plt

# re-size all the images to this
IMAGE_SIZE = [224, 224]

train_path = 'Datasets/Train'
valid_path = 'Datasets/Test'

# add preprocessing layer to the front of VGG
vgg = VGG16(input_shape=IMAGE_SIZE + [3], weights='imagenet', include_top=False)

# don't train existing weights
for layer in vgg.layers:
    layer.trainable = False

# useful for getting number of classes
folders = glob('Datasets/Train/*')

# our layers - you can add more if you want
x = Flatten()(vgg.output)
# x = Dense(1000, activation='relu')(x)
prediction = Dense(len(folders), activation='softmax')(x)

# create a model object
model = Model(inputs=vgg.input, outputs=prediction)

# view the structure of the model
model.summary()

# tell the model what cost and optimization method to use
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy']
)

train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)

training_set = train_datagen.flow_from_directory('Datasets/Train',
                                                 target_size=(224, 224),
                                                 batch_size=32,
                                                 class_mode='categorical')
test_set = test_datagen.flow_from_directory('Datasets/Test',
                                            target_size=(224, 224),
                                            batch_size=32,
                                            class_mode='categorical')

# fit the model
r = model.fit_generator(
    training_set,
    validation_data=test_set,
    epochs=10,
    steps_per_epoch=len(training_set),
    validation_steps=len(test_set)
)

# loss (save before show, otherwise the saved figure is blank)
plt.plot(r.history['loss'], label='train loss')
plt.plot(r.history['val_loss'], label='val loss')
plt.legend()
plt.savefig('LossVal_loss')
plt.show()

# accuracies
plt.plot(r.history['acc'], label='train acc')
plt.plot(r.history['val_acc'], label='val acc')
plt.legend()
plt.savefig('AccVal_acc')
plt.show()

model.save('defect_features_new_model.h5')
It feels like the images are too simple and do not contain any complicated structure, so they tend to make the VGG16 model overfit; VGG16 is generally trained on much more complicated data. You can try defining your own convolutional neural network. Since you are already using Keras, you can create a Sequential model of your own with as few as 2 or 3 convolutional layers. Please refer to the linked guide on the Keras Sequential model.
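A rough sketch of such a small custom network, assuming 224x224 RGB inputs and the same two-class categorical generators as in the question (filter counts and layer sizes are illustrative):

# Small custom CNN (illustrative) instead of VGG16 for a simple two-class problem.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dropout(0.5),
    Dense(64, activation='relu'),
    Dense(2, activation='softmax')   # matches the categorical generators above
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])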
Your data looks very simple to the network, so it is not unexpected that you achieve high accuracy. Remember that the network trains in batches, so for every 32 images it goes through a backpropagation cycle and updates the weights accordingly. If you have a lot of images, which you say you do, then you are executing a lot of weight updates.
I do not see a problem here. You are not overtraining, in the sense that your validation accuracy is 100%. Certainly you could get good results with a simpler model, but why bother? Your results are what you would want.

VGG16 for gender detection (male, female)

We used VGG16, froze all layers except the last 4, and retrained those on a gender dataset of 12k male and 12k female images taken from the IMDB dataset. It gives very low accuracy, especially for male: on female test data it outputs female, but on male test data it gives the same (female) output.
from keras import models, layers, optimizers
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image

vgg_conv = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Freeze the layers except the last 4 layers
for layer in vgg_conv.layers[:-4]:
    layer.trainable = False

# Create the model
model = models.Sequential()

# Add the vgg convolutional base model
model.add(vgg_conv)

# Add new layers
model.add(layers.Flatten())
model.add(layers.Dense(4096, activation='relu'))
model.add(layers.Dense(4096, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(2, activation='softmax'))

nTrain = 16850
nTest = 6667

train_datagen = image.ImageDataGenerator(rescale=1./255)
test_datagen = image.ImageDataGenerator(rescale=1./255)

batch_size = 12
batch_size1 = 12

train_generator = train_datagen.flow_from_directory(train_dir,
                                                    target_size=(224, 224),
                                                    batch_size=batch_size,
                                                    class_mode='categorical',
                                                    shuffle=False)
test_generator = test_datagen.flow_from_directory(test_dir,
                                                  target_size=(224, 224),
                                                  batch_size=batch_size1,
                                                  class_mode='categorical',
                                                  shuffle=False)

model.compile(optimizer=optimizers.RMSprop(lr=1e-6),
              loss='categorical_crossentropy',
              metrics=['acc'])

history = model.fit_generator(train_generator,
                              steps_per_epoch=train_generator.samples/train_generator.batch_size,
                              epochs=3,
                              validation_data=test_generator,
                              validation_steps=test_generator.samples/test_generator.batch_size,
                              verbose=1)

model.save('gender.h5')
Testing Code:
from keras.models import load_model
from keras.preprocessing.image import load_img, img_to_array
from keras.applications.vgg16 import preprocess_input, decode_predictions

model = load_model('age.h5')
img = load_img('9358807_1980-12-28_2010.jpg', target_size=(224, 224))
img = img_to_array(img)
img = img.reshape((1, img.shape[0], img.shape[1], img.shape[2]))
img = preprocess_input(img)
yhat = model.predict(img)
print(yhat.size)
label = decode_predictions(yhat)
label = label[0][0]
print('%s (%.2f%%)' % (label[1], label[2]*100))
Firstly, you are saving the model as gender.h5 but during testing you are loading the model age.h5. Probably you have pasted different code for the testing here.
Coming to improving the accuracy of the program:
Most importantly, you are using loss='categorical_crossentropy'; change it to loss='binary_crossentropy' in model.compile, as you have just 2 classes. Your compile call would then look like model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']).
Also change class_mode='categorical' to class_mode='binary' in flow_from_directory.
As categorical_crossentropy goes hand in hand with a softmax activation in the last layer, if you change the loss to binary_crossentropy the last activation should also be changed to sigmoid, so the last layer should be Dense(1, activation='sigmoid').
You have added 2 Dense layers of 4096 units; the connection between them alone adds 4096 * 4096 = 16,777,216 weights to be learnt by the model. Reduce them, for example to 1024 and 512 respectively.
You have added a Dropout layer of 0.5, which switches off 50% of the neurons during each update. That is a large fraction; it is better to drop the Dropout layer and use it only if your model is overfitting.
Set batch_size = 1. As you have very little input, let every epoch have as many steps as there are input records.
Use data augmentation techniques such as horizontal_flip, vertical_flip, shear_range and zoom_range in ImageDataGenerator to generate new batches of training and validation images during every epoch.
Train your model for a larger number of epochs. You are training for only epochs=3, which is too few for learning the weights. Train for epochs=50 and trim the number later.
Hope this answers your question. Happy Learning.
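A sketch pulling the loss/activation, layer-size, and augmentation suggestions above together (the exact sizes and augmentation values are illustrative):

# Sketch combining the suggestions above: smaller head, sigmoid output,
# binary loss, and augmented training data (illustrative values).
from keras import models, layers
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image

vgg_conv = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in vgg_conv.layers[:-4]:
    layer.trainable = False

model = models.Sequential()
model.add(vgg_conv)
model.add(layers.Flatten())
model.add(layers.Dense(1024, activation='relu'))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))   # single unit for binary output

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

train_datagen = image.ImageDataGenerator(rescale=1./255,
                                         horizontal_flip=True,
                                         shear_range=0.2,
                                         zoom_range=0.2)
train_generator = train_datagen.flow_from_directory(train_dir,
                                                    target_size=(224, 224),
                                                    batch_size=12,
                                                    class_mode='binary',
                                                    shuffle=True)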

Why is the model not learning with pretrained VGG16 in Keras?

I am using the pre-trained VGG16 model available with Keras and applying it to the SVHN dataset, which has 10 classes, the digits 0 - 9. The network is not learning and has been stuck at 0.17 accuracy. There is something that I am doing incorrectly, but I am unable to recognise it. The way I am running my training is as follows:
import math
import tensorflow.keras as keras

## DEFINE THE MODEL ##
vgg16 = keras.applications.vgg16.VGG16()
model = keras.Sequential()
for layer in vgg16.layers:
    model.add(layer)
model.layers.pop()
for layer in model.layers:
    layer.trainable = False
model.add(keras.layers.Dense(10, activation="softmax"))

## START THE TRAINING ##
train_optimizer_rmsProp = keras.optimizers.RMSprop(lr=0.0001)
model.compile(loss="categorical_crossentropy",
              optimizer=train_optimizer_rmsProp,
              metrics=['accuracy'])

batch_size = 128 * 1

data_generator = keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255
)
train_generator = data_generator.flow_from_directory(
    'training',
    target_size=(224, 224),
    batch_size=batch_size,
    color_mode='rgb',
    class_mode='categorical'
)
validation_generator = data_generator.flow_from_directory(
    'validate',
    target_size=(224, 224),
    batch_size=batch_size,
    color_mode='rgb',
    class_mode='categorical')

history = model.fit_generator(
    train_generator,
    validation_data=validation_generator,
    validation_steps=math.ceil(val_split_length / batch_size),
    epochs=15,
    steps_per_epoch=math.ceil(num_train_samples / batch_size),
    use_multiprocessing=True,
    workers=8,
    callbacks=model_callbacks,
    verbose=2
)
What is it that I am doing wrong? Is there something I am missing? I was expecting a very high accuracy since it is carrying weights from ImageNet, but it is stuck at 0.17 accuracy from the first epoch.
I assume you're upsampling the 32x32 MNIST-like images to fit the VGG16 input. What you should actually do in this case is remove all the dense layers; that way you can input any image size, since in convolutional layers the weights are agnostic to the image size.
You can do this like:
vgg16 = keras.applications.vgg16.VGG16(include_top=False, input_shape=(32, 32, 3))
This, in my opinion, should be the default behaviour of the constructor.
When you upsample the image, in the best case you're basically blurring it. Consider that a single pixel of the original image corresponds to 7 pixels of the upsampled one, while VGG16's filters are 3 pixels wide; in other words, you're losing the image's features.
It is not necessary to add 3 dense layers at the end like the original VGG16; you can try with the same single layer you have in your code.
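A brief sketch of that setup, assuming 32x32 RGB SVHN crops and categorical labels (the pooling and classification head are illustrative; note that include_top=False still needs the channel dimension in input_shape):

# Sketch: VGG16 conv base on 32x32 inputs, no upsampling, small custom head.
import tensorflow.keras as keras

conv_base = keras.applications.vgg16.VGG16(include_top=False,
                                           weights='imagenet',
                                           input_shape=(32, 32, 3))
conv_base.trainable = False  # start by training only the new head

model = keras.Sequential([
    conv_base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10, activation='softmax')
])
model.compile(loss='categorical_crossentropy',
              optimizer=keras.optimizers.RMSprop(lr=1e-4),
              metrics=['accuracy'])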
