Sorry to bother you, and apologies if I misspell something; I haven't practised my English in a while! I'm following a YouTube tutorial that trains a CNN to recognize facial expressions. The problem is that when I run it locally with my own .h5 file generated by the training, it always returns the same prediction even though my facial expression changes.
On the other hand, without changing the code, I've tried the .h5 file generated by the tutorial and it works!
While searching for differences between the two files I noticed a few things. Can you help me?
The CNN is configured this way (i haven't changed it):
model = Sequential()
model.add(Conv2D(64, (3, 3), padding='same', input_shape=(48, 48, 1)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(128, (5, 5), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(512, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(512, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))
model.add(Dense(512))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))
model.add(Dense(no_of_classes, activation='softmax'))

opt = Adam(lr=0.0001)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
Comparing both .h5 files, these are the results:
[Screenshot comparing the contents of the two .h5 files]
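For anyone who wants to reproduce the comparison, here is a minimal sketch (not part of the original post, and assuming h5py is installed; the file names are hypothetical) that lists every group and dataset shape stored in an .h5 file, so the two files can be diffed side by side:

# Minimal sketch: walk an HDF5 file and print each dataset's path and shape.
# Run it on both .h5 files and compare the output.
import h5py

def describe(path):
    with h5py.File(path, 'r') as f:
        f.visititems(lambda name, obj:
                     print(name, getattr(obj, 'shape', '(group)')))

describe('my_model.h5')        # hypothetical file names
describe('tutorial_model.h5')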
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
As far as I understand, model.add(Conv2D(32, (3, 3), input_shape=input_shape)) is the input layer here and model.add(Activation('sigmoid')) is the output layer.
There are a total of 13 other layers between the input and output layers. So are there 13 hidden layers in the model? Or fewer? Which layers should be counted as hidden layers?
I am confused about whether Activation or MaxPooling2D or Dropout should be counted as a single hidden layer or not.
Activation functions are not hidden layers themselves.
The layers to count are Conv2D, MaxPooling2D, Flatten, and Dense.
You can use the code below to get the model architecture details:
model.summary()
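As an illustration (a sketch, not part of the original answer, assuming the model defined above), you can also list each layer's type from the model object and apply that rule yourself:

for layer in model.layers:
    print(type(layer).__name__)  # e.g. Conv2D, Activation, MaxPooling2D, ...

# Activation and Dropout entries are not counted as hidden layers;
# the Conv2D, MaxPooling2D, Flatten and Dense layers are.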
I'm trying to make a simple classification model for the CIFAR-10 dataset. The model fails when it gets to the MaxPooling function: it reports a syntax error, but for the life of me I cannot figure out what's wrong.
Is it the version of Keras I'm using? When I add max pooling to the model with a size of (2, 2) it doesn't work, yet I'm doing exactly the same thing as the documentation, which makes me think it's a version problem.
Sorry if the problem is obvious.
model = Sequential()
model.add(Conv2D(32, (3,3), padding = 'same', input_shape=(32,32,3)))
model.add(Activation('relu')
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu')
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))
model.summary()
Max pooling is not the issue. You are missing a closing parenthesis on the previous line, in model.add(Activation('relu') (and again before the second Dropout). Python only detects the unclosed parenthesis when the parser reaches the next statement, which is why the error points at the MaxPooling2D line. Find the corrected code below:
model = Sequential()
model.add(Conv2D(32, (3,3), padding = 'same', input_shape=(32,32,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))
model.summary()
Hope this helps.
I am trying to make an emotion classifier using facial expressions with the FER2013 dataset. It contains 35887 samples, each with 2304 features and an integer label 0-6 for the emotion. When I used Conv1D with shape (2304, 1) it achieved a training accuracy of ~86% but didn't do well on any unseen test image. So I thought of reshaping each sample to (48, 48, 1) and using Conv2D on it. But now it just gets stuck at 0.2505 accuracy while training after the 2nd epoch and never increases. What is happening?
import pandas as pd
import numpy as np
from PIL import Image
import matplotlib.image as mpimg
from skimage import transform
import random
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
emotion = {0: 'Angry', 1: 'Disgust', 2: 'Fear', 3: 'Happy',
           4: 'Sad', 5: 'Surprise', 6: 'Neutral'}

df = pd.read_csv('fer.csv')
faces = df.values[:, 1]   # space-separated pixel strings
faces = faces.tolist()
emos = df.values[:, 0]    # integer emotion labels

# Parse each pixel string into a list of ints
for i in range(len(faces)):
    faces[i] = [int(x) for x in faces[i].split()]
    emos[i] = int(emos[i])

faces = np.array(faces)
faces = transform.resize(faces, (35887, 48, 48))  # resize to (35887, 48, 48)
faces = np.expand_dims(faces, axis=3)             # add the channel dimension
model = Sequential()
model.add(Conv2D(48, (3,3), padding='same', input_shape=(48,48,1), activation='relu'))
model.add(Conv2D(48, (3,3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(96, (3,3), padding='same', activation='relu'))
model.add(Conv2D(96, (3,3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(192, (3,3), padding='same', activation='relu'))
model.add(Conv2D(192, (3,3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(384, (3,3), padding='same', activation='relu'))
model.add(Conv2D(384, (3,3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(384, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(192, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(96, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(7, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(faces,emos,epochs=100,batch_size=48)
model.save_weights('model.h5')
[Plots of the model accuracy and loss curves]
Normalizing the output of each layer over the batch fixes the issue.
Just add
model.add(BatchNormalization())
after every layer.
EDIT:
I thought I should add more information here.
So, this was the final model I ended up with:
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(200,200,3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(256, (3, 3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.3))
model.add(Dense(256, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))
model.add(Dense(128, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(len(classes), activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy',
optimizer='nadam',
metrics=['accuracy'])
And these were the results I got with it.
I increased the number of nodes and changed the optimizer too, but it was the batch normalisation that gave the dramatic increase in accuracy. Using more nodes and the Nadam optimizer helped a bit further.
I have just built my first model using Python, Keras and TensorFlow.
My model looks like this:
model = Sequential()
model.add(Convolution2D(32, 3, 3, input_shape=(img_width, img_height,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(32, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
Now I'm trying to compile it:
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
but I got this error:
ValueError: Only call sigmoid_cross_entropy_with_logits with named
arguments (labels=..., logits=..., ...)
I can't seem to find what the problem is. Please let me know what I'm doing wrong.
I've used Floyd hub to train the following model and saved it:
# Create the model
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(3, 32, 32), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dropout(0.2))
model.add(Dense(1024, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
# Compile model
epochs = 50
adammax = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
model.compile(loss='categorical_crossentropy', optimizer=adammax, metrics=['accuracy'])
print(model.summary())
When I load it on my PC it works fine, but when I load it on the Raspberry Pi I get the following error. I also tried saving just the weights and loading them, but that didn't work either and I got the same error. I am using the same version of TensorFlow on the Raspberry Pi as on Floyd hub.
As the error says, you're passing T=DT_INT64, which is not one of the supported kernels for this op. You could check whether the int64 version is simply not shipped in the .so file, write the op kernel yourself, or cast to tf.int32 right before this op in the Python code. The last option worked well for me.
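For reference, a minimal sketch of that last workaround (the tensor here is hypothetical; apply the cast to whichever int64 tensor feeds the failing op):

import tensorflow as tf

labels_int64 = tf.constant([0, 3, 5], dtype=tf.int64)  # hypothetical int64 tensor
labels_int32 = tf.cast(labels_int64, tf.int32)         # cast before the unsupported op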