Twitter Sentiment Analysis Constant Zero (0.0000e+00) Loss value - python

I'm trying to figure out why my model's loss value is always 0.0, and why the accuracy stays roughly constant as well (which is incorrect in my case, as far as I know).
Code snippet:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Dropout, LSTM, Dense
from sklearn.model_selection import train_test_split

model = Sequential()
model.add(Embedding(vocab_size, glove_vectors.vector_size, weights=[embedding_matrix], input_length=X.shape[1]))
model.add(Dropout(0.5))
model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss='categorical_crossentropy', optimizer="adam", metrics=["accuracy"])
model.summary()

EPOCHS = 20
train_data, test_data, train_labels, test_labels = train_test_split(X, Y, test_size=0.20, random_state=42)
print(train_data.shape, train_labels.shape)
print(test_data.shape, test_labels.shape)

val_data = (test_data, test_labels)
history = model.fit(train_data, train_labels, validation_data=val_data, epochs=EPOCHS)
score = model.evaluate(test_data, test_labels)
Output:
Epoch 1/20
25/25 [==============================] - 4s 69ms/step - loss: 0.0000e+00 - accuracy: 0.5241 - val_loss: 0.0000e+00 - val_accuracy: 0.4650
Epoch 2/20
25/25 [==============================] - 1s 55ms/step - loss: 0.0000e+00 - accuracy: 0.4927 - val_loss: 0.0000e+00 - val_accuracy: 0.4650
Epoch 3/20
25/25 [==============================] - 1s 55ms/step - loss: 0.0000e+00 - accuracy: 0.5110 - val_loss: 0.0000e+00 - val_accuracy: 0.4650
Epoch 4/20
25/25 [==============================] - 1s 56ms/step - loss: 0.0000e+00 - accuracy: 0.5074 - val_loss: 0.0000e+00 - val_accuracy: 0.4650
Epoch 5/20
25/25 [==============================] - 1s 55ms/step - loss: 0.0000e+00 - accuracy: 0.5363 - val_loss: 0.0000e+00 - val_accuracy: 0.4650
Epoch 6/20
25/25 [==============================] - 1s 53ms/step - loss: 0.0000e+00 - accuracy: 0.5042 - val_loss: 0.0000e+00 - val_accuracy: 0.4650
Epoch 7/20
25/25 [==============================] - 1s 54ms/step - loss: 0.0000e+00 - accuracy: 0.4904 - val_loss: 0.0000e+00 - val_accuracy: 0.4650

In binary classification there is a single node in the output layer, even though we are predicting between two classes. To get the output as a probability between 0 and 1 we use the sigmoid activation function.
Hence binary_crossentropy is the correct loss function in your case:
model.compile(loss='binary_crossentropy', optimizer="adam", metrics=["accuracy"])
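For completeness, a minimal sketch of the two equivalent setups, reusing the variables from the snippet above (my own illustration, not part of the original answer):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Dropout, LSTM, Dense

# Option 1: single sigmoid unit + binary_crossentropy (labels stay as plain 0/1 integers).
model = Sequential()
model.add(Embedding(vocab_size, glove_vectors.vector_size, weights=[embedding_matrix], input_length=X.shape[1]))
model.add(Dropout(0.5))
model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])

# Option 2: two softmax units + categorical_crossentropy (labels must be one-hot encoded).
# from tensorflow.keras.utils import to_categorical
# model.add(Dense(2, activation="softmax"))
# model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
# Y = to_categorical(Y, num_classes=2)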

Related

Transfer Learning Neural Network is not learning

First of all, I know that there is a similar thread here:
https://stats.stackexchange.com/questions/352036/what-should-i-do-when-my-neural-network-doesnt-learn
But unfortunately, it does not help. I probably have a bug inside my code which I cannot find. What I am trying to do is to classify some WAV files. But the model does not learn.
At first, I am collecting the files and saving them in an array.
Second, I create new directories, one for the train data and one for the val data.
Next, I am reading the WAV files, creating spectrograms, and saving them all to the train directory.
Afterward, I am moving 20% of the data from the train directory to the val directory.
Note: While creating the spectrograms I am checking the length of the WAV. If it is too short (less than 2 sec), I am doubling it. Out of this spectrogram, I am cutting a random chunk and saving only this. As a result, all images do have the same height and width.
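A minimal sketch of how that 20% move might look (my own illustration; it assumes the spectrogram images already sit in DBMEL_PATH and that DBMEL_VAL_PATH is the validation directory used further below):
import glob
import random
import shutil

files = glob.glob(DBMEL_PATH + "*")
random.shuffle(files)
val_count = int(0.2 * len(files))      # 20% of the images go to the validation directory
for f in files[:val_count]:
    shutil.move(f, DBMEL_VAL_PATH)     # the remaining 80% stay for training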
Then as the next step, I am loading the train and val images. And here I am also doing the normalization.
import glob
import numpy as np
from tensorflow.keras.preprocessing.image import load_img, img_to_array

IMG_WIDTH = 300
IMG_HEIGHT = 300
IMG_DIM = (IMG_WIDTH, IMG_HEIGHT, 3)

train_files = glob.glob(DBMEL_PATH + "*", recursive=True)
train_imgs = [img_to_array(load_img(img, target_size=IMG_DIM)) for img in train_files]
train_imgs = np.array(train_imgs) / 255  # normalizing data
train_labels = [fn.split('\\')[-1].split('.')[1].strip() for fn in train_files]

validation_files = glob.glob(DBMEL_VAL_PATH + "*", recursive=True)
validation_imgs = [img_to_array(load_img(img, target_size=IMG_DIM)) for img in validation_files]
validation_imgs = np.array(validation_imgs) / 255  # normalizing data
validation_labels = [fn.split('\\')[-1].split('.')[1].strip() for fn in validation_files]
I have checked the variables by printing them, and this seems to work quite well. The arrays contain 80% and 20% of the total data, respectively.
#Train dataset shape: (3756, 300, 300, 3)
#Validation dataset shape: (939, 300, 300, 3)
Next, I have also implemented a one-hot encoder.
So far so good. In the next step I create empty DataGenerators, i.e. without any data augmentation. When calling the DataGenerators, once for the training data and once for the validation data, I pass the image arrays (train_imgs, validation_imgs) and the one-hot-encoded labels (train_labels_enc, validation_labels_enc).
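A minimal sketch of what that step might look like (my own assumption; the exact encoder and generator settings are not shown in the question):
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# One-hot encode the string labels.
le = LabelEncoder()
train_labels_enc = to_categorical(le.fit_transform(train_labels))
validation_labels_enc = to_categorical(le.transform(validation_labels))

# Plain generators, no augmentation; the batch size here is a placeholder.
train_datagen = ImageDataGenerator()
val_datagen = ImageDataGenerator()
train_generator = train_datagen.flow(train_imgs, train_labels_enc, batch_size=30)
val_generator = val_datagen.flow(validation_imgs, validation_labels_enc, batch_size=30)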
Okay. Here now comes the tricky part.
First, create/load a pre-trained network
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.models import Model
import tensorflow.keras

input_shape = (IMG_HEIGHT, IMG_WIDTH, 3)

restnet = ResNet50(include_top=False, weights='imagenet', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3))
output = restnet.layers[-1].output
output = tensorflow.keras.layers.Flatten()(output)
restnet = Model(restnet.input, output)

# Freeze the pre-trained layers so only the new head is trained.
for layer in restnet.layers:
    layer.trainable = False
And now finally creating the model itself. While creating the model I am using the pre-trained network for transfer learning. I guess somewhere there must be a problem.
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, InputLayer
from tensorflow.keras.models import Sequential
from tensorflow.keras import optimizers
model = Sequential()
model.add(restnet)  # <-- transfer learning
model.add(Dense(512, activation='relu', input_dim=input_shape))  # 512 (num_classes)
model.add(Dropout(0.3))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(7, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.summary()
And the model runs with this:
history = model.fit_generator(train_generator,
                              steps_per_epoch=100,
                              epochs=100,
                              validation_data=val_generator,
                              validation_steps=10,
                              verbose=1)
But even after 50 epochs the accuracy stalls at around 0.15
Epoch 1/100
100/100 [==============================] - 711s 7s/step - loss: 10.6419 - accuracy: 0.1530 - val_loss: 1.9416 - val_accuracy: 0.1467
Epoch 2/100
100/100 [==============================] - 733s 7s/step - loss: 1.9595 - accuracy: 0.1550 - val_loss: 1.9372 - val_accuracy: 0.1267
Epoch 3/100
100/100 [==============================] - 731s 7s/step - loss: 1.9940 - accuracy: 0.1444 - val_loss: 1.9388 - val_accuracy: 0.1400
Epoch 4/100
100/100 [==============================] - 735s 7s/step - loss: 1.9416 - accuracy: 0.1535 - val_loss: 1.9380 - val_accuracy: 0.1733
Epoch 5/100
100/100 [==============================] - 737s 7s/step - loss: 1.9394 - accuracy: 0.1656 - val_loss: 1.9345 - val_accuracy: 0.1533
Epoch 6/100
100/100 [==============================] - 741s 7s/step - loss: 1.9364 - accuracy: 0.1667 - val_loss: 1.9286 - val_accuracy: 0.1767
Epoch 7/100
100/100 [==============================] - 740s 7s/step - loss: 1.9389 - accuracy: 0.1523 - val_loss: 1.9305 - val_accuracy: 0.1400
Epoch 8/100
100/100 [==============================] - 737s 7s/step - loss: 1.9394 - accuracy: 0.1623 - val_loss: 1.9441 - val_accuracy: 0.1667
Epoch 9/100
100/100 [==============================] - 735s 7s/step - loss: 1.9391 - accuracy: 0.1582 - val_loss: 1.9458 - val_accuracy: 0.1333
Epoch 10/100
100/100 [==============================] - 734s 7s/step - loss: 1.9381 - accuracy: 0.1602 - val_loss: 1.9372 - val_accuracy: 0.1700
Epoch 11/100
100/100 [==============================] - 739s 7s/step - loss: 1.9392 - accuracy: 0.1623 - val_loss: 1.9302 - val_accuracy: 0.2167
Epoch 12/100
100/100 [==============================] - 741s 7s/step - loss: 1.9368 - accuracy: 0.1627 - val_loss: 1.9326 - val_accuracy: 0.1467
Epoch 13/100
100/100 [==============================] - 740s 7s/step - loss: 1.9381 - accuracy: 0.1513 - val_loss: 1.9312 - val_accuracy: 0.1733
Epoch 14/100
100/100 [==============================] - 736s 7s/step - loss: 1.9396 - accuracy: 0.1542 - val_loss: 1.9407 - val_accuracy: 0.1367
Epoch 15/100
100/100 [==============================] - 741s 7s/step - loss: 1.9393 - accuracy: 0.1597 - val_loss: 1.9336 - val_accuracy: 0.1333
Epoch 16/100
100/100 [==============================] - 737s 7s/step - loss: 1.9375 - accuracy: 0.1659 - val_loss: 1.9354 - val_accuracy: 0.1267
Epoch 17/100
100/100 [==============================] - 741s 7s/step - loss: 1.9422 - accuracy: 0.1487 - val_loss: 1.9307 - val_accuracy: 0.1567
Epoch 18/100
100/100 [==============================] - 738s 7s/step - loss: 1.9399 - accuracy: 0.1680 - val_loss: 1.9408 - val_accuracy: 0.1567
Epoch 19/100
100/100 [==============================] - 743s 7s/step - loss: 1.9405 - accuracy: 0.1610 - val_loss: 1.9335 - val_accuracy: 0.1533
Epoch 20/100
100/100 [==============================] - 738s 7s/step - loss: 1.9410 - accuracy: 0.1575 - val_loss: 1.9331 - val_accuracy: 0.1533
Epoch 21/100
100/100 [==============================] - 746s 7s/step - loss: 1.9395 - accuracy: 0.1639 - val_loss: 1.9344 - val_accuracy: 0.1733
Epoch 22/100
100/100 [==============================] - 746s 7s/step - loss: 1.9393 - accuracy: 0.1585 - val_loss: 1.9354 - val_accuracy: 0.1667
Epoch 23/100
100/100 [==============================] - 746s 7s/step - loss: 1.9398 - accuracy: 0.1599 - val_loss: 1.9352 - val_accuracy: 0.1500
Epoch 24/100
100/100 [==============================] - 746s 7s/step - loss: 1.9392 - accuracy: 0.1585 - val_loss: 1.9449 - val_accuracy: 0.1667
Epoch 25/100
100/100 [==============================] - 746s 7s/step - loss: 1.9399 - accuracy: 0.1495 - val_loss: 1.9352 - val_accuracy: 0.1600
Can anyone please help me find the problem?
I solved the problem on my own.
I exchanged this:
model = Sequential()
model.add(restnet)  # <-- transfer learning
model.add(Dense(512, activation='relu', input_dim=input_shape))  # 512 (num_classes)
model.add(Dropout(0.3))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(7, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.summary()
with this:
import tensorflow as tf

base_model = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3), include_top=False, weights="imagenet")

model = Sequential()
model.add(base_model)
model.add(tf.keras.layers.GlobalAveragePooling2D())
model.add(Dropout(0.2))
model.add(Dense(number_classes, activation="softmax"))
model.compile(optimizer=tf.keras.optimizers.Adam(lr=0.00001),
              loss="categorical_crossentropy",
              metrics=['accuracy'])
model.summary()
And I found out one more thing: contrary to some tutorials, data augmentation is not useful when working with spectrograms.
Without data augmentation I got 0.99 on train-accuracy and 0.72 on val-accuracy. But with data augmentation I got only 0.75 on train-accuracy and 0.16 on val-accuracy.

How can I prevent my model from overfitting?

I'm a newbie in deep learning. I'm trying to create a model, but I don't really understand the model.add(layers) calls. I'm fairly sure about the input shape (it's for image recognition). I think the problem is in the Dropout layer, but I don't understand its value.
Can someone explain this model to me?
from tensorflow.keras import models, layers, optimizers

model = models.Sequential()
model.add(layers.Conv2D(32, (3,3), activation='relu', input_shape=(128,128,3)))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(64, (3,3), activation='relu'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(6, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.Adam(lr=1e-4), metrics=['acc'])
-------------------------------------------------------
history = model.fit(
    train_data,
    train_labels,
    epochs=30,
    validation_data=(test_data, test_labels),
)
and here is the result:
Epoch 15/30
5/5 [==============================] - 0s 34ms/step - loss: 0.3987 - acc: 0.8536 - val_loss: 0.7021 - val_acc: 0.7143
Epoch 16/30
5/5 [==============================] - 0s 31ms/step - loss: 0.3223 - acc: 0.8891 - val_loss: 0.6393 - val_acc: 0.7778
Epoch 17/30
5/5 [==============================] - 0s 32ms/step - loss: 0.3321 - acc: 0.9082 - val_loss: 0.6229 - val_acc: 0.7460
Epoch 18/30
5/5 [==============================] - 0s 31ms/step - loss: 0.2615 - acc: 0.9409 - val_loss: 0.6591 - val_acc: 0.8095
Epoch 19/30
5/5 [==============================] - 0s 32ms/step - loss: 0.2161 - acc: 0.9857 - val_loss: 0.6368 - val_acc: 0.7143
Epoch 20/30
5/5 [==============================] - 0s 33ms/step - loss: 0.1773 - acc: 0.9857 - val_loss: 0.5644 - val_acc: 0.7778
Epoch 21/30
5/5 [==============================] - 0s 32ms/step - loss: 0.1650 - acc: 0.9782 - val_loss: 0.5459 - val_acc: 0.8413
Epoch 22/30
5/5 [==============================] - 0s 31ms/step - loss: 0.1534 - acc: 0.9789 - val_loss: 0.5738 - val_acc: 0.7460
Epoch 23/30
5/5 [==============================] - 0s 32ms/step - loss: 0.1205 - acc: 0.9921 - val_loss: 0.5351 - val_acc: 0.8095
Epoch 24/30
5/5 [==============================] - 0s 32ms/step - loss: 0.0967 - acc: 1.0000 - val_loss: 0.5256 - val_acc: 0.8413
Epoch 25/30
5/5 [==============================] - 0s 32ms/step - loss: 0.0736 - acc: 1.0000 - val_loss: 0.5493 - val_acc: 0.7937
Epoch 26/30
5/5 [==============================] - 0s 32ms/step - loss: 0.0826 - acc: 1.0000 - val_loss: 0.5342 - val_acc: 0.8254
Epoch 27/30
5/5 [==============================] - 0s 32ms/step - loss: 0.0687 - acc: 1.0000 - val_loss: 0.5452 - val_acc: 0.8254
Epoch 28/30
5/5 [==============================] - 0s 32ms/step - loss: 0.0571 - acc: 1.0000 - val_loss: 0.5176 - val_acc: 0.7937
Epoch 29/30
5/5 [==============================] - 0s 32ms/step - loss: 0.0549 - acc: 1.0000 - val_loss: 0.5142 - val_acc: 0.8095
Epoch 30/30
5/5 [==============================] - 0s 32ms/step - loss: 0.0479 - acc: 1.0000 - val_loss: 0.5243 - val_acc: 0.8095
I never got past 70% accuracy on average before, but here I get 80%; however, I think I'm overfitting. I have of course searched through different docs, but I'm lost.
Have you tried the following in your training:
Data Augmentation
Pre-trained Model
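For instance, a minimal data-augmentation sketch with Keras' ImageDataGenerator (the specific transform values here are placeholders, not tuned for this dataset):
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation parameters are illustrative placeholders.
train_datagen = ImageDataGenerator(
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,
)
train_generator = train_datagen.flow(train_data, train_labels, batch_size=32)
# history = model.fit(train_generator, epochs=30, validation_data=(test_data, test_labels))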
Looking at the execution time per epoch, it looks like your data set is pretty small. Also, it's not clear whether there is any class imbalance in your dataset. You should probably try stratified CV training and analyse the fold results. It won't prevent overfitting by itself, but it will give you more insight into your model, which generally helps to reduce overfitting. However, preventing overfitting is a broad topic; search online for resources. You can also try this:
model.compile(loss='categorical_crossentropy',
              optimizer='adam', metrics=['acc'])
-------------------------------------------------------
import tensorflow as tf
from tensorflow.keras.callbacks import ReduceLROnPlateau

# src: https://keras.io/api/callbacks/reduce_lr_on_plateau/
# reduce the learning rate by a factor of 0.2 if val_loss
# doesn't improve within 5 epochs.
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,
                              patience=5, min_lr=0.00001)

# src: https://keras.io/api/callbacks/early_stopping/
# stop training if val_loss doesn't improve within 15 epochs.
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=15)

history = model.fit(
    train_data,
    train_labels,
    epochs=30,
    validation_data=(test_data, test_labels),
    callbacks=[reduce_lr, early_stop]
)
You may also find ModelCheckpoint or LearningRateScheduler useful. None of this guarantees the absence of overfitting, but these are reasonable approaches to adopt.
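For example, a minimal ModelCheckpoint sketch that keeps the weights with the best validation loss (the file name is a placeholder):
from tensorflow.keras.callbacks import ModelCheckpoint

# Save only the best weights, as judged by validation loss.
checkpoint = ModelCheckpoint('best_model.h5', monitor='val_loss',
                             save_best_only=True, verbose=1)
# history = model.fit(train_data, train_labels, epochs=30,
#                     validation_data=(test_data, test_labels),
#                     callbacks=[reduce_lr, early_stop, checkpoint])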

Deep learning CNN model not learning

I am working on an image classification project and my model doesn't seem to train properly.
My dataset is made of 4000 images, each with a shape of (120,120,3).
The test set represents 20% of the total dataset.
All images have been correctly labeled.
The images are normalized and one-hot encoded. For now I use only two targets, but I will add one more once I start getting decent results.
I use a batch size of 16.
I want to use a CNN model.
My current model :
from tensorflow import keras
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dropout, Flatten, Dense

model = keras.models.Sequential()
model.add(Conv2D(filters=16, kernel_size=(6,6), input_shape=(IMG_SIZE,IMG_SIZE,3), activation='relu',))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(0.2))
model.add(Conv2D(filters=32, kernel_size=(5,5), activation='relu',))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(0.2))
model.add(Conv2D(filters=64, kernel_size=(4,4), activation='relu',))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(0.2))
model.add(Conv2D(filters=128, kernel_size=(3,3), activation='relu',))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(0.2))
model.add(Conv2D(filters=256, kernel_size=(2,2), activation='relu',))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))
model.compile(optimizer='nadam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
model summary gives :
Total params: 273,330
Trainable params: 273,330
Non-trainable params: 0
from tensorflow.keras.callbacks import EarlyStopping
# PlotLossesKeras is assumed to come from the livelossplot package.

early_stop = EarlyStopping(monitor='val_loss', patience=10)

history = model.fit(x_train_sample, y_train_sample,
                    batch_size=BATCH_SIZE,
                    epochs=EPOCHS,
                    verbose=1,
                    validation_data=(x_test, y_test),
                    callbacks=[early_stop, PlotLossesKeras()])
When I run my model for 30 epochs, early stopping triggers.
Epoch 1/30
43/43 [==============================] - 9s 205ms/step - loss: 0.1109 - accuracy: 0.9531 - val_loss: 0.5259 - val_accuracy: 0.8397
Epoch 2/30
43/43 [==============================] - 10s 231ms/step - loss: 0.0812 - accuracy: 0.9692 - val_loss: 0.5793 - val_accuracy: 0.8355
Epoch 3/30
43/43 [==============================] - 9s 219ms/step - loss: 0.1000 - accuracy: 0.9721 - val_loss: 0.5367 - val_accuracy: 0.8547
Epoch 4/30
43/43 [==============================] - 9s 209ms/step - loss: 0.0694 - accuracy: 0.9707 - val_loss: 0.6101 - val_accuracy: 0.8269
Epoch 5/30
43/43 [==============================] - 9s 203ms/step - loss: 0.0891 - accuracy: 0.9633 - val_loss: 0.6116 - val_accuracy: 0.8419
Epoch 6/30
43/43 [==============================] - 9s 210ms/step - loss: 0.0567 - accuracy: 0.9765 - val_loss: 0.4833 - val_accuracy: 0.8419
Epoch 7/30
43/43 [==============================] - 9s 218ms/step - loss: 0.0312 - accuracy: 0.9897 - val_loss: 1.4513 - val_accuracy: 0.8034
Epoch 8/30
43/43 [==============================] - 9s 213ms/step - loss: 0.0820 - accuracy: 0.9707 - val_loss: 0.5821 - val_accuracy: 0.8248
Epoch 9/30
43/43 [==============================] - 9s 222ms/step - loss: 0.0513 - accuracy: 0.9897 - val_loss: 0.8516 - val_accuracy: 0.8462
Epoch 10/30
43/43 [==============================] - 11s 246ms/step - loss: 0.0442 - accuracy: 0.9853 - val_loss: 0.7927 - val_accuracy: 0.8397
Epoch 11/30
43/43 [==============================] - 10s 222ms/step - loss: 0.0356 - accuracy: 0.9897 - val_loss: 0.7730 - val_accuracy: 0.8141
Epoch 12/30
43/43 [==============================] - 10s 232ms/step - loss: 0.0309 - accuracy: 0.9824 - val_loss: 0.9528 - val_accuracy: 0.8226
Epoch 13/30
43/43 [==============================] - 9s 220ms/step - loss: 0.0424 - accuracy: 0.9839 - val_loss: 1.2109 - val_accuracy: 0.8013
Epoch 14/30
43/43 [==============================] - 10s 228ms/step - loss: 0.0645 - accuracy: 0.9824 - val_loss: 0.5308 - val_accuracy: 0.8547
Epoch 15/30
43/43 [==============================] - 11s 259ms/step - loss: 0.0293 - accuracy: 0.9927 - val_loss: 0.9271 - val_accuracy: 0.8333
Epoch 16/30
43/43 [==============================] - 9s 217ms/step - loss: 0.0430 - accuracy: 0.9795 - val_loss: 0.6687 - val_accuracy: 0.8483
I have tried many different model architectures, changing the number of layers, kernel sizes, etc., but I can't seem to figure out what is going wrong.
There are many possible reasons.
For starters, depending on your categories, you might want to consider using transfer learning to speed up your training process (a minimal sketch follows at the end of this answer).
Your architecture looks reasonable, and the training and validation losses look plausible as well (overfitting is occurring).
Given that you've stated that you could have 3 categories and are currently only using 2, might there be a different distribution between your training set and your test set? That might be preventing the model from generalising well.
For instance, suppose your dataset contains an evenly distributed number of images of cats, dogs, and humans. If you set only 2 categories to train on, your model ends up trying to separate humans from animals when it validates; the training data is then unevenly distributed, and the model sees an insufficient number of human examples (only 33%).
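A minimal transfer-learning sketch along those lines (my own illustration: MobileNetV2 is just one possible frozen ImageNet-pretrained base; IMG_SIZE and the 2-class softmax head mirror the question):
import tensorflow as tf
from tensorflow.keras import layers, models

# Frozen pre-trained feature extractor (ImageNet weights).
base = tf.keras.applications.MobileNetV2(input_shape=(IMG_SIZE, IMG_SIZE, 3),
                                         include_top=False, weights='imagenet')
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(2, activation='softmax'),  # 2 classes, as in the question
])
model.compile(optimizer='nadam', loss='categorical_crossentropy', metrics=['accuracy'])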

Accuracy doesn't improve when training my character recognition model

I am building a training model for my character recognition system. During every epoch, I get the same accuracy and it doesn't improve. I currently have 4000 training images and 77 validation images.
My model is as follows:
inputs = Input(shape=(32,32,3))
x = Conv2D(filters=64, kernel_size=5, activation='relu')(inputs)
x = MaxPooling2D()(x)
x = Conv2D(filters=32, kernel_size=3, activation='relu')(x)
x = MaxPooling2D()(x)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
outputs = Dense(1, activation='softmax')(x)

model = Model(inputs=inputs, outputs=outputs)
model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy'])
data_gen_train = ImageDataGenerator(rescale=1/255)
data_gen_test = ImageDataGenerator(rescale=1/255)
data_gen_valid = ImageDataGenerator(rescale=1/255)

train_generator = data_gen_train.flow_from_directory(directory=r"./drive/My Drive/train_dataset",
                                                     target_size=(32,32), batch_size=10, class_mode="binary")
valid_generator = data_gen_valid.flow_from_directory(directory=r"./drive/My Drive/validation_dataset",
                                                     target_size=(32,32), batch_size=2, class_mode="binary")
test_generator = data_gen_test.flow_from_directory(
    directory=r"./drive/My Drive/test_dataset",
    target_size=(32, 32),
    batch_size=6,
    class_mode="binary"
)

model.fit(
    train_generator,
    epochs=10,
    steps_per_epoch=400,
    validation_steps=37,
    validation_data=valid_generator)
The result is as follows:
Found 4000 images belonging to 2 classes.
Found 77 images belonging to 2 classes.
Found 6 images belonging to 2 classes.
Epoch 1/10
400/400 [==============================] - 14s 35ms/step - loss: 0.0000e+00 - accuracy: 0.5000 - val_loss: 0.0000e+00 - val_accuracy: 0.5811
Epoch 2/10
400/400 [==============================] - 13s 33ms/step - loss: 0.0000e+00 - accuracy: 0.5000 - val_loss: 0.0000e+00 - val_accuracy: 0.5811
Epoch 3/10
400/400 [==============================] - 13s 34ms/step - loss: 0.0000e+00 - accuracy: 0.5000 - val_loss: 0.0000e+00 - val_accuracy: 0.5676
Epoch 4/10
400/400 [==============================] - 13s 33ms/step - loss: 0.0000e+00 - accuracy: 0.5000 - val_loss: 0.0000e+00 - val_accuracy: 0.5676
Epoch 5/10
400/400 [==============================] - 18s 46ms/step - loss: 0.0000e+00 - accuracy: 0.5000 - val_loss: 0.0000e+00 - val_accuracy: 0.5541
Epoch 6/10
400/400 [==============================] - 13s 34ms/step - loss: 0.0000e+00 - accuracy: 0.5000 - val_loss: 0.0000e+00 - val_accuracy: 0.5676
Epoch 7/10
400/400 [==============================] - 13s 33ms/step - loss: 0.0000e+00 - accuracy: 0.5000 - val_loss: 0.0000e+00 - val_accuracy: 0.5676
Epoch 8/10
400/400 [==============================] - 13s 33ms/step - loss: 0.0000e+00 - accuracy: 0.5000 - val_loss: 0.0000e+00 - val_accuracy: 0.5946
Epoch 9/10
400/400 [==============================] - 13s 33ms/step - loss: 0.0000e+00 - accuracy: 0.5000 - val_loss: 0.0000e+00 - val_accuracy: 0.5811
Epoch 10/10
400/400 [==============================] - 13s 33ms/step - loss: 0.0000e+00 - accuracy: 0.5000 - val_loss: 0.0000e+00 - val_accuracy: 0.5811
<tensorflow.python.keras.callbacks.History at 0x7fa3a5f4a8d0>
If you are trying to recognize characters from 2 classes, you should:
use class_mode="binary" in the flow_from_directory function
use binary_crossentropy as the loss
your last layer must have 1 neuron with a sigmoid activation function
In case there are more than 2 classes:
do not use class_mode="binary" in the flow_from_directory function
use categorical_crossentropy as the loss
your last layer must have n neurons with softmax activation, where n stands for the number of classes
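A minimal sketch of the binary variant under those rules (my own illustration; the layer sizes mirror the question's model):
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.models import Model

inputs = Input(shape=(32, 32, 3))
x = Conv2D(filters=64, kernel_size=5, activation='relu')(inputs)
x = MaxPooling2D()(x)
x = Conv2D(filters=32, kernel_size=3, activation='relu')(x)
x = MaxPooling2D()(x)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
outputs = Dense(1, activation='sigmoid')(x)  # 1 neuron + sigmoid for 2 classes

model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# keep class_mode="binary" in flow_from_directory for this setup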

Loss value is high in deep learning and accuracy stays around 50%

I am training a deep learning network using a pre-trained VGG-16. My loss is high, around 7-8, and the accuracy is around 50%. I want to improve the accuracy.
1. Could you tell me whether my data set is set up correctly?
trdata = ImageDataGenerator()
traindata = trdata.flow_from_directory(directory="/Users/khand/OneDrive/Desktop/Thesis/Case_db/data", target_size=(224,224))

tsdata = ImageDataGenerator()
testdata = tsdata.flow_from_directory(directory="/Users/khand/OneDrive/Desktop/Thesis/Case_db/data", target_size=(224,224))
This is how I set up my data set; in the "data" folder I have 2 subfolders, one containing the main data and the other containing the labels.
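For reference, flow_from_directory expects one subfolder per class, each containing the images of that class, and it derives the labels from those subfolder names. A minimal sketch of that layout and call (the class names here are placeholders):
# Expected layout (hypothetical class names):
#   data/
#     class_a/  img001.png, img002.png, ...
#     class_b/  img101.png, img102.png, ...
from keras.preprocessing.image import ImageDataGenerator

trdata = ImageDataGenerator()
traindata = trdata.flow_from_directory(
    directory="/Users/khand/OneDrive/Desktop/Thesis/Case_db/data",
    target_size=(224, 224),
    class_mode="categorical")  # labels come from the subfolder names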
I think the connections between the network's layers are fine, since I can train the network.
from keras.callbacks import ModelCheckpoint, EarlyStopping

checkpoint = ModelCheckpoint("vgg16_1.h5", monitor='val_acc', verbose=1, save_best_only=True, save_weights_only=False, mode='auto', period=1)
early = EarlyStopping(monitor='val_acc', min_delta=0, patience=20, verbose=1, mode='auto')

hist = model.fit_generator(steps_per_epoch=10, generator=traindata,
                           validation_data=testdata, validation_steps=10, epochs=10,
                           callbacks=[ModelCheckpoint('VGG16-transferlearning.model', monitor='val_acc', save_best_only=True)])
That is how my training and validation run; the result is below:
Epoch 1/10
10/10 [==============================] - 253s 25s/step - loss: 8.1311 - accuracy: 0.4437 - val_loss: 7.5554 - val_accuracy: 0.4875
Epoch 2/10
C:\Users\khand\Anaconda3\envs\TensorFlow-GPU\lib\site-packages\keras\callbacks\callbacks.py:707: RuntimeWarning: Can save best model only with val_acc available, skipping.
'skipping.' % (self.monitor), RuntimeWarning)
10/10 [==============================] - 255s 26s/step - loss: 7.8576 - accuracy: 0.5000 - val_loss: 5.0369 - val_accuracy: 0.5281
Epoch 3/10
10/10 [==============================] - 263s 26s/step - loss: 8.0590 - accuracy: 0.5000 - val_loss: 8.0590 - val_accuracy: 0.5094
Epoch 4/10
10/10 [==============================] - 258s 26s/step - loss: 7.6561 - accuracy: 0.5250 - val_loss: 7.0517 - val_accuracy: 0.4765
Epoch 5/10
10/10 [==============================] - 246s 25s/step - loss: 7.9090 - accuracy: 0.4899 - val_loss: 9.0664 - val_accuracy: 0.5281
Epoch 6/10
10/10 [==============================] - 257s 26s/step - loss: 7.7065 - accuracy: 0.5219 - val_loss: 8.5627 - val_accuracy: 0.4812
Epoch 7/10
10/10 [==============================] - 244s 24s/step - loss: 7.9079 - accuracy: 0.5094 - val_loss: 8.5627 - val_accuracy: 0.5031
Epoch 8/10
10/10 [==============================] - 231s 23s/step - loss: 8.5147 - accuracy: 0.4765 - val_loss: 5.5406 - val_accuracy: 0.4966
Epoch 9/10
10/10 [==============================] - 251s 25s/step - loss: 8.3613 - accuracy: 0.4812 - val_loss: 5.5406 - val_accuracy: 0.4938
Epoch 10/10
10/10 [==============================] - 247s 25s/step - loss: 8.0087 - accuracy: 0.5031 - val_loss: 8.5627 - val_accuracy: 0.4906
If you have any suggestions, please feel free to help.
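One observation grounded in the log above: the callbacks monitor 'val_acc', but the warning "Can save best model only with val_acc available, skipping" suggests the metric is logged under a different name ('val_accuracy' in recent Keras versions when compiling with metrics=['accuracy']). A minimal sketch of the adjusted callbacks (my own suggestion, not part of the original post):
from keras.callbacks import ModelCheckpoint, EarlyStopping

# Monitor the metric name that actually appears in the training log.
checkpoint = ModelCheckpoint("vgg16_1.h5", monitor='val_accuracy', verbose=1, save_best_only=True)
early = EarlyStopping(monitor='val_accuracy', min_delta=0, patience=20, verbose=1)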
