I'm trying to use the CIFAR-10 dataset to practice my CNN skills.
If I do this, it works fine:
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
But I was trying to use tfds.load() and I don't understand how to do it.
I can download it with this:
train_ds, test_ds = tfds.load('cifar10', split=['train','test'])
Then I tried this, but it is not working:
assert isinstance(train_ds, tf.data.Dataset)
assert isinstance(test_ds, tf.data.Dataset)
(train_images, train_labels) = tuple(zip(*train_ds))
(test_images, test_labels) = tuple(zip(*test_ds))
Can somebody show me how to achieve this?
Thank you!
You can do this as follows.
import tensorflow as tf
import tensorflow_datasets as tfds
train_ds, test_ds = tfds.load('cifar10', split=['train','test'], as_supervised=True)
These train_ds and test_ds are tf.data.Dataset objects, so you can apply map, batch, and similar transformations to each of them.
def normalize_resize(image, label):
    image = tf.cast(image, tf.float32)
    image = tf.divide(image, 255)
    image = tf.image.resize(image, (28, 28))
    return image, label

def augment(image, label):
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_saturation(image, 0.7, 1.3)
    image = tf.image.random_contrast(image, 0.8, 1.2)
    image = tf.image.random_brightness(image, 0.1)
    return image, label
train = train_ds.map(normalize_resize).cache().map(augment).shuffle(100).batch(64).repeat()
test = test_ds.map(normalize_resize).cache().batch(64)
Now, we can pass train and test directly to model.fit.
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 3)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(
    optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"]
)
model.fit(
    train,
    epochs=5,
    steps_per_epoch=50000 // 64,  # CIFAR-10 has 50,000 training images
    validation_data=test,
    verbose=2,
)
Epoch 1/5
17s 17ms/step - loss: 2.0848 - accuracy: 0.2318 - val_loss: 1.8175 - val_accuracy: 0.3411
Epoch 2/5
11s 12ms/step - loss: 1.8827 - accuracy: 0.3144 - val_loss: 1.7800 - val_accuracy: 0.3595
Epoch 3/5
11s 12ms/step - loss: 1.8383 - accuracy: 0.3272 - val_loss: 1.7152 - val_accuracy: 0.3904
Epoch 4/5
11s 11ms/step - loss: 1.8129 - accuracy: 0.3397 - val_loss: 1.6908 - val_accuracy: 0.4060
Epoch 5/5
11s 11ms/step - loss: 1.8022 - accuracy: 0.3461 - val_loss: 1.6801 - val_accuracy: 0.4081
You can also extract them like this:
train_ds, test_ds = tfds.load('cifar10', split=['train', 'test'],
                              as_supervised=True,
                              batch_size=-1)
To use the as_numpy() method this way, you need to pass as_supervised and batch_size as shown. If you pass as_supervised=True, the dataset will have a tuple structure of (inputs, labels); otherwise it will be a dictionary.
With these you can simply call:
train_images, train_labels = tfds.as_numpy(train_ds)
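The test split works the same way, and you can sanity-check the shapes (CIFAR-10 has 50,000 training and 10,000 test images of 32x32x3):
test_images, test_labels = tfds.as_numpy(test_ds)
print(train_images.shape)  # (50000, 32, 32, 3)
print(test_images.shape)   # (10000, 32, 32, 3)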
Another way is to iterate over the dataset to collect the images and labels (assuming batch_size is not passed).
With as_supervised = False:
train_images, train_labels = [], []
for images_labels in train_ds:
    train_images.append(images_labels['image'])
    train_labels.append(images_labels['label'])
With as_supervised = True:
for images, labels in train_ds:
    train_images.append(images)
    train_labels.append(labels)
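Either way, if you want plain NumPy arrays shaped like the output of datasets.cifar10.load_data(), a minimal sketch is to stack the collected tensors (the np.stack step is my addition, not part of the original answer):
import numpy as np
train_images = np.stack([np.asarray(img) for img in train_images])  # (50000, 32, 32, 3)
train_labels = np.stack([np.asarray(lbl) for lbl in train_labels])  # (50000,)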
Related
I'm trying to train my neural network for 10 epochs, but my attempts are unsuccessful. I don't get why I always end up with something like this:
35/300 [==>...........................] - ETA: 1:09 - loss: 0.0000e+00 - accuracy: 1.0000
36/300 [==>...........................] - ETA: 1:09 - loss: 0.0000e+00 - accuracy: 1.0000
37/300 [==>...........................] - ETA: 1:08 - loss: 0.0000e+00 - accuracy: 1.0000
Here are my batch size, image width/height, and the whole feeding process:
batch_size = 32
img_height = 150
img_width = 150
dataset_url = "http://cnrpark.it/dataset/CNR-EXT-Patches-150x150.zip"
print(dataset_url)
data_dir = tf.keras.utils.get_file(origin=dataset_url,
                                   fname='CNR-EXT-Patches-150x150',
                                   untar=True)
train_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
num_classes = 1
val_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
class_names = train_ds.class_names
print(class_names)
for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break
normalization_layer = tf.keras.layers.Rescaling(1./255)
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
print(np.min(first_image), np.max(first_image))
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1./255),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation='sigmoid'),
    tf.keras.layers.Dense(num_classes)
])
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=['accuracy'])
model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=10)
From np.min and np.max I'm getting the values 0.08627451 and 0.5568628, so the input scaling obviously wouldn't be the problem. What could be wrong in my attempt?
EDIT: my epoch looks like this now:
5/300 [..............................] - ETA: 1:12 - loss: 0.1564 - accuracy: 0.9750
6/300 [..............................] - ETA: 1:15 - loss: 0.1311 - accuracy: 0.8333
7/300 [..............................] - ETA: 1:13 - loss: 0.1124 - accuracy: 0.7143
8/300 [..............................] - ETA: 1:13 - loss: 0.0984 - accuracy: 0.6250
9/300 [..............................] - ETA: 1:12 - loss: 0.0874 - accuracy: 0.5556
And a little later:
51/300 [====>.........................] - ETA: 1:04 - loss: 0.0154 - accuracy: 0.0980
You have set num_classes = 1, although your dataset has two classes:
LABEL is 0 for free, 1 for busy.
So, if you want to use tf.keras.losses.SparseCategoricalCrossentropy, try:
tf.keras.layers.Dense(2)
You could also consider using binary_crossentropy if you only have two classes. You would have to change your loss function and output layer to:
tf.keras.layers.Dense(1, activation="sigmoid")
If you keep your num_classes variable, you need:
num_classes = 2
tf.keras.layers.Dense(num_classes, activation="softmax")
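Putting it together, here is a minimal sketch of the two consistent head/loss pairings (pick one; everything except the output layer and loss stays as in your model, and the exact compile arguments are assumptions):
# Option A: two-unit softmax output with integer labels 0/1:
#   final layer: tf.keras.layers.Dense(2, activation='softmax')
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=['accuracy'])
# Option B: a single sigmoid unit with binary cross-entropy:
#   final layer: tf.keras.layers.Dense(1, activation='sigmoid')
# model.compile(optimizer='adam',
#               loss=tf.keras.losses.BinaryCrossentropy(),
#               metrics=['accuracy'])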
I can also see a normalization_layer while the model already starts with a Rescaling layer, so it looks like you are rescaling twice.
At my job interview yesterday, I was asked to build a neural network using TensorFlow in Python to classify images from the flowers dataset.
But even though it should have worked in theory, for some reason I couldn't increase the accuracy above the 20% range.
Python Version: 3.8.13
TensorFlow Version: 2.4.1
The data preprocessing methods from the interviewer were given as follows:
# create dataset
IMG_SIZE = 160
BATCH_SIZE = 32
AUTOTUNE = tf.data.experimental.AUTOTUNE

def _parse_data(x, y):
    image = tf.io.read_file(x)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.cast(image, dtype=tf.float32)
    image = tf.math.l2_normalize(image)
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    return image, y

def _input_fn(x, y):
    ds = tf.data.Dataset.from_tensor_slices((x, y))
    ds = ds.map(_parse_data)
    ds = ds.shuffle(buffer_size=data_size)
    ds = ds.repeat()
    ds = ds.batch(BATCH_SIZE)
    ds = ds.prefetch(buffer_size=AUTOTUNE)
    return ds

train_ds = _input_fn(x_train, y_train)
validation_ds = _input_fn(x_valid, y_valid)
With both training and validation datasets being
<PrefetchDataset shapes: ((None, 160, 160, 3), (None,)), types:
(tf.float32, tf.int32)>
With the network being as follows:
from tensorflow.keras import datasets, layers, models
model_seq = models.Sequential()
model_seq.add(layers.experimental.preprocessing.RandomFlip("horizontal",input_shape=(IMG_SIZE,IMG_SIZE,3)))
model_seq.add(layers.experimental.preprocessing.RandomRotation(0.2))
model_seq.add(layers.experimental.preprocessing.Rescaling(1./255))
model_seq.add(layers.Conv2D(16, 3, padding='same', activation='relu'))
model_seq.add(layers.MaxPooling2D())
model_seq.add(layers.Conv2D(32, 3, padding='same', activation='relu'))
model_seq.add(layers.MaxPooling2D())
model_seq.add(layers.Conv2D(64, 3, padding='same', activation='relu'))
model_seq.add(layers.MaxPooling2D())
model_seq.add(layers.Dropout(0.2))
model_seq.add(layers.Flatten())
model_seq.add(layers.Dense(128, activation='relu'))
model_seq.add(layers.Dense(len(label_names), activation='softmax'))
model_seq.summary()
The output layer is the only thing that isn't allowed to be changed.
model_seq.add(layers.Dense(len(label_names), activation='softmax'))
(Please note I was for some reason asked to use model_seq.add(), and even though it could be triggering for some of you, please ignore it this once :) )
For compiling the model, I used the following:
model_seq.compile(optimizer="Adam",
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
And for fitting the model:
history = model_seq.fit(train_ds, epochs=20,
                        validation_data=validation_ds,
                        steps_per_epoch=100, validation_steps=100)
The things I've tried:
Using different augmentation methods (or removing the whole section from the network).
Changing the Batch and Image sizes.
Using Dropout layers.
Using early stopping as follows:
callback = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',
    min_delta=0, patience=3, verbose=0,
    mode='auto', baseline=None,
    restore_best_weights=True)

history = model_seq.fit(train_ds, epochs=20,
                        validation_data=validation_ds,
                        steps_per_epoch=100, validation_steps=100,
                        callbacks=[callback])
Yet despite all of the above, I couldn't get any results. Since I couldn't find out what I did wrong exactly, I'm hoping someone here could tell me, so I could learn from this experience.
(Please take into consideration that I wasn't allowed to change the preprocessing functions, with the parameters IMG_SIZE and BATCH_SIZE being the only exception).
TL;DR: If you want to use their preprocessing part and not change anything, go to the First Approach. If you want to augment images, go to the Second Approach and use ImageDataGenerator.
First Approach
As you say, you had to use the given preprocessing functions without changing them. Since I don't have access to your data, I use the cifar10 dataset together with your preprocessing part (only the line that reads from a file is changed).
Because you shouldn't change IMG_SIZE=160 in the preprocessing part, I add the layer tf.keras.layers.Lambda(lambda image: tf.image.resize(image, (32, 32))) to the network, because working with large images causes a crash.
You don't need a very large network: first check with a small network, then add parameters step by step, and only then add layers.
We can get a better result, as shown below. (On cifar10 with your network I get 10% accuracy, like you.)
import tensorflow as tf
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.cifar10.load_data()
# create dataset
IMG_SIZE = 160
BATCH_SIZE = 32
data_size = 32
AUTOTUNE = tf.data.experimental.AUTOTUNE

def _parse_data(x, y):
    # image = tf.io.read_file(x)  <- removed: we don't read from a path
    image = x
    # image = tf.image.decode_jpeg(image, channels=3)  <- removed: no JPEG files to decode
    image = tf.cast(image, dtype=tf.float32)
    image = tf.math.l2_normalize(image)
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    return image, y

def _input_fn(x, y):
    ds = tf.data.Dataset.from_tensor_slices((x, y))
    ds = ds.map(_parse_data)
    ds = ds.shuffle(buffer_size=data_size)
    ds = ds.repeat()
    ds = ds.batch(BATCH_SIZE)
    ds = ds.prefetch(buffer_size=AUTOTUNE)
    return ds

train_ds = _input_fn(X_train, y_train)
validation_ds = _input_fn(X_test, y_test)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(160, 160, 3)),
    tf.keras.layers.Lambda(lambda image: tf.image.resize(image, (32, 32))),
    tf.keras.layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(50, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer="Adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['accuracy'])

history = model.fit(train_ds, epochs=4,
                    validation_data=validation_ds,
                    steps_per_epoch=100, validation_steps=100)
Output:
Epoch 1/4
100/100 [==============================] - 8s 38ms/step - loss: 2.2445 - accuracy: 0.1375 - val_loss: 2.1406 - val_accuracy: 0.2138
Epoch 2/4
100/100 [==============================] - 3s 33ms/step - loss: 2.0552 - accuracy: 0.2688 - val_loss: 1.9764 - val_accuracy: 0.3250
Epoch 3/4
100/100 [==============================] - 4s 38ms/step - loss: 1.9468 - accuracy: 0.3022 - val_loss: 1.9014 - val_accuracy: 0.3200
Epoch 4/4
100/100 [==============================] - 4s 36ms/step - loss: 1.8936 - accuracy: 0.3341 - val_loss: 1.8883 - val_accuracy: 0.3419
Second Approach: Augment images with ImageDataGenerator:
import tensorflow as tf

flowers = tf.keras.utils.get_file(
    'flower_photos',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
    untar=True)

data_gen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255,
    rotation_range=10,     # Degree range for random rotations.
    horizontal_flip=True,  # Randomly flip inputs horizontally.
    vertical_flip=True,    # Randomly flip inputs vertically.
)

imgs_dataset = data_gen.flow_from_directory(flowers, class_mode='categorical',
                                            target_size=(160, 160), batch_size=32,
                                            shuffle=True)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(filters=32, kernel_size=(3, 3), input_shape=(160, 160, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(50, activation='relu'),
    tf.keras.layers.Dense(5, activation='softmax')
])

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(imgs_dataset, epochs=5)
Output:
Found 3670 images belonging to 5 classes.
Epoch 1/5
115/115 [==============================] - 39s 311ms/step - loss: 1.7904 - accuracy: 0.4161
Epoch 2/5
115/115 [==============================] - 27s 236ms/step - loss: 1.0878 - accuracy: 0.5605
Epoch 3/5
115/115 [==============================] - 28s 244ms/step - loss: 1.0252 - accuracy: 0.6005
Epoch 4/5
115/115 [==============================] - 27s 233ms/step - loss: 0.9735 - accuracy: 0.6196
Epoch 5/5
115/115 [==============================] - 29s 248ms/step - loss: 0.9313 - accuracy: 0.6455
I have just started with computer vision, and in my current task I am classifying images into 4 categories.
Total number of image files: 1043.
I am using a pretrained InceptionV3 and fine-tuning it on my dataset.
This is what I have after the epochs:
Epoch 1/5
320/320 [==============================] - 1925s 6s/step - loss: 0.4318 - acc: 0.8526 - val_loss: 1.1202 - val_acc: 0.5557
Epoch 2/5
320/320 [==============================] - 1650s 5s/step - loss: 0.1807 - acc: 0.9446 - val_loss: 1.2694 - val_acc: 0.5436
Epoch 3/5
320/320 [==============================] - 1603s 5s/step - loss: 0.1236 - acc: 0.9572 - val_loss: 1.2597 - val_acc: 0.5546
Epoch 4/5
320/320 [==============================] - 1582s 5s/step - loss: 0.1057 - acc: 0.9671 - val_loss: 1.3845 - val_acc: 0.5457
Epoch 5/5
320/320 [==============================] - 1580s 5s/step - loss: 0.0982 - acc: 0.9700 - val_loss: 1.2771 - val_acc: 0.5572
That is a huge difference. Kindly help me figure out why my model is not able to generalize, as it is fitting quite well on the training data.
My code for reference:
from keras.utils import to_categorical
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D, Dropout
from keras.applications.inception_v3 import InceptionV3, preprocess_input
CLASSES = 4
# setup model
base_model = InceptionV3(weights='imagenet', include_top=False)
from sklearn.preprocessing import OneHotEncoder
x = base_model.output
x = GlobalAveragePooling2D(name='avg_pool')(x)
x = Dropout(0.4)(x)
predictions = Dense(CLASSES, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)
for layer in base_model.layers:
    layer.trainable = False

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
df['Category']= encoder.fit_transform(df['Category'])
from keras.preprocessing.image import ImageDataGenerator
WIDTH = 299
HEIGHT = 299
BATCH_SIZE = 32
train_datagen = ImageDataGenerator(rescale=1./255,preprocessing_function=preprocess_input)
validation_datagen = ImageDataGenerator(rescale=1./255)
df['Category'] =df['Category'].astype(str)
#dfval['Category'] = dfval['Category'].astype(str)
from sklearn.utils import shuffle
df = shuffle(df)
from sklearn.model_selection import train_test_split
dftrain,dftest = train_test_split(df, test_size = 0.2, random_state = 0)
train_generator = train_datagen.flow_from_dataframe(dftrain,target_size=(HEIGHT, WIDTH),batch_size=BATCH_SIZE,class_mode='categorical', x_col='Path', y_col='Category')
validation_generator = validation_datagen.flow_from_dataframe(dftest,target_size=(HEIGHT, WIDTH),batch_size=BATCH_SIZE,class_mode='categorical', x_col='Path', y_col='Category')
EPOCHS = 5
BATCH_SIZE = 32
STEPS_PER_EPOCH = 320
VALIDATION_STEPS = 64
MODEL_FILE = 'filename.model'
history = model.fit_generator(
    train_generator,
    epochs=EPOCHS,
    steps_per_epoch=STEPS_PER_EPOCH,
    validation_data=validation_generator,
    validation_steps=VALIDATION_STEPS)
Any help would be appreciated :)
If you don't use preprocess_input on all your data, you will get terrible results.
Look at these:
train_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input,
    ...)

validation_datagen = ImageDataGenerator()
Now, I notice you are using rescale. Since you imported the correct preprocess_input function from the inception code, I really think you should not be using this rescale. The preprocess_input function is supposed to do all the necessary preprocessing. (Not all models were trained with normalized inputs)
But would rescale be a problem if you're applying it to both datasets?
Well... if trainable=False was applied correctly to the BatchNormalization layers, these layers have stored values for the mean and variance, which will only work well if the data is within the expected range.
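A minimal sketch of the consistent setup, assuming you drop rescale entirely and let preprocess_input handle the scaling for both generators:
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.inception_v3 import preprocess_input

# Apply the same Inception preprocessing to BOTH generators, with no rescale:
train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
validation_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)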
I trained a VGG16 with ImageNet weights to classify images into 4 classes.
Train data: 3578 images belonging to 4 classes.
Validation data: 894 images belonging to 4 classes.
Each time I run the code, I get one of these two accuracy values: val_acc: 1.0000 in the first run, val_acc: 0.3364 in the second run.
Any explanation for this? The difference between the results is much too large.
train_dir = 'C:/Users/ucduq/Desktop/output1/train'
validation_dir = 'C:/Users/ucduq/Desktop/output1/val'
training_data_generator = ImageDataGenerator(
    rescale=1./255,
    #rotation_range=90,
    #horizontal_flip=True,
    #vertical_flip=True,
    #shear_range=0.9,
    #zoom_range=0.9
)
validation_data_generator = ImageDataGenerator(rescale=1./255)
IMAGE_WIDTH=150
IMAGE_HEIGHT=150
BATCH_SIZE=32
input_shape=(150,150,3)
training_generator = training_data_generator.flow_from_directory(
    train_dir,
    target_size=(IMAGE_WIDTH, IMAGE_HEIGHT),
    batch_size=BATCH_SIZE,
    class_mode="categorical")

validation_generator = validation_data_generator.flow_from_directory(
    validation_dir,
    target_size=(IMAGE_WIDTH, IMAGE_HEIGHT),
    batch_size=BATCH_SIZE,
    class_mode="categorical",
    shuffle=False)
from keras.applications import VGG16
vgg_conv = VGG16(weights='imagenet',
                 include_top=False,
                 input_shape=(150, 150, 3))
model = models.Sequential()
model.add(vgg_conv)
### Add new layers
model.add(layers.Flatten())
model.add(layers.Dense(1024, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(4, activation='softmax'))
model.compile(loss="categorical_crossentropy",optimizer='adam',metrics=["accuracy"])
results = model.fit_generator(
    training_generator,
    steps_per_epoch=training_generator.samples / training_generator.batch_size,
    epochs=100,
    callbacks=callbacks,
    validation_data=validation_generator,
    validation_steps=28)
first run:
Epoch 100/100
111/110 [==============================] - 17s 152ms/step - loss: 1.3593 - acc: 0.3365 - val_loss: 1.3599 - val_acc: 0.3364
second run:
Epoch 100/100
111/110 [==============================] - 18s 158ms/step - loss: 1.9879e-06 - acc: 1.0000 - val_loss: 5.2915e-06 - val_acc: 1.0000
I assume that your data has a class that makes up about 33% of the entire set? If that's true, then in the first run the model didn't learn anything at all (acc: 0.3365).
This might be because of incorrect use of data augmentation; if the commented lines are what you used in that run, then they are the culprits.
The settings shear_range=0.9 and zoom_range=0.9 are far too aggressive: either one alone means you discard about 90% of each image, so the model doesn't learn anything.
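If you do want augmentation, here is a minimal sketch with much milder settings (these exact values are an assumption on my part, not from the original post):
training_data_generator = ImageDataGenerator(
    rescale=1./255,
    rotation_range=15,     # small rotations instead of 90 degrees
    horizontal_flip=True,
    shear_range=0.1,       # mild shear instead of 0.9
    zoom_range=0.1)        # mild zoom instead of 0.9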
I'm trying to do image classification on DICOM images with balanced classes using the pre-trained InceptionV3 model.
def convertDCM(PathDCM):
    data = []
    for dirName, subdir, files in os.walk(PathDCM):
        for filename in sorted(files):
            ds = pydicom.dcmread(PathDCM + '/' + filename)
            im = fromarray(ds.pixel_array)
            im = keras.preprocessing.image.img_to_array(im)
            im = cv2.resize(im, (299, 299))
            data.append(im)
    return data
PathDCM = '/home/Desktop/FULL_BALANCED_COLOURED/'
data = convertDCM(PathDCM)
#scale the raw pixel intensities to the range [0,1]
data = np.array(data, dtype="float")/255.0
labels = np.array(labels,dtype ="int")
#splitting data into training and testing
#test_size is percentage to split into test/train data
(trainX, testX, trainY, testY) = train_test_split(
    data, labels,
    test_size=0.2,
    random_state=42)
img_width, img_height = 299, 299 #InceptionV3 size
train_samples = 300
validation_samples = 50
epochs = 25
batch_size = 15
base_model = keras.applications.InceptionV3(
    weights='imagenet',
    include_top=False,
    input_shape=(img_width, img_height, 3))
model_top = keras.models.Sequential()
model_top.add(keras.layers.GlobalAveragePooling2D(input_shape=base_model.output_shape[1:], data_format=None))
model_top.add(keras.layers.Dense(300,activation='relu'))
model_top.add(keras.layers.Dropout(0.5))
model_top.add(keras.layers.Dense(1, activation = 'sigmoid'))
model = keras.models.Model(inputs = base_model.input, outputs = model_top(base_model.output))
#Compiling model
model.compile(optimizer=keras.optimizers.Adam(lr=0.0001),
              loss='binary_crossentropy',
              metrics=['accuracy'])
#Image Processing and Augmentation
train_datagen = keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255,
    zoom_range=0.1,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')

val_datagen = keras.preprocessing.image.ImageDataGenerator()

train_generator = train_datagen.flow(
    trainX,
    trainY,
    batch_size=batch_size,
    shuffle=True)

validation_generator = train_datagen.flow(
    testX,
    testY,
    batch_size=batch_size,
    shuffle=True)
When I train the model, I always get a constant validation accuracy of 0.3889 with the validation loss fluctuating.
#Training the model
history = model.fit_generator(
    train_generator,
    steps_per_epoch=train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=validation_samples // batch_size)
Epoch 1/25
20/20 [==============================] - 195s 49s/step - loss: 0.7677 - acc: 0.4020 - val_loss: 0.7784 - val_acc: 0.3889
Epoch 2/25
20/20 [==============================] - 187s 47s/step - loss: 0.7016 - acc: 0.4848 - val_loss: 0.7531 - val_acc: 0.3889
Epoch 3/25
20/20 [==============================] - 191s 48s/step - loss: 0.6566 - acc: 0.6304 - val_loss: 0.7492 - val_acc: 0.3889
Epoch 4/25
20/20 [==============================] - 175s 44s/step - loss: 0.6533 - acc: 0.5529 - val_loss: 0.7575 - val_acc: 0.3889
predictions= model.predict(testX)
print(predictions)
Calling predict also returns only a single value per image:
[[0.457804 ]
[0.45051473]
[0.48343503]
[0.49180537]...
Why is it that the model only predicts one of the two classes? Does this have to do with the constant val accuracy or possibly overfitting?
If you have two classes, every image is in one or the other, so the probability for one class is enough to recover everything, because the probabilities for each image are supposed to sum to 1. So if you have the probability p for one class, the probability for the other one is 1 - p.
If you want the possibility of classifying images that belong to neither of those two classes, then you should create a third one.
Also, this line:
model_top.add(keras.layers.Dense(1, activation = 'sigmoid'))
means that the output is a vector of shape (nb_samples, 1), which has the same shape as your training labels.
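To turn those sigmoid outputs into class labels, here is a minimal sketch (the 0.5 threshold is my assumption):
import numpy as np

predictions = model.predict(testX)                   # shape (n_samples, 1): p for class 1
class_1_prob = predictions.ravel()
class_0_prob = 1.0 - class_1_prob                    # the complement gives class 0
predicted_labels = (class_1_prob > 0.5).astype(int)  # threshold at 0.5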