image input to neural networks - python

I am building a figure extractor for scanned documents, using 1100x850 images. We divide each image into a 44x34 grid, so the last layer is a fully connected layer with 1496 units.
The label is a 44x34 binary array which is 1 for figure regions and 0 for non-figure regions. That is, if a figure falls within the box with corners (x,y)=(0,0) and (x,y)=(50,50), then the binary array has 1s at positions (0,0), (0,1), (1,0) and (1,1), and 0s everywhere else. I have built a neural network model with the following structure:
conv(5,2,48)
maxpool(3,2)
conv(5,2,96)
maxpool(3,2)
conv(5,2,96)
maxpool(3,2)
FC-1496
The notation conv(k,d, n) denotes a convolutional layer with n filters, each of size k × k, applied with a shift of d pixels; maxpool(k, d) denotes a downsampling operation over k×k windows, applied with a shift of d pixels. FC-1496 refers to the final fully connected
layer which connects the hidden units from the previous layers to the 1496 output units (we have 1496 units for a 44x34 grid).
So my question is: how do I feed the input (images and label arrays) to this model using Keras and TensorFlow?
Here is the model code:
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.layers.core import Activation
from keras.layers.core import Dense
from keras.models import Sequential
from keras.layers import Flatten
xtrain = ...  # 10 grayscale images of 1100x850, i.e. an array of shape (10, 1100, 850, 1)
ytrain = ...  # one binary label array of size 1496 per image, i.e. an array of shape (10, 1496)
# initialize the model
model = Sequential()
model.add(Conv2D(48, 5, 2, input_shape=(1100, 850, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=2))
model.add(Conv2D(96, 5, 2))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=2))
model.add(Conv2D(96, 5, 2))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=2))
model.add(Flatten())
model.add(Dense(1496, activation='sigmoid'))
model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['accuracy'])

Here is a working example based on your data (as I understand it from your description); a sketch that connects it to your real labels follows after the example.
I am using the label as a vector of 1s and 0s, e.g. [1, 0, 1, 1, ...], where 1 marks a figure region and 0 a non-figure region, for a total of 1496 regions.
from __future__ import print_function
import numpy as np
np.random.seed(1337) # for reproducibility
from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten
batch_size = 128
nb_epoch = 10
nb_regions = 1496
# input image dimensions
img_rows, img_cols = 850, 1100
# create random test and train sets
X_train = np.random.randint(256, size=(10, img_rows, img_cols))
Y_train = np.random.randint(2, size=(10, nb_regions))
X_test = np.random.randint(256, size=(10, img_rows, img_cols))
Y_test = np.random.randint(2, size=(10, nb_regions))
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
model = Sequential([
    Dense(32, input_shape=(1, img_rows, img_cols)),
    Activation('relu'),
    Flatten(),
    Dense(nb_regions),
    # each of the 1496 regions is an independent 0/1 target,
    # so use a sigmoid output rather than a softmax over regions
    Activation('sigmoid'),
])
model.compile(loss='binary_crossentropy',
              optimizer='adadelta',
              metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=batch_size, epochs=nb_epoch,
          verbose=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
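To connect this to your real scans instead of random arrays, here is a minimal sketch (untested on your data) of how you could build the 1496-length label vector from a figure bounding box and then fit your original convolutional model. I am assuming x runs along the 850-pixel width and y along the 1100-pixel height, and I treat the second corner as exclusive so that a box from (0,0) to (50,50) marks exactly the four cells from your example; load_page_image, page_paths and boxes are placeholders for your own loading code.
import numpy as np
GRID_ROWS, GRID_COLS = 44, 34   # 44 * 34 = 1496 output units
CELL = 25                       # 1100 / 44 = 850 / 34 = 25 pixels per grid cell
def bbox_to_label(x0, y0, x1, y1):
    # flat binary vector of length 1496 with 1s for every grid cell the box overlaps
    label = np.zeros((GRID_ROWS, GRID_COLS), dtype=np.float32)
    r0, r1 = y0 // CELL, min((y1 - 1) // CELL, GRID_ROWS - 1)
    c0, c1 = x0 // CELL, min((x1 - 1) // CELL, GRID_COLS - 1)
    label[r0:r1 + 1, c0:c1 + 1] = 1.0
    return label.reshape(-1)
# placeholders: replace with however you read your scans and figure boxes
x_train = np.stack([load_page_image(p) for p in page_paths])  # shape (num_pages, 1100, 850, 1), scaled to [0, 1]
y_train = np.stack([bbox_to_label(*b) for b in boxes])        # shape (num_pages, 1496)
model.fit(x_train, y_train, batch_size=2, epochs=10, validation_split=0.1)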

Related

Keras-vis throwing InvalidArgumentError

I am running a simple CNN model in Keras. Code:
from __future__ import print_function
import numpy as np
import keras
from keras.datasets import mnist
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Flatten, Activation, Input
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
import tensorflow as tf
batch_size = 128
num_classes = 10
epochs = 1
# input image dimensions
img_rows, img_cols = 28, 28
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = tf.keras.utils.to_categorical(y_train, num_classes)
y_test = tf.keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax', name='preds'))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=tf.keras.optimizers.Adam(),
              metrics=['accuracy'])
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))
I want to visualize the dense layers. For this, I am using keras-vis
While running the following code:
from vis.visualization import visualize_activation
from vis.utils import utils
from keras import activations
from matplotlib import pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (18, 6)
# Utility to search for layer index by name.
# Alternatively we can specify this as -1 since it corresponds to the last layer.
layer_idx = utils.find_layer_idx(model, 'preds')
# Swap softmax with linear
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)
# This is the output node we want to maximize.
filter_idx = 0
img = visualize_activation(model, layer_idx, filter_indices=filter_idx)
plt.imshow(img[..., 0])
I am getting the following error:
InvalidArgumentError: conv2d_2_input_2:0 is both fed and fetched.
Solutions that I have tried:
Installing Keras-vis from source for the latest build
Applying the changes from the PR mentioned in the issues
Versions:
Keras: 2.7
Keras-vis: 0.5

Hyperparameter optimization of an already trained model

I have a corpus that I divided into 3 parts:
Training set 80%
Dev set 10%
Testing set 10%
I have the CNN model below, trained on the training set and evaluated against the dev set:
model.add(SpatialDropout1D(0.1))
model.add(Conv1D(128, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.25))
model.add(Conv1D(64, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.25))
model.add(Flatten())
# Fully connected (Dense layer)
model.add(Dense(64, activation='relu'))
# Output layer with sigmoid activation function
model.add(Dense(8, activation='sigmoid'))
I saved this model using model.save('model.h5').
Now I'd like to do hyperparameter optimization on this saved, trained model, using my dev set as the training set and my test set for evaluation.
The values I want to search over are listed below, with the current value in parentheses:
Filters: 32/64/128/192/256/512 (currently 128/64)
Kernel size: 2/3/4/5/7 (currently 3)
Dropout rate: 0.1/0.2/0.3/0.4/0.5 (currently 0.1/0.25)
Dense layer size: 16/32/64/128/256 (currently 32)
Batch size: 32/50/64/100 (currently 32)
Learning rate: 0.1/0.01/0.001
Any pointers on how to achieve this with a library like Talos, loading the existing model?
Following your last comment, and from the Keras documentation:
(Look for "grid", the scikit-learn grid search for hyperparameter fine-tuning. The following code should run fully as is. You can use the same method with your saved/loaded model, using whichever datasets you wish; a sketch adapted to your Conv1D model follows after this example.)
from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.wrappers.scikit_learn import KerasClassifier
from keras import backend as K
from sklearn.model_selection import GridSearchCV
num_classes = 10
# input image dimensions
img_rows, img_cols = 28, 28
# load training data and do basic data normalization
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
def make_model(dense_layer_sizes, filters, kernel_size, pool_size):
    '''Creates model comprised of 2 convolutional layers followed by dense layers
    dense_layer_sizes: List of layer sizes.
        This list has one number for each layer
    filters: Number of convolutional filters in each convolutional layer
    kernel_size: Convolutional kernel size
    pool_size: Size of pooling area for max pooling
    '''
    model = Sequential()
    model.add(Conv2D(filters, kernel_size,
                     padding='valid',
                     input_shape=input_shape))
    model.add(Activation('relu'))
    model.add(Conv2D(filters, kernel_size))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=pool_size))
    model.add(Dropout(0.25))
    model.add(Flatten())
    for layer_size in dense_layer_sizes:
        model.add(Dense(layer_size))
        model.add(Activation('relu'))
    model.add(Dropout(0.5))
    model.add(Dense(num_classes))
    model.add(Activation('softmax'))
    model.compile(loss='categorical_crossentropy',
                  optimizer='adadelta',
                  metrics=['accuracy'])
    return model
dense_size_candidates = [[32], [64], [32, 32], [64, 64]]
my_classifier = KerasClassifier(make_model, batch_size=32)
validator = GridSearchCV(my_classifier,
                         param_grid={'dense_layer_sizes': dense_size_candidates,
                                     # epochs is avail for tuning even when not
                                     # an argument to model building function
                                     'epochs': [3, 6],
                                     'filters': [8],
                                     'kernel_size': [3],
                                     'pool_size': [2]},
                         scoring='neg_log_loss',
                         n_jobs=1)
validator.fit(x_train, y_train)
print('The parameters of the best model are: ')
print(validator.best_params_)
# validator.best_estimator_ returns sklearn-wrapped version of best model.
# validator.best_estimator_.model returns the (unwrapped) keras model
best_model = validator.best_estimator_.model
metric_names = best_model.metrics_names
metric_values = best_model.evaluate(x_test, y_test)
for metric, value in zip(metric_names, metric_values):
    print(metric, ': ', value)
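If it helps, here is a rough sketch (untested) of how the same KerasClassifier + GridSearchCV pattern could be adapted to your own Conv1D architecture and the value ranges you listed, using your dev set for fitting and your held-out test set for evaluation. The input_shape, the binary_crossentropy loss and the x_dev/y_dev/x_test/y_test names are assumptions you would replace with your real shapes, loss and arrays; note also that the full grid below has several thousand combinations, so sklearn's RandomizedSearchCV may be a more practical drop-in.
from keras.models import Sequential
from keras.layers import SpatialDropout1D, Conv1D, MaxPooling1D, Dropout, Flatten, Dense
from keras.optimizers import Adam
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
input_shape = (100, 300)   # placeholder: (sequence length, feature size) of your data
def make_model(filters=128, kernel_size=3, dropout_rate=0.25,
               dense_size=64, learning_rate=0.001):
    model = Sequential()
    model.add(SpatialDropout1D(0.1, input_shape=input_shape))
    model.add(Conv1D(filters, kernel_size=kernel_size, padding='same', activation='relu'))
    model.add(MaxPooling1D(pool_size=2))
    model.add(Dropout(dropout_rate))
    # second conv uses half the filters, mirroring your 128/64 pair
    model.add(Conv1D(filters // 2, kernel_size=kernel_size, padding='same', activation='relu'))
    model.add(MaxPooling1D(pool_size=2))
    model.add(Dropout(dropout_rate))
    model.add(Flatten())
    model.add(Dense(dense_size, activation='relu'))
    model.add(Dense(8, activation='sigmoid'))
    model.compile(loss='binary_crossentropy',     # assumption: use whatever loss you trained with
                  optimizer=Adam(lr=learning_rate),
                  metrics=['accuracy'])
    return model
param_grid = {
    'filters': [32, 64, 128, 192, 256, 512],
    'kernel_size': [2, 3, 4, 5, 7],
    'dropout_rate': [0.1, 0.2, 0.3, 0.4, 0.5],
    'dense_size': [16, 32, 64, 128, 256],
    'batch_size': [32, 50, 64, 100],      # batch_size and epochs are passed to fit, not to make_model
    'learning_rate': [0.1, 0.01, 0.001],
    'epochs': [10],
}
search = GridSearchCV(KerasClassifier(make_model), param_grid=param_grid, n_jobs=1)
search.fit(x_dev, y_dev)                                        # your dev set
print(search.best_params_)
print(search.best_estimator_.model.evaluate(x_test, y_test))    # your test set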

Why are the predictions going wrong with MNIST CNN?

I trained the CNN on MNIST dataset with training and validation accuracy of ~0.99.
I followed the exact steps from the example given in the Keras documentation for implementing a CNN on the MNIST dataset:
from __future__ import print_function  # __future__ imports must come first
import cv2
import numpy as np
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
batch_size = 128
num_classes = 10
epochs = 12
# input image dimensions
img_rows, img_cols = 28, 28
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
When I tested the following image:
using the following test code:
img = cv2.imread("m9.png", 0)
img = cv2.resize(img, (28,28))
img = img / 255.
prob = model.predict_proba(img.reshape((1,28, 28, 1)))
print(prob)
model.predict_classes(img.reshape((1,28, 28, 1)))
The class it prints out is array([1]), denoting the digit 1. I could not understand the reason for this. Did I predict in an incorrect way?
Exactly the same class, array([1]), was predicted for the digit 8 shown below:
It looks like I have made an error during prediction. I tried to understand what could be happening but could not.
There is no error; it's just that your images don't look at all like the ones in the MNIST dataset. This dataset is not meant to train a general digit recognition algorithm; it will only work with similar images.
In your case the digits end up very small in the 28x28 image, so the predictions are essentially random.
You are resizing the input image to 28x28. Instead, you should first crop the image around the digit to make it look like the data in MNIST. Otherwise the digit occupies only a very small portion of the resized image and the results will be arbitrary.
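To illustrate the cropping idea, here is a rough sketch (assuming a dark digit on a light background, as in a photo or scan) that inverts the image so the digit is white on black like MNIST, crops tightly to the digit, resizes it to roughly the 20x20 box MNIST digits occupy, and pads it back to 28x28:
import cv2
import numpy as np
def mnistify(path):
    img = cv2.imread(path, 0)                  # read as grayscale
    img = 255 - img                            # invert: MNIST digits are white on black
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(mask)                  # pixels belonging to the digit
    img = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]  # crop around the digit
    img = cv2.resize(img, (20, 20))            # MNIST digits fill roughly 20x20 of the 28x28 frame
    img = np.pad(img, 4, mode='constant')      # pad with black back to 28x28
    return img.astype('float32') / 255.
img = mnistify("m9.png")
print(model.predict_proba(img.reshape((1, 28, 28, 1))))
print(model.predict_classes(img.reshape((1, 28, 28, 1))))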

Keras : Prediction using Trained Model

I am a total beginner in Keras. I implemented the following code, which I found on the web, and successfully trained it to 97% accuracy. I am having a small problem during prediction.
The following is the training code:
from __future__ import print_function
import keras
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.optimizers import SGD, Adam
from keras.utils import np_utils
import numpy as np
#seed = 7
#np.random.seed(seed)
batch_size = 50
nb_classes = 10
nb_epoch = 150
data_augmentation = False
# input image dimensions
img_rows, img_cols = 32, 32
# the CIFAR10 images are RGB
img_channels = 3
# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='same',
                        input_shape=X_train.shape[1:]))
model.add(Activation('relu'))
model.add(Convolution2D(32, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Convolution2D(64, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(Convolution2D(64, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
# let's train the model using SGD + momentum (how original).
#sgd = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
sgd= Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
if not data_augmentation:
    print('Not using data augmentation.')
    model.fit(X_train, Y_train,
              batch_size=batch_size,
              nb_epoch=nb_epoch,
              validation_data=(X_test, Y_test),
              shuffle=True)
else:
    print('Using real-time data augmentation.')
    # this will do preprocessing and realtime data augmentation
    datagen = ImageDataGenerator(
        featurewise_center=False,  # set input mean to 0 over the dataset
        samplewise_center=False,  # set each sample mean to 0
        featurewise_std_normalization=False,  # divide inputs by std of the dataset
        samplewise_std_normalization=False,  # divide each input by its std
        zca_whitening=False,  # apply ZCA whitening
        rotation_range=0,  # randomly rotate images in the range (degrees, 0 to 180)
        width_shift_range=0.1,  # randomly shift images horizontally (fraction of total width)
        height_shift_range=0.1,  # randomly shift images vertically (fraction of total height)
        horizontal_flip=True,  # randomly flip images
        vertical_flip=False)  # randomly flip images
    # compute quantities required for featurewise normalization
    # (std, mean, and principal components if ZCA whitening is applied)
    datagen.fit(X_train)
    # fit the model on the batches generated by datagen.flow()
    model.fit_generator(datagen.flow(X_train, Y_train,
                                     batch_size=batch_size),
                        samples_per_epoch=X_train.shape[0],
                        nb_epoch=nb_epoch,
                        validation_data=(X_test, Y_test))
model.save('model3.h5')
The model was saved successfully, and I implemented the following prediction code.
Prediction code:
import keras
import tensorflow as tf
import h5py
from keras.models import load_model
import cv2
import numpy as np
model = load_model('model3.h5')
print('Model Loaded')
dim = (32,32)
img = cv2.imread('download.jpg')
img = cv2.resize(img,dim)
Array = [np.array(img)]
Prediction = model.predict(Array)
print(Prediction)
Error generated:
Using TensorFlow backend.
Model Loaded
Traceback (most recent call last):
File "E:\Prediction\Prediction.py", line 16, in <module>
Prediction = model.predict(Array)
File "C:\Users\Dilip\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\training.py", line 1149, in predict
x, _, _ = self._standardize_user_data(x)
File "C:\Users\Dilip\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\training.py", line 751, in _standardize_user_data
exception_prefix='input')
File "C:\Users\Dilip\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\training_utils.py", line 128, in standardize_input_data
'with shape ' + str(data_shape))
ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (32, 32, 3)
>>>
I know the problem is that the input image is not in the proper shape. I tried to convert it into (1, 32, 32, 3) but I failed!
Help here, please.
It appears you are missing the batch dimension, the training-time scaling and the class names in your prediction code. Try this instead:
import cv2
import numpy as np
import tensorflow as tf
# write the 10 class names here (nb_classes = 10); these are just placeholders
CATEGORIES = ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10']
def prepare(filepath):
    IMG_SIZE = 32
    img_array = cv2.imread(filepath, cv2.IMREAD_COLOR)
    new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
    new_array = new_array.astype('float32') / 255  # scale the pixels the same way as during training
    return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 3)  # add the batch dimension; img_channels = 3
model = tf.keras.models.load_model('model3.h5')
prediction = model.predict(prepare('download.jpg'))
print(CATEGORIES[int(np.argmax(prediction[0]))])  # index of the highest class probability

Keras BatchNorm Layer giving weird results during training and inference

I am having an issue with Keras where the evaluate function gives a different training loss (way higher) and accuracy (way lower) compared to the values I get during training. I am aware that this question has already been asked in several places (here, here), but I think my issue is different and is still not answered in those forums.
Explanation of the Task
It is supposed to be a very simple task. All I am doing is overfitting my own dataset of 256 images (29x29x3) with 256 output classes (one for each image).
Dataset
Case 1
x_train = image i has all of its pixel values set to i, where i goes from 0 to 255.
y_train = i
Case 2
x_train = image i has the centre 5x5 patch of its pixel values set to i, where i goes from 0 to 255. All the other pixel values are the same for all the images.
y_train = i
This gives me 256 images in total as training data in each case. (It will be clearer if you just look at the code.)
Here is my code to reproduce the issue:
from __future__ import print_function
import os
import keras
from keras.datasets import mnist
from keras.models import Sequential, load_model
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D, Activation
from keras.layers.normalization import BatchNormalization
from keras.callbacks import ModelCheckpoint, LearningRateScheduler, Callback
from keras import backend as K
from keras.regularizers import l2
import matplotlib.pyplot as plt
import PIL.Image
import numpy as np
from IPython.display import clear_output
# The GPU id to use, usually either "0" or "1"
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="1"
# To suppress the warnings
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
## Hyperparamters
batch_size = 256
num_classes = 256
l2_reg=0.0
epochs = 500
## input image dimensions
img_rows, img_cols = 29, 29
## Train Image (I took a random image from ImageNet)
train_img_name = 'n01871265_279.JPEG'
ret = PIL.Image.open(train_img_name) #Opening the image
ret = ret.resize((img_rows, img_cols)) #Resizing the image
img = np.asarray(ret, dtype=np.uint8).astype(np.float32) #Converting it to numpy array
print(img.shape) # (29, 29, 3)
## Creating the training data
#############################
x_train = np.zeros((256, img_rows, img_cols, 3))
y_train = np.zeros((256,), dtype=int)
for i in range(len(y_train)):
    temp_img = np.copy(img)
    ## Case1 of dataset
    # temp_img[:, :, :] = i # changing all the pixel values
    ## Case2 of dataset
    temp_img[12:16, 12:16, :] = i # changing the centre block of 5*5 pixels
    x_train[i, :, :, :] = temp_img
    y_train[i] = i
##############################
## Common stuff in Keras
if K.image_data_format() == 'channels_first':
    print('Channels First')
    x_train = x_train.reshape(x_train.shape[0], 3, img_rows, img_cols)
    input_shape = (3, img_rows, img_cols)
else:
    print('Channels Last')
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 3)
    input_shape = (img_rows, img_cols, 3)
## Normalizing the pixel values
x_train = x_train.astype('float32')
x_train /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
## convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
## Model definition
def model_toy(mom):
    model = Sequential()
    model.add(Conv2D(filters=64, kernel_size=(7, 7), strides=(1, 1), input_shape=input_shape, kernel_regularizer=l2(l2_reg)))
    model.add(Activation('relu'))
    model.add(BatchNormalization(momentum=mom, epsilon=0.00001))
    # Default parameters kept same as PyTorch
    # Meaning of PyTorch momentum is different from Keras momentum.
    # PyTorch mom = 0.1 is same as Keras mom = 0.9
    model.add(Conv2D(filters=128, kernel_size=(7, 7), strides=(1, 1), kernel_regularizer=l2(l2_reg)))
    model.add(Activation('relu'))
    model.add(BatchNormalization(momentum=mom, epsilon=0.00001))
    model.add(Conv2D(filters=256, kernel_size=(5, 5), strides=(1, 1), kernel_regularizer=l2(l2_reg)))
    model.add(Activation('relu'))
    model.add(BatchNormalization(momentum=mom, epsilon=0.00001))
    model.add(Conv2D(filters=512, kernel_size=(5, 5), strides=(1, 1), kernel_regularizer=l2(l2_reg)))
    model.add(Activation('relu'))
    model.add(BatchNormalization(momentum=mom, epsilon=0.00001))
    model.add(Conv2D(filters=1024, kernel_size=(5, 5), strides=(1, 1), kernel_regularizer=l2(l2_reg)))
    model.add(Activation('relu'))
    model.add(BatchNormalization(momentum=mom, epsilon=0.00001))
    model.add(Conv2D(filters=2048, kernel_size=(3, 3), strides=(1, 1), kernel_regularizer=l2(l2_reg)))
    model.add(Activation('relu'))
    model.add(BatchNormalization(momentum=mom, epsilon=0.00001))
    model.add(Conv2D(filters=4096, kernel_size=(3, 3), strides=(1, 1), kernel_regularizer=l2(l2_reg)))
    model.add(Activation('relu'))
    model.add(BatchNormalization(momentum=mom, epsilon=0.00001))
    # Passing it to a dense layer
    model.add(Flatten())
    model.add(Dense(1024, kernel_regularizer=l2(l2_reg)))
    model.add(Activation('relu'))
    model.add(BatchNormalization(momentum=mom, epsilon=0.00001))
    # Output Layer
    model.add(Dense(num_classes, kernel_regularizer=l2(l2_reg)))
    model.add(Activation('softmax'))
    return model
mom = 0.9 #0
model = model_toy(mom)
model.summary()
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adam(lr=0.001),
              # optimizer=keras.optimizers.SGD(lr=0.01, momentum=0.9, decay=0.0, nesterov=True),
              metrics=['accuracy'])
history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    shuffle=True,
                    )
print('Training results')
print('-------------------------------------------')
score = model.evaluate(x_train, y_train, verbose=1)
print('Training loss:', score[0])
print('Training accuracy:', score[1])
print('-------------------------------------------')
A small note: I was able to do this task successfully in PyTorch; it is just that my actual task requires a Keras model. That is why I have changed the default values of the BatchNorm layer (the root cause of the issue) to match the ones I used to train the PyTorch model.
Here is the image that I used in my code.
Here are the results of training.
Case1 of the dataset
Case2 of the dataset
If you look at these two files, you will notice the discrepancy in the training loss during training vs. at inference.
(I have set my batch size equal to the size of my training data so as to avoid some of the reasons BatchNorm generally creates problems, as mentioned here.)
Next, I looked at the source code of the Keras to see if there is any way I can make the BatchNorm layer use the batch statistics instead of the running mean and variance.
Here is the update formula that Keras (backend - TF) uses to update the running mean and variance.
#running_stat -= (1 - momentum) * (running_stat - batch_stat)
So if I set the momentum value to 0, the value assigned to running_stat will always be equal to batch_stat during the training phase. Thus, the value it uses in inference mode will also be the same as (or close to) the batch/dataset statistics.
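To double-check this reasoning, here is a tiny numerical illustration of that update rule (plain NumPy, nothing Keras-specific): with momentum 0 the running statistic always jumps straight to the latest batch statistic, while with momentum 0.9 it lags far behind after only a few updates.
import numpy as np
def run_updates(momentum, batch_stats, running_stat=0.0):
    for batch_stat in batch_stats:
        running_stat -= (1 - momentum) * (running_stat - batch_stat)  # the Keras update rule
    return running_stat
batch_means = np.array([10.0, 12.0, 11.0, 13.0])
print(run_updates(0.9, batch_means))  # ~3.99, far from the recent batch statistics
print(run_updates(0.0, batch_means))  # 13.0, exactly the last batch statistic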
Here are the results for this little experiment with the same issue still occurring.
Case1 of the dataset
Case2 of the dataset
Programming environment: Python 3.5.2, tensorflow 1.10.0, keras 2.2.4.
I tried the same thing with tensorflow 1.12.0 and keras 2.2.2 as well, but it still did not solve the issue.
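For completeness, here is a rough diagnostic sketch I would run right after the training script above to compare the first BatchNorm layer's moving statistics against the statistics of the same activations over the whole training set (assuming channels_last data); if BatchNorm is really the culprit, the two should differ noticeably. This is only a check, not a fix.
import numpy as np
from keras.models import Model
from keras.layers.normalization import BatchNormalization
# first BatchNorm layer and the layer feeding into it
bn_idx = next(i for i, l in enumerate(model.layers) if isinstance(l, BatchNormalization))
gamma, beta, moving_mean, moving_var = model.layers[bn_idx].get_weights()
# per-channel statistics of the same activations, computed over the full training set
feeder = Model(inputs=model.input, outputs=model.layers[bn_idx - 1].output)
acts = feeder.predict(x_train, batch_size=batch_size)
data_mean = acts.mean(axis=(0, 1, 2))
data_var = acts.var(axis=(0, 1, 2))
print('max |moving_mean - data_mean|:', np.abs(moving_mean - data_mean).max())
print('max |moving_var - data_var|:', np.abs(moving_var - data_var).max())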
