Keras: Prediction using a Trained Model - Python

I am a total beginner in Keras. I implemented the following code, which I found on the web, and successfully trained it to about 97% accuracy, but I am running into a problem during prediction.
Here is the training code:
from __future__ import print_function
import keras
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.optimizers import SGD, Adam
from keras.utils import np_utils
import numpy as np
#seed = 7
#np.random.seed(seed)
batch_size = 50
nb_classes = 10
nb_epoch = 150
data_augmentation = False
# input image dimensions
img_rows, img_cols = 32, 32
# the CIFAR10 images are RGB
img_channels = 3
# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='same',
                        input_shape=X_train.shape[1:]))
model.add(Activation('relu'))
model.add(Convolution2D(32, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Convolution2D(64, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(Convolution2D(64, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
# let's train the model using SGD + momentum (how original).
#sgd = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
sgd= Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
if not data_augmentation:
    print('Not using data augmentation.')
    model.fit(X_train, Y_train,
              batch_size=batch_size,
              nb_epoch=nb_epoch,
              validation_data=(X_test, Y_test),
              shuffle=True)
else:
    print('Using real-time data augmentation.')
    # this will do preprocessing and realtime data augmentation
    datagen = ImageDataGenerator(
        featurewise_center=False,             # set input mean to 0 over the dataset
        samplewise_center=False,              # set each sample mean to 0
        featurewise_std_normalization=False,  # divide inputs by std of the dataset
        samplewise_std_normalization=False,   # divide each input by its std
        zca_whitening=False,                  # apply ZCA whitening
        rotation_range=0,                     # randomly rotate images in the range (degrees, 0 to 180)
        width_shift_range=0.1,                # randomly shift images horizontally (fraction of total width)
        height_shift_range=0.1,               # randomly shift images vertically (fraction of total height)
        horizontal_flip=True,                 # randomly flip images horizontally
        vertical_flip=False)                  # do not flip images vertically
    # compute quantities required for featurewise normalization
    # (std, mean, and principal components if ZCA whitening is applied)
    datagen.fit(X_train)
    # fit the model on the batches generated by datagen.flow()
    model.fit_generator(datagen.flow(X_train, Y_train,
                                     batch_size=batch_size),
                        samples_per_epoch=X_train.shape[0],
                        nb_epoch=nb_epoch,
                        validation_data=(X_test, Y_test))
model.save('model3.h5')
The model was saved successfully, and I then wrote the following prediction code.
Code for prediction:
import keras
import tensorflow as tf
import h5py
from keras.models import load_model
import cv2
import numpy as np
model = load_model('model3.h5')
print('Model Loaded')
dim = (32,32)
img = cv2.imread('download.jpg')
img = cv2.resize(img,dim)
Array = [np.array(img)]
Prediction = model.predict(Array)
print(Prediction)
Error generated:
Using TensorFlow backend.
Model Loaded
Traceback (most recent call last):
  File "E:\Prediction\Prediction.py", line 16, in <module>
    Prediction = model.predict(Array)
  File "C:\Users\Dilip\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\training.py", line 1149, in predict
    x, _, _ = self._standardize_user_data(x)
  File "C:\Users\Dilip\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\training.py", line 751, in _standardize_user_data
    exception_prefix='input')
  File "C:\Users\Dilip\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\training_utils.py", line 128, in standardize_input_data
    'with shape ' + str(data_shape))
ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (32, 32, 3)
I understand that the problem is that the input image is not in the proper shape. I tried to convert it to (1, 32, 32, 3), but I failed.
Please help.

It appears you are missing the class labels in your prediction code. Try this instead:
import cv2
import numpy as np
import tensorflow as tf

# write the 10 class names here (nb_classes = 10)
CATEGORIES = ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10']

def prepare(filepath):
    IMG_SIZE = 32
    img_array = cv2.imread(filepath, cv2.IMREAD_COLOR)
    new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
    new_array = new_array / 255.0  # apply the same scaling used during training
    return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 3)  # img_channels = 3

model = tf.keras.models.load_model('model3.h5')
prediction = model.predict(prepare('download.jpg'))
print(CATEGORIES[int(np.argmax(prediction[0]))])  # pick the class with the highest probability
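A slightly more minimal variant of the same idea keeps the question's cv2-based loading and only adds the missing batch dimension and the training-time scaling. This is just a sketch; the BGR-to-RGB conversion is an assumption, needed because OpenCV loads BGR while the CIFAR-10 training images are RGB:
import cv2
import numpy as np
from keras.models import load_model

model = load_model('model3.h5')
img = cv2.imread('download.jpg')            # OpenCV loads images as BGR
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # the model was trained on RGB data
img = cv2.resize(img, (32, 32))             # match the training resolution
img = img.astype('float32') / 255.0         # same scaling as in training
batch = np.expand_dims(img, axis=0)         # shape becomes (1, 32, 32, 3)
print(model.predict(batch))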

Related

How to give file or image to model.predict as a parameter in a Keras model?

I watched a tutorial about image recognition in Python and used its code to train a network. It compiles and trains fine, but how do I use it for prediction on new images? Maybe something like model.predict(y)?
Here is the code:
import numpy
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Dense, Flatten, Activation
from keras.layers import Dropout
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.utils import np_utils
from keras.optimizers import SGD
numpy.random.seed(42)
#Loading data
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
batch_size = 32
nb_classes = 10
#Number of epochs
epochNumber = 25
#Image size
img_rows, img_cols = 32, 32
#RGB
img_channels = 3
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
#To catogories
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
#Creating a model
model = Sequential()
#Adding layers
model.add(Conv2D(32, (3, 3), padding='same',
                 input_shape=(32, 32, 3), activation='relu'))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes, activation='softmax'))
#Optimization
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])
#Training model
model.fit(X_train, Y_train,
          batch_size=batch_size,
          epochs=epochNumber,
          validation_split=0.1,
          shuffle=True,
          verbose=2)
scores = model.evaluate(X_test, Y_test, verbose=0)
print("Accuracy on test data: %.2f%%" % (scores[1]*100))
Then, what do I do to predict? Something like:
target = "C://Users//Target.png"
print(model.predict(target))
How do I correctly use model.predict, and how do I convert the result to user-friendly output?
Note: if you are using keras package instead of tf.keras, replace tf.keras with keras in all the following code snippets.
To load a single image, you can use tf.keras.preprocessing.image.load_img:
image = tf.keras.preprocessing.image.load_img(image_path, target_size=(img_rows, img_cols))
This loads the image in PIL format; therefore, we need to convert it to a NumPy array before feeding it to our model:
import numpy as np
input_arr = tf.keras.preprocessing.image.img_to_array(image)
input_arr = np.array([input_arr]) # Convert single image to a batch.
Now, you might be tempted to rush into calling the predict method on input_arr. However, you should first apply the same preprocessing steps used in the training phase at prediction time as well:
input_arr = input_arr.astype('float32') / 255. # This is VERY important
Now, it's ready to be given to the model for prediction:
predictions = model.predict(input_arr)
Bonus: Since your model is a classifier and uses a softmax activation at the top, the predictions variable contains the probabilities for each class. To find the predicted class, we use argmax from NumPy to get the index of the class with the highest probability:
predicted_class = np.argmax(predictions, axis=-1)
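If you also want a human-readable label, you can map that index through the class names; for CIFAR-10, which the code above trains on, the standard ordering is:
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']
print(class_names[int(predicted_class[0])])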
You can use cv2 to read in the image. Make sure that whatever processing you applied to the input images during training, you also apply to the image you read in with cv2. Be careful: cv2 reads images in BGR format, so if you trained your model on RGB images you need to convert the cv2 image to RGB, as shown in the code below. You also want the image to be 32 x 32 x 3, so if it is not that size, use cv2 to resize it. I assume you rescaled your training images, so you need to rescale the cv2 image as well. The code is below:
import cv2
import numpy as np

img = cv2.imread(f_path)  # where f_path is the path to the image file
# cv2 reads images in BGR format. If you trained the model on RGB images,
# uncomment the line below to convert the cv2 image to RGB.
# img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (32, 32), interpolation=cv2.INTER_AREA)
img = img / 255.0
img = np.expand_dims(img, axis=0)  # add the batch dimension: shape (1, 32, 32, 3)
predictions = model.predict(img)
pre_class = predictions.argmax()  # this gives you an integer class index

Why are the predictions going wrong with MNIST CNN?

I trained the CNN on the MNIST dataset with training and validation accuracy of ~0.99.
I followed the exact steps from the Keras documentation example of a CNN on the MNIST dataset:
from __future__ import print_function  # __future__ imports must come before any other import
import cv2
import numpy as np
import math
import tensorflow.keras as keras
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
batch_size = 128
num_classes = 10
epochs = 12
# input image dimensions
img_rows, img_cols = 28, 28
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
When I tested the following image:
using the following test code:
img = cv2.imread("m9.png", 0)
img = cv2.resize(img, (28,28))
img = img / 255.
prob = model.predict_proba(img.reshape((1,28, 28, 1)))
print(prob)
model.predict_classes(img.reshape((1,28, 28, 1)))
The class it prints out is array([1]), i.e. the digit 1. I cannot understand the reason for this. Am I predicting in an incorrect way?
Exactly the same class, array([1]), was predicted for the digit 8 shown below:
It looks like I have made an error during prediction, but I cannot figure out what is happening.
There is no error; it's just that your images don't look anything like the ones in the MNIST dataset. This dataset is not meant to train a general digit-recognition algorithm; it will only work with similar images.
In your case the digits end up very small in a 28x28 image, so the predictions are essentially random.
You are resizing the input image to 28 x 28. Instead, you should first crop the image around the digit so it looks like the MNIST data; a sketch follows below. Otherwise the digit occupies only a very small portion of the resized image and the results are arbitrary.
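A minimal preprocessing sketch along these lines, assuming the test image is a dark digit on a light background (adjust the inversion step if it is not), could be:
import cv2
import numpy as np

def mnist_like(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.bitwise_not(img)                      # MNIST digits are white on a black background
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(mask)                       # bounding box of the digit
    digit = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    digit = cv2.resize(digit, (20, 20))             # MNIST digits roughly fill a 20x20 box
    digit = cv2.copyMakeBorder(digit, 4, 4, 4, 4,   # pad to 28x28 with black
                               cv2.BORDER_CONSTANT, value=0)
    return digit.astype('float32').reshape(1, 28, 28, 1) / 255.

prob = model.predict(mnist_like("m9.png"))
print(np.argmax(prob))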

Code to perform an attack to a CNN with foolbox, what's wrong?

I have to perform a simple FGSM attack on a convolutional neural network. The code for the CNN works correctly and the model is saved without a problem, but when I try to perform the attack, an error is shown.
HERE'S THE CODE FOR THE CNN
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten, MaxPooling2D
import matplotlib.pyplot as plt
from keras.datasets import mnist
from keras.utils import to_categorical
import json
import tensorflow as tf
#Using TensorFlow backend.
#download mnist data and split into train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
#plot the first image in the dataset
plt.imshow(X_train[0])
#check image shape
X_train[0].shape
#reshape data to fit model
X_train = X_train.reshape(60000,28,28,1)
X_test = X_test.reshape(10000,28,28,1)
#one-hot encode target column
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
y_train[0]
#create model
model = Sequential()
#add model layers
model.add(Conv2D(32, kernel_size=(5,5), activation='relu', input_shape= (28,28,1)))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(64, kernel_size=(5,5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(10, activation='softmax'))
#compile model using accuracy as a measure of model performance
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics= ['accuracy'])
#train model
model.fit(X_train, y_train,validation_data=(X_test, y_test), epochs=5)
json.dump({'model':model.to_json()},open("model.json", "w"))
model.save_weights("model_weights.h5")
THEN I TRY TO PERFORM THE ATTACK WITH THE FOLLOWING CODE:
import json
import foolbox
import keras
import numpy as np
from keras import backend
from keras.models import load_model
from keras.datasets import mnist
from keras.utils import np_utils
from foolbox.attacks import FGSM
from foolbox.criteria import Misclassification
from foolbox.distances import MeanSquaredDistance
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, Flatten, Conv2D, MaxPooling2D
import numpy as np
import tensorflow as tf
from keras.models import model_from_json
import os
############## Loading the model and preprocessing #####################
backend.set_learning_phase(False)
model = tf.keras.models.model_from_json(json.load(open("model.json"))["model"],custom_objects={})
model.load_weights("model_weights.h5")
fmodel = foolbox.models.KerasModel(model, bounds=(0,1))
_,(images, labels) = mnist.load_data()
images = images.reshape(10000,28,28)
images= images.astype('float32')
images /= 255
######################### Attacking the model ##########################
attack=foolbox.attacks.FGSM(fmodel, criterion=Misclassification())
adversarial=attack(images[12],labels[12]) # for single image
adversarial_all=attack(images,labels) # for all the images
adversarial =adversarial.reshape(1,28,28,1) #reshaping it for model prediction
model_predictions = model.predict(adversarial)
print(model_predictions)
########################## Visualization ################################
images=images.reshape(10000,28,28)
adversarial =adversarial.reshape(28,28)
plt.figure()
plt.subplot(1,3,1)
plt.title('Original')
plt.imshow(images[12])
plt.axis('off')
plt.subplot(1, 3, 2)
plt.title('Adversarial')
plt.imshow(adversarial)
plt.axis('off')
plt.subplot(1, 3, 3)
plt.title('Difference')
difference = adversarial - images[124]
plt.imshow(difference / abs(difference).max() * 0.2 + 0.5)
plt.axis('off')
plt.show()
This error is shown when the adversarial examples are generated:
c_api.TF_GetCode(self.status.status))
InvalidArgumentError: Matrix size-incompatible: In[0]: [1,639232], In[1]: [1024,10]
[[{{node dense_4_5/MatMul}}]]
[[{{node dense_4_5/BiasAdd}}]]
What could it be?
Here is my solution.
First of all, modify the model code as follows:
import tensorflow as tf
import json
# download mnist data and split into train and test sets
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
# reshape data to fit model
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
X_train, X_test = X_train/255, X_test/255
# one-hot encode target column
y_train = tf.keras.utils.to_categorical(y_train)
y_test = tf.keras.utils.to_categorical(y_test)
# create model
model = tf.keras.models.Sequential()
# add model layers
model.add(tf.keras.layers.Conv2D(32, kernel_size=(5, 5),
                                 activation='relu', input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(tf.keras.layers.Conv2D(64, kernel_size=(5, 5), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(10, activation='softmax'))
# compile model using accuracy as a measure of model performance
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
# train model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=5)
json.dump({'model': model.to_json()}, open("model.json", "w"))
model.save_weights("model_weights.h5")
You just forgot to divide each pixel by the maximum RGB value (255).
As for the attacker code:
import json
import foolbox
from foolbox.attacks import FGSM
from foolbox.criteria import Misclassification
import numpy as np
import tensorflow as tf
############## Loading the model and preprocessing #####################
tf.enable_eager_execution()
tf.keras.backend.set_learning_phase(False)
model = tf.keras.models.model_from_json(
    json.load(open("model.json"))["model"], custom_objects={})
model.load_weights("model_weights.h5")
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
_, (images, labels) = tf.keras.datasets.mnist.load_data()
images = images.reshape(images.shape[0], 28, 28, 1)
images = images/255
images = images.astype(np.float32)
fmodel = foolbox.models.TensorFlowEagerModel(model, bounds=(0, 1))
######################### Attacking the model ##########################
attack = foolbox.attacks.FGSM(fmodel, criterion=Misclassification())
adversarial = np.array([attack(images[0], label=labels[0])])
model_predictions = model.predict(adversarial)
print('real label: {}, label prediction: {}'.format(
    labels[0], np.argmax(model_predictions)))
I used TensorFlowEagerModel instead of KerasModel for simplicity. The error you were encountering was due to the fact that model.predict expects a 4-D array while you were passing a 3-D one, so I just wrapped the attacked image in a NumPy array to make it 4-D (see the small snippet below).
Hope it helps.
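Equivalently, the extra batch dimension can be added with np.expand_dims, as in this small sketch:
import numpy as np

adversarial = attack(images[0], label=labels[0])   # single attacked image, shape (28, 28, 1)
adversarial = np.expand_dims(adversarial, axis=0)  # add the batch axis: shape (1, 28, 28, 1)
model_predictions = model.predict(adversarial)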

image input to neural networks

I am building a figure extractor for scanned documents, using 1100x850 images. We use a 44x34 grid over each image, so the last layer is a 1496-unit fully connected layer.
The label is a 44x34 binary array which is 1 for figure regions and 0 for non-figure regions; i.e., if a figure falls within the region from (x, y) = (0, 0) to (x, y) = (50, 50), then the binary array has 1 at positions (0, 0), (0, 1), (1, 0), (1, 1) and 0 elsewhere. I have built a neural network model with the following structure:
conv(5,2,48)
maxpool(3,2)
conv(5,2,96)
maxpool(3,2)
conv(5,2,96)
maxpool(3,2)
FC-1496
The notation conv(k, d, n) denotes a convolutional layer with n filters, each of size k x k, applied with a stride of d pixels; maxpool(k, d) denotes a downsampling operation over k x k windows, applied with a stride of d pixels. FC-1496 refers to the final fully connected layer, which connects the hidden units from the previous layers to the 1496 output units (we have 1496 units for the 44x34 grid).
So my question is: how do I feed the input (images and label arrays) to this model using Keras and TensorFlow?
Here is the model code:
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.layers.core import Activation
from keras.layers.core import Dense
from keras.models import Sequential
from keras.layers import Flatten
xtrain=#image of 850*1100 for 10 images 10 850*1100
xtest=#binary array of size 1496 for 10 images size is 10*1496
# initialize the model
model = Sequential()
model.add(Conv2D(48, 5, 2, input_shape=(1100, 850, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=2))
model.add(Conv2D(96, 5, 2))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=2))
model.add(Conv2D(96, 5, 2))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=2))
model.add(Flatten())
model.add(Dense(1496, activation='sigmoid'))
model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['accuracy'])
Here is a working example based on your data (as I understand it from your description).
I am using the label as a vector of 1s and 0s, e.g. [1, 0, 1, 1, ...], with 1 for a figure region and 0 for a non-figure region, for a total of 1496 regions:
from __future__ import print_function
import numpy as np
np.random.seed(1337) # for reproducibility
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten
from keras.utils import np_utils
batch_size = 128
nb_epoch = 10
nb_regions = 1496
# input image dimensions
img_rows, img_cols = 850, 1100
# create random test and train sets
X_train = np.random.randint(256, size=(10, img_rows, img_cols))
Y_train = np.random.randint(2, size=(10, nb_regions))
X_test = np.random.randint(256, size=(10, img_rows, img_cols))
Y_test = np.random.randint(2, size=(10, nb_regions))
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
model = Sequential([
    Dense(32, input_shape=(1, img_rows, img_cols)),
    Activation('relu'),
    Flatten(),
    Dense(nb_regions),
    Activation('softmax'),
])
model.compile(loss='categorical_crossentropy',
              optimizer='adadelta',
              metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=batch_size, epochs=nb_epoch,
          verbose=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
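Alternatively, to feed data to the convolutional model exactly as defined in the question (channels-last input of shape (1100, 850, 1)), a minimal sketch could look like this; the random arrays are only placeholders for your real scanned pages and flattened 44x34 grid labels:
import numpy as np

# placeholder data: replace with your real scans and flattened 44x34 grid labels
images = np.random.rand(10, 1100, 850).astype('float32')
labels = np.random.randint(2, size=(10, 1496)).astype('float32')

X = images.reshape(-1, 1100, 850, 1)   # add the channel axis expected by Conv2D
Y = labels

# `model` here refers to the Sequential conv net built in the question
model.fit(X, Y, batch_size=2, epochs=10, validation_split=0.2)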

CNN model is giving wrong predictions

I am currently working on handwritten digit recognition for regional languages, focusing on Oriya at the moment. I tested the CNN model on the MNIST dataset and am now trying to apply it to my Oriya dataset, but the model performs poorly and gives wrong predictions. I have a dataset of 4971 samples.
How can I improve the accuracy?
Here's my code:
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Conv2D, Convolution2D, MaxPooling2D
from keras.optimizers import SGD,RMSprop,adam
from keras.utils import np_utils
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import os
import theano
from PIL import Image
from numpy import *
# SKLEARN
from sklearn.utils import shuffle
from sklearn.cross_validation import train_test_split
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# input image dimensions
img_rows, img_cols = 28, 28
# number of channels
img_channels = 1
path2 = '/home/saumya/Desktop/Oriya/p' #path of folder of images
imlist = os.listdir(path2)
im1 = array(Image.open('/home/saumya/Desktop/Oriya/p' + '/'+ imlist[0])) # open one image to get size
m,n = im1.shape[0:2] # get the size of the images
imnbr = len(imlist) # get the number of images
# create matrix to store all flattened images
immatrix = array([array(Image.open('/home/saumya/Desktop/Oriya/p' + '/' + im2)).flatten()
                  for im2 in imlist], 'f')
label=np.ones((num_samples,),dtype = int)
label[1:503]=0
label[503:1000]=1
label[1000:1497]=2
label[1497:1995]=3
label[1995:2493]=4
label[2493:2983]=5
label[2983:3483]=6
label[3483:3981]=7
label[3981:4479]=8
label[4479:4972]=9
print(label[1000])
data,Label = shuffle(immatrix,label, random_state=2)
train_data = [data,Label]
img=immatrix[2496].reshape(img_rows,img_cols)
plt.imshow(img)
plt.show()
(X, y) = (train_data[0],train_data[1])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=4)
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
X_train = X_train.reshape(X_train.shape[0], 1, 28, 28).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 1, 28, 28).astype('float32')
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
def baseline_model():
    # create model
    model = Sequential()
    model.add(Conv2D(32, (3, 3), input_shape=(1, 28, 28), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    #model.add(Conv2D(64, (5, 5), input_shape=(1, 10, 10), activation='relu'))
    #model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dense(num_classes, activation='softmax', name='first_dense_layer'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
# build the model
model = baseline_model()
# Fit the model
hist=model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=30, batch_size=100, verbose=2)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("CNN Error: %.2f%%" % (100-scores[1]*100))
score = model.evaluate(X_test, y_test, verbose=0)
print('Test Loss:', score[0])
print('Test accuracy:', score[1])
test_image = X_test[0:1]
print (test_image.shape)
print(model.predict(test_image))
print(model.predict_classes(test_image))
print(y_test[0:1])
# define the larger model
def larger_model():
    # create model
    model = Sequential()
    model.add(Conv2D(30, (5, 5), input_shape=(1, 28, 28), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(15, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dense(50, activation='relu', name='first_dense_layer'))
    model.add(Dense(num_classes, activation='softmax'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
# build the model
model = larger_model()
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Large CNN Error: %.2f%%" % (100-scores[1]*100))
When I try to resize my image using OpenCV, it generates the following error:
/root/mc-x64-2.7/conda-bld/opencv-3_1482254119970/work/opencv-3.1.0/modules/imgproc/src/imgwarp.cpp:3229: error: (-215) ssize.area() > 0 in function resize
How can I improve the accuracy?
It is a bit hard to give a detailed answer from what you posted, and without seeing a data sample, but I will still take a stab at this. What I can see that may help improve your accuracy:
Get more data. In deep learning one usually works with large amounts of data, and models almost always improve when more data is added. If you can't obtain new data, you can try to generate more samples from the ones you have by adding noise or similar modifications (see the augmentation sketch after this list).
I see you currently train for 30 and 10 epochs. I suggest you increase the number of epochs so your model has more time to converge; this also usually improves performance up to a point.
I also see that your batch size is 100 and 200 in your models. You can try reducing the batch size so your training performs more gradient updates per epoch (remember that you can even use batch_size=1 to update your model on every sample, instead of on batches).
Alternatively, you can try iteratively increasing the complexity and the number of layers of your architecture and comparing the performance. It is best to start with a simple model, train and test, and then add layers and nodes until you are satisfied with the results. I also see you have tried a hybrid convolutional and non-convolutional approach; you could start with just one of the approaches before increasing the complexity of your architecture.
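For the data-augmentation point, a minimal sketch using Keras' ImageDataGenerator, assuming X_train keeps the channels-first shape (N, 1, 28, 28) used in the question, could be:
from keras.preprocessing.image import ImageDataGenerator

# small random shifts/rotations/zooms to synthesize extra handwriting samples
datagen = ImageDataGenerator(rotation_range=10,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             zoom_range=0.1,
                             data_format='channels_first')  # X_train is (N, 1, 28, 28)

model = larger_model()
model.fit_generator(datagen.flow(X_train, y_train, batch_size=100),
                    steps_per_epoch=len(X_train) // 100,
                    epochs=30,
                    validation_data=(X_test, y_test))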
