Different results with the SAME deep learning model for the same images - Python

I am trying to run a deep learning model on pictures.
model = pickle.load(open(finalized_model, 'rb'))  # load the trained model

def predict(v_model):
    with torch.no_grad():
        for data, mask in valid_dataloader:
            data = torch.autograd.Variable(data, volatile=True).cuda()  # raw image batch
            mask = torch.autograd.Variable(mask, volatile=True).cuda()  # mask image batch
            result = v_model(data)  # run inference on the raw images
            break
If I run this code on the image datasets [a, b, c] and [b, c, d], I get the results [A, B, C] and [B', C', D'] respectively.
For some reason, B ≠ B' and C ≠ C'.
Please tell me why, if you know.
I am using Google Colaboratory with PyTorch to run prediction with a pre-trained RCNN model, but I get different results with the same model.
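For reference, a minimal sketch of the same loop in current PyTorch, where Variable and volatile=True are deprecated. Note in particular the v_model.eval() call: without it, dropout stays active and batch-norm statistics keep updating, which is a common reason the same model gives different outputs for the same images.

import torch

def predict(v_model, valid_dataloader):
    v_model.eval()  # disable dropout and freeze batch-norm statistics
    with torch.no_grad():  # replaces the deprecated volatile=True
        for data, mask in valid_dataloader:
            data = data.cuda()  # raw image batch
            result = v_model(data)  # run the model on the raw images
            return result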

Related

How do you apply a TensorFlow model to a single input to obtain an actual prediction, and how do you implement the model in a separate script?

Recently I have been learning TensorFlow and I have written a few machine learning programs. However, I am wondering how I can test the model on a single input and receive the prediction, rather than just evaluating the accuracy of the model on a lot of data as you would do using the model.fit() function. I am also wondering how I can then implement the model in a script that, for example, gathers data, feeds it into the model automatically to obtain the predictions, and then plots the results on a graph.
Thanks in advance.
To use your trained model on a single input, let's call it y, you must process y into the same data format your model was trained on. For example, assume you trained the model on images of cats and dogs. If the model trained properly, you should be able to submit a picture of a cat or a dog and have it tell you which it is.
If images were the input used to train the model, they had a certain shape (height, width) and a certain channel format, for example RGB or grayscale. So for the image y you want to predict on, you must ensure it has the same height and width the model was trained on, and if the model was trained on RGB images, y must be an RGB image. One more thing: model.predict expects the first dimension of y to be the batch size, which for a single image is 1. An image y has shape (height, width, channels) and no batch dimension, so you need to add one with y = np.expand_dims(y, axis=0), which gives y the shape (1, height, width, channels).
For example, assume you trained your model on RGB images of shape (224, 224, 3). You have an image y you want to classify, and say it is in a directory my_pics. The code below shows how to run a prediction on y. Somewhere in your training code you need an ordered list called classes; for the cat/dog example the index for cat might be 0 and the index for dog 1, so classes = ['cat', 'dog'].
import numpy as np
import cv2
import tensorflow as tf

model = tf.keras.models.load_model('path/to/saved/model')  # load the trained model
image_path = r'my_pics'  # path to image y
y = cv2.imread(image_path)  # note: cv2 reads images in as BGR
y = cv2.resize(y, (224, 224))  # give y the same shape as the training images
y = cv2.cvtColor(y, cv2.COLOR_BGR2RGB)  # convert from BGR to RGB
y = np.expand_dims(y, axis=0)  # y now has shape (1, 224, 224, 3)
prediction = model.predict(y)  # make a prediction on y
print(prediction)  # an array with a probability value for each class
class_index = np.argmax(prediction)  # index of the entry with the highest probability
klass = classes[class_index]  # select the class name from the ordered list of classes
print(klass)
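One caveat worth adding: if the training pipeline rescaled pixel values (for example dividing by 255, or applying a model-specific preprocess_input function), the same rescaling must be applied to y before calling model.predict, otherwise the predicted probabilities will be unreliable.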

Feature extraction with VGG16 for clustering

I am asking a lot, but I am very stuck on this one...
I have this piece of code I used to extract features with SIFT, and I am trying to adapt it to extract features based on a VGG16 model.
No matter how hard I try, I can't get it to work and it always raises errors.
So I would appreciate any help getting the features in a form I can use for clustering afterwards.
Here is the code with SIFT:
# identification of key points and associated descriptors
import time, cv2
import numpy as np

sift_keypoints = []
temps1 = time.time()
sift = cv2.xfeatures2d.SIFT_create(500)

for image_num in range(len(list_photos)):
    if image_num % 100 == 0:
        print(image_num)
    image = cv2.imread(path + list_photos[image_num], 0)  # read in grayscale
    image = cv2.GaussianBlur(image, (7, 7), cv2.BORDER_DEFAULT)  # apply Gaussian blur filter
    # image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    res = cv2.equalizeHist(image)  # equalize the image histogram
    kp, des = sift.detectAndCompute(res, None)
    sift_keypoints.append(des)

sift_keypoints_by_img = np.asarray(sift_keypoints)
sift_keypoints_all = np.concatenate(sift_keypoints_by_img, axis=0)
And here is how I use it for my clustering:
from sklearn import cluster, metrics

# determine the number of clusters
k = int(round(np.sqrt(len(sift_keypoints_all)), 0))
print("Estimated number of clusters:", k)
print("Creating", k, "clusters of descriptors ...")

# clustering
kmeans = cluster.MiniBatchKMeans(n_clusters=k, init_size=3*k, random_state=0)
kmeans.fit(sift_keypoints_all)
What should I do to be able to extract features with a VGG model?
Thanks
There is an example of feature extraction with VGG16 in the official Keras documentation [1].
Note that the layers of a convolutional network are successive representations, of varying dimensions, of your picture. Depending on the layer you choose as output, the clustering results may be very different.
[1] https://keras.io/api/applications/
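For illustration, here is a minimal sketch along the lines of the Keras documentation example, assuming each image is represented by the globally pooled convolutional features of VGG16 (include_top=False, pooling='avg'); the resulting one-row-per-image matrix can then replace sift_keypoints_all in the clustering code above. The list_photos and path variables are taken from the question.

import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image

# VGG16 without its classifier head; global average pooling yields one
# 512-dimensional vector per image
model = VGG16(weights='imagenet', include_top=False, pooling='avg')

features = []
for photo in list_photos:
    img = image.load_img(path + photo, target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)  # VGG16-specific preprocessing (channel mean subtraction)
    features.append(model.predict(x).flatten())

features = np.array(features)  # shape (n_images, 512), ready for MiniBatchKMeans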

How to train a model with batches

I am trying a YOLO model in Python.
To process the data and annotations, I take the data in batches.
batchsize = 50
# boxList = []
# boxArr = np.empty(shape=(0, 26, 5))

for i in range(0, len(box_list), batchsize):
    boxList = box_list[i:i+batchsize]
    imagesList = image_list[i:i+batchsize]
    # convert the annotations from VOC format
    convertedBox = np.array([np.array(get_boxes_for_id(box_l)) for box_l in boxList])
    # pre-process the images and annotations
    image_data, boxes = process_input_data(imagesList, max_boxes, convertedBox)
    boxes = np.array(list(itertools.chain.from_iterable(boxes)))
    detectors_mask, matching_true_boxes = get_detector_mask(boxes, anchors)
After this, I want to pass the data to the model for training.
When I append to a list, I get a memory error because of the array size,
and when I append to an array, I get a dimensionality error because of the shape.
How can I train on this data, and should I use model.fit() or model.train_on_batch()?
If you are using Keras to train your model on a collection of images, you can use a training generator and a validation generator; all you have to do is put your images in their respective class folders. Look at the sample code below, and also take a look at this link, which may help: https://keras.io/preprocessing/image/. I hope this answers your question.
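As a sketch of what this answer describes (the directory names, image size, and epoch count below are placeholders, and model is assumed to be an already compiled Keras model):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(rescale=1./255).flow_from_directory(
    'train', target_size=(224, 224), batch_size=50, class_mode='categorical')
val_gen = ImageDataGenerator(rescale=1./255).flow_from_directory(
    'val', target_size=(224, 224), batch_size=50, class_mode='categorical')

# the generators yield one batch at a time, so the whole dataset
# never has to fit in memory
model.fit(train_gen, validation_data=val_gen, epochs=10)

Alternatively, model.train_on_batch(x_batch, y_batch) can be called inside an existing batching loop like the one in the question; both approaches avoid loading every image at once.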

How to predict a single (external) input image for categorical data using a CNN in Keras

I am working on a project on handwritten digit recognition using the MNIST database. I have trained it on the 60,000 images in the data set, tested it on the 10,000 test images, and got about 99% accuracy.
Now I want to input an external image to see whether my handwritten digit is recognized by the CNN or not. So I scanned my own handwritten image, converted it into grayscale and a numpy array, and fed it into the CNN, but I always get the predicted result 8 as a one-hot encoded numpy array.
import numpy as np
from keras.preprocessing import image

test_image = image.load_img('six.jpg', target_size=(28, 28))
test_image = image.img_to_array(test_image, data_format=None)
test_image = np.delete(test_image, np.s_[::2], axis=2)  # drop two of the three channels
test_image = np.expand_dims(test_image, axis=0)
predicted_dig = digit_recogniser.predict(test_image, batch_size=32)
predicted_digits = np.argmax(np.round(predicted_dig), axis=0)
Can you please help me figure out what the problem with the code is, and how I can successfully predict digits from individually scanned/external inputs? My CNN is fully trained on the MNIST data set. I want to make this kind of single prediction, with some accuracy, on random handwritten images of my choice.
Do you match the training data preprocessing during testing?
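To illustrate the point, here is a sketch of preprocessing that matches a typical MNIST training setup, assuming the network expects shape (1, 28, 28, 1), pixel values scaled to [0, 1], and white digits on a black background (a scanned image is usually the opposite, hence the inversion). The digit_recogniser model and 'six.jpg' come from the question.

import numpy as np
import cv2

img = cv2.imread('six.jpg', cv2.IMREAD_GRAYSCALE)  # single channel, like MNIST
img = cv2.resize(img, (28, 28))
img = 255 - img  # invert: MNIST digits are white on a black background
img = img.astype('float32') / 255.0  # match the [0, 1] scaling used in training
img = img.reshape(1, 28, 28, 1)  # batch of one, single channel
prediction = digit_recogniser.predict(img)
print(np.argmax(prediction, axis=1))  # the predicted digit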

Why does my Keras model get good accuracy but bad predictions?

So, I am trying to make a model which can predict doodles. I am using Google's Quick, Draw! data (https://console.cloud.google.com/storage/browser/quickdraw_dataset/full/numpy_bitmap), which is images rendered into 28x28 grayscale bitmap numpy arrays. I chose only 10 classes and took 60,000 images to train/evaluate, and I get a test accuracy of 91%. When I make predictions with samples from the test data, it works. But when I make a drawing in Paint and convert it into 28x28, it doesn't make good predictions. What sort of data do I need? What kind of preprocessing does the image need?
This is how I preprocessed the data from Google's npy files:
def load_set(name, path, resultx, resulty, label):
    loaded_set = np.load(path + name + ".npy")
    loaded_set = loaded_set.reshape(loaded_set.shape[0], 1, 28, 28)
    # print(name, loaded_set.shape)
    loaded_set = loaded_set[0:6000, 0:6000, 0:6000, 0:6000]  # keeps the first 6000 samples (the other axes are smaller than 6000)
    resultx = np.append(resultx, loaded_set, axis=0)
    resulty = createLabelArray(label, loaded_set.shape[0], resulty)
    print("loaded " + name)
    return resultx, resulty

def createLabelArray(label, size, result):
    for i in range(0, size):
        result = np.append(result, [[label]], axis=0)
    return result
where label is the label I want for that category.
I shuffle everything afterwards.
And this is how I am trying to process new images (my own drawings):
print("[INFO] loading and preprocessing image...")
image = image_utils.load_img(os.path.join(path, name), grayscale=True, target_size=(28, 28))
image = image_utils.img_to_array(image)
print(image.shape)
image = np.expand_dims(image, axis=0)
print(image.shape)
image = image.astype('float32')
image /= 255
return image
Please help, I've been stuck on this for a while now. Thank you
This seems to be a typical case of overfitting.
Try 10-fold cross-validation to get a reliable estimate of the model's accuracy.
Also use regularization and dropout in Keras to prevent overfitting.
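As a sketch of that suggestion, a small channels-last CNN with dropout and L2 regularization added (the layer sizes and input shape here are assumptions, not the asker's actual architecture):

from tensorflow.keras import layers, models, regularizers

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),  # randomly zero activations during training
    layers.Flatten(),
    layers.Dense(128, activation='relu',
                 kernel_regularizer=regularizers.l2(1e-4)),  # L2 weight penalty
    layers.Dropout(0.5),
    layers.Dense(10, activation='softmax'),  # 10 doodle classes
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])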
