Trouble feeding data into tensorflow graph - python

I have trained a neural network model on the MNIST dataset using the script mnist_3.1_convolutional_bigger_dropout.py provided in this tutorial.
I wanted to test the trained model on a custom dataset, so I wrote a small script, predict.py, which loads the trained model and feeds the data to it. I tried two methods of preprocessing the images so that they are compatible with the MNIST format.
Method 1: Resizing the image to 28x28
Method 2: The technique mentioned here is used
Both of these methods result in the error
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_2' with dtype float
predict.py
# Importing libraries
from scipy.misc import imread
import tensorflow as tf
import numpy as np
import cv2 as cv
import glob
from test import imageprepare
files = glob.glob('data2/*.*')
#print(files)
# Method 1
'''
img_data = []
for fl in files:
    img = imageprepare(fl)
    img = img.reshape(img.shape[0], img.shape[1], 1)
    img_data.append(img)
'''
# Method 2
dig_cont = [cv.imread(fl, 0) for fl in files]
#print(len(dig_cont))
img_data = []
for i in range(len(dig_cont)):
    img = cv.resize(dig_cont[i], (28, 28))
    img = img.reshape(img.shape[0], img.shape[1], 1)
    img_data.append(img)
print("Restoring Model ...")
sess = tf.Session()
# Step-1: Recreate the network graph. At this step only graph is created.
tf_saver = tf.train.import_meta_graph('model/model.meta')
# Step-2: Now let's load the weights saved using the restore method.
tf_saver.restore(sess, tf.train.latest_checkpoint('model'))
print("Model restored")
x = tf.get_default_graph().get_tensor_by_name('X:0')
print('x :', x.shape)
y = tf.get_default_graph().get_tensor_by_name('Y:0')
print('y :', y.shape)
dict_data = {x: img_data}
result = sess.run(y, feed_dict=dict_data)
print(result)
print(result.shape)
sess.close()

The problem is now fixed: I forgot to pass a value for the dropout placeholder pkeep. I had to make the following change to make it work:
dict_data = {x: img_data, pkeep: 1.0}
instead of
dict_data = {x: img_data}
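For reference, a minimal sketch of the corrected feed, assuming the dropout placeholder was created under the name 'pkeep' (the exact tensor name depends on how the training script defined it) and reusing the restored sess from the script above:
import tensorflow as tf
graph = tf.get_default_graph()
x = graph.get_tensor_by_name('X:0')
y = graph.get_tensor_by_name('Y:0')
# 'pkeep:0' is an assumption; use whatever name the training script gave it
pkeep = graph.get_tensor_by_name('pkeep:0')
# keep every unit (pkeep = 1.0) to disable dropout at inference time
result = sess.run(y, feed_dict={x: img_data, pkeep: 1.0})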

Related

Run Tensorflow Keras custom model in OpenCV

So I was basically trying to figure out how to import a TensorFlow Keras CNN model in OpenCV. The docs I found on GitHub weren't helpful and also were not clear about what exactly to do. I have searched all over YouTube for tutorials, but nobody seems to have imported a custom-made model before. I have basically tried everything...
Saving the model as pickle (.p) and reading it in OpenCV gave me this error: "Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ram://7b832872-99f5-4b67-8675-14f2423877df/variables/variables You may be trying to load on a different device from the computational device. Consider setting the experimental_io_device option in tf.saved_model.LoadOptions to the io_device such as '/job:localhost'." I can't figure out what this means...
I also tried importing with tf.keras.models.load_model("saved_model.pb"), which also didn't seem to work and threw the following error: File "h5py\h5f.pyx", line 106, in h5py.h5f.open
OSError: Unable to open file (file signature not found). It seems like I need an .h5 file, which I don't know how to get from my current model.
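(As an aside: tf.keras.models.load_model expects either a SavedModel directory or an .h5 file, not the bare saved_model.pb inside the directory. A minimal sketch of the two save/load pairs, assuming a compiled Keras model object named model:)
import tensorflow as tf
# SavedModel format: pass the directory, not the .pb file inside it
model.save("saved_model_dir")
restored = tf.keras.models.load_model("saved_model_dir")
# HDF5 format: this produces the .h5 file that h5py can open
model.save("model.h5")
restored_h5 = tf.keras.models.load_model("model.h5")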
The next thing I tried was using cv2.dnn.readNetFromTensorflow(). For this to work you need the TensorFlow .pb file, which I have, and (I guess it's optional) the .pbtxt file. So the first problem was this error, which appeared after passing my saved_model.pb: Failed to parse GraphDef file: saved_model.pb in function 'cv::dnn::ReadTFNetParamsFromBinaryFileOrDie'. I checked on that, and some people wrote that you should check the file for corruption and whether the name is written correctly, which I guess it is. The model passed with 0.986 accuracy in my tests.
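(For context: cv2.dnn.readNetFromTensorflow() expects a frozen GraphDef, while the saved_model.pb written by model.save() is a SavedModel protobuf, a different format, so the parse error is expected. A sketch of freezing a TF2 Keras model into a GraphDef, assuming a loaded model object; note that OpenCV may still reject some TF2 ops, so this is not guaranteed to work for every architecture:)
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
# wrap the Keras model in a concrete function and bake the weights in as constants
full_model = tf.function(lambda x: model(x))
concrete = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))
frozen = convert_variables_to_constants_v2(concrete)
# write the frozen GraphDef; this is the kind of .pb that readNetFromTensorflow parses
tf.io.write_graph(frozen.graph, ".", "frozen_graph.pb", as_text=False)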
Now I am at the end of my energy and don't know what to do. I can't be the only one to have these issues, but surely it should be easy to use a TensorFlow model in OpenCV, according to the docs...
I will now share the code I am using for creating the model, as well as the code for reading it in OpenCV. The versions of OpenCV, Python, TensorFlow, CUDA, cuDNN, and pickle I will include below.
Any help is greatly appreciated!
This is the code for creating the TensorFlow model:
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
from keras.utils.np_utils import to_categorical
from keras.layers import Dropout, Flatten
from keras.layers.convolutional import Conv2D, MaxPooling2D
import cv2
from sklearn.model_selection import train_test_split
import tensorflow as tf
import pickle
import os
import pandas as pd
import random
from keras.preprocessing.image import ImageDataGenerator
import time
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth=True
sess = tf.compat.v1.Session(config=config)
# Assume that you have 12GB of GPU memory and want to allocate ~4GB:
gpu_options = tf.compat.v1.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(gpu_options=gpu_options))
################# Parameters #####################
path = "dataset" # folder with all the class folders
labelFile = 'labels.csv' # file with all names of classes
batch_size_val=10 # how many to process together
steps_per_epoch_val=300
epochs_val=100
imageDimesions = (300,300,3)
testRatio = 0.2
validationRatio = 0.2
###################################################
############################### Importing of the Images
count = 0
images = []
classNo = []
myList = os.listdir(path)
print("Total Classes Detected:",len(myList))
noOfClasses=len(myList)
print("Importing Classes.....")
for x in range(0, len(myList)):
    myPicList = os.listdir(path+"/"+str(count))
    for y in myPicList:
        curImg = cv2.imread(path+"/"+str(count)+"/"+y)
        images.append(curImg)
        classNo.append(count)
    print(count, end=" ")
    count += 1
print(" ")
images = np.array(images)
classNo = np.array(classNo)
############################### Split Data
X_train, X_test, y_train, y_test = train_test_split(images, classNo, test_size=testRatio)
X_train, X_validation, y_train, y_validation = train_test_split(X_train, y_train, test_size=validationRatio)
# X_train = ARRAY OF IMAGES TO TRAIN
# y_train = CORRESPONDING CLASS ID
############################### TO CHECK IF NUMBER OF IMAGES MATCHES TO NUMBER OF LABELS FOR EACH DATA SET
print("Data Shapes")
print("Train",end = "");print(X_train.shape,y_train.shape)
print("Validation",end = "");print(X_validation.shape,y_validation.shape)
print("Test",end = "");print(X_test.shape,y_test.shape)
assert(X_train.shape[0] == y_train.shape[0]), "The number of images is not equal to the number of labels in the training set"
assert(X_validation.shape[0] == y_validation.shape[0]), "The number of images is not equal to the number of labels in the validation set"
assert(X_test.shape[0] == y_test.shape[0]), "The number of images is not equal to the number of labels in the test set"
assert(X_train.shape[1:] == (imageDimesions)), "The dimensions of the Training images are wrong"
assert(X_validation.shape[1:] == (imageDimesions)), "The dimensions of the Validation images are wrong"
assert(X_test.shape[1:] == (imageDimesions)), "The dimensions of the Test images are wrong"
############################### READ CSV FILE
data=pd.read_csv(labelFile)
print("data shape ",data.shape,type(data))
############################### DISPLAY SOME SAMPLE IMAGES FROM ALL THE CLASSES
num_of_samples = []
cols = 5
num_classes = noOfClasses
fig, axs = plt.subplots(nrows=num_classes, ncols=cols, figsize=(5, 300))
fig.tight_layout()
for i in range(cols):
    for j, row in data.iterrows():
        x_selected = X_train[y_train == j]
        axs[j][i].imshow(x_selected[random.randint(0, len(x_selected) - 1), :, :], cmap=plt.get_cmap("gray"))
        axs[j][i].axis("off")
        if i == 2:
            axs[j][i].set_title(str(j) + "-" + str(row["Name"]))
            num_of_samples.append(len(x_selected))
############################### DISPLAY A BAR CHART SHOWING NO OF SAMPLES FOR EACH CATEGORY
print(num_of_samples)
plt.figure(figsize=(12, 4))
plt.bar(range(0, num_classes), num_of_samples)
plt.title("Distribution of the training dataset")
plt.xlabel("Class number")
plt.ylabel("Number of images")
plt.show()
############################### PREPROCESSING THE IMAGES
def grayscale(img):
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return img
def equalize(img):
    img = cv2.equalizeHist(img)
    return img
def preprocessing(img):
    img = grayscale(img)  # CONVERT TO GRAYSCALE
    img = equalize(img)   # STANDARDIZE THE LIGHTING IN AN IMAGE
    img = img/255         # NORMALIZE VALUES BETWEEN 0 AND 1 INSTEAD OF 0 TO 255
    return img
X_train = np.array(list(map(preprocessing, X_train)))  # ITERATE OVER AND PREPROCESS ALL IMAGES
X_validation = np.array(list(map(preprocessing, X_validation)))
X_test = np.array(list(map(preprocessing, X_test)))
cv2.imshow("GrayScale Images",X_train[random.randint(0,len(X_train)-1)]) # TO CHECK IF THE TRAINING IS DONE PROPERLY
############################### ADD A DEPTH OF 1
X_train=X_train.reshape(X_train.shape[0],X_train.shape[1],X_train.shape[2],1)
X_validation=X_validation.reshape(X_validation.shape[0],X_validation.shape[1],X_validation.shape[2],1)
X_test=X_test.reshape(X_test.shape[0],X_test.shape[1],X_test.shape[2],1)
############################### AUGMENTATION OF IMAGES: TO MAKE IT MORE GENERIC
dataGen = ImageDataGenerator(width_shift_range=0.1,   # 0.1 = 10%; IF MORE THAN 1, E.G. 10, IT REFERS TO THE NO. OF PIXELS, E.G. 10 PIXELS
                             height_shift_range=0.1,
                             zoom_range=0.2,          # 0.2 MEANS IT CAN GO FROM 0.8 TO 1.2
                             shear_range=0.1,         # MAGNITUDE OF SHEAR ANGLE
                             rotation_range=10)       # DEGREES
dataGen.fit(X_train)
batches = dataGen.flow(X_train, y_train, batch_size=20)  # REQUESTING THE DATA GENERATOR TO GENERATE IMAGES; BATCH SIZE = NO. OF IMAGES CREATED EACH TIME IT IS CALLED
X_batch,y_batch = next(batches)
# TO SHOW AUGMENTED IMAGE SAMPLES
fig, axs = plt.subplots(1, 15, figsize=(20, 5))
fig.tight_layout()
for i in range(15):
    axs[i].imshow(X_batch[i].reshape(imageDimesions[0], imageDimesions[1]))
    axs[i].axis('off')
plt.show()
y_train = to_categorical(y_train,noOfClasses)
y_validation = to_categorical(y_validation,noOfClasses)
y_test = to_categorical(y_test,noOfClasses)
############################### CONVOLUTION NEURAL NETWORK MODEL
def myModel():
    no_Of_Filters = 60
    size_of_Filter = (5, 5)  # THIS IS THE KERNEL THAT MOVES AROUND THE IMAGE TO GET THE FEATURES.
    # THIS WOULD REMOVE 2 PIXELS FROM EACH BORDER WHEN USING A 32x32 IMAGE
    size_of_Filter2 = (3, 3)
    size_of_pool = (2, 2)  # SCALE DOWN ALL FEATURE MAPS TO GENERALIZE MORE, TO REDUCE OVERFITTING
    no_Of_Nodes = 500  # NO. OF NODES IN HIDDEN LAYERS
    model = Sequential()
    model.add((Conv2D(no_Of_Filters, size_of_Filter, input_shape=(imageDimesions[0], imageDimesions[1], 1), activation='relu')))  # ADDING MORE CONVOLUTION LAYERS = FEWER FEATURES BUT CAN CAUSE ACCURACY TO INCREASE
    model.add((Conv2D(no_Of_Filters, size_of_Filter, activation='relu')))
    model.add(MaxPooling2D(pool_size=size_of_pool))  # DOES NOT AFFECT THE DEPTH/NO. OF FILTERS
    model.add((Conv2D(no_Of_Filters//2, size_of_Filter2, activation='relu')))
    model.add((Conv2D(no_Of_Filters//2, size_of_Filter2, activation='relu')))
    model.add(MaxPooling2D(pool_size=size_of_pool))
    model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(no_Of_Nodes, activation='relu'))
    model.add(Dropout(0.5))  # FRACTION OF INPUT NODES TO DROP WITH EACH UPDATE; 1 = ALL, 0 = NONE
    model.add(Dense(noOfClasses, activation='softmax'))  # OUTPUT LAYER
    # COMPILE MODEL
    model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
    return model
############################### TRAIN
model = myModel()
print(model.summary())
history=model.fit_generator(dataGen.flow(X_train,y_train,batch_size=int(batch_size_val)),steps_per_epoch=int(steps_per_epoch_val),epochs=int(epochs_val),validation_data=(X_validation,y_validation),shuffle=1)
############################### PLOT
plt.figure(1)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.legend(['training','validation'])
plt.title('loss')
plt.xlabel('epoch')
plt.figure(2)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.legend(['training','validation'])
plt.title('Accuracy')
plt.xlabel('epoch')
plt.show()
score =model.evaluate(X_test,y_test,verbose=0)
print('Test Score:',score[0])
print('Test Accuracy:',score[1])
#model.save(r"C:\Path\To\My\Directory\DetectBrick")
#print("model saved!!")
pickle_out= open(r"C:\Path\To\My\Directory\DetectBrick\model_trained.p","wb") # wb = WRITE BYTE
pickle.dump(model,pickle_out)
pickle_out.close()
cv2.waitKey(0)
This is the code for opening the model in OpenCV (or at least trying to :) ):
import numpy as np
import cv2
import pickle
from tensorflow import keras
import tensorflow as tf
import h5py
framewidth = 640
frameheight = 480
brightness = 180
threshold = 0.7
font = cv2.FONT_HERSHEY_SIMPLEX
camera = cv2.VideoCapture(0)
camera.set(3, framewidth)
camera.set(4, frameheight)  # presumably intended: height, not width
camera.set(10, brightness)  # presumably intended: brightness, not width
#pb="saved_model.pb"
#pbtxt = "" #don't know if I need it .pbtxt
#model = cv2.dnn.readNetFromTensorflow(pb) #pbtxt file would be second parameter, #but dont know if needed
pickle_in = open("model_trained.p", "rb")
model = pickle.load(pickle_in)
def grayscale(img):
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return img
def equalize(img):
    img = cv2.equalizeHist(img)
    return img
def preprocessing(img):
    img = grayscale(img)
    img = equalize(img)
    img = img/255
    return img
def getClassName(classNo):
    if classNo == 0: return '3003'
    elif classNo == 1: return '3010'
while camera.isOpened():
    boolean, frameoriginal = camera.read()
    img = np.asarray(frameoriginal)
    img = cv2.resize(img, (32, 32))
    img = preprocessing(img)
    cv2.imshow("processed image", img)
    img = img.reshape(1, 32, 32, 1)
    cv2.putText(frameoriginal, "Klasse: ", (20, 35), font, 0.75, (0, 0, 255), 2, cv2.LINE_AA)
    cv2.putText(frameoriginal, "Genauigkeit: ", (20, 75), font, 0.75, (255, 0, 0), 2, cv2.LINE_AA)
    predictions = model.predict([img])
    classIndex = model.predict_classes([img])
    probabilityValue = np.amax(predictions)
    if probabilityValue > threshold:
        cv2.putText(frameoriginal, str(classIndex) + " " + str(getClassName(classIndex)), (120, 35), font, 0.75, (0, 0, 255), 2, cv2.LINE_AA)
        cv2.putText(frameoriginal, str(round(probabilityValue*100, 2)) + "%", (180, 75), font, 0.75, (0, 0, 255), 2, cv2.LINE_AA)
    cv2.imshow("result", frameoriginal)
    if cv2.waitKey(2) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()
camera.release()
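One incidental note on the loop above: Sequential.predict_classes() was removed from Keras in TF 2.6, so on the TF 2.9.1 listed below that call will fail regardless of how the model is loaded. The usual replacement is an argmax over predict(); a minimal sketch:
import numpy as np
predictions = model.predict(img)  # shape (1, number_of_classes)
classIndex = int(np.argmax(predictions, axis=1)[0])
probabilityValue = float(np.max(predictions))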
And here are the Versions I am using
Python: 3.10.5
Tensorflow (GPU): 2.9.1
CUDA: 11.2
Cudnn: 8.1
Pickle: 4.0
System: Windows 11
CPU: AMD Ryzen 5 5600G
GPU: GTX 1660 Super 6GB

tensorflow keras: I am getting this error: module 'tensorflow._api.v1.keras.layers' has no attribute 'flatten'

I am getting the above error while executing the code below.
I am trying to work through this tutorial on TensorFlow neural network implementation:
https://www.datacamp.com/community/tutorials/tensorflow-tutorial
def load_data(data_directory):
    directories = [d for d in os.listdir(data_directory)
                   if os.path.isdir(os.path.join(data_directory, d))]
    labels = []
    images = []
    for d in directories:
        label_directory = os.path.join(data_directory, d)
        file_names = [os.path.join(label_directory, f)
                      for f in os.listdir(label_directory)
                      if f.endswith(".ppm")]
        for f in file_names:
            images.append(skimage.data.imread(f))
            labels.append(int(d))
    return images, labels
import os
import skimage
from skimage import transform
from skimage.color import rgb2gray
import numpy as np
import keras
from keras import layers
from keras.layers import Dense
ROOT_PATH = "C://Users//Jay//AppData//Local//Programs//Python//Python37//Scriptcodes//BelgianSignals"
train_data_directory = os.path.join(ROOT_PATH, "Training")
test_data_directory = os.path.join(ROOT_PATH, "Testing")
images, labels = load_data(train_data_directory)
# Print the `labels` dimensions
print(np.array(labels))
# Print the number of `labels`'s elements
print(np.array(labels).size)
# Count the number of labels
print(len(set(np.array(labels))))
# Print the `images` dimensions
print(np.array(images))
# Print the number of `images`'s elements
print(np.array(images).size)
# Print the first instance of `images`
np.array(images)[0]
images28 = [transform.resize(image, (28, 28)) for image in images]
images28 = np.array(images28)
images28 = rgb2gray(images28)
# Import `tensorflow`
import tensorflow as tf
# Initialize placeholders
x = tf.placeholder(dtype = tf.float32, shape = [None, 28, 28])
y = tf.placeholder(dtype = tf.int32, shape = [None])
# Flatten the input data
images_flat = tf.keras.layers.flatten(x)
# Fully connected layer
logits = tf.contrib.layers.dense(images_flat, 62, tf.nn.relu)
# Define a loss function
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,
                                                                     logits=logits))
# Define an optimizer
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
# Convert logits to label indexes
correct_pred = tf.argmax(logits, 1)
# Define an accuracy metric
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
At first, I used tf.layers.flatten(x) as in the tutorial; however, it will be deprecated in future versions, so I added Keras instead, as suggested.
I am getting the following output in IDLE Console.
RESTART: C:\Users\Jay\AppData\Local\Programs\Python\Python37\Scriptcodes\SecondTensorFlow.py
Using TensorFlow backend.
Warning (from warnings module):
  File "C:\Users\Jay\AppData\Local\Programs\Python\Python37\lib\site-packages\skimage\transform\_warps.py", line 105
    warn("The default mode, 'constant', will be changed to 'reflect' in "
UserWarning: The default mode, 'constant', will be changed to 'reflect' in skimage 0.15.
Warning (from warnings module):
  File "C:\Users\Jay\AppData\Local\Programs\Python\Python37\lib\site-packages\skimage\transform\_warps.py", line 110
    warn("Anti-aliasing will be enabled by default in skimage 0.15 to "
UserWarning: Anti-aliasing will be enabled by default in skimage 0.15 to avoid aliasing artifacts when down-sampling images.
Traceback (most recent call last):
  File "C:\Users\Jay\AppData\Local\Programs\Python\Python37\Scriptcodes\SecondTensorFlow.py", line 64, in <module>
    images_flat = tf.python.keras.layers.flatten(x)
AttributeError: module 'tensorflow' has no attribute 'python'
I am using,
Keras version 2.2.4
Tensorflow version 1.13.1
Either
from keras.layers import Flatten
and use
Flatten()(input)
or simply use
tf.keras.layers.Flatten()(input)
The new ("keras as the default API") approach would have you use the keras layer tf.keras.layers.Flatten but there is a little nuance you seem to have missed (and that hasn't been mentioned in the comments).
tf.keras.layers.Flatten() actually returns a keras layer (callable) object which in turn needs to be called with your previous layer.
So something more like this:
# Flatten the input data
flatten_layer = tf.keras.layers.Flatten()
images_flat = flatten_layer(x)
or, for brevity, just:
# Flatten the input data
images_flat = tf.keras.layers.Flatten()(x)
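A quick sanity check of the layer-object pattern, written against the TF 1.x placeholder API used in the question (the printed shape assumes 28x28 inputs):
import tensorflow as tf
x = tf.placeholder(dtype=tf.float32, shape=[None, 28, 28])
images_flat = tf.keras.layers.Flatten()(x)  # layer object, then called on x
print(images_flat.shape)  # (?, 784): 28*28 values per sample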

How to use a image set in RNN model

Hello, I am trying to build my first RNN using Keras and TensorFlow, but I am getting stuck on the issue of reshaping my images to fit into the model.
I have looked at this post but could not figure out about the reshaping:
Keras - Input a 3 channel image into LSTM
What I have is a bunch of images taken at every frame of a video. I saved all the frames outside of Python, so I have a very large folder of images. I separated the frames into segments of 21 frames, so 21 images per motion that I want to capture, and I want to read in these 21 images as one sequence. I have the same sequence captured from multiple cameras/angles, which I want to use in this model. What I want to try is to model a movement and see whether a person is doing it or not, so it is basically a binary yes/no model. It is not the most sophisticated setup, but it's a learning process for using this model and Keras.
I need help figuring out how to use these images inside the Keras model. I have looked at a few tutorials on the MNIST dataset, but that didn't help me figure this out.
Any help will be appreciated.
This is the error that is given to me when I try to train the model
ValueError: Error when checking input: expected lstm_1_input to have 3 dimensions, but got array with shape (2026, 200, 200, 1)
My code is this:
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from tqdm import tqdm
import cv2
import os
import numpy as np
imageSize = 200
# Create labels for each image
def labelImage(img):
    wordLabel = img.split('.')[-3]
    # Conversion to a one-hot array [lat, not]
    if wordLabel == "FWAC":
        return [1, 0]
    else:
        return [0, 1]
# Process images and add labels
# Convert data into an array and add its label
def makeTrainingData():
    print("Creating Training Data")
    trainingData = []
    for img in tqdm(os.listdir(trainDir)):
        label = labelImage(img)
        path = os.path.join(trainDir, img)
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img, (imageSize, imageSize))
        trainingData.append([np.array(img), np.array(label)])
    # Save the array file to load it into other models if needed
    np.save("trainingData.npy", trainingData)
    print("Training Data Saved")
    return trainingData
# Process the testing data in the same manner
def processTestData():
    print("Creating Testing Data")
    testData = []
    for img in tqdm(os.listdir(testDri)):
        print("image", img)
        path = os.path.join(testDri, img)
        imgNum = img.split(".")[0]
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img, (imageSize, imageSize))
        testData.append([np.array(img), imgNum])
    np.save("testingData.npy", testData)
    print("Testing Data Saved")
    return testData
rnnSize = 512
model = Sequential()
model.add(LSTM(rnnSize, input_shape=(imageSize, imageSize)))
model.add(Dense(1024))
model.add(Activation('relu'))
model.add(Dense(50))
model.add(Activation('sigmoid'))
model.add(Dense(3))
model.add(Activation('softmax'))
model.compile(loss='mean_squared_error', optimizer='adam',metrics=['accuracy'])
#Data
trainDir = "D:/TrainingDataSets/TrainingSet/"
testDri = "D:/TrainingDataSets/TestingSet/"
#trainData = makeTrainingData()
#testData = processTestData()
trainData = np.load('trainingData.npy')
testData = np.load("testingData.npy")
# Resize the image to this; see above
train = trainData[:-500]
test = trainData[-200:]
x = []
y = []
for xi in trainData:
    x.append(xi[0].reshape((-1, imageSize, imageSize)))
    y.append(xi[1])
x_train = np.array([i[0] for i in train]).reshape(-1,imageSize, imageSize,1)
y_train = [i[1] for i in train]
test_x = np.array([i[0] for i in test]).reshape(-1,imageSize , imageSize,1)
test_y = [i[1] for i in test]
epoch = 5
batchSize = 100
model.fit(x_train, y_train, epochs=epoch, batch_size= batchSize, verbose=1, shuffle=False)
To fix the error, add this line before the dense layers:
model.add(Flatten())
and, beforehand, import it with:
from keras.layers import Flatten
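As an aside, since the question describes 21-frame segments: an LSTM expects input shaped (samples, timesteps, features) rather than (samples, height, width, 1). A minimal sketch of grouping flattened frames into such sequences; the array and file names here are illustrative, not from the original code:
import numpy as np
imageSize = 200
framesPerSegment = 21
# hypothetical array of time-ordered grayscale frames: (numFrames, 200, 200, 1)
frames = np.load("trainingFrames.npy")
numSegments = len(frames) // framesPerSegment
# flatten each frame to a 40000-dim feature vector and group 21 frames per sample:
# final shape (numSegments, 21, 40000) = (samples, timesteps, features)
sequences = frames[:numSegments * framesPerSegment].reshape(
    numSegments, framesPerSegment, imageSize * imageSize)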

Keras MobileNet example yields different answers on different computers

I have a very simple example with the Keras MobileNet implementation trying to classify a minivan. I run the same code on two different computers and get different results: not just slightly different, but different enough that the classifications are not the same.
(note that Tensorflow=1.7.0 and Keras=2.1.5 on both computers)
Code below
import sys
import argparse
import numpy as np
from PIL import Image
import requests
from io import BytesIO
import time
try:
    import matplotlib.pyplot as plt
    HAS_MATPLOTLIB = True
except:
    HAS_MATPLOTLIB = False
from keras.preprocessing import image
#from keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from keras.applications.mobilenet import MobileNet, preprocess_input, decode_predictions
#model = ResNet50(weights='imagenet')
model = MobileNet()
target_size = (224, 224)
def predict(model, img, target_size, top_n=3):
    """Run model prediction on image
    Args:
        model: keras model
        img: PIL format image
        target_size: (w,h) tuple
        top_n: # of top predictions to return
    Returns:
        list of predicted labels and their probabilities
    """
    if img.size != target_size:
        img = img.resize(target_size)
    print "preprocessing input.."
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    print "making prediction..."
    preds = model.predict(x)
    print "prediction made: %s" % preds
    return decode_predictions(preds, top=top_n)[0]
if __name__=="__main__":
a = argparse.ArgumentParser()
a.add_argument("--image", help="path to image")
a.add_argument("--image_url", help="url to image")
args = a.parse_args()
if args.image is None and args.image_url is None:
a.print_help()
sys.exit(1)
if args.image is not None:
img = Image.open(args.image)
preds = predict(model, img, target_size)
if args.image_url is not None:
print "getting image from url"
response = requests.get(args.image_url)
print "image gotten from url"
img = Image.open(BytesIO(response.content))
print "predicting.."
before = time.time()
preds = predict(model, img, target_size)
print "total time to predict: %.2f" % (time.time() - before)
print preds
plot_preds(img, preds)
Now if I run this on my MacBook Pro
$ python classify_example_mobile.py --image_url http://i.imgur.com/cg37Ojo.jpg
[(u'n03770679', u'minivan', 0.39935172), (u'n02974003', u'car_wheel', 0.28071228), (u'n02814533', u'beach_wagon', 0.19400564)]
but if I then run it on another computer that I have
(venv) $ python classify_example_mobile.py --image_url http://i.imgur.com/cg37Ojo.jpg
[(u'n02974003', u'car_wheel', 0.39516035), (u'n02814533', u'beach_wagon', 0.27965376), (u'n03770679', u'minivan', 0.22706936)]
the predictions are reversed, it no longer picks minivan as the top result.
How could this be? I know that different architectures can have different floating-point accuracy, but would that be enough to account for these results? I also know that models can vary depending on how the weights are initialized during training, but this is a pre-trained model, so what gives?
edit - to be clear, the image is a picture of a minivan, so in this case one architecture gets it right and the other one gets it wrong - so this is a big deal for me. (http://i.imgur.com/cg37Ojo.jpg)
So I don't quite understand what is going on here, but the error appears to have gone away once I did some more preprocessing of the input, which makes me think that maybe I had different PIL or numpy versions or something.
I added this line:
img = img.convert("RGB")
and now the results between the two computers are identical
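A plausible explanation for why this helped: PIL can decode the same image to different modes (RGB, RGBA, P) depending on the file and library version, and img_to_array then yields a different channel count or values, so convert("RGB") pins the input down. A small sketch, with an illustrative file name:
from PIL import Image
import numpy as np
img = Image.open("example.png")  # may decode as mode 'P' or 'RGBA'
print(img.mode)
rgb = img.convert("RGB")  # guarantees exactly 3 channels in a fixed order
print(np.asarray(rgb).shape)  # (h, w, 3) regardless of the source mode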

How to feed Cifar10 trained model with my own image and get label as output?

I am trying to use the trained model based on the Cifar10 tutorial and would like to feed it an external 32x32 image (jpg or png).
My goal is to be able to get the label as an output.
In other words, I want to feed the Network with a single jpeg image of size 32 x 32, 3 channels with no label as an input and have the inference process give me the tf.argmax(logits, 1).
Basically I would like to be able to use the trained cifar10 model on an external image and see what class it will spit out.
I have been trying to do that based on the Cifar10 tutorial and unfortunately always have issues, especially with the Session concept and the batch concept.
Any help doing that with Cifar10 would be greatly appreciated.
Here is the code implemented so far, which has compilation issues:
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from datetime import datetime
import math
import time
import tensorflow.python.platform
from tensorflow.python.platform import gfile
import numpy as np
import tensorflow as tf
import cifar10
import cifar10_input
import os
import faultnet_flags
from PIL import Image
FLAGS = tf.app.flags.FLAGS
def evaluate():
    filename_queue = tf.train.string_input_producer(['/home/tensor/.../inputImage.jpg'])
    reader = tf.WholeFileReader()
    key, value = reader.read(filename_queue)
    input_img = tf.image.decode_jpeg(value)
    init_op = tf.initialize_all_variables()
    # Problem in here with Graph / session
    with tf.Session() as sess:
        sess.run(init_op)
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(coord=coord)
        for i in range(1):
            image = input_img.eval()
            print(image.shape)
            Image.fromarray(np.asarray(image)).show()
        # Problem in here is that I have only one image as input with no label, and I would like
        # to make it compatible with the Cifar10 network
        reshaped_image = tf.cast(image, tf.float32)
        height = FLAGS.resized_image_size
        width = FLAGS.resized_image_size
        resized_image = tf.image.resize_image_with_crop_or_pad(reshaped_image, width, height)
        float_image = tf.image.per_image_whitening(resized_image)  # reshaped_image
        num_preprocess_threads = 1
        images = tf.train.batch(
            [float_image],
            batch_size=128,
            num_threads=num_preprocess_threads,
            capacity=128)
        coord.request_stop()
        coord.join(threads)
        logits = faultnet.inference(images)
        # Calculate predictions.
        #top_k_predict_op = tf.argmax(logits, 1)
        # print('Current image is: ')
        # print(top_k_predict_op[0])
        # this does not work since there is a problem with the session
        # and the Graph conflicting
        my_classification = sess.run(tf.argmax(logits, 1))
        print('Predicted ', my_classification[0], " for your input image.")
def main(argv=None):
    evaluate()
if __name__ == '__main__':
    tf.app.run()
Some basics first:
First you define your graph: image queue, image preprocessing, inference of the convnet, top-k accuracy.
Then you create a tf.Session() and work inside it: starting the queue runners and making calls to sess.run().
Here is what your code should look like:
# 1. GRAPH CREATION
filename_queue = tf.train.string_input_producer(['/home/tensor/.../inputImage.jpg'])
...  # NO CREATION of a tf.Session here
float_image = ...
images = tf.expand_dims(float_image, 0)  # create a fake batch of images (batch_size=1)
logits = faultnet.inference(images)
_, top_k_pred = tf.nn.top_k(logits, k=5)
# 2. TENSORFLOW SESSION
with tf.Session() as sess:
    sess.run(init_op)
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    top_indices = sess.run([top_k_pred])
    print("Predicted ", top_indices[0], " for your input image.")
EDIT:
As @mrry suggests, if you only need to work on a single image, you can remove the queue runners:
# 1. GRAPH CREATION
input_img = tf.image.decode_jpeg(tf.read_file("/home/.../your_image.jpg"), channels=3)
reshaped_image = tf.image.resize_image_with_crop_or_pad(tf.cast(input_img, tf.float32), width, height)
float_image = tf.image.per_image_whitening(reshaped_image)
images = tf.expand_dims(float_image, 0)  # create a fake batch of images (batch_size = 1)
logits = faultnet.inference(images)
_, top_k_pred = tf.nn.top_k(logits, k=5)
# 2. TENSORFLOW SESSION
with tf.Session() as sess:
    sess.run(init_op)
    top_indices = sess.run([top_k_pred])
    print("Predicted ", top_indices[0], " for your input image.")
The original source code in cifar10_eval.py can also be used for testing your own individual images, as shown in the following console output:
nbatfai@robopsy:~/Robopsychology/repos/gpu/tensorflow/tensorflow/models/image/cifar10$ python cifar10_eval.py --run_once True 2>/dev/null
[ -0.63916457  -3.31066918   2.32452989   1.51062226  15.55279636
  -0.91585422   1.26451302  -4.11891603  -7.62230825  -4.29096413]
deer
nbatfai@robopsy:~/Robopsychology/repos/gpu/tensorflow/tensorflow/models/image/cifar10$ python cifar2bin.py matchbox.png input.bin
nbatfai@robopsy:~/Robopsychology/repos/gpu/tensorflow/tensorflow/models/image/cifar10$ python cifar10_eval.py --run_once True 2>/dev/null
[ -1.30562115  12.61497402  -1.34208572  -1.3238833   -6.13368177
  -1.17441642  -1.38651907  -4.3274951    2.05489922   2.54187846]
automobile
nbatfai@robopsy:~/Robopsychology/repos/gpu/tensorflow/tensorflow/models/image/cifar10$
and code snippet:
#while step < num_iter and not coord.should_stop():
#    predictions = sess.run([top_k_op])
print(sess.run(logits[0]))
classification = sess.run(tf.argmax(logits[0], 0))
cifar10classes = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
print(cifar10classes[classification])
#true_count += np.sum(predictions)
step += 1
# Compute precision @ 1.
precision = true_count / total_sample_count
# print('%s: precision @ 1 = %.3f' % (datetime.now(), precision))
More details can be found in the post How can I test own image to Cifar-10 tutorial on Tensorflow?
