I want to capture frames from a video with Python and OpenCV and then classify the captured Mat images with TensorFlow. The problem is that I don't know how to convert the Mat format to a 3D Tensor variable. This is how I am doing it now with TensorFlow (loading the image from a file):
image_data = tf.gfile.FastGFile(imagePath, 'rb').read()
with tf.Session() as sess:
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    predictions = sess.run(softmax_tensor,
                           {'DecodeJpeg/contents:0': image_data})
I would appreciate any help. Thanks in advance.
Load the OpenCV image using imread, then convert it to a numpy array.
For feeding into Inception v3, you need to use the Mul:0 tensor as the entry point; it expects a 4-dimensional tensor with the layout [batch index, height, width, channel].
The last three come directly from a cv::Mat; the first one just needs a single entry (batch index 0), since you are not feeding a batch of images but a single image.
The code looks like:
#Loading the file
img2 = cv2.imread(file)
#Format for the Mul:0 Tensor
img2 = cv2.resize(img2, dsize=(299, 299), interpolation=cv2.INTER_CUBIC)
#Numpy array
np_image_data = np.asarray(img2)
#maybe insert float conversion here - see edit remark!
np_final = np.expand_dims(np_image_data, axis=0)
#now feeding it into the session:
#[... initialization of session and loading of graph etc]
predictions = sess.run(softmax_tensor,
                       {'Mul:0': np_final})
#fin!
Kind regards,
Chris
Edit: I just noticed, that the inception network wants intensity values normalized as floats to [-0.5,0.5], so please use this code to convert them before building the RGB image:
np_image_data=cv2.normalize(np_image_data.astype('float'), None, -0.5, .5, cv2.NORM_MINMAX)
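Putting the pieces together, here is a minimal sketch of the full preprocessing with the normalization step in place (assuming sess, softmax_tensor and file are already set up as above):
import cv2
import numpy as np

img2 = cv2.imread(file)
img2 = cv2.resize(img2, dsize=(299, 299), interpolation=cv2.INTER_CUBIC)
np_image_data = np.asarray(img2)
# normalize intensities to floats in [-0.5, 0.5] (see edit remark above)
np_image_data = cv2.normalize(np_image_data.astype('float'), None, -0.5, .5, cv2.NORM_MINMAX)
np_final = np.expand_dims(np_image_data, axis=0)  # add the batch dimension
predictions = sess.run(softmax_tensor, {'Mul:0': np_final})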
With TensorFlow 2.0 and OpenCV 4.2.0, you can convert it this way:
import numpy as np
import tensorflow as tf
import cv2 as cv
width = 32
height = 32
#Load image by OpenCV
img = cv.imread('img.jpg')
#Resize to respect the input_shape
inp = cv.resize(img, (width , height ))
#Convert img to RGB
rgb = cv.cvtColor(inp, cv.COLOR_BGR2RGB)
#Optional, but recommended (float conversion and converting the image to a tensor)
rgb_tensor = tf.convert_to_tensor(rgb, dtype=tf.float32)
#Add dims to rgb_tensor
rgb_tensor = tf.expand_dims(rgb_tensor , 0)
#Now you can use rgb_tensor to predict a label, for example:
#Load pretrain model, made from: https://www.tensorflow.org/tutorials/images/cnn
model = tf.keras.models.load_model('cifar10_model.h5')
#Create probability model
probability_model = tf.keras.Sequential([model,
                                         tf.keras.layers.Softmax()])
#Predict label
predictions = probability_model.predict(rgb_tensor, steps=1)
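If you then want to turn the prediction into a human-readable label, here is a small sketch assuming the model was trained with the usual CIFAR-10 class order from that tutorial:
import numpy as np

class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']
print(class_names[np.argmax(predictions[0])])  # class with the highest probability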
It looks like you're using the pre-trained and pre-defined Inception model, which has a tensor named DecodeJpeg/contents:0. If so, this tensor expects a scalar string containing the bytes for a JPEG image.
You have a couple of options. One is to look further down the network for the node where the JPEG is converted to a matrix. I'm not sure what the Mat format is, but this will be a [height, width, colour_depth] representation. If you can get your image into that format, you can replace the DecodeJpeg... string with the name of the node you want to feed into.
The other option is to simply convert your images to JPEGs and feed them straight in.
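A minimal sketch of that second option, assuming an OpenCV BGR image in img and the same session/graph setup as in the question:
import cv2

# encode the in-memory image as JPEG bytes and feed them to the existing decode node
jpeg_bytes = cv2.imencode('.jpg', img)[1].tobytes()
predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': jpeg_bytes})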
You should be able to convert the OpenCV Mat format to a numpy array with:
np_image_data = np.asarray(image_data)
Once you have the data as a numpy array you can pass it to TensorFlow through a feeding mechanism, as in the link that @thesonyman101 referenced:
feed_dict = {some_tf_input:np_image_data}
predictions = sess.run(some_tf_output, feed_dict=feed_dict)
In my case I had to read an image from a file, do some processing, and then inject it into Inception to obtain the output of a features layer (last_layer below).
My solution is short but effective.
img = cv2.imread(file)
# ... do some processing
img_as_string = cv2.imencode('.jpg', img)[1].tostring()
features = sess.run(last_layer, {'DecodeJpeg/contents:0': img_as_string})
How can I do this for multiple images (within a folder) and put them into a Dataframe?
This is the code for analysing one image:
import numpy as np
from keras.preprocessing import image
from keras.applications import resnet50
import warnings
warnings.filterwarnings('ignore')
# Load Keras' ResNet50 model that was pre-trained against the ImageNet database
model = resnet50.ResNet50()
# Load the image file, resizing it to 224x224 pixels (required by this model)
img = image.load_img("rgotunechair10.jpg", target_size=(224, 224))
# Convert the image to a numpy array
x = image.img_to_array(img)
# Add a fourth dimension since Keras expects a list of images
x = np.expand_dims(x, axis=0)
# Scale the input image to the range used in the trained network
x = resnet50.preprocess_input(x)
# Run the image through the deep neural network to make a prediction
predictions = model.predict(x)
# Look up the names of the predicted classes. Index zero is the results for the first image.
predicted_classes = resnet50.decode_predictions(predictions, top=9)
image_components = []
for x, y, z in predicted_classes[0]:
    image_components.append(y)
print(image_components)
This is the output:
['desktop_computer', 'desk', 'monitor', 'space_bar', 'computer_keyboard', 'typewriter_keyboard', 'screen', 'notebook', 'television']
First of all, move the code for analyzing the image to a function. Instead of printing the result, you will return it there:
import numpy as np
from keras.preprocessing import image
from keras.applications import resnet50
import warnings
warnings.filterwarnings('ignore')
def run_resnet50(image_name):
    # Load Keras' ResNet50 model that was pre-trained against the ImageNet database
    # (loading the model on every call is slow; you can also create it once outside the function)
    model = resnet50.ResNet50()
    # Load the image file, resizing it to 224x224 pixels (required by this model)
    img = image.load_img(image_name, target_size=(224, 224))
    # Convert the image to a numpy array
    x = image.img_to_array(img)
    # Add a fourth dimension since Keras expects a list of images
    x = np.expand_dims(x, axis=0)
    # Scale the input image to the range used in the trained network
    x = resnet50.preprocess_input(x)
    # Run the image through the deep neural network to make a prediction
    predictions = model.predict(x)
    # Look up the names of the predicted classes. Index zero is the results for the first image.
    predicted_classes = resnet50.decode_predictions(predictions, top=9)
    image_components = []
    for x, y, z in predicted_classes[0]:
        image_components.append(y)
    return image_components
Then, get all images inside the desired folder (for instance, the current directory):
import os

images_path = '.'
images = [f for f in os.listdir(images_path) if f.endswith('.jpg')]
Run the function on all images, get the result:
result = [run_resnet50(img_name) for img_name in images]
This result will be a list of lists. Then you could just move it to a DataFrame. If you want to keep the image name for each result, use a dictionary instead.
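For example, a minimal sketch of the dictionary variant that keeps the file names (assuming pandas is installed):
import pandas as pd

result = {img_name: run_resnet50(img_name) for img_name in images}
df = pd.DataFrame(list(result.items()), columns=['image', 'predicted_components'])
print(df)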
I have trained a classifier with this: https://teachablemachine.withgoogle.com/
Then I set up a Python environment where I can run the model. I heard that with some tweaks such a model could be turned into a DeepDream-like model.
Does anyone know how I could tweak the model with Keras to generate pictures of what it learned to classify? Is it even possible?
Here is my current code:
import tensorflow.keras
from PIL import Image, ImageOps
import numpy as np
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
# Disable scientific notation for clarity
np.set_printoptions(suppress=True)
# Load the model
model = tensorflow.keras.models.load_model('C:/Users/me/Downloads/keras_model.h5')
# Create the array of the right shape to feed into the keras model
# The 'length' or number of images you can put into the array is
# determined by the first position in the shape tuple, in this case 1.
data = np.ndarray(shape=(1, 224, 224, 3), dtype=np.float32)
# Replace this with the path to your image
image = Image.open('C:/Users/me/Downloads/0a8d8fa2c09ed00a54b6590f2fa01436.jpg')
#resize the image to a 224x224 with the same strategy as in TM2:
#resizing the image to be at least 224x224 and then cropping from the center
size = (224, 224)
image = ImageOps.fit(image, size, Image.ANTIALIAS)
#turn the image into a numpy array
image_array = np.asarray(image)
# display the resized image
image.show()
# Normalize the image
normalized_image_array = (image_array.astype(np.float32) / 127.0) - 1
# Load the image into the array
data[0] = normalized_image_array
# run the inference
prediction = model.predict(data)
print(prediction)
The idea is quite simple: you feed the image to the model and then maximize the activation of certain layers with respect to the image itself, not the weights of the model (changing which layers you pick will change the result).
TensorFlow made an awesome DeepDream notebook; check it out here for more information and detailed examples.
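For a rough idea, here is a minimal sketch of that gradient-ascent loop with tf.GradientTape, assuming your model is already loaded as above and layer_name is a hypothetical intermediate layer picked from model.summary():
import numpy as np
import tensorflow as tf

# build a feature extractor for the layer whose activation we want to maximize
layer_name = 'some_intermediate_layer'   # hypothetical name - pick one from model.summary()
dream_model = tf.keras.Model(inputs=model.input,
                             outputs=model.get_layer(layer_name).output)

# start from the preprocessed image batch 'data' of shape (1, 224, 224, 3), values in [-1, 1]
img = tf.Variable(data, dtype=tf.float32)

for step in range(100):
    with tf.GradientTape() as tape:
        activation = dream_model(img)
        loss = tf.reduce_mean(activation)         # maximize the mean activation of that layer
    grads = tape.gradient(loss, img)
    grads /= tf.math.reduce_std(grads) + 1e-8     # normalize the gradient step
    img.assign_add(0.01 * grads)                  # gradient ascent on the image, not the weights
    img.assign(tf.clip_by_value(img, -1.0, 1.0))  # keep pixels in the model's input range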
I am using an exported classification model from Google AutoML Vision, hence I only have a saved_model.pb and no variables, checkpoints etc.
I want to load this model graph into a local TensorFlow installation, use it for inference and continue training with more pictures.
Main questions:
Is this plan possible, i.e. to use a single saved_model.pb without variables, checkpoints etc. and train the resulting graph with new data?
If yes: How do you get to an input shape of (?,) with images encoded as strings?
Ideally, looking ahead: Any important thing to consider for the training part?
Background info about the code:
To read the image, I use the same approach as you would when using the Docker container for inference, hence a base64-encoded image.
To load the graph, I checked what tag set the graph needs via CLI (saved_model_cli show --dir input/model) which is serve.
To get the input tensor names I use graph.get_operations(), which gives me Placeholder:0 for image_bytes and Placeholder_1:0 for the key (just an arbitrary string to identify the image). Both have dimension dim: -1.
import io
import tensorflow as tf
import numpy as np
import base64
path_img = "input/testimage.jpg"
path_mdl = "input/model"
# input to network expected to be base64 encoded image
with io.open(path_img, 'rb') as image_file:
    encoded_image = base64.b64encode(image_file.read()).decode('utf-8')
# reshaping to (1,) as the expected dimension is (?,)
feed_dict_option1 = {
    "Placeholder:0": { np.array(str(encoded_image)).reshape(1,) },
    "Placeholder_1:0": "image_key"
}

# reshaping to (1,1) as the expected dimension is (?,)
feed_dict_option2 = {
    "Placeholder:0": np.array(str(encoded_image)).reshape(1,1),
    "Placeholder_1:0": "image_key"
}
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ["serve"], path_mdl)
    graph = tf.get_default_graph()

    sess.run('scores:0',
             feed_dict=feed_dict_option1)

    sess.run('scores:0',
             feed_dict=feed_dict_option2)
Output:
# for input reshaped to (1,)
ValueError: Cannot feed value of shape (1,) for Tensor 'Placeholder:0', which has shape '(?,)'
# for input reshaped to (1,1)
ValueError: Cannot feed value of shape (1, 1) for Tensor 'Placeholder:0', which has shape '(?,)'
How do you get to an input shape of (?,)?
Thanks a lot.
Yes! It is possible. I have an object detection model that should be similar; I can run it as follows in TensorFlow 1.14.0:
import cv2

img = cv2.imread(filepath)
flag, bts = cv2.imencode('.jpg', img)
inp = [bts[:,0].tobytes()]
out = sess.run([sess.graph.get_tensor_by_name('num_detections:0'),
                sess.graph.get_tensor_by_name('detection_scores:0'),
                sess.graph.get_tensor_by_name('detection_boxes:0'),
                sess.graph.get_tensor_by_name('detection_classes:0')],
               feed_dict={'encoded_image_string_tensor:0': inp})
I used netron to find my input.
In tensorflow 2.0 it is even easier:
import cv2

img = cv2.imread(filepath)
flag, bts = cv2.imencode('.jpg', img)
inp = [bts[:,0].tobytes()]
saved_model_dir = '.'
loaded = tf.saved_model.load(export_dir=saved_model_dir)
infer = loaded.signatures["serving_default"]
out = infer(key=tf.constant('something_unique'), image_bytes=tf.constant(inp))
Also saved_model.pb is not a frozen_inference_graph.pb, see: What is difference frozen_inference_graph.pb and saved_model.pb?
I have tried the TensorFlow example with the Zalando MNIST dataset here:
https://www.tensorflow.org/tutorials/keras/basic_classification
After that I replaced the clothes images with the handwritten MNIST database, which also works.
Now I want to train the AI with the handwritten MNIST database, take a picture of my handwritten "1" and let the AI guess the number.
After training the AI, I appended some lines of code.
What I tried is this:
ownPicArr = imageio.imread(filename) #it is a 28x28 PNG file
ownPicArr = ownPicArr / 255.0
pred = model.predict(ownPicArr)
I got following error:
ValueError: Error when checking input: expected flatten_input to have 3 dimensions, but got array with shape (28, 28)
How do I solve this problem? Thank you...
Even if the colours of your picture were inverted, this is how you could perform the predictions using OpenCV
import os, cv2
import numpy as np
import tensorflow as tf
from PIL import Image

image = cv2.imread(imagePath)
image_from_array = Image.fromarray(image, 'RGB')
size_image = image_from_array.resize((28, 28))
p = np.expand_dims(size_image, 0)
img = tf.cast(p, tf.float32)
pred = model.predict(img)
First we read the image using OpenCV, which stores it as an array. We then convert the array to a PIL image, specifying the colour channels. After resizing the image we create a batch containing a single image, change the datatype to float32 (or whichever datatype matches your model), and finally make the prediction.
I'm fine-tuning the GoogleNet network with Caffe to my own dataset. If I use IMAGE_DATA layers as input learning takes place. However, I need to switch to an HDF5 layer for further extensions that I require. When I use HDF5 layers no learning takes place.
I am using the exact same input images, and the labels match also. I have also checked to ensure that the data in .h5 files can be loaded correctly. It does, and Caffe is also able to find the number of examples I feed it as well as the correct number of classes (2).
This leads me to think that the issue lies in the transformations I am performing manually (since HDF5 layers do not perform any built-in transformations). The code for these is below. I do the following:
Convert image from RGB to BGR
Resize it to 256x256 so I can subtract the mean file from ImageNet (included in the Caffe library)
Since the original GoogleNet prototxt does not divide by 255, I also do not (see here)
I resize the image down to 224x224, which is the crop size required by GoogleNet
I transpose the image as needed to satisfy CxHxW, as required by Caffe
At the moment I am not performing data augmentation, but it could be turned on by setting oversample=True.
Can anyone see anything wrong with this approach? Is data augmentation so critical that no learning would take place without it?
The HDF5 conversion code
IMG_RESHAPE = 224
IMG_UNCROPPED = 256
def resize_convert(img_names, path=None, oversample=False):
    '''
    Load images, set to BGR mode and transpose to CxHxW
    and subtract the Imagenet mean. If oversample is True,
    perform data augmentation.

    Parameters:
    ---------
    img_names (list): list of image names to be processed.
    path (string): path to images.
    oversample (bool): if True then data augmentation is performed
        on each image, and 10 crops of size 224x224 are produced
        from each image. If False, then a single 224x224 is produced.
    '''
    path = path if path is not None else ''
    if oversample == False:
        all_imgs = np.empty((len(img_names), 3, IMG_RESHAPE, IMG_RESHAPE), dtype='float32')
    else:
        all_imgs = np.empty((len(img_names), 3, IMG_UNCROPPED, IMG_UNCROPPED), dtype='float32')

    #load the imagenet mean
    mean_val = np.load('/path/to/imagenet/ilsvrc_2012_mean.npy')

    for i, img_name in enumerate(img_names):
        img = ndimage.imread(path+img_name, mode='RGB') # Read as HxWxC

        #subtract the mean of Imagenet
        #First, resize to 256 so we can subtract the mean of dims 256x256
        img = img[...,::-1] #Convert RGB TO BGR
        img = caffe.io.resize_image(img, (IMG_UNCROPPED, IMG_UNCROPPED), interp_order=1)
        img = np.transpose(img, (2, 0, 1)) #HxWxC => CxHxW
        #Since mean is given in Caffe channel order: 3xWxH
        #Assume it also is given in BGR order
        img = img - mean_val

        #set to 0-1 range => I don't think googleNet requires this
        #I tried both and it didn't make a difference
        #img = img/255

        #resize images down since GoogleNet accepts 224x224 crops
        if oversample == False:
            img = np.transpose(img, (1,2,0)) # CxHxW => HxWxC
            img = caffe.io.resize_image(img, (IMG_RESHAPE, IMG_RESHAPE), interp_order=1)
            img = np.transpose(img, (2,0,1)) #convert to CxHxW for Caffe

        all_imgs[i, :, :, :] = img

    #oversampling requires HxWxC order
    if oversample:
        all_imgs = np.transpose(all_imgs, (0, 3, 1, 2))
        all_imgs = caffe.io.oversample(all_imgs, (IMG_RESHAPE, IMG_RESHAPE))
        all_imgs = np.transpose(all_imgs, (0,2,3,1)) #convert to CxHxW for Caffe

    return all_imgs
Relevant differences between IMAGE_DATA and HDF5 prototxt files
name: "GoogleNet"
layers {
  name: "data"
  type: HDF5_DATA
  top: "data"
  top: "label"
  hdf5_data_param {
    source: "/path/to/train_list.txt"
    batch_size: 32
  }
  include: { phase: TRAIN }
}
layers {
  name: "data"
  type: HDF5_DATA
  top: "data"
  top: "label"
  hdf5_data_param {
    source: "/path/to/valid_list.txt"
    batch_size: 10
  }
  include: { phase: TEST }
}
Update
When I say no learning is taking place, I mean that my training loss is not going down consistently when using HDF5 data compared to the IMAGE_DATA layer. In the images below, the first plot shows the change in the training loss for the IMAGE_DATA network, and the other shows it for the HDF5 data network.
One possibility that I am considering is that the network is overfitting to each of the .h5 files that I am feeding it. At the moment I am using data augmentation, but all of the augmented examples are stored in a single .h5 file, along with other examples. However, because all of the augmented versions of a single input image are contained within the same .h5 file, I think this could cause the network to overfit to that specific .h5 file. I am not sure whether this is what the second plot suggests.
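If that is the concern, one cheap check is to shuffle all examples before writing them out, so augmented copies of the same source image are spread across the .h5 files instead of grouped in one. A rough sketch with h5py, assuming all_imgs comes from resize_convert above and labels is a matching array (the variable names and the split into 4 files are just for illustration):
import numpy as np
import h5py

# shuffle once so augmented copies of the same source image do not cluster together
perm = np.random.permutation(len(all_imgs))
all_imgs, labels = all_imgs[perm], labels[perm]

# split the shuffled data across several .h5 files (list them all in train_list.txt)
for k, (img_chunk, lbl_chunk) in enumerate(zip(np.array_split(all_imgs, 4),
                                               np.array_split(labels, 4))):
    with h5py.File('train_%d.h5' % k, 'w') as f:
        f.create_dataset('data', data=img_chunk, dtype='float32')
        f.create_dataset('label', data=lbl_chunk.astype('float32'))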
I faced the same problem and found out that, for some reason, doing the transformation manually as you are doing in your code causes the images to be all black (all zeros). Try to debug your code and see if that is happening.
The solution is to use the same methodology explained in the Caffe tutorial here:
http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb
The part where you see:
# create transformer for the input called 'data'
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2,0,1)) # move image channels to outermost dimension
transformer.set_mean('data', mu) # subtract the dataset-mean value in each channel
transformer.set_raw_scale('data', 255) # rescale from [0, 1] to [0, 255]
transformer.set_channel_swap('data', (2,1,0)) # swap channels from RGB to BGR
Then, a few lines down:
image = caffe.io.load_image(caffe_root + 'examples/images/cat.jpg')
transformed_image = transformer.preprocess('data', image)
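Applied to the HDF5 conversion above, the idea would be to let the transformer produce the arrays you write into the .h5 files instead of the manual BGR/transpose/mean code; a rough sketch, assuming the transformer was built for your 224x224 GoogleNet input shape and mu is the ImageNet mean pixel:
# inside the loop of resize_convert, in place of the manual preprocessing
for i, img_name in enumerate(img_names):
    img = caffe.io.load_image(path + img_name)                  # HxWxC, RGB, floats in [0, 1]
    all_imgs[i, :, :, :] = transformer.preprocess('data', img)  # CxHxW, BGR, mean-subtracted, rescaled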