How to turn a classifier model into an image generator in Python?

I have trained a classifier with this: https://teachablemachine.withgoogle.com/
Then I set up a Python environment where I can run the model. I heard that with some tweaks such a model could be turned into a DeepDream-like model.
Does anyone know how I could tweak the model with Keras to generate pictures of the classes it learned to classify? Is it even possible?
Here is my current code:
import tensorflow.keras
from PIL import Image, ImageOps
import numpy as np
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
# Disable scientific notation for clarity
np.set_printoptions(suppress=True)
# Load the model
model = tensorflow.keras.models.load_model('C:/Users/me/Downloads/keras_model.h5')
# Create the array of the right shape to feed into the keras model
# The 'length' or number of images you can put into the array is
# determined by the first position in the shape tuple, in this case 1.
data = np.ndarray(shape=(1, 224, 224, 3), dtype=np.float32)
# Replace this with the path to your image
image = Image.open('C:/Users/me/Downloads/0a8d8fa2c09ed00a54b6590f2fa01436.jpg')
#resize the image to a 224x224 with the same strategy as in TM2:
#resizing the image to be at least 224x224 and then cropping from the center
size = (224, 224)
image = ImageOps.fit(image, size, Image.LANCZOS)  # Image.ANTIALIAS was removed in Pillow 10
#turn the image into a numpy array
image_array = np.asarray(image)
# display the resized image
image.show()
# Normalize the image
normalized_image_array = (image_array.astype(np.float32) / 127.0) - 1
# Load the image into the array
data[0] = normalized_image_array
# run the inference
prediction = model.predict(data)
print(prediction)

The idea is quite simple: feed an image to the model and then maximize the activation of certain layers with respect to the image itself, not the weights of the model (choosing different layers changes the result).
TensorFlow made an awesome DeepDream notebook (https://www.tensorflow.org/tutorials/generative/deepdream); check it out for more information and detailed examples.
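Below is a minimal gradient-ascent sketch of that idea, assuming the Teachable Machine model from the question (inputs normalized to [-1, 1]); the layer name is a placeholder, so inspect model.summary() and pick a real intermediate layer:
import numpy as np
import tensorflow as tf
from PIL import Image

model = tf.keras.models.load_model('keras_model.h5')
model.summary()  # find a real layer name in your model

# 'conv_pw_13_relu' is hypothetical; replace it with a layer from your model
layer = model.get_layer('conv_pw_13_relu')
dream_model = tf.keras.Model(inputs=model.inputs, outputs=layer.output)

# Start from noise in the same [-1, 1] range used at training time
img = tf.Variable(tf.random.uniform((1, 224, 224, 3), minval=-1.0, maxval=1.0))

for step in range(200):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(dream_model(img))  # activation to maximize
    grads = tape.gradient(loss, img)
    grads /= tf.math.reduce_std(grads) + 1e-8    # normalize the gradient
    img.assign_add(0.01 * grads)                 # ascend on the image, not the weights
    img.assign(tf.clip_by_value(img, -1.0, 1.0))

# Undo the question's normalization and display the result
Image.fromarray(((img[0].numpy() + 1.0) * 127.0).astype(np.uint8)).show()
Different layers, step counts, and step sizes give very different results; the notebook adds octaves and tiling on top of this same basic loop.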

Related

InvalidArgumentError: Graph execution error: TensorFlow pose estimation using OpenCV

I am trying to build pose detection using cv2 and TensorFlow in Google Colab, and I am encountering the following error.
Code:
import tensorflow as tf
import tensorflow_hub as hub
import cv2
from matplotlib import pyplot as plt
import numpy as np
from google.colab.patches import cv2_imshow
model = hub.load('https://tfhub.dev/google/movenet/multipose/lightning/1')
movenet = model.signatures['serving_default']
img_original = cv2.imread('/content/brandon-atchison-eexdeq3NleQ-unsplash.jpeg',1)
img_copy = img_original.copy()
input_img = tf.cast(img_original,dtype=tf.int32)
img_copy.shape
tensor = tf.convert_to_tensor(img_original,dtype=tf.int32)
tensor
results = movenet(tensor)
I have created the variable img_copy because I need to perform some operations on the image while keeping the original as it is. I am not sure what error I am facing when trying to get results from the movenet model.
Try:
results = movenet(tensor[None, ...])
since you are missing the batch dimension, which is needed to feed data to your model. You could also use tf.expand_dims:
tensor = tf.expand_dims(tensor, axis=0)
# resize so height and width are multiples of 32
tensor = tf.image.resize(tensor, [32 * 186, 32 * 125])
tensor = tf.cast(tensor, tf.int32)  # tf.image.resize returns floats; the model expects int32
Here is a working example:
import tensorflow as tf
import tensorflow_hub as hub
model = hub.load('https://tfhub.dev/google/movenet/multipose/lightning/1')
movenet = model.signatures['serving_default']
tensor = tf.random.uniform((1, 160, 256, 3), minval=0, maxval=255, dtype=tf.int32)
movenet(tensor)
Check the model description and make sure you have the correct shape:
A frame of video or an image, represented as an int32 tensor of dynamic shape: 1xHxWx3, where H and W need to be a multiple of 32 and the larger dimension is recommended to be 256. To prepare the input image tensor, one should resize (and pad if needed) the image such that the above conditions hold. Please see the Usage section for more detailed explanation. Note that the size of the input image controls the tradeoff between speed vs. accuracy so choose the value that best suits your application. The channel order is RGB with values in [0, 255].
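To make that concrete, here is a hedged preprocessing sketch for a photo from disk (the file name is a placeholder); padding and resizing to 256x256 satisfies the multiple-of-32 requirement:
import tensorflow as tf
import tensorflow_hub as hub

model = hub.load('https://tfhub.dev/google/movenet/multipose/lightning/1')
movenet = model.signatures['serving_default']

image = tf.io.read_file('person.jpg')              # placeholder file name
image = tf.io.decode_jpeg(image)                   # uint8 tensor of shape (H, W, 3)
image = tf.expand_dims(image, axis=0)              # add the batch dimension
image = tf.image.resize_with_pad(image, 256, 256)  # keep aspect ratio, pad the rest
image = tf.cast(image, tf.int32)                   # the model expects int32 in [0, 255]

results = movenet(image)
print(results['output_0'].shape)                   # (1, 6, 56): keypoints and scores per person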

How can I use a function or loop on this resnet50 code to predict the components of multiple images (within a folder), instead of just one?

How can I do this for multiple images (within a folder) and put them into a Dataframe?
This is the code for analysing one image:
import numpy as np
from keras.preprocessing import image
from keras.applications import resnet50
import warnings
warnings.filterwarnings('ignore')
# Load Keras' ResNet50 model that was pre-trained against the ImageNet database
model = resnet50.ResNet50()
# Load the image file, resizing it to 224x224 pixels (required by this model)
img = image.load_img("rgotunechair10.jpg", target_size=(224, 224))
# Convert the image to a numpy array
x = image.img_to_array(img)
# Add a fourth dimension since Keras expects a batch of images
x = np.expand_dims(x, axis=0)
# Scale the input image to the range used in the trained network
x = resnet50.preprocess_input(x)
# Run the image through the deep neural network to make a prediction
predictions = model.predict(x)
# Look up the names of the predicted classes. Index zero is the result for the first image.
predicted_classes = resnet50.decode_predictions(predictions, top=9)
image_components = []
for x, y, z in predicted_classes[0]:
    image_components.append(y)
print(image_components)
This is the output:
['desktop_computer', 'desk', 'monitor', 'space_bar', 'computer_keyboard', 'typewriter_keyboard', 'screen', 'notebook', 'television']
First of all, move the code for analyzing an image into a function. Instead of printing the result, you will return it there:
import numpy as np
from keras.preprocessing import image
from keras.applications import resnet50
import warnings
warnings.filterwarnings('ignore')

# Load Keras' ResNet50 model that was pre-trained against the ImageNet database.
# Loading it once, outside the function, avoids reloading it for every image.
model = resnet50.ResNet50()

def run_resnet50(image_name):
    # Load the image file, resizing it to 224x224 pixels (required by this model)
    img = image.load_img(image_name, target_size=(224, 224))
    # Convert the image to a numpy array
    x = image.img_to_array(img)
    # Add a fourth dimension since Keras expects a batch of images
    x = np.expand_dims(x, axis=0)
    # Scale the input image to the range used in the trained network
    x = resnet50.preprocess_input(x)
    # Run the image through the deep neural network to make a prediction
    predictions = model.predict(x)
    # Look up the names of the predicted classes. Index zero is the result for the first image.
    predicted_classes = resnet50.decode_predictions(predictions, top=9)
    image_components = []
    for _, name, _ in predicted_classes[0]:
        image_components.append(name)
    return image_components
Then, get all images inside the desired folder (for instance, the current directory):
import os

images_path = '.'
images = [f for f in os.listdir(images_path) if f.endswith('.jpg')]
Run the function on all images, get the result:
result = [run_resnet50(img_name) for img_name in images]
This result will be a list of lists. You can then move it into a DataFrame. If you want to keep the image name for each result, use a dictionary instead.
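As a hedged sketch of that last step (assuming pandas is available; the column names are my own), the dictionary variant keeps each file name next to its predictions:
import os
import pandas as pd

images = [f for f in os.listdir('.') if f.endswith('.jpg')]
result = {img_name: run_resnet50(img_name) for img_name in images}

# One row per image; the component list is stored in a single column
df = pd.DataFrame({'image': list(result.keys()),
                   'components': list(result.values())})
print(df)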

how to predict with pre-trained model in tensorflow?

I am new to TensorFlow. I am trying to run a pre-trained NN for handwritten digit recognition, 'wide_resnet_28_10', from GitHub: https://github.com/Curt-Park/handwritten_digit_recognition. When I try to predict an image, it says the input is expected to have 4 dimensions. This is what I tried:
from tensorflow.keras.models import load_model
import tensorflow as tf
import cv2
import numpy as np
model = load_model(r'C:\Users\sesha\Desktop\python\Deep learning NN\handwritten_digit_recognition-master\models\WideResNet28_10.h5')
image = cv2.imread(r'C:\Users\sesha\Desktop\python\Deep learning NN\test_org01.png')
img = tf.convert_to_tensor(image)
predictions = model.predict([img])
print(np.argmax(predictions))
Most tutorials are vague. I did try np.reshape(1, X, X, -1), but that didn't work.
The model expects batches of data, hence the 4D input. You can make your image a 4D tensor by doing:
predictions = model.predict(tf.expand_dims(img, 0))
If this does not work, try predict_on_batch instead of predict.
Also, I don't think your image reading is quite right for this model: cv2.imread returns a raw BGR uint8 array. Reading and decoding the file with TensorFlow should work:
path = tf.constant(img_path)
image = tf.io.read_file(path)
image = tf.io.decode_image(image)
image = tf.image.resize(image, (X, Y)) # if necessary
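Putting both fixes together, here is a hedged end-to-end sketch; the 28x28 grayscale input and the 1/255 scaling are assumptions about how WideResNet28_10 was trained, so check model.input_shape first:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import load_model

model = load_model('WideResNet28_10.h5')  # path shortened for the example
print(model.input_shape)                  # confirm the expected 4D shape

image = tf.io.read_file('test_org01.png')
image = tf.io.decode_image(image, channels=1, expand_animations=False)  # grayscale (H, W, 1)
image = tf.image.resize(image, (28, 28)) / 255.0                        # assumed training scale
image = tf.expand_dims(image, axis=0)                                   # add the batch dimension

predictions = model.predict(image)
print(np.argmax(predictions))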

How to load an image into tensorflow to use with a model?

I have just begun learning machine learning and am using TensorFlow 1.14. I have just created my first model using tensorflow.keras, with the built-in tensorflow.keras.datasets.mnist dataset. Here is the code for my model:
import tensorflow as tf
from tensorflow import keras
mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
class Stopper(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, log={}):
        if log.get('acc') >= 0.99:
            self.model.stop_training = True
            print('\nReached 99% Accuracy. Stopping Training...')

model = keras.Sequential([
    keras.layers.Flatten(),
    keras.layers.Dense(1024, activation=tf.nn.relu),
    keras.layers.Dense(10, activation=tf.nn.softmax)])

model.compile(
    optimizer=tf.train.AdamOptimizer(),
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'])
x_train, x_test = x_train / 255, x_test / 255
model.fit(x_train, y_train, epochs=10, callbacks=[Stopper()])
Now that the model is trained, I can feed the x_test images into model.predict() and that works fine. But I was wondering how to feed my own images (JPG and PNG) into my model's predict() method.
I have looked at the documentation, and their method results in an error for me. In particular, I tried the following:
img_raw = tf.read_file(<my file path>)
img_tensor = tf.image.decode_image(img_raw)
img_final = tf.image.resize(img_tensor, [192, 192])
^^^ This line throws the error: ValueError: 'images' contains no shape.
Please provide a step-by-step guide for getting an image (JPG or PNG) into my model for a prediction. Thank you very much.
import numpy as np
from PIL import Image

img = Image.open("image_file_path").convert('L').resize((28, 28), Image.LANCZOS)  # ANTIALIAS was removed in Pillow 10
img = np.array(img) / 255.0  # match the 1/255 scaling used during training
model.predict(img[None, :, :])
You trained your model on images of size 28 x 28, so you have to resize your own image to the same size; you cannot use images of a different dimension.
predict requires a batch of images, but since you want a prediction for a single image, you have to add an extra batch dimension for that one image. This is done with expand_dims, reshape, or img[None, :, :].
Every image is fundamentally made of pixels, and you can pass these pixel values to your neural network.
To convert an image into an array of pixels you can use a library like skimage, as follows:
from skimage.io import imread
imagedata = imread(imagepath)
# you can pass this image to the model
To read a group of images, loop over them and store the data in an array.
You will also have to resize the images so they all match the input size your NN expects. Note that imread returns a numpy array, which has no PIL-style resize; use skimage.transform.resize instead:
from skimage.transform import resize
resized_image = resize(imagedata, (preferred_height, preferred_width))
You can also choose to convert the image to black and white to reduce the number of computations. Here I am using Pillow, a common image-preprocessing library, to apply the black-and-white filter:
from PIL import Image
# load the image
image = Image.open('opera_house.jpg')
# convert the image to grayscale
gs_image = image.convert(mode='L')
The order of preprocessing can be:
1. Load the image into a numpy array (e.g. with imread or PIL's Image.open)
2. Convert it to black and white
3. Resize it
A consolidated sketch of these steps follows.
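Here is a short sketch of those steps with Pillow and NumPy, assuming model is the 28x28 grayscale MNIST classifier trained in the question:
import numpy as np
from PIL import Image

image = Image.open('opera_house.jpg')          # 1. load the image
image = image.convert('L')                     # 2. convert to grayscale
image = image.resize((28, 28), Image.LANCZOS)  # 3. resize to the model's input size
pixels = np.array(image) / 255.0               # now a numpy array, scaled like the training data
prediction = model.predict(pixels[None, :, :]) # add the batch dimension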

Convert python opencv mat image to tensorflow image data

I want to capture frames from a video with Python and OpenCV and then classify the captured Mat images with TensorFlow. The problem is that I don't know how to convert the Mat format to a 3D tensor variable. This is how I am doing it now with TensorFlow (loading the image from file):
image_data = tf.gfile.FastGFile(imagePath, 'rb').read()
with tf.Session() as sess:
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    predictions = sess.run(softmax_tensor,
                           {'DecodeJpeg/contents:0': image_data})
I would appreciate any help. Thanks in advance.
Load the OpenCV image using imread, then convert it to a numpy array.
For feeding into Inception v3, you need to use the Mul:0 tensor as the entry point; this expects a 4-dimensional tensor with the layout [batch, height, width, channel].
The last three come directly from a cv::Mat; the batch dimension just needs to be added with size 1, since you are feeding a single image rather than a batch of images.
The code looks like:
#Loading the file
img2 = cv2.imread(file)
#Format for the Mul:0 Tensor
img2 = cv2.resize(img2, dsize=(299, 299), interpolation=cv2.INTER_CUBIC)
#Numpy array
np_image_data = np.asarray(img2)
#maybe insert float conversion here - see edit remark!
np_final = np.expand_dims(np_image_data,axis=0)
#now feeding it into the session:
#[... initialization of session and loading of graph etc]
predictions = sess.run(softmax_tensor,
                       {'Mul:0': np_final})
#fin!
Kind regards,
Chris
Edit: I just noticed that the Inception network wants intensity values normalized as floats to [-0.5, 0.5], so please use this code to convert them before building the RGB image:
np_image_data=cv2.normalize(np_image_data.astype('float'), None, -0.5, .5, cv2.NORM_MINMAX)
With TensorFlow 2.0 and OpenCV 4.2.0, you can convert this way:
import numpy as np
import tensorflow as tf
import cv2 as cv
width = 32
height = 32
#Load image by OpenCV
img = cv.imread('img.jpg')
#Resize to respect the input_shape
inp = cv.resize(img, (width, height))
#Convert img to RGB
rgb = cv.cvtColor(inp, cv.COLOR_BGR2RGB)
#Optional, but I recommend it: convert to float and then to a tensor image
rgb_tensor = tf.convert_to_tensor(rgb, dtype=tf.float32)
#Add dims to rgb_tensor
rgb_tensor = tf.expand_dims(rgb_tensor , 0)
#Now you can use rgb_tensor to predict label for exemple :
#Load pretrain model, made from: https://www.tensorflow.org/tutorials/images/cnn
model = tf.keras.models.load_model('cifar10_model.h5')
#Create probability model
probability_model = tf.keras.Sequential([model,
                                         tf.keras.layers.Softmax()])
#Predict label
predictions = probability_model.predict(rgb_tensor, steps=1)
It looks like you're using the pre-trained and pre-defined Inception model, which has a tensor named DecodeJpeg/contents:0. If so, this tensor expects a scalar string containing the bytes for a JPEG image.
You have a couple of options. One is to look further down the network for the node where the JPEG is converted to a matrix. I'm not sure what the Mat format is, but it will be a [height, width, colour_depth] representation. If you can get your image into that format, you can replace the DecodeJpeg... string with the name of the node you want to feed into.
The other option is to simply convert your images to JPEGs and feed them straight in.
You should be able to convert the opencv mat format to a numpy array as:
np_image_data = np.asarray(image_data)
Once you have the data as a numpy array you can pass it to TensorFlow through a feeding mechanism, as in the link that @thesonyman101 referenced:
feed_dict = {some_tf_input:np_image_data}
predictions = sess.run(some_tf_output, feed_dict=feed_dict)
In my case I had to read an image from file, do some processing, and then feed it into Inception to obtain the output of a features layer, called last_layer.
My solution is short but effective:
img = cv2.imread(file)
# ... do some processing ...
img_as_string = cv2.imencode('.jpg', img)[1].tobytes()  # tostring() is a deprecated alias for tobytes()
features = sess.run(last_layer, {'DecodeJpeg/contents:0': img_as_string})
