Using the .h5 file is not giving the expected output - Python

I'm currently using Flask to combine the backend and frontend for image classification. I'm also using a .h5 file to predict the output. The output is different and completely wrong; it should be the prediction probability. Here is the code:
def upload():
    if request.method == 'POST':
        # Get the file from post request
        f = request.files['file']

        # Save the file to ./uploads
        basepath = os.path.dirname(__file__)
        file_path = os.path.join(
            basepath, 'uploads', secure_filename(f.filename))
        f.save(file_path)

        MODEL_ARCHITECTURE = 'model_adam_01.json'
        MODEL_WEIGHTS = 'model_50_eopchs_adam_01.h5'

        json_file = open(MODEL_ARCHITECTURE)
        loaded_model_json = json_file.read()
        json_file.close()
        model = model_from_json(loaded_model_json)
        model.load_weights(MODEL_WEIGHTS)
        model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

        prediction = model_predict(file_path, model)

        print("I think that is ")
        print(prediction)
        # print('I think that is {}.'.format(predicted_class.lower()))

        return str(prediction)
Following is the model_predict function, to which I pass the image path and the model:
def model_predict(img_path, model):
    '''
    Args:
        -- img_path : the path where a given image is stored.
        -- model : a given Keras CNN model.
    '''
    IMG = image.load_img(img_path)
    print(type(IMG))

    IMG_ = np.asarray(IMG)
    print(type(IMG_))
    print(IMG_.shape)

    IMG_ = prepare(IMG_)
    print(IMG_.shape)

    # print(model)
    prediction = model.predict(IMG_)
    print(prediction.shape)

    return str(prediction)
Following is the output that I am getting:
I think that is
[[0.]]
Why does this problem occur? I am using Keras 2.3.1 and TensorFlow 1.15.2.

When making predictions you have to apply the same preprocessing steps that you applied to your training data just before training the model. I think that is the problem rather than the code.
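For example, here is a minimal sketch of matching the training-time preprocessing at prediction time. The 224x224 target size and the 1/255 rescaling are assumptions; use whatever resizing and normalisation your training pipeline actually applied:

import numpy as np
from keras.preprocessing import image

def model_predict(img_path, model):
    img = image.load_img(img_path, target_size=(224, 224))  # resize as during training
    x = image.img_to_array(img)                             # HxWxC float array
    x = x / 255.0                                           # same rescaling as during training
    x = np.expand_dims(x, axis=0)                           # add the batch dimension
    return model.predict(x)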

Related

What is the reason for the major difference between predictions for TensorFlow in Python and for TensorFlow JS in the browser?

I have created a model for object detection in Python TensorFlow and then converted it to TensorFlow JS so as to use it in the browser. The model works perfectly in Python. Now, when I give an input image to the browser, there is a major difference between the prediction results in Python and in TensorFlow JS. I am sharing the prediction results for both Python and JS.
(Screenshots of the Python and JS prediction results were attached here.)
I have given the same image as input to both Python and JS, but there is still a big difference, especially for the scores, where Python predicts with 99% and JS predicts with just 16%.
What could be the reason for this? Have I inadvertently made some mistake while converting to TensorFlow JS, or is there some other reason?
I went through this and other resources on the internet but couldn't find any specific reason for the difference in results.
Any help would be appreciated. Thanks a lot.
Update 1:
Here is my Python code:
def load_image_into_numpy_array(image_path):
    return np.array(Image.open(image_path))

image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)
input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
    image_np_with_detections,
    detections['detection_boxes'][0].numpy(),
    (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
    detections['detection_scores'][0].numpy(),
    category_index,
    use_normalized_coordinates=True,
    max_boxes_to_draw=200,
    # Set min_score_thresh accordingly to display the boxes
    min_score_thresh=.5,
    agnostic_mode=False
)

plt.figure(figsize=(12, 25))
plt.imshow(image_np_with_detections)
plt.show()
And here is the model call in JS:
async function run() {
    // Loading the model
    model = await tf.loadGraphModel(MODEL_URL);
    console.log("SUCCESS");
    let img = document.getElementById("myimg");
    console.log("Predicting....");

    // Image preprocessing
    var example = tf.browser.fromPixels(img);
    example = example.expandDims(0);

    // Model call
    const output = await model.executeAsync(example);
    console.log(output);
    const boxes = output[4].arraySync();
    const scores = output[5].arraySync();
    const classes = output[1].arraySync();
    console.log(boxes);
    console.log(scores);
    console.log(classes);
}
Update 2:
import pathlib

filenames = list(pathlib.Path('/content/train/').glob('*.index'))
filenames.sort()
print(filenames)

# Recover our saved model
pipeline_config = pipeline_file
# Generally you want to put the last ckpt from training in here
model_dir = str(filenames[-1]).replace('.index', '')
configs = config_util.get_configs_from_pipeline_file(pipeline_config)
model_config = configs['model']
detection_model = model_builder.build(
    model_config=model_config, is_training=False)

# Restore checkpoint
ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
ckpt.restore(os.path.join(str(filenames[-1]).replace('.index', '')))

def get_model_detection_function(model):
    """Get a tf.function for detection."""

    @tf.function
    def detect_fn(image):
        """Detect objects in image."""
        image, shapes = model.preprocess(image)
        prediction_dict = model.predict(image, shapes)
        detections = model.postprocess(prediction_dict, shapes)
        return detections, prediction_dict, tf.reshape(shapes, [-1])

    return detect_fn

detect_fn = get_model_detection_function(detection_model)
You're missing preprocessing. When exporting your model, you are exporting the default serve tag, so your call to model.executeAsync in JS is equivalent to model.predict in Python. However, in your Python code, you are also preprocessing the inputs with a call to model.preprocess.
You should replicate the Python preprocessing in JS; a sketch of what that preprocessing typically involves follows.
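For illustration, this is roughly the kind of transformation that detection_model.preprocess performs and that would have to be mirrored on the JS side before calling model.executeAsync (shown in Python for clarity; the 640x640 input size and the [-1, 1] scaling are assumptions here, so check the image_resizer and feature-extractor settings in your pipeline config). On the JS side, the equivalent would be built from ops such as tf.image.resizeBilinear plus element-wise arithmetic.

import tensorflow as tf

def manual_preprocess(image_np):
    # Assumed stand-in for detection_model.preprocess: resize to the model's
    # fixed input size and scale pixel values from [0, 255] to [-1, 1].
    x = tf.convert_to_tensor(image_np[None, ...], dtype=tf.float32)
    x = tf.image.resize(x, (640, 640))  # assumed fixed_shape_resizer size
    x = x / 127.5 - 1.0                 # assumed normalisation
    return x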

Why do I get "expected string or bytes-like object" while testing a Fashion-MNIST model in Django with a local image?

I'm trying to deploy a Fashion-MNIST model in a Django project and get the error "expected string or bytes-like object" when testing the model with the predict function, after making some changes to a local image from my computer (after loading it).
Here is the predict function in Views.py:
@csrf_exempt
def predict(request):
    path = "D:/desktop/pullover.jpg"
    im = Image.open(os.path.join(path))
    convertImage(im)
    x = Image.open(OUTPUT, mode='L')
    x = np.invert(x)
    x = Image.resize(x, (28, 28))
    x = x.reshape(1, 28, 28, 1)
    with graph.as_default():
        out = model.predict(x)
        print(out)
        print(np.argmax(out, axis=1))
        response = np.array_str(np.argmax(out, axis=1))
        return JsonResponse({"output": response})
Other functions I used in the predict function in Views.py:
def getI420FromBase64(codec):
    base64_data = re.sub('^data:image/.+;base64,', '', codec)
    byte_data = base64.b64decode(base64_data)
    image_data = BytesIO(byte_data)
    img = Image.open(image_data)
    img.save(OUTPUT)

def convertImage(imgData):
    getI420FromBase64(imgData)
utils.py
from keras.models import model_from_json
import tensorflow as tf
import os

JSONpath = os.path.join(os.path.dirname(__file__), 'models', 'model.json')
MODELpath = os.path.join(os.path.dirname(__file__), 'models', 'mnist.h5')

def init():
    json_file = open(JSONpath, 'r')
    loaded_model_json = json_file.read()
    json_file.close()
    loaded_model = model_from_json(loaded_model_json)
    loaded_model.load_weights(MODELpath)
    print("Loaded Model from disk")
    loaded_model.compile(loss='categorical_crossentropy',
                         optimizer='adam', metrics=['accuracy'])
    # graph = tf.get_default_graph()
    graph = tf.compat.v1.get_default_graph()
    return loaded_model, graph
Any kind of help would be appreciated.

How to share a Tensorflow Keras model with a Flask route function?

Does TensorFlow maintain its own internal global state, which is broken by loading the model in one function and trying to use it in another?
Using singleton for storing model:
class Singleton(object):
    _instances = {}

    def __new__(class_, *args, **kwargs):
        if class_ not in class_._instances:
            class_._instances[class_] = super(Singleton, class_).__new__(class_, *args, **kwargs)
        return class_._instances[class_]

class Context(Singleton):
    pass
When I do:
@app.route('/file', methods=['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        file = request.files['file']
        if file and allowed_file(file.filename):
            # filename = secure_filename(file.filename)
            filename = file.filename
            filepath = os.path.join(app.config['UPLOAD_FOLDER'], filename)
            file.save(filepath)
            context = Context()
            if context.loaded:
                img = cv2.imread(filepath)
                img = cv2.resize(img, (96, 96))
                img = img.astype("float") / 255.0
                img = img_to_array(img)
                img = np.expand_dims(img, axis=0)
                classes = context.model.predict(img)

def api_run():
    context = Context()
    context.model = load_model('model.h5')
    context.loaded = True
I'm getting this error: ValueError: Tensor Tensor("dense_1/Softmax:0", shape=(?, 2), dtype=float32) is not an element of this graph.
However, if I move context.model = load_model('model.h5') inside the upload_file function, then everything works. Why is that happening? How can I store the model for later use?
Yes, TensorFlow in graph mode has its own internal global state.
You don't want to reload your model at every prediction; that's really inefficient.
The right strategy is to load the model at the start of your web app and then reference the global state.
Use a global variable for the model and graph, and do something like this:
loaded_model = None
graph = None

def load_model(export_path):
    # global variables
    global loaded_model
    global graph

    # Fully qualified loader so this function does not shadow itself
    loaded_model = tf.keras.models.load_model(export_path)
    graph = tf.get_default_graph()
then, in your prediction function you do:
@app.route('/', methods=["POST"])
def predict():
    if request.method == "POST":
        data = request.data
        with graph.as_default():
            probas = loaded_model.predict(data)
A complete short example of how to do this can be found here.
Alternatively, if you use TensorFlow 2.0, which defaults to eager mode, you have no graph and therefore no problem.
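A minimal sketch of that TF 2.x approach, assuming the model is a Keras model saved as model.h5 and that the client posts an already-preprocessed feature array as JSON (both assumptions; adapt the loading and parsing to your app):

import numpy as np
import tensorflow as tf
from flask import Flask, request, jsonify

app = Flask(__name__)
# Load once at startup; in eager mode there is no graph to manage.
model = tf.keras.models.load_model('model.h5')

@app.route('/predict', methods=['POST'])
def predict():
    # Assumes the client posts JSON like {"instances": [[...], ...]}
    data = np.array(request.get_json()['instances'], dtype='float32')
    probas = model.predict(data)
    return jsonify(probas.tolist())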
I had a similar issue. Everything was solved when I added

from tensorflow.python.keras import backend as K

and then, before loading the model, called

K.clear_session()

Trying to wrap up a keras model in a flask REST app but getting a ValueError

I can create a simple keras model by running
python create-flask-model.py
create-flask-model.py
## Points in a square that are in or out of a quarter circle
import random
import math
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

training_size = 8000
testing_size = 2000
batch_size = 10
epoch_no = 30

modelStructureFileName = 'simple-flask.json'
modelWeightFileName = 'simple-flask.h5'

def get_model():
    model = Sequential()
    model.add(Dense(4, input_dim=2, activation='tanh'))
    model.add(Dense(4, activation='tanh'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='rmsprop')
    return model

def get_data_instances(size):
    result = []
    for i in range(0, size):
        number_1 = random.uniform(0, 1)
        number_2 = random.uniform(0, 1)
        squares = math.pow(number_1, 2) + math.pow(number_2, 2)
        target = 0
        if squares < 0.49:
            target = 1
        line = number_1, number_2, target
        result.append(line)
    return np.array(result)

## Create data and split into training and test, features and targets
data_instances = get_data_instances(training_size + testing_size)
train_x, train_y = data_instances[:training_size, 0:2], data_instances[:training_size, -1]
test_x, test_y = data_instances[training_size:, 0:2], data_instances[training_size:, -1]

## Load model and train
model = get_model()
history = model.fit(train_x, train_y, batch_size=batch_size, epochs=epoch_no, validation_data=(test_x, test_y))

## Save the model
model_json = model.to_json()
with open(modelStructureFileName, 'w') as json_file:
    json_file.write(model_json)
model.save_weights(modelWeightFileName)

## How to get a prediction for an instance
# instance = np.array([0.3, 0.6])
# instance = instance.reshape(1,2)
# yhat = model.predict(instance)
# print(yhat)
I wish to load the resulting model into a Flask app, be able to pass instances as JSON objects, and have predictions made and returned. Running
python flask-app.py
in the same directory as the model json and h5 files.
flask-app.py
import json
import numpy as np
from flask import Flask
from keras.models import model_from_yaml

app = Flask(__name__)
model = None
modelStructureFileName = 'simple-flask.json'
modelWeightFileName = 'simple-flask.h5'

def load_model():
    yaml_file = open(modelStructureFileName, 'r')
    loaded_model_yaml = yaml_file.read()
    yaml_file.close()
    global model
    model = model_from_yaml(loaded_model_yaml)
    model.load_weights(modelWeightFileName)

@app.route('/flask/<input>', methods=['GET'])
def predict(input):
    input_array = json.loads(input)
    instance = np.array(input_array)
    instance = instance.reshape(1, 2)
    yhat = model.predict(instance)
    return str(yhat)

if __name__ == '__main__':
    load_model()
    app.run(port=9000, debug=True)
If I navigate to http://localhost:9000/flask/[0.3,0.6] I get an error
builtins.ValueError
ValueError: Tensor Tensor("dense_3/Sigmoid:0", shape=(?, 1), dtype=float32) is not an element of this graph.
I think it's something to do with the scope of the model in the app, but can't figure it out. If I load the model in the request method it works once, but then fails with another error. I only want to load the model once. How can I get the flask app to work as expected?
EDIT: I ended up using Bottle instead of Flask and it worked with no problem.
bottle-app.py
from bottle import route, run
import json
import numpy as np
from keras.models import model_from_yaml

modelStructureFileName = 'simple-flask.json'
modelWeightFileName = 'simple-flask.h5'

yaml_file = open(modelStructureFileName, 'r')
loaded_model_yaml = yaml_file.read()
yaml_file.close()
model = model_from_yaml(loaded_model_yaml)
model.load_weights(modelWeightFileName)
print('model loaded')

@route('/bottle/<input>')
def predict(input):
    input_array = json.loads(input)
    instance = np.array(input_array)
    instance = instance.reshape(1, 2)
    yhat = model.predict(instance)
    print(input_array, yhat)
    return str(yhat[0][0])

run(host='localhost', port=9000, debug=True)
This happens because you have multiple threads enabled in Flask by default, and TensorFlow models do not work well with multiple threads. You can read more about this in the links below:
https://github.com/keras-team/keras/issues/5640
https://github.com/tensorflow/tensorflow/issues/14356
The following workaround worked for me
global graph
graph = tf.get_default_graph()

with graph.as_default():
    model.compile()
    model.fit()

with graph.as_default():
    model.predict()
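Applied to the flask-app.py from the question, that workaround would look roughly like this (a sketch, reusing the file names and route from the question; the essential points are loading the model once and wrapping predict in the graph the model was loaded into):

import json
import numpy as np
import tensorflow as tf
from flask import Flask
from keras.models import model_from_yaml

app = Flask(__name__)
modelStructureFileName = 'simple-flask.json'
modelWeightFileName = 'simple-flask.h5'

# Load the model once and remember the graph it was loaded into.
with open(modelStructureFileName, 'r') as f:
    model = model_from_yaml(f.read())
model.load_weights(modelWeightFileName)
graph = tf.get_default_graph()

@app.route('/flask/<input>', methods=['GET'])
def predict(input):
    instance = np.array(json.loads(input)).reshape(1, 2)
    # Run the prediction in the same graph the model was loaded into.
    with graph.as_default():
        yhat = model.predict(instance)
    return str(yhat[0][0])

if __name__ == '__main__':
    app.run(port=9000, debug=True)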
This answer is with respect to the Flask API.
The problem is that the Flask API works only once and then gives errors. In that case, you should call K.clear_session() at the end of the view, before the return statement.
And do not forget the from keras import backend as K import at the top.
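For instance, here is a sketch of that approach applied to the question's predict view (loading the model inside the view is less efficient, but together with K.clear_session() it keeps every request starting from a clean session):

import json
import numpy as np
from flask import Flask
from keras import backend as K
from keras.models import model_from_yaml

app = Flask(__name__)
modelStructureFileName = 'simple-flask.json'
modelWeightFileName = 'simple-flask.h5'

@app.route('/flask/<input>', methods=['GET'])
def predict(input):
    # Load the model for this request.
    with open(modelStructureFileName, 'r') as f:
        model = model_from_yaml(f.read())
    model.load_weights(modelWeightFileName)

    instance = np.array(json.loads(input)).reshape(1, 2)
    yhat = model.predict(instance)

    # Clear the Keras/TensorFlow session so the next request starts clean.
    K.clear_session()
    return str(yhat[0][0])

if __name__ == '__main__':
    app.run(port=9000, debug=True)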

Tensorflow: define placeholders/operation name in image pipeline

I would like to save my trained TensorFlow model so that it can be deployed by restoring the model file (I'm following this example, which seems to make sense). To do this, however, I need to have named tensors, so that I can reload the variables with something like:
graph = tf.get_default_graph()
w1 = graph.get_tensor_by_name("my_tensor:0")
I am queuing images from a list of filenames using string_input_producer (code below), but how do I name the tensors so that I can reload them at a later stage?
import tensorflow as tf

flags = tf.app.flags
conf = flags.FLAGS

class ImageDataSet(object):
    def __init__(self, img_list_path, num_epoch, batch_size):
        # Build the record list queue
        input_file = open(img_list_path, 'r')
        self.record_list = []
        for line in input_file:
            line = line.strip()
            self.record_list.append(line)

        filename_queue = tf.train.string_input_producer(self.record_list, num_epochs=num_epoch)
        image_reader = tf.WholeFileReader()
        _, image_file = image_reader.read(filename_queue)
        image = tf.image.decode_jpeg(image_file, conf.img_colour_channels)
        # preprocess
        # ...
        min_after_dequeue = 1000
        capacity = min_after_dequeue + 400 * batch_size
        self.images = tf.train.shuffle_batch(image, batch_size=batch_size, capacity=capacity,
                                             min_after_dequeue=min_after_dequeue)
I assume that you want to restore the graph for testing or deployment.
For these purposes, you can edit your graph by inserting a placeholder as the entry point for the test data.
To edit the graph, you can use TF's graph editor, or build a new graph with a placeholder and save it.
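A minimal sketch of the second option: build a graph whose input is a named placeholder, save it, and later look the tensors up by name. The names input_images and output, the 224x224x3 shape, and the dense layer standing in for the real model are all assumptions here:

import tensorflow as tf

# Build a graph whose entry point is a named placeholder instead of a queue.
images = tf.placeholder(tf.float32, shape=[None, 224, 224, 3], name='input_images')
flat = tf.reshape(images, [-1, 224 * 224 * 3])
logits = tf.layers.dense(flat, 10)           # stand-in for the real model
output = tf.identity(logits, name='output')  # give the output a stable name

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, './my_model')

# Later, after importing the meta graph and restoring the checkpoint,
# the tensors can be recovered by name:
graph = tf.get_default_graph()
images_t = graph.get_tensor_by_name('input_images:0')
output_t = graph.get_tensor_by_name('output:0')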
