How to use a Handwriting Recognition model as an API? - python

I have created a machine learning model using TensorFlow and Keras on the IAM dataset. How can I load this model behind an API to predict an image? When I try to integrate it, it shows this error:
return self.function(inputs, **arguments)
File "test2.py", line 136, in resize_image
return tf.image.resize_images(image,[56,56])
NameError: name 'tf' is not defined
I have loaded the model using from keras.models import load_model and am trying to predict handwriting in an image. low_loss.hdf5 is the model I am trying to integrate.
from keras.models import load_model
from PIL import Image
import numpy as np

def testmodel(image_path):
    global model
    # load the pre-trained Keras model
    model = load_model('low_loss.hdf5')
    model.summary()
    # read the image as grayscale and bring it to the 28x28x1 shape the model expects
    img = Image.open(image_path).convert("L")
    img = np.resize(img, (28, 28, 1))
    im2arr = np.array(img)
    im2arr = im2arr.reshape(1, 28, 28, 1)
    y_pred = model.predict_classes(im2arr)
    return y_pred
I wish to predict handwritten image data.

Your error is about tf, which has not been imported. Try adding:
import tensorflow as tf

You are getting this error because you have not imported TensorFlow in your code, or you imported it without the tf alias.
import tensorflow as tf
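For completeness, a minimal sketch of where the import goes, assuming resize_image in test2.py is the custom resize function named in the traceback; the only change is the added import:

import tensorflow as tf  # the missing import that caused "name 'tf' is not defined"

def resize_image(image):
    # tf.image.resize_images is the TF 1.x API named in the traceback;
    # on TF 2.x use tf.image.resize instead.
    return tf.image.resize_images(image, [56, 56])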

Related

Cannot plot model graph with pytorch HiddenLayer - module 'torch.onnx' has no attribute '_optimize_trace'

Using the basic test code from hiddenlayer, I am getting the error in the title:
import torch
import torchvision.models
import hiddenlayer as hl
# VGG16 with BatchNorm
model = torchvision.models.vgg16()
# Build HiddenLayer graph
# Jupyter Notebook renders it automatically
hl.build_graph(model, torch.zeros([1, 3, 224, 224]))
Versions:
hiddenlayer-0.3
pytorch=1.13.0+cu117
python=3.10.6
I followed the error's recommendation and changed _optimize_trace to _optimize_graph in pytorch_builder.py, line 71. After that it worked correctly.
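If you would rather not edit the installed package, a hedged workaround sketch is to alias the removed private name before hiddenlayer uses it. This assumes, consistent with the fix above, that the current PyTorch still ships an _optimize_graph helper (in torch.onnx.utils) accepting the same (graph, operator_export_type) arguments at that call site; both names are private APIs and may break on any PyTorch upgrade.

import torch
import torch.onnx
import torch.onnx.utils
import torchvision.models
import hiddenlayer as hl

# Alias the private helper hiddenlayer 0.3 still calls (assumption: same signature).
if not hasattr(torch.onnx, "_optimize_trace"):
    torch.onnx._optimize_trace = torch.onnx.utils._optimize_graph

model = torchvision.models.vgg16()
hl.build_graph(model, torch.zeros([1, 3, 224, 224]))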

Save Tensorflow model to tf.keras model

I used the TensorFlow 2 object detection API and received a saved_model.pb, which is a TensorFlow graph and not a tf.keras model. It can therefore be loaded with tf.saved_model.load() but not with tf.keras.models.load_model(). The model is saved via tf.saved_model.save() in export_lib_v2.py of the object detection API, at line 271.
I tried to build the model from the config file and load the checkpoints, to then save it as a tf.keras model:
import tensorflow as tf
from object_detection.utils import config_util
from object_detection.builders import model_builder
import os

def save_in_tfkeras(save_filepath, label_map_path, config_file_path, checkpoint_path):
    configs = config_util.get_configs_from_pipeline_file(config_file_path)
    model_config = configs['model']
    detection_model = model_builder.build(model_config=model_config, is_training=False)

    # Restore checkpoint
    ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
    ckpt.restore(checkpoint_path).expect_partial()

    detection_model.built(input_shape=(320, 320))
    tf.keras.models.save_model(detection_model, save_filepath)
    print('model saved as tf.keras in ' + save_filepath)

if __name__ == "__main__":
    PATH_TO_LABELMAP = './models/face_model/face_label.pbtxt'
    PATH_TO_CONFIG = './models/face_model/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/pipeline.config'
    PATH_TO_CHECKPOINT = './models/face_model/v2_model_50k/ckpt-51'
    save_filepath = './Kmodels/mobileNet_V2'
    if not os.path.exists(save_filepath):
        os.makedirs(save_filepath)

    save_in_tfkeras(save_filepath, PATH_TO_LABELMAP, PATH_TO_CONFIG, PATH_TO_CHECKPOINT)
However, this does not seem to work. There are errors which, in my opinion, originate from mixing the tf and tf.keras models. The last error message:
ValueError: Weights for model ssd_mobile_net_v2fpn_keras_feature_extractor_1
have not yet been created. Weights are created when the Model is first called
on inputs or build() is called with an input_shape.
The model was saved with TensorFlow loaded as tensorflow.compat.v2.
Question: Is there a way to build the model, load the checkpoint weights, and then save it as a tf.keras model?
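The error message itself suggests one route: Keras creates the weights the first time the model is called on inputs. A hedged sketch of that idea, reusing detection_model and save_filepath from the code above and assuming the standard TF2 Object Detection API preprocess/predict/postprocess interface (whether the result then serializes cleanly as a tf.keras model is not guaranteed):

import tensorflow as tf

# Run one dummy 320x320 RGB batch through the restored model so its variables get created.
dummy = tf.zeros([1, 320, 320, 3], dtype=tf.float32)
image, shapes = detection_model.preprocess(dummy)
prediction_dict = detection_model.predict(image, shapes)
detection_model.postprocess(prediction_dict, shapes)

tf.keras.models.save_model(detection_model, save_filepath)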

Keras and PlaidML errors appear despite successful setup

I have the up-to-date versions of Keras and PlaidML installed. I ran the file plaidml-setup and configured plaidml to use my AMD GPU:
C:\WinPython\python-3.6.1.amd64\Scripts>plaidml-setup
PlaidML Setup (0.7.0)
(...)
Default Config Devices:
llvm_cpu.0 : CPU (via LLVM)
Experimental Config Devices:
llvm_cpu.0 : CPU (via LLVM)
opencl_amd_gfx902.0 : Advanced Micro Devices, Inc. gfx902 (OpenCL)
Using experimental devices can cause poor performance, crashes, and other nastiness.
Enable experimental device support? (y,n)[n]:y
Multiple devices detected (You can override by setting PLAIDML_DEVICE_IDS).
Please choose a default device:
1 : llvm_cpu.0
2 : opencl_amd_gfx902.0
Default device? (1,2)[1]:2
Selected device:
opencl_amd_gfx902.0
Almost done. Multiplying some matrices...
Tile code:
function (B[X,Z], C[Z,Y]) -> (A) { A[x,y : X,Y] = +(B[x,z] * C[z,y]); }
Whew. That worked.
Save settings to C:\Users\jsupi\.plaidml? (y,n)[y]:y
Success!
I successfully tested the installation by running plaidbench keras mobilenet:
C:\WinPython\python-3.6.1.amd64\Scripts>plaidbench keras mobilenet
Running 1024 examples with mobilenet, batch size 1, on backend plaid
INFO:plaidml:Opening device "opencl_amd_gfx902.0"
Compiling network... Warming up... Running...
Example finished, elapsed: 7.484s (compile), 26.724s (execution)
-----------------------------------------------------------------------------------------
Network Name Inference Latency Time / FPS
-----------------------------------------------------------------------------------------
mobilenet 26.10 ms 11.90 ms / 84.02 fps
Correctness: PASS, max_error: 1.8053706298815086e-05, max_abs_error: 9.760260581970215e-07, fail_ratio: 0.0
Then I wanted to run some python module on my GPU. I read in this answer that I need to set os.environ["RUNFILES_DIR"] and os.environ["PLAIDML_NATIVE_PATH"] to correct paths, for example:
os.environ["RUNFILES_DIR"] = "/Library/Frameworks/Python.framework/Versions/3.7/share/plaidml"
os.environ["PLAIDML_NATIVE_PATH"] = "/Library/Frameworks/Python.framework/Versions/3.7/lib/libplaidml.dylib"
The problem is that I can't find anything resembling the last one in my system. I ran the Windows search function, but it couldn't find the libplaidml.dylib file anywhere. So I tried the following:
import os
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"
os.environ["RUNFILES_DIR"] = "C://Users/jsupi/.plaidml"
#os.environ["PLAIDML_NATIVE_PATH"] = "C:/Windows/WinPython/python-3.6.1.amd64/Lib/site-packages"
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import keras
from keras.datasets import mnist #to import our dataset
from keras.models import Sequential, Model # imports our type of network
from keras.layers import Dense, Flatten, Input # imports our layers we want to use
from keras.losses import categorical_crossentropy #loss function
from keras.optimizers import Adam, SGD #optimisers
from keras.utils import to_categorical #some function for data preparation
batch_size = 128
num_classes = 10
epochs = 50
# input image dimensions
img_rows, img_cols = 28, 28
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)
#Neural network with single dense hidden layer
model = Sequential()
#model.add(Input(input_shape=(28,28)))
model.add(Flatten(input_shape=(28,28)))
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
and got the error message:
Traceback (most recent call last):
File "D:\Kuba\Machine Learning\DigitRecognitionKeras.py", line 51, in <module>
model.add(Dense(128, activation='relu'))
File "C:\WinPython\python-3.6.1.amd64\lib\site-packages\keras\engine\sequential.py", line 181, in add
output_tensor = layer(self.outputs[0])
File "C:\WinPython\python-3.6.1.amd64\lib\site-packages\keras\engine\base_layer.py", line 431, in __call__
self.build(unpack_singleton(input_shapes))
File "C:\WinPython\python-3.6.1.amd64\lib\site-packages\keras\layers\core.py", line 866, in build
constraint=self.kernel_constraint)
File "C:\WinPython\python-3.6.1.amd64\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\WinPython\python-3.6.1.amd64\lib\site-packages\keras\engine\base_layer.py", line 249, in add_weight
weight = K.variable(initializer(shape),
File "C:\WinPython\python-3.6.1.amd64\lib\site-packages\keras\initializers.py", line 218, in __call__
dtype=dtype, seed=self.seed)
File "C:\WinPython\python-3.6.1.amd64\lib\site-packages\plaidml\keras\backend.py", line 59, in wrapper
return func(*args, **kwargs)
File "C:\WinPython\python-3.6.1.amd64\lib\site-packages\plaidml\keras\backend.py", line 1305, in random_uniform
rng_state = _make_rng_state(seed)
File "C:\WinPython\python-3.6.1.amd64\lib\site-packages\plaidml\keras\backend.py", line 205, in _make_rng_state
rng_state = variable(rng_init, dtype='uint32')
File "C:\WinPython\python-3.6.1.amd64\lib\site-packages\plaidml\keras\backend.py", line 59, in wrapper
return func(*args, **kwargs)
File "C:\WinPython\python-3.6.1.amd64\lib\site-packages\plaidml\keras\backend.py", line 1935, in variable
_device(), plaidml.Shape(_ctx, ptile.convert_np_dtype_to_pml(dtype), *value.shape))
File "C:\WinPython\python-3.6.1.amd64\lib\site-packages\plaidml\keras\backend.py", line 102, in _device
devices = plaidml.devices(_ctx)
File "C:\WinPython\python-3.6.1.amd64\lib\site-packages\plaidml\__init__.py", line 1075, in devices
plaidml.settings.start_session()
File "C:\WinPython\python-3.6.1.amd64\lib\site-packages\plaidml\settings.py", line 77, in start_session
raise plaidml.exceptions.PlaidMLError('PlaidML is not configured. Run plaidml-setup.')
plaidml.exceptions.PlaidMLError: PlaidML is not configured. Run plaidml-setup.
Note the last line, which says that PlaidML is not configured even though I have just done that and successfully tested it. The program runs fine if I comment out the first 3 lines (thus running it without plaidml) and write tensorflow.keras instead of keras in all the "import" lines (it seems to be necessary without plaidml).
Do you have any ideas how to resolve this issue? I have Windows 10 and Python 3.6.
UPDATE 08/11/2021:
I have recently solved the problem after a suggestion made by a friend. First of all, 'libplaidml.dylib' is a macOS library file, and I have Windows, so I had to set the path to the analogous .dll file instead (I also use raw strings, r"...", to make sure there are no problems with backslashes):
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"
os.environ["RUNFILES_DIR"] = r"C:\\Users\jsupi\.plaidml"
os.environ["PLAIDML_NATIVE_PATH"] = r"C:\\WinPython\python-3.6.1.amd64\Library\bin\plaidml.dll"
That done, I also created a virtual environment with all the necessary python libraries installed (but that's probably not necessary) and I ran the python script from the command line rather than from the GUI.
I hope I didn't forget to write any essential step here. Oh, and one thing that confused me after the fixes described above was how little of my GPU was used during some computations. When I switched PlaidML back to using the CPU, the runtime of a script increased 100-fold, and only that convinced me the GPU had been working after all.
I also started working with PlaidML recently.
I took your code and commented out one import statement:
import os
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"
os.environ["RUNFILES_DIR"] = "C://Users/jsupi/.plaidml"
#os.environ["PLAIDML_NATIVE_PATH"] = "C:/Windows/WinPython/python-3.6.1.amd64/Lib/site-packages"
import numpy as np
#import tensorflow as tf <- commented this line, as I did not install tensorflow
import matplotlib.pyplot as plt
import keras
from keras.datasets import mnist #to import our dataset
from keras.models import Sequential, Model # imports our type of network
from keras.layers import Dense, Flatten, Input # imports our layers we want to use
from keras.losses import categorical_crossentropy #loss function
from keras.optimizers import Adam, SGD #optimisers
from keras.utils import to_categorical #some function for data preparation
batch_size = 128
num_classes = 10
epochs = 50
# input image dimensions
img_rows, img_cols = 28, 28
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)
#Neural network with single dense hidden layer
model = Sequential()
#model.add(Input(input_shape=(28,28)))
model.add(Flatten(input_shape=(28,28)))
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
It worked and printed the output
Using plaidml.keras.backend backend.
x_train shape: (60000, 28, 28)
60000 train samples
10000 test samples
INFO:plaidml:Opening device "opencl_amd_ellesmere.0"
Maybe your PlaidML is not set up properly.
Set the environment variable for verbose logging, export PLAIDML_VERBOSE=1 (on Windows cmd: set PLAIDML_VERBOSE=1). This will print any errors that occur while running plaidml-setup.
I did not install the TensorFlow framework and am only using plaidml and keras, though I have seen installation guides for PlaidML with TensorFlow.
I am running Ubuntu 20.04.
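For what it's worth, a small check sketch (env-var values copied from the question; the assumption is that setting PLAIDML_VERBOSE from Python before Keras loads the backend is equivalent to exporting it in the shell) to confirm which backend Keras actually picked up:

import os

# verbose PlaidML logging plus the backend settings from the question
os.environ["PLAIDML_VERBOSE"] = "1"
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"
os.environ["RUNFILES_DIR"] = r"C:\Users\jsupi\.plaidml"
os.environ["PLAIDML_NATIVE_PATH"] = r"C:\WinPython\python-3.6.1.amd64\Library\bin\plaidml.dll"

import keras
print(keras.backend.backend())   # expected: plaidml.keras.backend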

OSError: SavedModel file does not exist at: ../dnn/mpg_model.h5/{saved_model.pbtxt|saved_model.pb}

Code editor: VS Code
Command line: Anaconda Prompt
I followed the tutorial, so why am I getting this error?
The first error was ModuleNotFoundError: No module named 'tensorflow', but I created an environment and installed it.
The second error was ModuleNotFoundError: No module named 'flask', but I created an environment and installed it.
I fixed those and they work in Python.
How can I solve this error?
# T81-558: Applications of Deep Neural Networks
# Module 13: Advanced/Other Topics
# Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
# For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Deploy simple Keras tabular model with Flask only.
from flask import Flask, request, jsonify
import uuid
import os
from tensorflow.keras.models import load_model
import numpy as np

app = Flask(__name__)

# Used for validation
EXPECTED = {
    "cylinders": {"min": 3, "max": 8},
    "displacement": {"min": 68.0, "max": 455.0},
    "horsepower": {"min": 46.0, "max": 230.0},
    "weight": {"min": 1613, "max": 5140},
    "acceleration": {"min": 8.0, "max": 24.8},
    "year": {"min": 70, "max": 82},
    "origin": {"min": 1, "max": 3}
}

# Load neural network when Flask boots up
model = load_model(os.path.join("../dnn/", "mpg_model.h5"))

@app.route('/api/mpg', methods=['POST'])
def calc_mpg():
    content = request.json
    errors = []

    # Check for valid input fields
    for name in content:
        if name in EXPECTED:
            expected_min = EXPECTED[name]['min']
            expected_max = EXPECTED[name]['max']
            value = content[name]
            if value < expected_min or value > expected_max:
                errors.append(f"Out of bounds: {name}, has value of: {value}, but should be between {expected_min} and {expected_max}.")
        else:
            errors.append(f"Unexpected field: {name}.")

    # Check for missing input fields
    for name in EXPECTED:
        if name not in content:
            errors.append(f"Missing value: {name}.")

    if len(errors) < 1:
        # Predict
        x = np.zeros((1, 7))
        x[0, 0] = content['cylinders']
        x[0, 1] = content['displacement']
        x[0, 2] = content['horsepower']
        x[0, 3] = content['weight']
        x[0, 4] = content['acceleration']
        x[0, 5] = content['year']
        x[0, 6] = content['origin']
        pred = model.predict(x)
        mpg = float(pred[0])
        response = {"id": str(uuid.uuid4()), "mpg": mpg, "errors": errors}
    else:
        # Return errors
        response = {"id": str(uuid.uuid4()), "errors": errors}

    print(content['displacement'])
    return jsonify(response)

if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)
#conda
(tf-gpu) (HelloWold) C:\Users\ASUS\t81_558_deep_learning\py>python mpg_server_1.py
2020-05-09 17:25:38.498181: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
Traceback (most recent call last):
File "mpg_server_1.py", line 26, in <module>
model = load_model(os.path.join("../dnn/","mpg_model.h5"))
File "C:\Users\ASUS\Envs\HelloWold\lib\site-packages\tensorflow\python\keras\saving\save.py", line 189, in load_model
loader_impl.parse_saved_model(filepath)
File "C:\Users\ASUS\Envs\HelloWold\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 113, in parse_saved_model
constants.SAVED_MODEL_FILENAME_PB))
OSError: SavedModel file does not exist at: ../dnn/mpg_model.h5/{saved_model.pbtxt|saved_model.pb}
from
https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_13_01_flask.ipynb
https://www.youtube.com/watch?v=H73m9XvKHug&t=1056s
The error occurs because your code is trying to load a model that does not exist. From the Notebook file you linked, you will most likely have to run the following:
from werkzeug.wrappers import Request, Response
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == '__main__':
    from werkzeug.serving import run_simple
    run_simple('localhost', 9000, app)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from sklearn.model_selection import train_test_split
from tensorflow.keras.callbacks import EarlyStopping
import pandas as pd
import io
import os
import requests
import numpy as np
from sklearn import metrics

df = pd.read_csv(
    "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv",
    na_values=['NA', '?'])

cars = df['name']

# Handle missing value
df['horsepower'] = df['horsepower'].fillna(df['horsepower'].median())

# Pandas to Numpy
x = df[['cylinders', 'displacement', 'horsepower', 'weight',
        'acceleration', 'year', 'origin']].values
y = df['mpg'].values  # regression

# Split into validation and training sets
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.25, random_state=42)

# Build the neural network
model = Sequential()
model.add(Dense(25, input_dim=x.shape[1], activation='relu'))  # Hidden 1
model.add(Dense(10, activation='relu'))  # Hidden 2
model.add(Dense(1))  # Output
model.compile(loss='mean_squared_error', optimizer='adam')
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto',
                        restore_best_weights=True)
model.fit(x_train, y_train, validation_data=(x_test, y_test), callbacks=[monitor], verbose=2, epochs=1000)

pred = model.predict(x_test)
# Measure RMSE error. RMSE is common for regression.
score = np.sqrt(metrics.mean_squared_error(pred, y_test))
print(f"After load score (RMSE): {score}")

model.save(os.path.join("./dnn/", "mpg_model.h5"))
This will train and save the model that your code is loading.
It also looks like you have a small typo on the line: model = load_model(os.path.join("../dnn/","mpg_model.h5")) which should be changed to model = load_model(os.path.join("./dnn/","mpg_model.h5"))
I was getting the same error trying to load a .h5 model on a raspberry pi.
OSError: SavedModel file does not exist at: ... {saved_model.pbtxt|saved_model.pb}
sudo apt install python3-h5py
Seemed to have solved the issue.
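That fix is consistent with how the TF 2.x loader behaves: tf.keras.models.load_model only takes the HDF5 path when the h5py package is importable; without h5py it falls through to the SavedModel loader, which raises exactly this .../{saved_model.pbtxt|saved_model.pb} error. A quick check sketch:

# If this import fails, install h5py (on Raspberry Pi OS / Debian: sudo apt install python3-h5py).
import h5py
print(h5py.__version__)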
If on Windows, the path to the model can cause the error.
For a sanity check, try placing the model in the same folder as the file that you are calling. Then fix your path to call the model from the same folder. This fixed my error.
If this works, then you can figure out how to fix the path issue (perhaps try providing an absolute path).
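A minimal sanity-check sketch along those lines (the relative path is the one from the question; adjust it to your layout):

import os

model_path = os.path.abspath(os.path.join("../dnn/", "mpg_model.h5"))
print(model_path)                  # the absolute path load_model will actually receive
print(os.path.exists(model_path))  # False means the OSError really is a wrong path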
I got the same error, and I solved it by running the training again and saving the model in the SavedModel (.pb) format instead of with the .h5 or .hdf5 extension.
Then load it with tf.keras.models.load_model('D:\\model_name.pb'), using double backslashes in the path.
I had that error on Windows and this solved it.
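A minimal sketch of that approach, assuming TF 2.x and a hypothetical D:\\my_model path: saving without an .h5 extension writes the SavedModel directory (the folder that contains saved_model.pb), and load_model reads that directory back.

import tensorflow as tf

# tiny stand-in model; in practice this is the trained network
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(7,))])

model.save("D:\\my_model")                              # writes saved_model.pb plus variables/
restored = tf.keras.models.load_model("D:\\my_model")   # loads the SavedModel directory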

AttributeError: 'Sequential' object has no attribute 'output_names'

I have got a problem with the code below, at the following line:
new_model = load_model('124446.model', custom_objects=None, compile=True)
Here is the code:
import tensorflow as tf
from tensorflow.keras.models import load_model
mnist = tf.keras.datasets.mnist
(x_train,y_train), (x_test,y_test) = mnist.load_data()
x_train = tf.keras.utils.normalize(x_train,axis=1)
x_test = tf.keras.utils.normalize(x_test,axis=1)
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128,activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(128,activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(10,activation=tf.nn.softmax))
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train,y_train,epochs=3)
tf.keras.models.save_model(model,'124446.model')
val_loss, val_acc = model.evaluate(x_test,y_test)
print(val_loss, val_acc)
new_model = load_model('124446.model', custom_objects=None, compile=True)
prediction = new_model.predict([x_test])
print(prediction)
Errors are:
Traceback (most recent call last):
  File "C:/Users/TanveerIslam/PycharmProjects/DeepLearningPractice/1.py", line 32, in <module>
    new_model = load_model('124446.model', custom_objects=None, compile=True)
  File "C:\Users\TanveerIslam\PycharmProjects\DeepLearningPractice\venv\lib\site-packages\tensorflow\python\keras\engine\saving.py", line 262, in load_model
    sample_weight_mode=sample_weight_mode)
  File "C:\Users\TanveerIslam\PycharmProjects\DeepLearningPractice\venv\lib\site-packages\tensorflow\python\training\checkpointable\base.py", line 426, in _method_wrapper
    method(self, *args, **kwargs)
  File "C:\Users\TanveerIslam\PycharmProjects\DeepLearningPractice\venv\lib\site-packages\tensorflow\python\keras\engine\training.py", line 525, in compile
    metrics, self.output_names)
AttributeError: 'Sequential' object has no attribute 'output_names'
So can anyone give me a solution?
Note: I use PyCharm as my IDE.
As @Shinva said, set the "compile" argument of the load_model function to False.
Then after loading the model, compile it separately.
from tensorflow.keras.models import save_model, load_model
save_model(model,'124446.model')
Then for loading the model again do:
saved_model = load_model('124446.model', compile=False)
saved_model.compile(optimizer='adam',
                    loss='sparse_categorical_crossentropy',
                    metrics=['accuracy'])
saved_model.predict([x_test])
Update: For some unknown reason, I started to get the same errors as the question states. After trying different solutions, it seems that using the "keras" library directly instead of "tensorflow.keras" works properly.
My setup is Windows 10 with python '3.6.7', tensorflow '1.11.0', and keras '2.2.4'.
As per my knowledge, there are three different ways in which you can save and restore your model; provided you have used keras directly to make your model.
Option 1:
import json
from keras.models import model_from_json, load_model

# Save Weights + Architecture
model.save_weights('model_weights.h5')
with open('model_architecture.json', 'w') as f:
    f.write(model.to_json())

# Load Weights + Architecture
with open('model_architecture.json', 'r') as f:
    new_model = model_from_json(f.read())
new_model.load_weights('model_weights.h5')
Option 2:
from keras.models import save_model, load_model
# Creates a HDF5 file 'my_model.h5'
save_model(model, 'my_model.h5') # model, [path + "/"] name of model
# Deletes the existing model
del model
# Returns a compiled model identical to the previous one
new_model = load_model('my_model.h5')
Option 3
# using model's methods
model.save("my_model.h5")
# deletes the existing model
del model
# load the saved model back
new_model = load_model('my_model.h5')
Option 1 requires new_model to be compiled before use (see the short sketch after the references below).
Options 2 and 3 are almost identical in syntax.
Code used from:
1. Saving & Loading Keras Models
2. https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model
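As noted above, Option 1's new_model comes back without a training configuration, so compile it before calling evaluate() or fit(); a minimal sketch continuing the Option 1 snippet (the optimizer and loss here are just example values):

# continues the Option 1 snippet above
new_model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])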
I was able to load the model by setting compile=False in load_model()
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

tf.keras.models.save_model(
    model,
    "epic_num_reader.model",
    overwrite=True,
    include_optimizer=True
)

new_model = tf.keras.models.load_model('epic_num_reader.model', custom_objects=None, compile=False)

predictions = new_model.predict(x_test)
print(predictions)

print(np.argmax(predictions[0]))
plt.imshow(x_test[0], cmap=plt.cm.binary)
plt.show()
If this is run on Windows, then the issue is that toco is currently not supported on Windows: https://github.com/tensorflow/tensorflow/issues/20975
