I have an .h5 model that was built with tensorflow==1.13.1 and Keras==2.2.4 on a host to which I don't have access. I'm trying to load that model using keras.models.load_model as follows:
model.py:
from keras.models import load_model
import numpy as np
encoder = load_model('encoder.h5')
encoder.summary()
This throws a stacktrace that points to a source file (implicit_delta.py) I cannot open:
Duhaime:web doug$ python model.py
Using TensorFlow backend.
WARNING: Logging before flag parsing goes to stderr.
W0620 09:18:29.064763 140735739011968 deprecation_wrapper.py:119] From /Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
W0620 09:18:29.130089 140735739011968 deprecation_wrapper.py:119] From /Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
Traceback (most recent call last):
File "model.py", line 8, in <module>
encoder = load_model('../pose-enc-raymond.h5')
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/engine/saving.py", line 419, in load_model
model = _deserialize_model(f, custom_objects, compile)
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/engine/saving.py", line 225, in _deserialize_model
model = model_from_config(model_config, custom_objects=custom_objects)
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/engine/saving.py", line 458, in model_from_config
return deserialize(config, custom_objects=custom_objects)
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/layers/__init__.py", line 55, in deserialize
printable_module_name='layer')
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/utils/generic_utils.py", line 145, in deserialize_keras_object
list(custom_objects.items())))
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/engine/network.py", line 1032, in from_config
process_node(layer, node_data)
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/engine/network.py", line 991, in process_node
layer(unpack_singleton(input_tensors), **kwargs)
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/engine/base_layer.py", line 457, in __call__
output = self.call(inputs, **kwargs)
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/layers/core.py", line 687, in call
return self.function(inputs, **arguments)
File "/home/cshimmin/jupyter/dance/implicit_delta.py", line 95, in <lambda>
IndexError: tuple index out of range
I have tried installing other versions of tensorflow and keras, but so far I haven't had any luck working around this. Is there any trick I can use to figure out how to load this model? Any suggestions or hacks are appreciated!
This thread helped me realize I needed to load the model using the version of python that was used to create the model:
conda create -n 3.7.3 python=3.7.3
conda activate 3.7.3
Then pip install the required packages and the model will load!
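As a sanity check after recreating the environment and pip installing the packages, here is a minimal sketch (assuming the tensorflow==1.13.1 and Keras==2.2.4 versions mentioned in the question) to confirm the versions before loading:
import tensorflow as tf
import keras
from keras.models import load_model

# the versions should match what the model was built with (assumed: 1.13.1 / 2.2.4)
print(tf.__version__, keras.__version__)

encoder = load_model('encoder.h5')
encoder.summary()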
I am new to Stack Overflow. I am trying to export a model using export_inference_graph.py.
I trained my model locally using faster_rcnn_inception_v2, and I am following this tutorial.
When I type the following in the command prompt
python export_inference_graph.py --input_type image_tensor --pipeline_config_path CAPTCHA_training/faster_rcnn_inception_v2_coco.config --trained_checkpoint_prefix "CAPTCHA_training_dir/model.ckpt-51272" --output_directory CAPTCHA_inference_graph
with all paths correct, I get the following error:
File "export_inference_graph.py", line 206, in <module>
tf.app.run()
File "C:\Users\Jatin\anaconda3\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\Jatin\anaconda3\lib\site-packages\absl\app.py", line 303, in run
_run_main(main, args)
File "C:\Users\Jatin\anaconda3\lib\site-packages\absl\app.py", line 251, in _run_main
sys.exit(main(argv))
File "export_inference_graph.py", line 194, in main
exporter.export_inference_graph(
File "C:\Users\Jatin\anaconda3\lib\site-packages\object_detection\exporter.py", line 604, in export_inference_graph
detection_model = model_builder.build(pipeline_config.model,
File "C:\Users\Jatin\anaconda3\lib\site-packages\object_detection\builders\model_builder.py", line 1116, in build
return build_func(getattr(model_config, meta_architecture), is_training,
File "C:\Users\Jatin\anaconda3\lib\site-packages\object_detection\builders\model_builder.py", line 583, in _build_faster_rcnn_model
_check_feature_extractor_exists(frcnn_config.feature_extractor.type)
File "C:\Users\Jatin\anaconda3\lib\site-packages\object_detection\builders\model_builder.py", line 249, in _check_feature_extractor_exists
raise ValueError('{} is not supported. See `model_builder.py` for features '
ValueError: faster_rcnn_inception_v2 is not supported. See `model_builder.py` for features extractors compatible with different versions of Tensorflow
I am using Python 3.8.5 and TensorFlow version 2.4.1.
Thanks in advance
This looks like a TensorFlow version problem, according to the error. Looking into the source, faster_rcnn_inception_v2 only exists under if tf_version.is_tf1():. Try using TF 1.
https://github.com/tensorflow/models/blob/5a89897396aa8ecc7b3ef8919f987e96fc8d74db/research/object_detection/builders/model_builder.py#L70
https://github.com/tensorflow/models/blob/5a89897396aa8ecc7b3ef8919f987e96fc8d74db/research/object_detection/models/faster_rcnn_inception_resnet_v2_feature_extractor.py#L33
faster_rcnn_inception_v2_coco exists in the TensorFlow 1 Detection model zoo:
https://github.com/tensorflow/models/blob/5a89897396aa8ecc7b3ef8919f987e96fc8d74db/research/object_detection/g3doc/tf1_detection_zoo.md#coco-trained-models
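As a quick sanity check before exporting, here is a minimal sketch using the Object Detection API's tf_version utility referenced above (the import path is taken from the linked model_builder.py source):
from object_detection.utils import tf_version

# faster_rcnn_inception_v2 is only registered as a feature extractor when the
# Object Detection API detects TensorFlow 1.x, so this must print True
print(tf_version.is_tf1())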
I trained a CNN model on a local machine, saved it using model.save('./models/my_model'), and I am able to load the model (new_model = tensorflow.keras.models.load_model('./models/my_model')) and classify an image that I pass through a browser using the Flask web framework.
Now I want to run my code hosted on pythonanywhere.com. However, when loading the model I get this error:
ValueError: Error converting shape to a TensorShape: invalid literal for int() with base 10: 'class_name'.
I don't know if it has to do with versions. First I trained with Python 3.8 and the latest TensorFlow version, but since Flask does not allow all the required libraries on 3.8, I used Python 3.7 with Flask and TensorFlow 2.0.0. So I retrained the model on my computer with 3.7 and tf 2.0.0 and uploaded the newer model files. However, the same error persists.
--update--
Here is the error log:
Error running WSGI application
ValueError: Error converting shape to a TensorShape: invalid literal for int() with base 10: 'class_name'.
File "/var/www/user_pythonanywhere_com_wsgi.py", line 16, in
from main import app as application # noqa
File "/home/user/mysite/main.py", line 11, in
model = tensorflow.keras.models.load_model('/home/user/mysite/models/modelo')
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/save.py", line 150, in load_model
return saved_model_load.load(filepath, compile)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/load.py", line 86, in load
model = tf_load.load_internal(path, loader_cls=KerasObjectLoader)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/saved_model/load.py", line 541, in load_internal
export_dir)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/load.py", line 103, in init
self._finalize()
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/load.py", line 127, in _finalize
node.add(layer)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper
result = method(self, *args, **kwargs)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/sequential.py", line 174, in add
batch_shape=batch_shape, dtype=dtype, name=layer.name + '_input')
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/input_layer.py", line 263, in Input
input_tensor=tensor)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/input_layer.py", line 125, in init
ragged=ragged)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/keras/backend.py", line 1057, in placeholder
x = array_ops.placeholder(dtype, shape=shape, name=name)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/ops/array_ops.py", line 2630, in placeholder
return gen_array_ops.placeholder(dtype=dtype, shape=shape, name=name)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_array_ops.py", line 6669, in placeholder
shape = _execute.make_shape(shape, "shape")
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/eager/execute.py", line 211, in make_shape
e))
The error comes from a TensorFlow file, and I am just passing the path where the saved model is.
TensorFlow is not currently working in PythonAnywhere web apps. If you're using Keras, you can try switching to the Theano backend, which is confirmed to work. There's a short help page on that as well.
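If you do try the Theano backend, here is a minimal sketch of switching it (this applies to the standalone keras package, not tensorflow.keras; the backend can also be set in ~/.keras/keras.json):
import os

# choose the backend before keras is imported anywhere in the web app
os.environ['KERAS_BACKEND'] = 'theano'

import keras  # should now report "Using Theano backend."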
Python has a problem converting your class_name to an int.
Refer to: ValueError: invalid literal for int() with base 10: ''
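The underlying conversion failure can be reproduced in isolation (an illustrative snippet, not taken from the model code):
# int() cannot parse the literal string 'class_name', which is what the
# shape-conversion code ends up receiving
int('class_name')  # ValueError: invalid literal for int() with base 10: 'class_name'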
I am trying to load a trained Keras model. The training was done in Google Colaboratory, and I'm trying to load it on my computer, but an error occurs.
Keras version - 2.2.4
Tensorflow version - 1.14.0
These versions match for my computer and Colab.
I tried to match the h5py versions, but after downgrading it on my computer, TensorFlow stopped working entirely, so I undid that.
How can I fix this?
import os
from tensorflow.python.keras import models
import UNet3Deep as UNet
working_dir = os.path.join('C:', os.sep, 'Users', 'Peteris.Zvejnieks', 'Data')
model_path = os.path.join(working_dir, 'tmp', 'real_deal.hdf5')
img_shape = (512, 512, 1)
model = UNet.gib_model(img_shape)
models.load_model(model_path)
The error message:
Traceback (most recent call last):
File "<ipython-input-14-7e079df22f50>", line 1, in <module>
runfile('C:/Users/Peteris.Zvejnieks/Data/model_tester.py', wdir='C:/Users/Peteris.Zvejnieks/Data')
File "C:\ProgramData\Anaconda3\envs\tf_build_env\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
execfile(filename, namespace)
File "C:\ProgramData\Anaconda3\envs\tf_build_env\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/Peteris.Zvejnieks/Data/model_tester.py", line 37, in <module>
models.load_model(model_path)
File "C:\ProgramData\Anaconda3\envs\tf_build_env\lib\site-packages\tensorflow\python\keras\engine\saving.py", line 249, in load_model
optimizer_config, custom_objects=custom_objects)
File "C:\ProgramData\Anaconda3\envs\tf_build_env\lib\site-packages\tensorflow\python\keras\optimizers.py", line 838, in deserialize
printable_module_name='optimizer')
File "C:\ProgramData\Anaconda3\envs\tf_build_env\lib\site-packages\tensorflow\python\keras\utils\generic_utils.py", line 194, in deserialize_keras_object
return cls.from_config(cls_config)
File "C:\ProgramData\Anaconda3\envs\tf_build_env\lib\site-packages\tensorflow\python\keras\optimizers.py", line 159, in from_config
return cls(**config)
File "C:\ProgramData\Anaconda3\envs\tf_build_env\lib\site-packages\tensorflow\python\keras\optimizers.py", line 471, in __init__
super(Adam, self).__init__(**kwargs)
File "C:\ProgramData\Anaconda3\envs\tf_build_env\lib\site-packages\tensorflow\python\keras\optimizers.py", line 68, in __init__
'passed to optimizer: ' + str(k))
TypeError: Unexpected keyword argument passed to optimizer: name
I am trying to quantize my model (specifically the pretrained faster_rcnn_inception_v2 trained on COCO, downloaded from the model zoo) in hopes of speeding up inference time.
I use the following code from here:
import tensorflow as tf
converter = tf.lite.TocoConverter.from_saved_model(saved_model_dir)
converter.post_training_quantize = True
tflite_quantized_model = converter.convert()
open("quantized_model.tflite", "wb").write(tflite_quantized_model)
The model's directory didn't have a saved_model.pb file, so I renamed frozen_inference_graph.pb to saved_model.pb.
Running the code above produces the following runtime error:
Traceback (most recent call last):
File "/home/juggernaut/pycharm-community-2018.2.4/helpers/pydev/pydevd.py", line 1664, in <module>
main()
File "/home/juggernaut/pycharm-community-2018.2.4/helpers/pydev/pydevd.py", line 1658, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "/home/juggernaut/pycharm-community-2018.2.4/helpers/pydev/pydevd.py", line 1068, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/hdd/motorola/motorola_heads/tensorflow_face_detection/quantize.py", line 5, in <module>
converter = tf.lite.TocoConverter.from_saved_model(saved_model_dir)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 318, in new_func
return func(*args, **kwargs)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/lite/python/lite.py", line 587, in from_saved_model
tag_set, signature_key)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/lite/python/lite.py", line 376, in from_saved_model
output_arrays, tag_set, signature_key)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/lite/python/convert_saved_model.py", line 254, in freeze_saved_model
meta_graph = get_meta_graph_def(saved_model_dir, tag_set)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/lite/python/convert_saved_model.py", line 61, in get_meta_graph_def
return loader.load(sess, tag_set, saved_model_dir)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 318, in new_func
return func(*args, **kwargs)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/python/saved_model/loader_impl.py", line 269, in load
return loader.load(sess, tags, import_scope, **saver_kwargs)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/python/saved_model/loader_impl.py", line 420, in load
**saver_kwargs)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/python/saved_model/loader_impl.py", line 347, in load_graph
meta_graph_def = self.get_meta_graph_def_from_tags(tags)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/python/saved_model/loader_impl.py", line 323, in get_meta_graph_def_from_tags
" could not be found in SavedModel. To inspect available tag-sets in"
RuntimeError: MetaGraphDef associated with tags set(['serve']) could not be found in SavedModel. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: `saved_model_cli`
What does it mean and what should I do?
Please refer to this issue. They seem to have the same issue as you.
This may be fixed in a more recent version of Tensorflow (perhaps the tag has switched from 'serve' to 'serving' in the meantime).
You should use tf.saved_model.simple_save to save the model in the SavedModel (.pb) format.
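Here is a minimal sketch of tf.saved_model.simple_save for TF 1.x (the graph and tensor names below are placeholders, not the actual inputs/outputs of the detection model); the resulting directory is written with the 'serve' tag and can be inspected with saved_model_cli before passing it to the converter:
import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # placeholder graph standing in for the real detection model
    image = tf.placeholder(tf.uint8, [1, None, None, 3], name='image_tensor')
    detections = tf.identity(tf.cast(image, tf.float32), name='detections')

    # writes a SavedModel (saved_model.pb plus variables/) tagged with 'serve'
    tf.saved_model.simple_save(
        sess,
        'exported_saved_model',              # export dir; must not already exist
        inputs={'image_tensor': image},
        outputs={'detections': detections})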
I am trying to run the code from here, which is an implementation of Generative Adversarial Networks in Keras (Python). I followed the instructions and installed all the requirements. Then I tried to run the code for DCGAN. However, it seems that there is some issue with the compatibility of the libraries. I am receiving the following message when I run the code:
AttributeError: 'module' object has no attribute 'leaky_relu'
File "main.py", line 176, in <module>
dcgan = DCGAN()
File "main.py", line 25, in __init__
self.discriminator = self.build_discriminator()
File "main.py", line 84, in build_discriminator
model.add(LeakyReLU(alpha=0.2))
File "/opt/libraries/anaconda2/lib/python2.7/site-packages/keras/models.py", line 492, in add
output_tensor = layer(self.outputs[0])
File "/opt/libraries/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 617, in __call__
output = self.call(inputs, **kwargs)
File "/opt/libraries/anaconda2/lib/python2.7/site-packages/keras/layers/advanced_activations.py", line 46, in call
return K.relu(inputs, alpha=self.alpha)
File "/opt/libraries/anaconda2/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2918, in relu
x = tf.nn.leaky_relu(x, alpha)
I am using Keras version 2.1.3, TensorFlow version 1.2.1, and Theano version 1.0.1+40.g757b4d5.
Any idea why I am receiving this issue?
EDIT:
The error is located at line 84, in the build_discriminator function:
`model.add(LeakyReLU(alpha=0.2))`
According to this answer, leaky_relu was added to TensorFlow in version 1.4, so you might want to check whether your TensorFlow installation is at least version 1.4.
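A quick check plus a possible workaround (a sketch assuming Keras 2.1.3 with the TensorFlow backend, as in the question; the Lambda alternative is not from the linked code):
import tensorflow as tf

print(tf.__version__)                # tf.nn.leaky_relu requires TensorFlow >= 1.4
print(hasattr(tf.nn, 'leaky_relu'))  # False on 1.2.1, which is what triggers the error

# if upgrading TensorFlow is not an option, an equivalent activation can be built
# from backend ops that exist in older versions:
from keras import backend as K
from keras.layers import Lambda

leaky = Lambda(lambda x: K.maximum(0.2 * x, x))  # behaves like LeakyReLU(alpha=0.2)
# use model.add(leaky) in place of model.add(LeakyReLU(alpha=0.2))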