I'm trying to use my Keras model alongside my TF object detection model. But I am getting this error:
ValueError: Calling `Model.predict` in graph mode is not supported when the `Model` instance was constructed with eager mode enabled. Please construct your `Model` instance in graph mode or call `Model.predict` with eager mode enabled.
It errors on print(np.around(model1.predict(datatest)))
I believe it is because, with TF object detection, I have to run everything within:
with detection_graph.as_default():
    with tf.compat.v1.Session(graph=detection_graph) as sess:
I'm not trying to run my Keras model in graph mode. Any idea how to get around this?
I had initially loaded the model outside the with statements, but loading within the with statements seems to fix the issue.
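For reference, a minimal sketch of the fix, reusing the names from above (the .h5 path is a placeholder for my actual model file):
import numpy as np
import tensorflow as tf

with detection_graph.as_default():
    with tf.compat.v1.Session(graph=detection_graph) as sess:
        # Load the Keras model inside the graph/session context so it is
        # built against the same graph it is later executed in.
        model1 = tf.keras.models.load_model('my_model.h5')  # placeholder path
        print(np.around(model1.predict(datatest)))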
I'm following this tutorial (link to the tutorial), trying to convert a .h5 model to a TensorRT model.
I'm using TensorFlow 2.6.0, so I have changed some lines, but I'm stuck at the model-freeze block. I get this error on these lines:
input_names = [t.op.name for t in model.inputs]
output_names = [t.op.name for t in model.outputs]
The problem is the same in both lines:
TypeError: Keras symbolic inputs/outputs do not implement op. You may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model.
As I've said, I'm following this tutorial, but with an EfficientNetB5 model that I trained yesterday. I thought it would be enough to load it into this part of the notebook and start by freezing it, but now I can't continue.
Any idea what's happening here?
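Edit: one workaround I'm experimenting with, based on the common TF2 freezing pattern (the lambda wrapper and frozen_func below are my own additions, not from the tutorial), is to build a concrete function from the model and read the tensor names from that instead:
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Wrap the Keras model in a tf.function and trace it with the model's input signature.
full_model = tf.function(lambda x: model(x))
concrete_func = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

# Freeze the variables into constants; the resulting graph tensors do implement .op.
frozen_func = convert_variables_to_constants_v2(concrete_func)
input_names = [t.name for t in frozen_func.inputs]
output_names = [t.name for t in frozen_func.outputs]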
I have read several discussions about this and still cannot make it work for my case.
I have a classification model trained using Google Tables.
I exported the model and downloaded the directory with the CLI.
My goal is to get a better understanding of the model trained by Google: study it, understand its decisions, and later try to prune it to improve performance.
I'm using this code, just to start:
import tensorflow as tf
from tensorflow import keras
import struct2tensor
location = "model_dir"
model = tf.saved_model.load(location)
model.summary()
I get this error:
AttributeError: 'AutoTrackable' object has no attribute 'summary'
The variable model is of type:
<tensorflow.python.training.tracking.tracking.AutoTrackable at 0x7fa8eaa7ed30>
And I'm stuck there; I don't know how to continue. I'm using Python 3.8 and the latest versions of those libraries. Any idea how I can proceed?
Thanks!
The proper method for loading your model depends on the file format.
You can see in the Tensorflow documentation that "The object returned by tf.saved_model.load is not a Keras object (i.e. doesn't have .fit, .predict, etc. methods)" and "Use tf.keras.models.load_model to restore the Keras model".
I'm not sure whether you want to use the keras module or not, but since you have imported it I assume you do. In that case I would recommend checking this other Stack Overflow thread, where it is explained how to use the tf.keras.models.load_model method depending on whether your model is saved as .pb or .h5.
If the model is saved as .pb, you should call it with the string pointing to the directory where the model is saved, as you did in your code snippet, but in this case using the Keras method:
model = tf.keras.models.load_model('model_dir')
If instead it's saved as .h5, you should specify the file itself:
model = tf.keras.models.load_model('my_model_in_h5.h5')
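Either way, the returned object is a tf.keras.Model, so the call that failed on the AutoTrackable object should then work:
model = tf.keras.models.load_model('model_dir')  # or 'my_model_in_h5.h5'
model.summary()  # available now that model is a tf.keras.Model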
I have upgraded TF1 code to TF2 with tf_upgrade_v2. I'm a noob with both. I get the following error:
RuntimeError: tf.placeholder() is not compatible with eager execution.
I have some tf.compat.v1.placeholder() calls:
self.temperature = tf.compat.v1.placeholder_with_default(1., shape=())
self.edges_labels = tf.compat.v1.placeholder(dtype=tf.int64, shape=(None, vertexes, vertexes))
self.nodes_labels = tf.compat.v1.placeholder(dtype=tf.int64, shape=(None, vertexes))
self.embeddings = tf.compat.v1.placeholder(dtype=tf.float32, shape=(None, embedding_dim))
Could you give me any advice on how to proceed? Any "fast" solutions, or should I recode this?
I found an easy solution here: disable Tensorflow eager execution
Basically it is:
tf.compat.v1.disable_eager_execution()
With this, you disable eager execution, which is active by default, and you don't need to touch the code much more.
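A minimal sketch of where the call goes (the placeholder and the values are just an illustration):
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # must run before any graph is built

x = tf.compat.v1.placeholder(dtype=tf.float32, shape=(None,))
y = tf.square(x)

with tf.compat.v1.Session() as sess:
    print(sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]}))  # prints [1. 4. 9.]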
tf.placeholder() is meant to be fed to a session which, when run, receives values from the feed dict and performs the required operation.
Generally, you would create a Session() with the 'with' keyword and run it. But this might not suit all situations, since you may require immediate execution. This is called eager execution.
Example:
Generally, this is the procedure for running a Session:
import tensorflow as tf

def square(num):
    return tf.square(num)

p = tf.placeholder(tf.float32)  # placeholder for the value fed at run time
q = square(p)                   # builds the graph node; nothing runs yet

with tf.Session() as sess:
    print(sess.run(q, feed_dict={p: 10}))  # the actual value is supplied here
But when we run with eager execution, we run it as:
import tensorflow as tf

tf.enable_eager_execution()  # TF 1.x API; in TF 2.x eager execution is on by default

def square(num):
    return tf.square(num)

print(square(10))  # executes immediately, no Session needed
Therefore we need not run it inside a session explicitly, which is more intuitive in most cases. This provides a more interactive execution.
For further details visit:
https://www.tensorflow.org/guide/eager
If you are converting code from TensorFlow v1 to TensorFlow v2, you must use tf.compat.v1; the placeholder is available as tf.compat.v1.placeholder, but it can only be executed with eager mode off.
tf.compat.v1.disable_eager_execution()
TensorFlow released the eager execution mode, in which each node is executed immediately after definition. Statements using tf.placeholder are thus no longer valid.
In TensorFlow 1.X, placeholders are created and meant to be fed with actual values when a tf.Session is instantiated. However, from TensorFlow 2.0 onwards, eager execution is enabled by default, so the notion of a "placeholder" does not make sense, as operations are computed immediately (rather than being deferred, as in the old paradigm).
Also see Functions, not Sessions:
# TensorFlow 1.X
outputs = session.run(f(placeholder), feed_dict={placeholder: input})
# TensorFlow 2.0
outputs = f(input)
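As a minimal runnable sketch of that pattern (the function and values here are illustrative):
import tensorflow as tf

@tf.function  # traced into a graph on first call; no Session or placeholder needed
def f(x):
    return tf.square(x)

outputs = f(tf.constant([1.0, 2.0, 3.0]))
print(outputs.numpy())  # [1. 4. 9.]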
If you are getting this error while doing object detection with a TensorFlow model, then use exporter_main_v2.py instead of export_inference_graph.py to export the model. This is the right way to do it. If you just turn off eager execution, it will solve this error but generate others.
Also note that there are some parameter changes: for example, here you specify the path to the checkpoint directory instead of the path to a checkpoint. Refer to this document for how to do object detection with TensorFlow v2.
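For reference, the TF2 export invocation looks roughly like this (all paths are placeholders):
python object_detection/exporter_main_v2.py --input_type image_tensor --pipeline_config_path .../pipeline.config --trained_checkpoint_dir .../checkpoint --output_directory .../exported-model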
To solve this, you have to disable eager execution, which is active by default. So add the following line of code:
tf.compat.v1.disable_eager_execution() #<--- Disable eager execution
I retrained ssd_mobilenet_v2_coco_2018_03_29 from the TensorFlow detection model zoo, following the procedures defined in the TensorFlow Object Detection API, using my own dataset. When I try to export the model with
python object_detection/export_inference_graph.py --input_type=image_tensor --pipeline_config_path=.../pipeline.config --trained_checkpoint_prefix=.../model.ckpt-200000 --output_directory=.../export-model-200000 --input_shape=1,300,300,3
it says "20 ops no flops stats due to incomplete shapes." even though I override input_shape with 1,300,300,3.
How can I overcome this problem?
My final goal is to convert this model into IR (Intermediate Representation); there I ran into problems related to shape inference during model conversion, whereas I have no problem converting the original model, which I used for transfer learning, into IR.
Thanks
I'm using Keras with CERN ROOT and its analysis package TMVA. The way it works is that I use Keras to initialize the NN, then save it to a file, and then TMVA loads that file in. The problem is that I am using custom metrics when setting up the neural network, and when doing this, Keras wants you to do something like
models.load_model(model_path, custom_objects={"my_object":my_object})
Unfortunately, the way that TMVA takes arguments requires that I supply only the filename of the model file being used. However, based on the error messages I am getting, it is clear that it is simply using Keras to load the model in. My question is: how do I force Keras to automatically load my custom objects without having to use the above line, as this is incompatible with the package I'm trying to use?
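For context, one workaround I'm considering is registering the metric globally before anything loads the model, so that an internal load_model call can resolve it without the custom_objects argument (my_object here stands for my actual metric function):
import tensorflow as tf

def my_object(y_true, y_pred):
    # my custom metric; body omitted
    ...

# Register globally so load_model can resolve "my_object" without custom_objects.
tf.keras.utils.get_custom_objects()["my_object"] = my_object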