I'm following this tutorial, trying to convert a .h5 model to a TensorRT model:
link to the tutorial.
I'm using TensorFlow 2.6.0, so I have changed some lines, but I'm stuck at the model-freezing block. I get the same error on these two lines:
input_names = [t.op.name for t in model.inputs]
output_names = [t.op.name for t in model.outputs]
The problem is the same in both lines:
TypeError: Keras symbolic inputs/outputs do not implement op. You may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model.
As I said, I'm following the tutorial, but with an EfficientNetB5 model that I trained yesterday. I thought it would be enough to load it into this part of the notebook and start by freezing it, but now I can't continue.
Any idea of what's happening here?
Related
frozen_graph = freeze_session(sess,output_names=[out.op.name for out in model.outputs])
this error appears:
Keras symbolic inputs/outputs do not implement op. You may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model.
How should I change that?
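For reference, a minimal sketch of the session-free pattern commonly used in TF 2.x to obtain a frozen graph and its tensor names (this is not taken from the tutorial; `model` is assumed to be the loaded Keras model):

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Wrap the Keras model in a tf.function so a concrete graph can be traced.
full_model = tf.function(lambda x: model(x))
concrete_func = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

# Fold the variables into constants; a common TF 2.x replacement for freeze_session.
frozen_func = convert_variables_to_constants_v2(concrete_func)

# Tensor names can then be read from the frozen function instead of .op.name.
input_names = [t.name for t in frozen_func.inputs]
output_names = [t.name for t in frozen_func.outputs]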
Question:
I have created and trained a Keras model in tf 2.3.0, and I need to load this model in tf 1.12.0 so it can be used with a library that requires that older version of tf. Is there any way to convert the model from the format of the new tf version to an older one so I can load it with tf 1.12.0?
What I have tried so far:
A similar discussion showed how to convert models from tf 1.15 - 2.1 to tf 1.10, but when I tried that solution I got the error "Unknown layer: Functional". Link: Loading the saved models from tf.keras in different versions
I tried to fix this by using the following line suggested by another question:
new_model = tf.keras.models.model_from_json(json_config, custom_objects={'Functional': tf.keras.models.Model})
Link: ValueError: Unknown layer: Functional
However, if I use this, I get the error ('Unrecognized keyword arguments:', dict_keys(['ragged'])), which is the same error discussed in the first discussion I linked above.
Another method I tried was using ONNX to convert the Keras model to an ONNX model and then back to a Keras model of a different version. However, I soon realized that the keras2onnx library requires tf 2.x.
Links: https://github.com/onnx/tensorflow-onnx and https://github.com/gmalivenko/onnx2keras
Any suggestions about how to get around this without having to retrain my models in an older version of TensorFlow would be greatly appreciated! Thanks
Here is the simple code that I tried to implement to load my model:
Save in tf 2.3.0
import tensorflow as tf
CNN_model=tf.keras.models.load_model('Real_Image_XAI_Models/Test_10_DC_R_Image.h5')
CNN_model.save_weights("Real_Image_XAI_Models/weights_only.h5")
json_config = CNN_model.to_json()
with open('Real_Image_XAI_Models/model_config.json', 'w') as json_file:
    json_file.write(json_config)
Load in tf 1.12.0
with open('Real_Image_XAI_Models/model_config.json') as json_file:
    json_config = json_file.read()
new_model = tf.keras.models.model_from_json(json_config)
# or use the following line to account for the Functional class
#new_model = tf.keras.models.model_from_json(json_config, custom_objects={'Functional':tf.keras.models.Model})
new_model.load_weights('Real_Image_XAI_Models/weights_only.h5')
There are breaking changes in the model config from tf-1.12.0 to tf-2.3.0, including, but not limited to, the following:
The root class Model is now Functional
The support for Ragged tensors was added in tf-1.15
You can try to edit the model config json file once saved from tf-2.3.0 to reverse the effects of these changes as follows:
Replace the root class definition "class_name": "Functional" with "class_name": "Model". This reverses change #1 above.
Delete all occurrences of "ragged": false, (and of "ragged": true, if present). This will reverse the effect of change #2 above.
Note that the trailing comma and space are part of the "ragged" fields above and must be removed with them.
You may try to find a way to make these changes programmatically in the json dictionary or at the model load time, but I find it easier to make these one-time changes to the json file itself.
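For instance, a minimal sketch of the programmatic route, assuming the config was written to model_config.json as in the code above (the output filename is just an example):

# Read the config saved from tf-2.3.0.
with open('Real_Image_XAI_Models/model_config.json') as f:
    config = f.read()

# Change #1: the root class "Functional" does not exist in tf-1.12.0.
config = config.replace('"class_name": "Functional"', '"class_name": "Model"')

# Change #2: drop the "ragged" fields (note the trailing comma and space).
config = config.replace('"ragged": false, ', '').replace('"ragged": true, ', '')

# Write an edited copy to load under tf-1.12.0.
with open('Real_Image_XAI_Models/model_config_tf112.json', 'w') as f:
    f.write(config)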
I trained a Pix2Pix generator from the TensorFlow 2.0 tutorial and exported it to tflite this way:
converter = tf.lite.TFLiteConverter.from_keras_model(generator)
tflite_model = converter.convert()
open("facades.tflite", "wb").write(tflite_model)
Unfortunately, I have problems that seem to come from tf.keras.layers.BatchNormalization when I try to run inference.
First, inference returns only NaN values. This can be resolved by disabling the fused implementation.
Second, the BatchNormalization layer behaves differently in training and in prediction. The tutorial explicitly says to make predictions with training=True, and I don't know how to do that with the tflite model.
One solution suggests replacing the BatchNormalization layer with an InstanceNormalization layer, which can be found in tensorflow_addons.
The conversion to tflite then works without any problem, but there is still a problem with the inference:
when I call invoke on the interpreter, it crashes with a SEGFAULT. According to the stack trace, it comes from the SquaredDifference operator of the InstanceNormalization layer.
Has anyone managed to convert this TensorFlow 2.0 model to tflite and run inference on it correctly? How? Thank you.
PS: I would prefer a solution with BatchNormalization, because it is a standard Keras layer and can therefore also work with TensorFlow.js.
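For what it's worth, here is a rough sketch of what "disabling the fused implementation" before conversion might look like. This is an assumption on my part rather than a verified fix; `generator` is the trained Pix2Pix generator, and whether flipping the flag on already-built layers takes effect can depend on the TF version (rebuilding the generator with BatchNormalization(fused=False) is the safer route):

import tensorflow as tf

# Walk all nested layers of the trained generator (the pix2pix generator nests
# its batch-norm layers inside sub-models) and turn off the fused kernel.
for layer in generator.submodules:
    if isinstance(layer, tf.keras.layers.BatchNormalization):
        layer.fused = False

# Re-run the conversion; this only targets the NaN issue, not the training=True behaviour.
converter = tf.lite.TFLiteConverter.from_keras_model(generator)
tflite_model = converter.convert()
open("facades.tflite", "wb").write(tflite_model)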
I have an old model defined and trained using tensorflow, and now I would like to work on it but I'm currently using Keras for everything.
So the question is: is it possible to load a tf checkpoint (with *.index, *.meta, etc.) into a Keras model?
I am aware of old questions like: How can I convert a trained Tensorflow model to Keras?.
I am hoping that after two years, and with Keras now included in tf, there is an easier way to do it.
Unfortunately I don't have the original model definition in tf; I may be able to find it, but it would be nicer if it wasn't necessary.
Thanks!
In the link below, which is the official TensorFlow tutorial, the trained model is saved with a .ckpt extension. Afterwards, it is loaded and used with a Keras model.
I think it might help you.
https://www.tensorflow.org/tutorials/keras/save_and_restore_models
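A condensed sketch of the pattern from that tutorial, assuming you can rebuild the same architecture in Keras (build_model and the checkpoint path are placeholders):

import tensorflow as tf

# Rebuild the same architecture that wrote the checkpoint (hypothetical helper).
model = build_model()

# Point load_weights at the checkpoint prefix, i.e. the path without the
# .index / .data-00000-of-00001 suffixes, as in the linked tutorial.
model.load_weights('training_1/cp.ckpt')

# From here the model can be saved in the usual Keras formats if desired.
model.save('converted_model.h5')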
I have been working with Keras for a week or so. I know that Keras can use either TensorFlow or Theano as a backend. In my case, I am using TensorFlow.
So I'm wondering: is there a way to write a NN in Keras, and then print out the equivalent version in TensorFlow?
MVE
For instance suppose I write
# imports
from keras.models import Sequential
from keras.layers import Dense

# Xtrain, ytrain, Xtest, ytest are assumed to be already defined

# create seq model
model = Sequential()
# add layers
model.add(Dense(100, input_shape=(10,), activation='relu'))
model.add(Dense(1, activation='linear'))
# compile model
model.compile(optimizer='adam', loss='mse')
# fit
model.fit(Xtrain, ytrain, epochs=100, batch_size=32)
# predict
ypred = model.predict(Xtest, batch_size=32)
# evaluate
result = model.evaluate(Xtest, ytest)
This code might be wrong, since I just started, but I think you get the idea.
What I want to do is write down this code, run it (or not even, maybe!) and then have a function or something that will produce the TensorFlow code that Keras has written to do all these calculations.
First, let's clarify some of the language in the question. TensorFlow (and Theano) use computational graphs to perform tensor computations. So, when you ask if there is a way to "print out the equivalent version" in Tensorflow, or "produce TensorFlow code," what you're really asking is, how do you export a TensorFlow graph from a Keras model?
As the Keras author states in this thread,
When you are using the TensorFlow backend, your Keras code is actually building a TF graph. You can just grab this graph.
Keras only uses one graph and one session.
However, he links to a tutorial whose details are now outdated, though the basic concept has not changed.
We just need to:
Get the TensorFlow session
Export the computation graph from the TensorFlow session
Do it with Keras
The keras_to_tensorflow repository contains a short iPython notebook example of how to export a model from Keras for use in TensorFlow. It does this mostly through TensorFlow itself; it isn't a clearly written example, but I'm including it as a resource.
Do it with TensorFlow
It turns out we can get the TensorFlow session that Keras is using from TensorFlow itself, via the tf.contrib.keras.backend.get_session() function. It's pretty simple: just import it and call it, and it returns the TensorFlow session.
Once you have the TensorFlow session variable, you can use the SavedModelBuilder to save your computational graph (guide + example to using SavedModelBuilder in the TensorFlow docs). If you're wondering how the SavedModelBuilder works and what it actually gives you, the SavedModelBuilder Readme in the Github repo is a good guide.
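Putting the two steps together, a minimal TF 1.x-era sketch might look like this (the export directory is illustrative, and the Keras model is assumed to already be built in the current session):

import tensorflow as tf

# 1. Grab the session Keras has been building its graph in (TF 1.x-era API).
sess = tf.contrib.keras.backend.get_session()

# 2. Export the graph and variables with SavedModelBuilder.
builder = tf.saved_model.builder.SavedModelBuilder('./exported_model')
builder.add_meta_graph_and_variables(sess, tags=[tf.saved_model.tag_constants.SERVING])
builder.save()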
P.S. - If you are planning on heavy usage of TensorFlow + Keras in combination, have a look at the other modules available in tf.contrib.keras
So you want your neurons to use a different function instead of WX+b. Well, in TensorFlow you calculate this product explicitly, so for example you write
y_ = tf.matmul(X, W)
You simply have to write your own formula instead and let the network learn it. It should not be difficult to implement.
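For example, here is a toy TF 1.x-style sketch; the custom formula (a squared pre-activation) and the shapes are purely illustrative:

import tensorflow as tf

# Placeholders and parameters (shapes are illustrative).
X = tf.placeholder(tf.float32, [None, 10])
W = tf.Variable(tf.random_normal([10, 1]))
b = tf.Variable(tf.zeros([1]))

# The standard affine neuron:
y_linear = tf.matmul(X, W) + b

# Your own formula instead, e.g. squaring the pre-activation:
y_custom = tf.square(tf.matmul(X, W)) + b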
In addition, what you are trying to do (according to the paper you link) is called batch normalization and is relatively standard. The idea is that you normalize your intermediate steps (in the different layers). Check for example https://arxiv.org/abs/1502.03167 or https://bcourses.berkeley.edu/files/66022277/download?download_frd=1&verifier=oaU8pqXDDwZ1zidoDBTgLzR8CPSkWe6MCBKUYan7
Hope that helps,
Umberto