In TensorFlow, how to freeze a saved model - python

This is probably a very basic question...
But how do I convert checkpoint files into a single .pb file?
My goal is to serve the model, probably using C++.
These are the files that I'm trying to convert.
As a side note, I'm using tflearn with tensorflow.
Edit 1:
I found an article that explains how to do this: https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc
The problem is that I'm stuck with the following error
KeyError: "The name 'Adam' refers to an Operation not in the graph."
How do I fix this?
Edit 2:
Maybe this will shed some light on the problem.
The error I get comes from the regression layer. If I use sgd, I get:
KeyError: "The name 'SGD' refers to an Operation not in the graph."

The tutorial at https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc works just fine.
The problem was that I was loading the model using tensorflow instead of using tflearn.
So instead of:
tf.train.import_meta_graph(...)
we do:
model.load(...)
TFLearn knows how to parse the graph properly.
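For reference, here is a minimal sketch of the whole flow. It assumes a hypothetical build_network() helper that recreates the exact training-time tflearn graph, and the output node name is a placeholder you would replace with your own:
import tensorflow as tf
import tflearn
from tensorflow.python.framework import graph_util

net = build_network()  # hypothetical: same layers/regression settings as at training time
model = tflearn.DNN(net)
model.load('checkpoints/my_model')  # tflearn restores the optimizer ops (Adam/SGD) correctly

# Freeze: fold the variables into constants and write a single .pb file, as in the article
sess = model.session
frozen = graph_util.convert_variables_to_constants(
    sess, sess.graph_def, ['FullyConnected/Softmax'])  # placeholder output node name
with tf.gfile.GFile('frozen_model.pb', 'wb') as f:
    f.write(frozen.SerializeToString())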

Related

Cannot export QNN brevitas to ONNX

I have trained my model as QNN with brevitas. Basically my input shape is:
torch.Size([1, 3, 1024])
I have exported the model as a .pt file. When I test the model and generate a confusion matrix, I can observe everything that I want, so I believe there is no problem with the model itself.
On the other hand, when I try to export a .onnx file to implement this brevitas-trained model on FINN, I use the code given below:
from brevitas.export import FINNManager
FINNManager.export(my_model, input_shape=(1, 3, 1024), export_path='myfinnmodel.onnx')
But as I do that I get the error as:
torch.onnx.export(module, input_t, export_target, **kwargs)
TypeError: export() got an unexpected keyword argument
'enable_onnx_checker'
I do not think this is related to the version, but I can check the versions if needed.
Any help would be greatly appreciated.
Sincerely,
The problem is related to PyTorch versions newer than 1.10, where "enable_onnx_checker" is no longer a parameter of the torch.onnx.export function.
This is the official solution from the repository:
https://github.com/Xilinx/brevitas/pull/408/files
The fix has not been released yet; it is in the dev branch.
You need to build brevitas yourself, or simply change the code in brevitas/export/onnx/manager.py following the official solution.
After that I was able to get the ONNX-converted model.
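A minimal sketch of the same idea, in case you prefer to patch locally rather than rebuild (safe_onnx_export is a hypothetical wrapper name): drop any keyword argument that the installed torch.onnx.export no longer accepts before forwarding the call.
import inspect
import torch

def safe_onnx_export(module, input_t, export_target, **kwargs):
    # Keep only the keyword arguments that this torch version's export() accepts
    supported = inspect.signature(torch.onnx.export).parameters
    kwargs = {k: v for k, v in kwargs.items() if k in supported}
    torch.onnx.export(module, input_t, export_target, **kwargs)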

How to open and analyze Google Tables model using python/tf?

I have read several discussions about this and still cannot make it work for my case.
I have a classification model trained using Google Tables.
I exported the model and downloaded the directory with the CLI.
My goal is to get a better understanding of the model trained by Google, study it, and understand its decisions, and later to try to prune it to improve performance.
I'm using this code, just to start:
import tensorflow as tf
from tensorflow import keras
import struct2tensor
location = "model_dir"
model = tf.saved_model.load(location)
model.summary()
I get this error:
AttributeError: 'AutoTrackable' object has no attribute 'summary'
the variable model is of type:
<tensorflow.python.training.tracking.tracking.AutoTrackable at 0x7fa8eaa7ed30>
And I'm stuck there; I don't know how to continue. I'm using Python 3.8 and the latest versions of those libraries. Any idea how I can proceed?
Thanks!
The proper method for loading your model depends on your file format.
You can see in the Tensorflow documentation that "The object returned by tf.saved_model.load is not a Keras object (i.e. doesn't have .fit, .predict, etc. methods)" and "Use tf.keras.models.load_model to restore the Keras model".
I'm not sure if you want to use the keras module or not, but since you have imported it I assume you do. In that case I would recommend checking this other Stackoverflow thread, where it is explained how to use the tf.keras.models.load_model method depending on whether your model is saved as .pb or .h5.
If the model is saved as .pb, you should call it with the string pointing to the directory where the model is saved, as you did in your code snippet, but in this case using the Keras method:
model = tf.keras.models.load_model('model_dir')
If instead it's saved as .h5, you should load it by specifying the file:
model = tf.keras.models.load_model('my_model_in_h5.h5')
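A short sketch of what you can do after that, assuming the exported directory really contains a Keras model (if it doesn't, load_model will raise an error and you are limited to the signatures exposed by tf.saved_model.load):
import tensorflow as tf

model = tf.keras.models.load_model('model_dir')
model.summary()  # layer-by-layer overview
for layer in model.layers:
    print(layer.name, [w.shape for w in layer.get_weights()])  # per-layer weight shapes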

Connecting to invalid output 1 of source node PartitionedCall which has 1 outputs

I am trying to use bilinear interpolation in TensorFlow. A function for this is available in the TensorFlow Addons library here, but it only supports TF 2.x. I need to get this function working in TF 1.15, so I made a few small modifications, and things seemed fine until actual training, when it throws this error that I can't understand (see the Colab notebook with demo code). What is this PartitionedCall? I tried another, simpler model with Conv2D layers instead of Conv3D and that doesn't throw such errors, so which part of the Conv3D layers is the culprit here? Also, there is no such error with TF 2, of course; what is causing this?
Suggestions for other ways to get bilinear interpolation working in TF 1.15 would also be helpful. Thanks!
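As an aside, one hedged sketch of an alternative: plain TF 1.x already ships fixed-grid bilinear resizing, which covers the simple resize case (though not the arbitrary-coordinate interpolate_bilinear from Addons):
import tensorflow as tf  # TF 1.15

images = tf.placeholder(tf.float32, [None, 64, 64, 3])
resized = tf.image.resize_bilinear(images, size=[128, 128])  # fixed-grid bilinear upsampling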

How can I "see" the model/network when loading a model from tfhub?

I'm new to this topic, so forgive my lack of knowledge. There is a very good model called Inception ResNet v2 that basically works like this: the input is an image, and the output is a list of predictions with their positions and bounding rectangles. I find this very useful, and I thought of using this already-working model to recognize things that it currently can't (for example, whether a human is wearing a mask or not). Yes, I wanted to add a new recognition class to the model.
import tensorflow as tf
import tensorflow_hub as hub
mod = hub.load("https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1")
mod is an object of type tensorflow.python.training.tracking.tracking.AutoTrackable. Reading the documentation (which was only available in the source code) was a bit hard without context, and I tried to inspect some of its properties to see if I could figure it out by myself.
And well, I didn't. How can I see the network, the layers, the weights, the fit methods? Is it all abstracted away? Can I convert it to Keras? I want to experiment with it, see if I can modify it, and see if I can export the model to another representation, for example PyTorch.
I wanted to do this because I thought it'd be better to modify an already working model instead of creating one from scratch. Also because I'm not good at training models myself.
I've run into this issue too. The TensorFlow Hub guide says:
This error frequently arises when loading models in TF1 Hub format with the hub.load() API in TF2. Adding the correct signature should fix this problem.
mod = hub.load(handle).signatures['default']
As an example, you can see this notebook.
You can dir() the loaded model asset to see what's defined on it:
m = hub.load(handle)
dir(m)
As mentioned in the other answer, you can also look at the signatures with print(m.signatures).
Hub models are SavedModel assets and do not have a Keras .fit method on them. If you want to train the model from scratch, you'll need to go to the source code.
Some models have more extensive exported interfaces, including access to individual layers, but this model does not.
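Putting those pieces together, a minimal inspection session might look like this (structured_input_signature and structured_outputs are standard attributes of a TF 2.x concrete function, but what they contain depends on the module):
import tensorflow_hub as hub

m = hub.load("https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1")
print(m.signatures)  # available serving signatures
detector = m.signatures['default']
print(detector.structured_input_signature)  # expected inputs
print(detector.structured_outputs)  # output tensor names and shapes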

keras problems loading custom model from yolov2

I've searched around for a couple of answers regarding load_model from Keras, but I still have a question.
I am following this model really closely (https://github.com/experiencor/keras-yolo2) and am training on a custom dataset.
I have done the training, which gives me a yolov2.h5 file, basically model weights to fit into the Keras model. But I am encountering some problems with loading the model.
After loading the model (in a separate script, separate.py):
model = load_model('file_dir/yolov2.h5')
First I encounter the issue
NameError: name 'tf' is not defined
After searching, I modified my code to add custom objects as such:
model = load_model('file_dir/yolov2.h5', custom_objects={'tf':tf})
This clears the first error but results in another
ValueError: Unknown loss function : custom_loss
I used the custom_loss function from yolov2 (https://github.com/experiencor/keras-yolo2/blob/master/frontend.py), so I tried to solve it with:
from frontend import YOLO
model = load_model('file_dir/yolov2.h5', custom_objects={'tf': tf, 'custom_loss': YOLO.custom_loss})
But ran into another error:
TypeError: custom_loss() missing 1 required positional argument
I got rather stuck here because I have no idea how to fit in the parameters for custom_loss. I'm seeking some help on this (I don't particularly understand this part, since I'm loading my model in a different Python script, separate.py). Thank you so much!
(Edit: This fix doesn't work for me either)
model = load_model('file_dir/yolov2.h5', compile=False)
To resolve this problem, since you already have the network at hand, save only the trained weights (like what the Keras trainer does in a callback).
For testing, build the model (no need to compile), and then load the trained weights using model.load_weights(path/to/saved/weights).
You can also use by_name=True if you build the network in a different way; in that case you should keep the layer names.
Another option is to set the weights manually: load the .h5 file with h5py (h5py.File(path/to/weights, mode='r'), for example; have a look at how Keras does it), then match the layer names of the model with the loaded weights.
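A minimal sketch of that weights-only workflow, assuming a hypothetical build_yolo_model() helper that rebuilds the training-time architecture (e.g. with the keras-yolo2 code):
model = build_yolo_model()  # hypothetical: reconstruct the exact training architecture
# No compilation is needed for inference; just load the trained weights
model.load_weights('file_dir/yolov2_weights.h5')
# If the network was rebuilt differently but the layer names were kept:
model.load_weights('file_dir/yolov2_weights.h5', by_name=True)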
