How to add post-processing to a tflite model? - python

I know how to create a frozen graph and use export_tflite_ssd_graph - however, is it possible to add post-processing to a tflite graph?
Since r1.9 it is not possible to freeze a tflite graph, so the usual function won't work. Is there any workaround to add post-processing to a tflite graph when you only have the .tflite file?
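For what it's worth, I'm not aware of a public API for injecting ops into an existing .tflite flatbuffer. The usual workaround is to go back to the checkpoint, re-run export_tflite_ssd_graph.py with --add_postprocessing_op=true, and convert the resulting graph. A minimal sketch, assuming TF 1.13+ and the standard SSD tensor names produced by that script (file paths and the input shape are placeholders to adjust):

import tensorflow as tf

# Convert the graph exported by export_tflite_ssd_graph.py, which already
# contains the TFLite_Detection_PostProcess custom op at its outputs.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "tflite_graph.pb",  # hypothetical path to the exported graph
    input_arrays=["normalized_input_image_tensor"],
    output_arrays=[
        "TFLite_Detection_PostProcess",    # boxes
        "TFLite_Detection_PostProcess:1",  # classes
        "TFLite_Detection_PostProcess:2",  # scores
        "TFLite_Detection_PostProcess:3",  # number of detections
    ],
    input_shapes={"normalized_input_image_tensor": [1, 300, 300, 3]},
)
converter.allow_custom_ops = True  # the post-processing op is a custom op
open("detect.tflite", "wb").write(converter.convert())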

Related

Differentiate ONNX objects

I would like to make an ONNX model differentiable. As I understand it, exporting to ONNX does not export the autograd graph; is there any way to reconstruct it after loading?
I am aware of torch-ort, but to me it looks like it only works with nn.Module objects, i.e. original Python PyTorch models (see examples here, here and here).
Can I in any way load an ONNX-exported model and get PyTorch or ONNX Runtime to reconstruct the backward graph?
Alternatively, can I get ONNX to export the backward graph of a PyTorch nn.Module model, so that I can run it with ONNX Runtime?
Background: I want to work with physics-based models, where I can easily write a forward "energy" function and use its gradient ("forces") in my simulations. At present we need either numerical differentiation or analytic expressions derived beforehand.
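For the background use case, eager PyTorch can already supply the forces via autograd, without numerical differentiation; only exporting that backward pass to ONNX is the open problem. A minimal sketch, where energy_net is a hypothetical stand-in for your own energy model:

import torch

def forces_from_energy(energy_net, positions):
    """Compute forces as the negative gradient of a scalar energy.

    energy_net: any nn.Module mapping positions -> per-sample energies.
    Note: this runs in eager PyTorch; the autograd.grad call is NOT
    captured by torch.onnx.export.
    """
    positions = positions.detach().requires_grad_(True)
    energy = energy_net(positions).sum()  # scalar, so grad() gives dE/dx
    (grad,) = torch.autograd.grad(energy, positions)
    return energy, -grad  # forces = -dE/dx

# Usage with a toy energy network:
net = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
energy, forces = forces_from_energy(net, torch.randn(8, 3))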

Segmentation Fault when exporting to onnx a quantized Pytorch model

I am trying to export a model to the ONNX format. The architecture is complicated, so I won't share it here, but basically, I have the network weights in a .pth file. I'm able to load them, create the network and perform inference with it.
It's important to note that I have adapted the code to be able to quantize the network. I have added quantize and dequantize operators as well as some torch.nn.quantized.FloatFunctional() operators.
However, whenever I try to export it with
torch.onnx.export(torch_model,              # model being run
                  input_example,            # model input
                  model_name,               # where to save the model
                  export_params=True,       # store the trained parameters
                  opset_version=11,         # the ONNX version to export the model to
                  do_constant_folding=True) # whether to execute constant folding for optimization
I get Segmentation fault (core dumped)
I am working on Ubuntu 20, with the following packages installed:
torch==1.6.0
torchvision==0.7.0
onnx==1.7.0
onnxruntime==1.4.0
Note that, according to some prints I have left in the code, the inference pass performed during export completes. The segmentation fault happens afterward.
Does anyone see any reason why this may happen?
[Edit]: I can export my network when it is not adapted for quantized operations. Therefore, the problem is not a broken installation, but rather an issue with some quantized operators during ONNX export.
Well, it turns out that ONNX export does not support quantized models (and does not warn you in any way when running; it just throws a segfault). Support does not seem to be on the agenda yet, so one solution can be to use TensorRT.
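Another workaround (an assumption on my part, not tested against the architecture above) is to export the float variant of the model, which works per the [Edit], and quantize on the ONNX Runtime side instead; this needs a more recent onnxruntime than the 1.4.0 listed above. Here, float_model is a hypothetical non-quantized version of the network:

import torch
from onnxruntime.quantization import quantize_dynamic, QuantType

# Export the non-quantized (float) model, which exports without crashing.
torch.onnx.export(float_model, input_example, "model_fp32.onnx",
                  export_params=True, opset_version=11, do_constant_folding=True)

# Quantize the weights to int8 on the ONNX side instead of in PyTorch.
quantize_dynamic("model_fp32.onnx", "model_int8.onnx", weight_type=QuantType.QInt8)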

Wrong frozen graph export from Tensorflow Object Detection API zoo model

I am working with a Mask R-CNN model using the Tensorflow Object Detection API project (https://github.com/tensorflow/models/tree/r1.12.0/research/object_detection). I stick to the r1.12.0 release (this is not a must, but I do not think it influences my problem). My plan is to modify some "static" parts of the model and export it again into the frozen graph format.
As a first step, I meant to regenerate the frozen graph from the checkpoint files and the pipeline.config using the export_inference_graph.py script (https://github.com/tensorflow/models/blob/r1.12.0/research/object_detection/export_inference_graph.py). I downloaded the Inception V2 model (http://download.tensorflow.org/models/object_detection/mask_rcnn_inception_v2_coco_2018_01_28.tar.gz) and executed the script using Tensorflow 1.12.0. It does the job and creates a frozen graph.
The issue is that if I compare the original frozen graph with the generated one, they are different. If I visualize them using Tensorboard, there are obvious differences between them: some nodes are missing, some nodes are different, etc.
I have tried other models as well (a plain Faster R-CNN) and always had the same issue.
How can this be? How should I use the checkpoint files and the pipeline.config file to regenerate exactly the same frozen graph that is originally attached?
As far as I understand it, your steps for generating the frozen graph are fine.
One thing about a frozen graph is that optimizations can be performed on it, for example fusing some layers together. Optimization may cause your frozen graph to look different, as different optimizations could have been performed, or none at all. But a different frozen graph does not necessarily mean the graph was wrongly generated.
Here is a tutorial on optimizations on frozen graphs to make a faster serving model, listed here just to show that there are several optimization options.
What does freezing a graph in TensorFlow mean? is another question that is related to this problem.
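To see how much of the diff comes from such optimizations, you can run a transform pass over your own frozen graph and compare again. A sketch using the TF 1.x graph-transform tool; the transform list is an example set (the zoo graph may have been produced with a different one), and the tensor names are the standard Object Detection API ones:

import tensorflow as tf
from tensorflow.tools.graph_transforms import TransformGraph

# Load the frozen graph generated by export_inference_graph.py.
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# A sample set of transforms; a different set would produce a
# differently-looking graph, which can explain node-level diffs.
transforms = ["strip_unused_nodes", "fold_constants(ignore_errors=true)", "fold_batch_norms"]
optimized = TransformGraph(
    graph_def,
    ["image_tensor"],  # standard OD API input
    ["detection_boxes", "detection_scores", "detection_classes", "num_detections"],
    transforms,
)

with tf.gfile.GFile("optimized_graph.pb", "wb") as f:
    f.write(optimized.SerializeToString())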

export graph prototxt from tensorflow summary

When using tensorflow, the graph is logged in the summary file, which I "abuse" to keep track of architecture modifications.
But that means I need to use Tensorboard every time to visualise and view the graph.
Is there a way to write out such a graph prototxt in code, or to export this prototxt from the summary file written for Tensorboard?
Thanks for your answer!
The graph is available by calling tf.get_default_graph(). You can get it in GraphDef format with graph.as_graph_def().
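A minimal TF 1.x sketch of both directions; the event-file path is a placeholder:

import tensorflow as tf

# Write the current default graph as prototxt directly from code.
graph_def = tf.get_default_graph().as_graph_def()
tf.train.write_graph(graph_def, "/tmp/graphs", "graph.pbtxt", as_text=True)

# Or recover the graph from an existing summary/event file.
for event in tf.train.summary_iterator("/tmp/logdir/events.out.tfevents.XXXX"):
    if event.graph_def:  # only one event carries the serialized graph
        recovered = tf.GraphDef()
        recovered.ParseFromString(event.graph_def)
        tf.train.write_graph(recovered, "/tmp/graphs", "recovered.pbtxt", as_text=True)
        break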

Saving model in tensorflow

Tensorflow allows us to save/load a model's structure using the method tf.train.write_graph, so that we can restore it in the future to continue our training session. However, I'm wondering whether this is necessary, because I can create a module, e.g. GraphDefinition.py, and use this module to re-create the model.
So, which is the better way to save the model structure, and are there any rules of thumb suggesting which way I should use when saving a model?
First of all, you have to understand that a tensorflow graph does not contain the current weights (unless you manually save them into it), and if you load the model structure from graph.pb, you will start your training from the very beginning. But if you want to continue training or use your trained model, you have to save a checkpoint (using tf.train.Saver) with the values of the variables in it, not only the structure.
Check out this thread: Tensorflow: How to restore a previously saved model (python)
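A minimal sketch of the difference, using a toy TF 1.x graph as a stand-in for a real model (paths are placeholders):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 4], name="x")
w = tf.Variable(tf.zeros([4, 2]), name="w")
y = tf.matmul(x, w, name="y")

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Structure only: no variable values end up in this file.
    tf.train.write_graph(sess.graph_def, "/tmp/model", "graph.pbtxt")
    # Variable values: this is what lets you resume training later.
    saver.save(sess, "/tmp/model/model.ckpt")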
