Differentiate ONNX objects - python

I would like to make an ONNX model differentiable. As I understand it, exporting to ONNX does not export the autograd graph; is there any way to reconstruct it after loading?
I am aware of torch-ort, but to me it looks like it only works with nn.Module objects, i.e. original Python PyTorch models? (see examples here, here and here)
Can I in any way load an ONNX-exported model and get PyTorch or ONNX Runtime to reconstruct the backward graph?
Alternatively, can I get ONNX to export the backward graph of a PyTorch nn.Module model, so that I can run it with ONNX Runtime?
Background: I want to work with physics-based models, where I can easily write a forward "energy" function and use its gradient (the "forces") in my simulations. At present we need either numerical differentiation or analytic expressions derived beforehand.
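To make the pattern concrete, here is a minimal PyTorch sketch of what I mean; the pairwise harmonic energy is just a toy placeholder for a real physics model:

import torch

def energy(pos):
    # toy pairwise harmonic energy; stands in for a real physics model
    diff = pos.unsqueeze(0) - pos.unsqueeze(1)
    return 0.25 * (diff ** 2).sum()

pos = torch.randn(4, 3, requires_grad=True)
e = energy(pos)
# forces are the negative gradient of the energy w.r.t. positions
forces = -torch.autograd.grad(e, pos)[0]

It is exactly this autograd step that is lost on export, which is why I am asking about reconstructing the backward graph.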

Related

How to convert ONNX back to pth format

I have a model in ONNX format, and I want to run it in a fastai learner, possibly something like this:
learn = learn.load('model.onnx')
Another way is to convert back to pth format, but I don't see any proper library for this task. I need your help with either one of these approaches. Thanks.
There is no native solution, and some people are currently working on one: https://github.com/ENOT-AutoDL/onnx2torch
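If that project fits your case, its usage is roughly the following (a sketch assuming the onnx2torch package; check its README for the current API, and note "model.onnx" is just an illustrative path):

import onnx
from onnx2torch import convert

# convert accepts a path to an .onnx file or an onnx.ModelProto
torch_model = convert("model.onnx")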
Also, to be clear: a .pth checkpoint usually only contains the parameters, such as weights and biases, not the operations like conv2d, batchnorm2d, or pooling. An ONNX model, on the other hand, contains both the operations and the parameters, which is why you can run inference on it. If all you need from an ONNX file are the weights & biases, in order to load a state into a torch model you have already implemented, it might be quite easy; if you want to automatically build a torch model from an ONNX file, that's the hard part.
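For the easy case (only pulling the weights & biases out of the ONNX file), a minimal sketch could look like this; note that the initializer names come from the exporter and usually will not match your torch model's state_dict keys, so some manual remapping is needed:

import onnx
import torch
from onnx import numpy_helper

onnx_model = onnx.load("model.onnx")
# the graph initializers hold the trained parameters
params = {
    init.name: torch.from_numpy(numpy_helper.to_array(init).copy())
    for init in onnx_model.graph.initializer
}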

How to add post-processing to a tflite model?

I know how to create a frozen graph and use export_tflite_ssd_graph - however, is it possible to add post-processing to a tflite graph?
It is not possible to freeze a tflite graph since r1.9, so the usual function won't work. Is there any workaround to add post-processing to a tflite graph when you only have the .tflite file?
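One possible workaround (a sketch, not a tflite-graph-level solution): keep the .tflite graph as-is, run it with tf.lite.Interpreter, and apply the post-processing in Python around it; the input/output details and the "model.tflite" path here depend on your model:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.zeros(inp["shape"], dtype=inp["dtype"])  # dummy input; replace with real data
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
raw = interpreter.get_tensor(out["index"])
# post-processing (e.g. box decoding, NMS) would be applied to raw here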

How to convert torchscript model in PyTorch to ordinary nn.Module?

I am loading the TorchScript model in the following way:
model = torch.jit.load("model.pt").to(device)
The child modules of this model are identified as RecursiveScriptModule. I would like to fine-tune the loaded weights and cast them to torch.float32; to make this simpler, it would be preferable to convert all of this to an ordinary PyTorch nn.Module.
The official docs (https://pytorch.org/docs/stable/jit.html) explain how to convert an nn.Module to TorchScript, but I have not found any examples of doing this in the opposite direction. Is there a way to do this?
P.S. An example of loading the pretrained model is given here: https://github.com/openai/CLIP/blob/main/notebooks/Interacting_with_CLIP.ipynb
You may try to load it as is and take its state dict, e.g. state_dict = torch.jit.load(src).state_dict().
Then manually convert every key and value: new_v = state_dict[k].cpu().float().
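Putting the two steps together, a minimal sketch (MyModel is a hypothetical hand-written reimplementation of the architecture, which TorchScript cannot generate for you):

import torch

scripted = torch.jit.load("model.pt", map_location="cpu")
state_dict = scripted.state_dict()
# cast every tensor to float32 on the CPU
state_dict = {k: v.cpu().float() for k, v in state_dict.items()}
# plain_model = MyModel()  # hypothetical: the nn.Module must be rewritten by hand
# plain_model.load_state_dict(state_dict)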

What is the difference between Tensorflow.js Layers model and Graph model?

I wanted to know: what are the differences between this and this?
Is it just the way the inputs vary?
The main differences between a LayersModel and a GraphModel are:
A LayersModel can only be imported from tf.keras or Keras HDF5 format model types. A GraphModel can be imported from either of the aforementioned model types, or from TensorFlow SavedModels.
A LayersModel supports further training in JavaScript (through its fit() method). A GraphModel supports only inference.
A GraphModel usually gives you higher inference speed (10-20%) than a LayersModel, due to its graph optimization, which is possible thanks to the inference-only support.
Hope this helps.
Both are doing the same task, i.e. converting a NN model to the tfjs format. It's just that in the first link a model stored in HDF5 format (the typical format in which Keras models are saved) is used, while in the other it's a TF SavedModel.
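For reference, a Keras model can also be saved to the LayersModel format directly from Python (a sketch assuming the tensorflowjs package is installed; the model here is a toy):

import tensorflow as tf
import tensorflowjs as tfjs

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
# writes model.json plus weight shards, loadable with tf.loadLayersModel in the browser
tfjs.converters.save_keras_model(model, "tfjs_model")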

Turn trained TensorFlow model into fixed operation

Is there a way to take a trained TensorFlow model and convert all the tf.Variables and their respective weights (either from within a running tf.Session or from a checkpoint) into tf.constants with those values, such that one can run the model on a new input tensor without initializing or restoring weights in a session? In other words, can I condense a trained model into a fixed and immutable TensorFlow operation?
Yes, there is a freeze_graph.py tool just for that purpose.
It is described (a bit) in the Tool Developer's Guide, and you can find a usage example in the Preparing models for mobile deployment section.
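If you prefer to do this in code rather than through the script, the TF1-style graph_util API performs the same variable-to-constant conversion (a sketch; the graph and the output node name "y" are illustrative):

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# toy graph: one variable feeding one named output
x = tf.placeholder(tf.float32, [None, 4], name="x")
w = tf.Variable(tf.ones([4, 1]), name="w")
y = tf.matmul(x, w, name="y")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # replaces every tf.Variable in the graph with a tf.constant holding its current value
    frozen = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ["y"])

tf.train.write_graph(frozen, "export", "frozen.pb", as_text=False)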
