How to convert ONNX back to pth format - python

I have a model in ONNX format, and I want to run it in a fastai learner, possibly something like this:
learn = learn.load('model.onnx')
Another way would be to convert it back to .pth format, but I don't see any suitable library for this task. I need your help with either of these approaches. Thanks.

There is no native solution, though some people are currently working on it: https://github.com/ENOT-AutoDL/onnx2torch
Also, to be clear, a .pth checkpoint usually only contains the parameters (weights, biases, ...), not the operations like conv2d, batchnorm2d or pooling. An ONNX model, on the other hand, contains both the operations and the parameters, which is why you can run inference with it. If all you need from the ONNX file are the weights and biases, in order to load a state into a torch model you have already implemented, it might be quite easy; if you want to automatically build a torch model from an ONNX graph, that's the hard part.
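For the "easy" case, a minimal sketch of pulling the parameters out of the ONNX file and loading them into an already-implemented torch model could look like this. MyModel is a placeholder for your own module, and the sketch optimistically assumes the ONNX initializer names line up with its state_dict keys; in practice you usually need a hand-written name mapping:
import onnx
import torch
from onnx import numpy_helper

# extract the raw parameters (initializers) from the ONNX graph as NumPy arrays
onnx_model = onnx.load('model.onnx')
onnx_weights = {init.name: numpy_helper.to_array(init) for init in onnx_model.graph.initializer}

# MyModel is hypothetical: the torch model you have already implemented
model = MyModel()
state_dict = {name: torch.from_numpy(arr.copy()) for name, arr in onnx_weights.items()}
model.load_state_dict(state_dict, strict=False)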

Related

Is it possible to load only weights to TF-TRT model?

I have two models with the exact same architecture, but different weights as the same network is used for two different problems. We're using TF-TRT to optimize the model in order to use it on edge devices.
We'd like to be able to switch from one model to the other as fast as possible. As of now, we load the next model using tf.saved_model.load(), however, this reloads the entire model including the architecture. In order to speed up the process, we'd like to simply load the weights & switch them in the model architecture.
From what I've seen, it is possible in Keras by loading a .w1 file, but we don't have such a file after converting to TF-TRT.
I've found out that TRT has a Refitter object, but I don't think we can use it in this case.
I'd like to know if it is possible to switch the weights of a TF-TRT model; perhaps there is something I'm missing.
Thank you for your help.

Differentiate ONNX objects

I would like to make an ONNX model differentiable. As I understand it, exporting to ONNX does not export the autograd graph; is there any way to reconstruct it after loading?
I am aware of torch-ort, but to me it looks like it only works with nn.Module objects, i.e. original Python PyTorch models (see the examples here, here and here).
Can I in any way load an ONNX exported model and get pytorch or onnx-runtime to reconstruct the backward graph?
Alternatively, can I get onnx to export backward graph of a PyTorch nn.Module model? So that I can run it with onnx runtime?
Background: I want to work with physics-based models, where I could easily write a forward "energy" function and use its gradient ("forces") in my simulations. At present we need either numerical differentiation, or analytic expressions derived beforehand.
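As an illustration of that background, here is a minimal PyTorch sketch of getting "forces" as the negative gradient of a toy "energy" function via autograd (the energy itself is just a placeholder):
import torch

# toy harmonic "energy" between consecutive particles -- placeholder physics only
def energy(x):
    return ((x[1:] - x[:-1]) ** 2).sum()

positions = torch.randn(10, 3, requires_grad=True)
e = energy(positions)
# forces are the negative gradient of the energy with respect to the positions
forces = -torch.autograd.grad(e, positions)[0]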

How can I "see" the model/network when loading a model from tfhub?

I'm new to this topic, so forgive me my lack of knowledge. There is a very good model called Inception ResNet v2 that basically works like this: the input is an image, and the output is a list of predictions with their positions and bounding rectangles. I find this very useful, and I thought of using the already trained model to recognize things that it currently can't (for example, whether a human is wearing a mask or not). Yes, I wanted to add a new recognition class to the model.
import tensorflow as tf
import tensorflow_hub as hub
mod = hub.load("https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1")
mod is an object of type tensorflow.python.training.tracking.tracking.AutoTrackable. Reading the documentation (which was only available in the source code) was a bit hard without context, so I tried to inspect some of its properties to see if I could figure it out by myself.
And well, I didn't. How can I see the network, the layers, the weights, the fit methods? Is it all abstracted away? Can I convert it to Keras? I want to experiment with it, see if I can modify it, and see if I can export the model to another representation, for example PyTorch.
I wanted to do this because I thought it would be better to modify an already working model instead of creating one from scratch, and also because I'm not good at training models myself.
I've run into this issue too. The TensorFlow Hub guide says:
This error frequently arises when loading models in TF1 Hub format with the hub.load() API in TF2. Adding the correct signature should fix this problem.
mod = hub.load(handle).signatures['default']
As an example, you can see this notebook.
You can call dir() on the loaded model asset to see what's defined on it:
m = hub.load(handle)
dir(m)
As mentioned in the other answer, you can also look at the signatures with print(m.signatures)
Hub models are SavedModel assets and do not have a keras .fit method on them. If you want to train the model from scratch, you'll need to go to the source code.
Some models have more extensive exported interfaces including access to individual layers, but this model does not.
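Putting the pieces together, a rough sketch of calling this detector through its 'default' signature might look like the following; the input shape and the dummy data are assumptions, and the model page documents the exact expected format (a batched float image in [0, 1]):
import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load("https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1").signatures['default']

# dummy image just to exercise the signature; shape and value range are assumptions
image = tf.random.uniform([1, 480, 640, 3], dtype=tf.float32)
outputs = detector(image)

print(list(outputs.keys()))  # detection boxes, scores, class entities, etc.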

Most space/memory efficient way to save Tensorflow model for prediction only?

I have a huge TensorFlow model (the checkpoint file is 4-5 GB). I was wondering if there's a different way to save TensorFlow models, besides the checkpoint way, that is more space/memory efficient.
I know that a checkpoint file also saves all the optimizer state (gradients, momenta, etc.), so maybe those can be cut out too.
My model is very simple, just two matrices of embeddings; perhaps I could just save those matrices to .npy directly?
What you want to do with the checkpoint is to freeze it. Check out this page from TensorFlow's official documentation.
The freezing process strips out all the extraneous information in the checkpoint that isn't used for forward inference. TensorFlow provides an easy-to-use script for it called freeze_graph.py.
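A rough sketch of driving it from Python rather than from the command line is below; all file names and the output node name are placeholders, and the exact arguments depend on your TensorFlow version:
from tensorflow.python.tools import freeze_graph

freeze_graph.freeze_graph(
    input_graph='graph.pbtxt',            # GraphDef exported alongside the checkpoint
    input_saver='',
    input_binary=False,                   # graph.pbtxt is a text proto
    input_checkpoint='model.ckpt',
    output_node_names='embedding_lookup', # hypothetical output node of your model
    restore_op_name='save/restore_all',
    filename_tensor_name='save/Const:0',
    output_graph='frozen_model.pb',
    clear_devices=True,
    initializer_nodes='')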

Run inference using Onnx model in python?

I am trying to check whether my .onnx model is correct, and I need to run inference to verify its output.
I know we can run validation on .mlmodel using coremltools in Python - basically load the model and input and get the prediction. I am trying to do a similar thing for the .onnx model.
I found the MXNet framework but I can't seem to understand how to import the model - I just have the .onnx file and MXNet requires some extra input besides the onnx model.
Is there any other simple way to do this in Python? I am guessing this is a common problem but can't seem to find any relevant libraries/frameworks to do this as easily as coremltools for .mlmodel.
I do not wish to convert .onnx to another type of model (like say PyTorch) as I want to check the .onnx model as is, not worrying if the conversion was correct. Just need a way to load the model and input, run inference and print the output.
This is my first time encountering these formats, so any help or insight would be appreciated.
Thanks!
I figured out a way to do this using Caffe2 - just posting in case someone in the future tries to do the same thing.
The main code snippet is:
import onnx
import caffe2.python.onnx.backend
from caffe2.python import core, workspace
import numpy as np

# make an input NumPy array of the correct dimensions and type as required by the model
# (the shape below is just a placeholder; adjust it to your model's input)
inputArray = np.random.randn(1, 3, 224, 224)

modelFile = onnx.load('model.onnx')
output = caffe2.python.onnx.backend.run_model(modelFile, inputArray.astype(np.float32))
Also, it is important to note that the input to run_model can only be a numpy array or a string. The output will be an object of the Backend.Outputs type; I was able to extract the output numpy array from it.
I was able to run inference on the CPU, and hence did not need the Caffe2 installation with GPU support (which requires CUDA and cuDNN).
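For what it's worth, onnxruntime is another common way to do this nowadays; a minimal sketch (the dummy input is a placeholder, with dynamic dimensions assumed to be 1):
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession('model.onnx')
input_meta = sess.get_inputs()[0]

# build a dummy input matching the model's declared input shape
shape = [d if isinstance(d, int) else 1 for d in input_meta.shape]
dummy = np.random.randn(*shape).astype(np.float32)

outputs = sess.run(None, {input_meta.name: dummy})
print([o.shape for o in outputs])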
