Deep learning: save and load a universal machine learning model across different libraries - python

My question can be divided into two parts.
Is there a machine learning model file format that can be used across different libraries? For example, could I save a model with PyTorch and then load it with TensorFlow?
If not, is there a library that can convert between formats, so that a PyTorch model can be used directly in Keras?
I am asking because I recently needed to adapt some of my previously trained TensorFlow models to PyTorch.
An update for this question:
Facebook and Microsoft are going to launch a model standard called ONNX, which is used for transferring models between different frameworks, for example between PyTorch and Caffe2. Link below:
https://research.fb.com/facebook-and-microsoft-introduce-new-open-ecosystem-for-interchangeable-ai-frameworks/
A further update for this question:
TensorFlow itself uses the Protocol Buffer format to store model files, which can serve as a basis for transferring models between frameworks. Link below:
https://www.tensorflow.org/extend/tool_developers/
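For illustration, a minimal sketch of the ONNX route mentioned in the update above, using torch.onnx.export; the resnet18 network here is just a stand-in for your own model:

import torch
import torchvision

# Load a stand-in pretrained network (replace with your own model).
model = torchvision.models.resnet18(pretrained=True).eval()
# ONNX export works by tracing the model on an example input.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, 'resnet18.onnx')

The resulting .onnx file can then be loaded by the ONNX importers of other frameworks.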

Very interesting question. A neural network is a mathematical abstraction consisting of a network of layers (convolution, recurrent, ...), operations (dot product, non-linearity, ...) and their respective parameters (weights, biases).
AFAIK, there isn't a universal model file format. Nonetheless, different libraries allow users to save their models in a binary format.
There is no dedicated conversion library, but there are efforts on GitHub that address this question.

Predictive Model Markup Language (PMML) is an XML-based representation language for many machine learning models. It's an open standard used by many companies for serializing and deserializing models. I've used libraries that support PMML for machine learning models like SVMs and decision trees, but I have not used it for deep learning models. There are open source projects that work with TensorFlow and Keras, but these libraries seem to serialize and deserialize for use with the same library. You might want to check whether PMML is making progress on serializing and deserializing between libraries.

If not, is there a library that can convert between formats, so that a PyTorch model can be used directly in Keras?
You can try the pytorch2keras converter.
At the moment, it supports base layers such as Conv2d, Linear, activations, and element-wise operations; I was able to convert ResNet50 with an error of 1e-6. A usage sketch follows.
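A minimal sketch based on the converter's README; the ResNet50 network, the input shape, and the pytorch_to_keras call shown here are assumptions that may differ across versions of the library:

import numpy as np
import torch
from pytorch2keras.converter import pytorch_to_keras
from torchvision.models import resnet50

model = resnet50(pretrained=True).eval()
# The converter traces the model on a dummy input of the given shape.
input_np = np.random.uniform(0, 1, (1, 3, 224, 224)).astype(np.float32)
input_var = torch.FloatTensor(input_np)
k_model = pytorch_to_keras(model, input_var, [(3, 224, 224,)], verbose=True)
k_model.summary()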

Related

What does "the supported operations" mean in TensorFlow Lite for Microcontrollers?

I want to create an image classification model for facial recognition with an OpenMV Cam H7 and TensorFlow. The TensorFlow documentation explains that "TensorFlow Lite for Microcontrollers currently supports a limited subset of TensorFlow operations, which impacts the model architectures that it is possible to run" and that "The supported operations can be seen in the file all_ops_resolver.cc".
So what are the supported operations, and how do I know which supported operations I'm using in my model?
If you open the link to the all_ops_resolver.cc file you just shared, you will see the list of supported operations. The list includes the basic building blocks needed to design a facial recognition model, such as the Conv2D/DepthwiseConv2D and FullyConnected layers and activations such as Relu and Softmax.
To see the layers used in a model, you can simply call model.summary() in TensorFlow/Keras, as in the sketch below.
I recommend starting from a simple TensorFlow example of building a face recognition model and trying to build a similar one using only operations supported by TensorFlow Lite for Microcontrollers.
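For example, a minimal sketch (the tiny architecture here is only illustrative) that builds a model from the layers mentioned above and prints them:

import tensorflow as tf

# A tiny face-recognition-style model using the layers mentioned above.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation='relu', input_shape=(96, 96, 1)),
    tf.keras.layers.DepthwiseConv2D(3, activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation='softmax'),
])
# Lists every layer in the model, so each one can be checked against
# the supported operations in all_ops_resolver.cc.
model.summary()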

TensorFlow Federated: Keras model with custom learning algorithm

This tutorial describes how to build a TFF computation from a Keras model.
This tutorial describes how to build a custom TFF computation from scratch, possibly with a custom federated learning algorithm.
What I need is a combination of these: I want to build a custom federated learning algorithm and use an existing Keras model. How can this be done?
The second tutorial requires MODEL_TYPE, which is based on MODEL_SPEC, but I don't know how to get it. I can see some variables in model.trainable_variables (where model = tff.learning.from_keras_model(keras_model, ...)), but I doubt that's what I need.
Of course, I can implement the model by hand (as in the second tutorial), but I want to avoid it.
I think you have the correct pointers for writing a custom federated computation, as well as converting a Keras model to a tff.learning.Model. So we'll focus on pulling a TFF type signature from an existing tff.learning.Model.
Once you have your hands on such a model, you should be able to use tff.learning.framework.weights_type_from_model to pull out the appropriate TFF type to use for your custom algorithm.
There is an interesting caveat here: how precisely you use a tff.learning.Model in your custom algorithm is pretty much up to you, and this could affect your desired model weights type. This is unlikely to be an issue (most likely you will simply be assigning values from incoming tensors to the model variables), so we can set this caveat aside. A sketch of the type-extraction step follows.
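A minimal sketch of that step, assuming the tff.learning.framework.weights_type_from_model path named above (the symbol's exact module location may differ between TFF releases, and the toy model and input_spec are placeholders):

import tensorflow as tf
import tensorflow_federated as tff

# Placeholder Keras model; swap in your own architecture.
def create_keras_model():
    return tf.keras.Sequential([
        tf.keras.layers.InputLayer(input_shape=(784,)),
        tf.keras.layers.Dense(10),
        tf.keras.layers.Softmax(),
    ])

def model_fn():
    # input_spec is an assumption: (features, labels) for flattened 28x28 images.
    return tff.learning.from_keras_model(
        create_keras_model(),
        input_spec=(
            tf.TensorSpec(shape=[None, 784], dtype=tf.float32),
            tf.TensorSpec(shape=[None, 1], dtype=tf.int32),
        ),
        loss=tf.keras.losses.SparseCategoricalCrossentropy())

# TFF type of the model weights, usable when declaring the type
# signatures of your custom federated computations.
model_weights_type = tff.learning.framework.weights_type_from_model(model_fn)
print(model_weights_type)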
Finally, a few pointers of end-to-end custom algorithm implementations in TFF:
One of the simplest complete examples TFF has is simple_fedavg, which is totally self-contained and contains instructions for running.
The code for a paper on Adaptive Federated Optimization contains a handwritten implementation of learning rate decay on the clients in TFF.
A similar implementation of adaptive learning rate decay (think Keras' functions to decay learning rate on plateaus) is right next door to the code for AFO.

What are all the formats to save machine learning models in scikit-learn, Keras, TensorFlow, and MXNet?

There are many ways to save a model and its weights, and it is confusing that there is no single source that describes and compares their properties.
Some of the formats I know are:
1. YAML File - Structure only
2. JSON File - Structure only
3. H5 Complete Model - Keras
4. H5 Weights only - Keras
5. ProtoBuf - Deployment using TensorFlow serving
6. Pickle - Scikit-learn
7. Joblib - Scikit-learn - replacement for Pickle, for objects containing large data.
Discussion:
Unlike scikit-learn, Keras does not recommend saving models using pickle. Instead, models are saved as an HDF5 file. The HDF5 file contains everything you need not only to load the model and make predictions (i.e., architecture and trained parameters) but also to restart training (i.e., loss and optimizer settings and the current state).
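For illustration, a minimal sketch of the Keras HDF5 options from the list above (the toy model is a placeholder):

import tensorflow as tf

# Placeholder model; any compiled Keras model works the same way.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')

model.save('model.h5')            # 3. complete model: architecture, weights, optimizer state
model.save_weights('weights.h5')  # 4. weights only

# Reloading the complete model restores everything needed to resume training.
restored = tf.keras.models.load_model('model.h5')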
What other formats are there to save models in scikit-learn, Keras, TensorFlow, and MXNet? Also, what information am I missing about each of the formats discussed above?
There are also formats like ONNX, which supports most frameworks and removes the confusion of using different formats for different frameworks.
There is also the TFJS format, which enables you to use a model in web or Node.js environments. Additionally, you will need the TF Lite format to run inference on mobile and edge devices. Most recently, TF Lite for Microcontrollers exports the model as a byte array in a C header file.
Your question about formats for saving a model has multiple possible answers, depending on why you want to save your model:
Save your model to resume training it later
Save your model to load it for inference later
These scenarios give you a couple of options:
You could save your model using the library-specific saving functions; if you want to resume training, make sure that you have saved all the information you need to really be able to resume training. Formats here will vary by library, and indeed are not aimed at being formats that you would inspect or read in any way - they are just files. If you are looking for a library that wraps all of these save functions behind a common API, you should check out the modelstore Python library.
You may also want to use a common format like ONNX; there are converters from Keras to ONNX and from scikit-learn to ONNX (a sketch follows), but it is uncommon to use this format to resume training later. The benefit is that all models are saved in a common format, which may streamline the process of loading them later.
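A minimal sketch of the scikit-learn-to-ONNX route using the skl2onnx package; the logistic regression model and the 4-feature shape are placeholders:

import numpy as np
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# Placeholder data and model: 4 features, binary target.
X_train = np.random.rand(100, 4).astype(np.float32)
y_train = (X_train.sum(axis=1) > 2).astype(int)
model = LogisticRegression().fit(X_train, y_train)

# Convert to ONNX, declaring the input name and shape.
onx = convert_sklearn(model, initial_types=[('input', FloatTensorType([None, 4]))])
with open('model.onnx', 'wb') as f:
    f.write(onx.SerializeToString())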

Cannot save scikit-learn model using joblib?

I have an ensemble model that combines both TensorFlow and scikit-learn, and I would like to save this ensemble model as one box that takes data in and generates output. My code is as below:
from sklearn.ensemble import BaggingRegressor
from joblib import dump, load

def model_base_LSTM(***):
    ***

model = model_base_LSTM(***)
ensem_model = BaggingRegressor(base_estimator=model, n_estimators=15)
ensem_model.fit(x_train, y_train)
bag_mod_pred = ensem_model.predict(x_test_bag)
dump(ensem_model, 'LSTM_Ensemble.joblib')

This fails with:
TypeError: can't pickle _thread._local objects
So, how can I solve this problem?
You can save your TensorFlow (and even PyTorch) models with Scikit-Learn, but only if you use Neuraxle and its saving mechanics.
Neuraxle is an extension of Scikit-Learn to make it more compatible with all deep learning libraries.
The trick is performed by using Neuraxle-TensorFlow or Neuraxle-PyTorch.
Why so?
Using Neuraxle-TensorFlow or Neuraxle-PyTorch provides you with a saver that allows your model to be serialized correctly. Correct serialization ensures compatibility between scikit-learn and your deep learning framework when it comes time to save or parallelize things. You can read how Neuraxle solves this with savers here.
Code Examples
Here is a full project example from A to Z where TensorFlow is used with Neuraxle as if it were used with Scikit-Learn.
Here is another practical example where TensorFlow is used within a scikit-learn-like pipeline.

Implementing a basic CNN using tensorflow in python

I have a 2-class classification problem at hand. I have extracted a set of 3 features for each training example, and I am planning to use a very simple CNN to learn the weights. My model looks like this:
I am planning to use TensorFlow to implement this CNN in Python. The official tutorial https://www.tensorflow.org/tutorials/deep_cnn/ seems somewhat abstract. Can I get basic code to train this?
You seem to be missing the point of CNNs, which require signals with spatial relations (such as raw images, audio, etc.). Convolving a signal with three features makes pretty much no sense (nearly the only option would be a 2x1 filter convolving along the single axis, leading to what is effectively a regular MLP). What you are looking for is a basic classifier, and in general, neural nets are probably not a good choice here (they are not good models for small, low-dimensional problems); you should be fine with models like a kernelized SVM and the other classifiers available in scikit-learn (see the sketch below). For basic TF code, look at its introductory tutorial, since, as said before, this is not a problem for a CNN. Furthermore, TF is not a simple library that trains a model in a few lines of code; if you are looking for that kind of thing, you should rather take a look at Keras, tf-slim, or other libraries built on top of TF.
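For illustration, a minimal scikit-learn sketch of the suggested approach; the random data is a placeholder for your 3-feature training set:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder data: 3 features per example, 2 classes.
X = np.random.rand(200, 3)
y = np.random.randint(0, 2, 200)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

# A kernelized SVM handles small, low-dimensional problems well.
clf = SVC(kernel='rbf').fit(X_train, y_train)
print('test accuracy:', clf.score(X_test, y_test))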
