How to convert model with .json extension to onnx - python

I have a trained model saved as "model.json" and I want to use it in my Python code. Can you tell me how to convert it, or how to load the "model.json" file in Python so I can use it?

You must of course also save the model weights in HDF5 (.h5) format.
If you want to load the model from JSON, do this:
from keras.models import model_from_json

# load the architecture from JSON
with open('model.json', 'r') as json_file:
    loaded_model_json = json_file.read()
loaded_model = model_from_json(loaded_model_json)
# load weights into the new model
loaded_model.load_weights("model.h5")
From your code I gather you load a dict, so try this:
from keras.models import model_from_config
model = model_from_config(model_dict)
Here model_dict is the parsed JSON.
For the placeholder problem, try:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
Once the model is loaded this way, it can be exported to ONNX with a tool such as tf2onnx (e.g. tf2onnx.convert.from_keras).
Let me know if you've solved it.
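For context, here is a stdlib-only sketch of what such a model.json contains. The dict below is a hypothetical stand-in for a file written by model.to_json(); a real file's "config" holds the full layer list, but the top-level shape is the same:

```python
import json
import os
import tempfile

# Hypothetical stand-in for a file written by model.to_json().
arch = {"class_name": "Sequential", "config": {"name": "sequential", "layers": []}}
path = os.path.join(tempfile.mkdtemp(), "model.json")
with open(path, "w") as f:
    json.dump(arch, f)

# The JSON describes only the architecture -- no weights -- which is why a
# separate model.h5 file must be saved and loaded alongside it.
with open(path) as f:
    loaded = json.load(f)

print(sorted(loaded.keys()))
```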

Related

can't pickle _thread.RLock objects - Pyspark model

I created a RandomForest model with PySpark.
I need to save this model as a file with a .pkl extension; for this I used the pickle library, but when I try to use it I get the following error:
TypeError                                 Traceback (most recent call last)
<ipython-input-76-bf32d5617a63> in <module>()
      2
      3 filename = "drive/My Drive/Progetto BigData/APPOGGIO/Modelli/SVM/svm_sentiment_analysis"
----> 4 pickle.dump(model, open(filename, "wb"))

TypeError: can't pickle _thread.RLock objects
Is it possible to use pickle with a PySpark model like RandomForest, or can it only be used with a scikit-learn model?
This is my code:
from pyspark.ml.classification import RandomForestClassifier
rf = RandomForestClassifier(labelCol = "label", featuresCol = "word2vect", weightCol = "classWeigth", seed = 0, maxDepth=10, numTrees=100, impurity="gini")
model = rf.fit(train_df)
# Save our model into a file with the help of pickle library
filename = "drive/My Drive/Progetto BigData/APPOGGIO/Modelli/SVM/svm_sentiment_analysis"
pickle.dump(model, open(filename, "wb"))
My environment is Google Colab
I need to turn the model into a pickle file to create a web app. To save a model I would normally use the .save(path) method, but in this case .save is not what I need.
Is it possible that a PySpark model cannot be transformed into a file?
Thanks in advance!!
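The error itself is not Spark-specific: pickle refuses any object that holds a thread lock, and Spark models hold such non-serializable state internally through their JVM handles. A stdlib-only sketch of the same failure, where HoldsLock is a hypothetical stand-in for the Spark model:

```python
import pickle
import threading

# Hypothetical stand-in for a Spark model: any object holding a lock
# (or a live SparkContext/JVM reference) cannot be pickled.
class HoldsLock:
    def __init__(self):
        self.lock = threading.RLock()

try:
    pickle.dumps(HoldsLock())
    failed = False
except TypeError:
    failed = True  # the same family of error as "can't pickle _thread.RLock objects"

print(failed)
```

The usual way around it is the model's own persistence API rather than pickle, e.g. model.write().overwrite().save(path) to save and RandomForestClassificationModel.load(path) to reload.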

How to load a saved model defined in a function in PyTorch in Colab?

I am trying to save my model data_gen with torch.save(), and after running the train_dmc function I can find the checkpoint file in the directory. Here is a sample of my training function (unnecessary parts are deleted):
def train_dmc(loader, loss):
    data_gen = DataGenerator().to(device)
    data_gen_optimizer = optim.Rprop(para_list, lr=lrate)
    savepath = '/content/drive/MyDrive/' + loss + 'checkpoint.t7'
    state = {
        'epoch': epoch,
        'model_state_dict': data_gen.state_dict(),
        'optimizer_state_dict': data_gen_optimizer.state_dict(),
        'data loss': data_loss,
        'latent_loss': latent_loss
    }
    torch.save(state, savepath)
My question is: how do I load the checkpoint file to continue training if Google Colab disconnects?
Should I load data_gen or train_dmc()? It is my first time using this, and I am really confused because data_gen is defined inside another function. I hope someone can explain.
data_gen.load_state_dict(torch.load(PATH))
data_gen.eval()
#or
train_dmc.load_state_dict(torch.load(PATH))
train_dmc.eval()
Since the state variable is a dictionary, try saving it as:
import pickle
with open('/content/checkpoint.t7', 'wb') as handle:
    pickle.dump(state, handle, protocol=pickle.HIGHEST_PROTOCOL)
Instantiate your model class as data_gen = DataGenerator().to(device), and load the checkpoint file as:
file = open('/content/checkpoint.t7', 'rb')
loaded_state = pickle.load(file)
Then load the weights with data_gen.load_state_dict(loaded_state['model_state_dict']). This loads the saved state_dict into the model class!
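To make the resume step concrete, here is a stdlib-only sketch of the round trip. The dict mirrors the state saved in train_dmc, but the values are hypothetical stand-ins for the real state_dicts, and in real code torch.save/torch.load would replace pickle:

```python
import os
import pickle
import tempfile

# Mirrors the `state` dict saved in train_dmc; values are stand-ins.
state = {
    "epoch": 5,
    "model_state_dict": {"weight": 0.1},
    "optimizer_state_dict": {"lr": 1e-3},
}

savepath = os.path.join(tempfile.mkdtemp(), "checkpoint.t7")
with open(savepath, "wb") as handle:
    pickle.dump(state, handle, protocol=pickle.HIGHEST_PROTOCOL)

with open(savepath, "rb") as handle:
    loaded_state = pickle.load(handle)

# In real code: data_gen.load_state_dict(loaded_state["model_state_dict"])
start_epoch = loaded_state["epoch"] + 1  # resume from the next epoch
print(start_epoch)
```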

Error loading .h5 model from Google Drive

I am confused. The file exists in the directory; I have checked it with two methods from Python. Why can't I load the model? Is there any other method to load the .h5 file? I think this screenshot will explain it all.
Code:
from keras.models import Sequential, load_model
import os.path

model_path = "./drive/MyDrive/1117002_Code Skripsi/Epoch-Train/300-0.0001-train-file.h5"
print(os.path.exists(model_path))
if os.path.isfile(model_path):
    print("File exists")
else:
    print("File does not exist")
model = load_model(model_path)
File in the Drive folder:
In response to Experience_In_AI's answer, I made the file look like this:
and this is the structure:
The problem reproduced and solved:
import tensorflow as tf
from tensorflow import keras
from keras.models import load_model

try:
    #model_path = "drive/MyDrive/1117002_Code_Skripsi/Epoch-Train/300-0.001-train-file.h5"
    model_path = r".\drive\MyDrive\1117002_Code_Skripsi\Epoch-Train\300-0.001-train-file.h5"
    model = load_model(model_path)
except:
    model_path = r".\drive\MyDrive\1117002_Code_Skripsi\Epoch-Train\experience_in_ai.h5"
    model = load_model(model_path)
    print("...it seems to be better to use more simple naming with the .h5 file!")
model.summary()
Note that the .h5 files in the simulated location are exact copies, differing only in name.
I think this will work:
model = keras.models.load_model('path/to/location')
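The "simpler naming" advice can be exercised without Keras at all. The sketch below (all file and folder names are hypothetical stand-ins) renames an awkwardly named .h5 file and confirms the new path resolves:

```python
import tempfile
from pathlib import Path

# Stand-ins for the Drive folder and the awkwardly named weights file.
base = Path(tempfile.mkdtemp()) / "Epoch-Train"
base.mkdir()
awkward = base / "1117002_Code Skripsi.h5"  # space in the file name
awkward.write_bytes(b"")  # empty placeholder, not a real model

simple = awkward.with_name("train_file.h5")  # rename to a simpler name
awkward.rename(simple)

print(simple.exists() and not awkward.exists())
```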

OSError: SavedModel file does not exist tflite

I am trying to convert my saved model to a tflite model. The saved model is on my desktop, but when I try to run the code below I get this error:
OSError: SavedModel file does not exist at: C:/Users/Omar/Desktop/model00000014.h5/{saved_model.pbtxt|saved_model.pb}
Not sure what the problem is.
import tensorflow as tf

saved_model_dir = r"C:/Users/Omar/Desktop/model00000014.h5"
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
If you're trying to convert a .h5 Keras model to a TFLite model, make sure you use the TFLiteConverter.from_keras_model() method, as described in the docs:
model = tf.keras.models.load_model("C:/Users/Omar/Desktop/model00000014.h5")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
open('model.tflite', 'wb').write(converter.convert())
In the case of a SavedModel, use TFLiteConverter.from_saved_model() and pass the path of the SavedModel directory:
saved_model_dir = 'path/to/savedModelDir'
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
You're providing a Keras .h5 model to the TFLiteConverter.from_saved_model() method, which is what causes the error.
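The two cases can be told apart on disk before calling TensorFlow at all: a Keras .h5 model is a single file, while a SavedModel is a directory containing saved_model.pb. A small helper sketching that rule (pick_converter is a hypothetical name, demonstrated on temporary stand-in files):

```python
import os
import tempfile

def pick_converter(path):
    """Return the TFLiteConverter entry point matching the artifact at path."""
    if os.path.isfile(path) and path.endswith(".h5"):
        return "from_keras_model"  # load with Keras first, then convert
    if os.path.isdir(path) and os.path.isfile(os.path.join(path, "saved_model.pb")):
        return "from_saved_model"
    raise ValueError("neither a Keras .h5 file nor a SavedModel directory")

# Demo on temporary stand-ins:
root = tempfile.mkdtemp()
h5 = os.path.join(root, "model00000014.h5")
open(h5, "wb").close()  # empty placeholder for a Keras .h5 file
saved = os.path.join(root, "saved_model_dir")
os.makedirs(saved)
open(os.path.join(saved, "saved_model.pb"), "wb").close()  # placeholder pb

print(pick_converter(h5), pick_converter(saved))
```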

Using pre-trained Inception_v4 model

https://github.com/tensorflow/models/tree/master/slim
This gives download links for checkpoints of the Inception v1-4 pretrained models. However, the tar.gz contains only the .ckpt file.
In the tutorial on using Inception v3 2012 [This link], the tar.gz contains the .pb and .pbtxt files which are used for classification.
How can I use just the .ckpt file to generate the respective .pb and .pbtxt files?
OR
Is there any alternate way of using the .ckpt file for classification?
I am also trying the inception_v4 model. During my search I found that the checkpoint files contain the weights, so in order to use them, the inception_v4 graph needs to be loaded from inception_v4.py and the session restored from the checkpoint file. The following code will read the checkpoint file and create the protobuf file.
import tensorflow as tf
import cv2
import inception_v4 as net  # graph definition from inception_v4.py
slim = tf.contrib.slim

# checkpoint file
checkpoint_file = '/home/.../inception_v4.ckpt'

# build the graph and restore the session from the checkpoint
sess = tf.Session()
arg_scope = net.inception_v4_arg_scope()
input_tensor = tf.placeholder(tf.float32, [None, 299, 299, 3])
with slim.arg_scope(arg_scope):
    logits, end_points = net.inception_v4(inputs=input_tensor)
saver = tf.train.Saver()
saver.restore(sess, checkpoint_file)

# write the graph to a protobuf file
with tf.gfile.FastGFile('./mynet.pb', 'wb') as f:
    f.write(sess.graph_def.SerializeToString())

# read the graph back
with tf.gfile.FastGFile('./mynet.pb', 'rb') as fp:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(fp.read())

with tf.Session(graph=tf.import_graph_def(graph_def, name='')) as sess:
    # sess.graph.get_operations() lists the tensor names if needed
    cell_patch = cv2.imread('./car.jpg')
    softmax_tensor = sess.graph.get_tensor_by_name('InceptionV4/Logits/Predictions:0')
    predictions = sess.run(softmax_tensor, {'Placeholder:0': cell_patch})
But the above code won't give you the predictions, because I am facing a problem with feeding the input to the graph. Still, it can be a good starting point for working with checkpoint files.
The checkpoint was downloaded from the checkpoints link above.
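The input problem above is likely a shape mismatch: cv2.imread returns a single HxWx3 array, while the placeholder expects a batch of shape [None, 299, 299, 3]. A sketch of the fix, where the arrays are hypothetical stand-ins and real code would use cv2.resize on the decoded image:

```python
import numpy as np

# Stand-in for cv2.imread('./car.jpg'): an HxWx3 uint8 image.
img = np.zeros((480, 640, 3), dtype=np.uint8)

# Resize to 299x299 (cv2.resize(img, (299, 299)) in real code; a zero
# array stands in here) and add a leading batch dimension.
resized = np.zeros((299, 299, 3), dtype=np.float32)
batch = np.expand_dims(resized, axis=0)

print(batch.shape)
```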
