Convert ONNX model to Keras - Python

I am trying to convert an ONNX model to Keras, but when I call the conversion function I receive the following error message: "TypeError: unhashable type: 'google.protobuf.pyext._message.RepeatedScalarContainer'"
ONNX Model Input: input_1
You can see the ONNX Model here: https://ibb.co/sKnbxWY
import onnx2keras
from onnx2keras import onnx_to_keras
import keras
import onnx

# load the ONNX model and convert it to a Keras model
onnx_model = onnx.load('onnxModel.onnx')
k_model = onnx_to_keras(onnx_model, ['input_1'])

# save the converted model in HDF5 format
keras.models.save_model(k_model, 'kerasModel.h5', overwrite=True, include_optimizer=True)
File "C:/../onnx2Keras.py", line 7, in <module>
k_model = onnx_to_keras(onnx_model, ['input_1'])
File "..\site-packages\onnx2keras\converter.py", line 80, in onnx_to_keras
weights[onnx_extracted_weights_name] = numpy_helper.to_array(onnx_w)
TypeError: unhashable type: 'google.protobuf.pyext._message.RepeatedScalarContainer'

The problem was resolved in a newer version of the onnx2keras library.
You can see the issue on GitHub: https://github.com/nerox8664/onnx2keras/issues/23
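If you are on an older onnx2keras release, a minimal sketch of the fix is simply to upgrade the package and re-run the same conversion as in the question (the upgrade command assumes a pip-based install):

# In a shell, assuming a pip-based install:
#   pip install --upgrade onnx2keras
import onnx
from onnx2keras import onnx_to_keras

# re-run the conversion from the question with the updated library
onnx_model = onnx.load('onnxModel.onnx')
k_model = onnx_to_keras(onnx_model, ['input_1'])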

Related

AttributeError: 'Sequential' object has no attribute 'model'

from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.optimizers import Adam

def build_dqn(lr, n_actions, input_dims, fc1_dims, fc2_dims):
    model = Sequential([
        Dense(fc1_dims, input_shape=(input_dims,)),
        Activation('relu'),
        Dense(fc2_dims),
        Activation('relu'),
        Dense(n_actions)])
    model.compile(optimizer=Adam(lr=lr), loss='mse')
    return model
I am trying to understand Double Deep Q-Learning. There is a pretty good lecture here: https://github.com/philtabor/Youtube-Code-Repository/tree/master/ReinforcementLearning/DeepQLearning
But when I tried to run the code, I got the following errors:
Traceback (most recent call last):
File "/home/panda/PycharmProjects/ddqn/main.py", line 33, in <module>
ddqn_agent.learn()
File "/home/panda/PycharmProjects/ddqn/ddqn_keras.py", line 118, in learn
self.update_network_parameters()
File "/home/panda/PycharmProjects/ddqn/ddqn_keras.py", line 121, in update_network_parameters
self.q_target.model.set_weights(self.q_eval.model.get_weights())
AttributeError: 'Sequential' object has no attribute 'model'
And I have no clue how to fix this. I guess Keras has been updated to no longer allow this?
The referenced lines are, respectively:
line 33:
ddqn_agent.learn()
line 118 (in def learn(self):):
self.update_network_parameters()
line 121 (in def update_network_parameters(self):):
self.q_target.model.set_weights(self.q_eval.model.get_weights())
line 76:
self.q_target = build_dqn(alpha, n_actions, input_dims, 256, 256)
EDIT: I updated the problem based on suggestions in the comment section. The suggestion was that I put tensorflow. in front of keras in the imports. I get the same error as before (as you can see). The imports now look as shown at the top of the code above.
To solve your error, you can go through the steps below:
1. Install the dependencies needed to run the environment:
!pip install https://github.com/pybox2d/pybox2d/archive/refs/tags/2.3.10.tar.gz
!pip install box2d-py
!pip install gym[all]
!pip install gym[box2d]
2. Change the imports as below:
from keras.layers import Dense, Activation
from keras import Sequential
from keras.models import load_model
from tensorflow.keras.optimizers import Adam
3. Install tf-nightly (the nightly TensorFlow build):
!pip install tf-nightly
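Note also that build_dqn returns a plain Sequential model, so there is no .model attribute to go through. A minimal sketch of the weight copy, assuming self.q_eval and self.q_target hold the objects returned by build_dqn:

def update_network_parameters(self):
    # q_eval and q_target are the Sequential models returned by build_dqn,
    # so call get_weights/set_weights on them directly instead of via .model
    self.q_target.set_weights(self.q_eval.get_weights())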

Cannot load my YOLOv3 model into readNetFromDarknet

So I have trained a YOLOv3 model and want to test its accuracy. I am trying to load the model using 'cv2.dnn.readNetFromDarknet'. Every time I try, I receive the error:
Traceback (most recent call last):
File "C:\Users\Philip\PycharmProjects\Scriptie\venv\tester.py", line 9, in <module>
net = cv2.dnn.readNetFromDarknet(modelConfiguration, modelWeights)
cv2.error: OpenCV(4.5.5) D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\darknet\darknet_io.cpp:660: error: (-215:Assertion failed) separator_index < line.size() in function 'cv::dnn::darknet::ReadDarknetFromCfgStream'
The python code that I am using is shown below:
import cv2
import numpy as np
import matplotlib.pyplot as pl
classes = ['TEETH']
modelConfiguration = r"C:\Users\Philip\PycharmProjects\Scriptie\venv\yolov3_custom.cfg"
modelWeights = r"C:\Users\Philip\PycharmProjects\Scriptie\venv\yolov3_custom_2000.weights"
net = cv2.dnn.readNetFromDarknet(modelConfiguration, modelWeights)
I have tried changing the paths to absolute paths, but it didn't work. Can anyone help me with this?
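The assertion is raised by OpenCV's Darknet .cfg parser, so one quick check is whether yolov3_custom.cfg really is a plain-text Darknet config (for example, that the download did not save an HTML page instead). A minimal diagnostic sketch, reusing the modelConfiguration path from the question:

# print the first few lines of the config; a valid Darknet .cfg typically
# starts with a section header such as [net], not HTML or binary content
with open(modelConfiguration, "r", errors="replace") as f:
    for _, line in zip(range(5), f):
        print(repr(line))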

Huggingface error: AttributeError: 'ByteLevelBPETokenizer' object has no attribute 'pad_token_id'

I am trying to tokenize some numerical strings using a WordLevel/BPE tokenizer, create a data collator and eventually use it in a PyTorch DataLoader to train a new model from scratch.
However, I am getting an error
AttributeError: 'ByteLevelBPETokenizer' object has no attribute 'pad_token_id'
when running the following code
from transformers import DataCollatorForLanguageModeling
from tokenizers import ByteLevelBPETokenizer
from tokenizers.pre_tokenizers import Whitespace
from torch.utils.data import DataLoader, TensorDataset
import torch  # needed below for torch.tensor(...)

data = ['4814 4832 4761 4523 4999 4860 4699 5024 4788 <unk>']

# Tokenizer
tokenizer = ByteLevelBPETokenizer()
tokenizer.pre_tokenizer = Whitespace()
tokenizer.train_from_iterator(data, vocab_size=1000, min_frequency=1,
                              special_tokens=[
                                  "<s>",
                                  "</s>",
                                  "<unk>",
                                  "<mask>",
                              ])

# Data Collator
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=False
)

train_dataset = TensorDataset(torch.tensor(tokenizer(data, ......)))

# DataLoader
train_dataloader = DataLoader(
    train_dataset,
    collate_fn=data_collator
)
Is this error due to not having configured the pad_token_id for the tokenizer? If so, how can we do this?
Thanks!
Error trace:
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/opt/anaconda3/envs/x/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
data = fetcher.fetch(index)
File "/opt/anaconda3/envs/x/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/opt/anaconda3/envs/x/lib/python3.8/site-packages/transformers/data/data_collator.py", line 351, in __call__
if self.tokenizer.pad_token_id is not None:
AttributeError: 'ByteLevelBPETokenizer' object has no attribute 'pad_token_id'
Conda packages
pytorch 1.7.0 py3.8_cuda10.2.89_cudnn7.6.5_0 pytorch
pytorch-lightning 1.2.5 pyhd8ed1ab_0 conda-forge
tokenizers 0.10.1 pypi_0 pypi
transformers 4.4.2 pypi_0 pypi
The error tells you that the tokenizer needs an attribute called pad_token_id. You can either wrap the ByteLevelBPETokenizer in a class that has such an attribute (... and meet other missing attributes down the road) or use the wrapper class from the transformers library:
from transformers import PreTrainedTokenizerFast
#your code
tokenizer.save(SOMEWHERE)
tokenizer = PreTrainedTokenizerFast(tokenizer_file=tokenizer_path)
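A minimal end-to-end sketch of that wrapper approach, assuming the tokenizer is saved to a file named tokenizer.json and that adding a <pad> special token is acceptable (both the file name and the padding token are assumptions, not from the original post):

from tokenizers import ByteLevelBPETokenizer
from transformers import PreTrainedTokenizerFast, DataCollatorForLanguageModeling

data = ['4814 4832 4761 4523 4999 4860 4699 5024 4788 <unk>']

# train as in the question, but include a <pad> token so padding is possible
tokenizer = ByteLevelBPETokenizer()
tokenizer.train_from_iterator(
    data, vocab_size=1000, min_frequency=1,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"])

# save the fast tokenizer to a single JSON file and wrap it for transformers
tokenizer.save("tokenizer.json")
wrapped_tokenizer = PreTrainedTokenizerFast(
    tokenizer_file="tokenizer.json",
    bos_token="<s>", eos_token="</s>", unk_token="<unk>",
    pad_token="<pad>", mask_token="<mask>")

# the wrapper exposes pad_token_id, so the collator no longer raises
data_collator = DataCollatorForLanguageModeling(
    tokenizer=wrapped_tokenizer, mlm=False)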

How to use Handwriting Recognition model as an API?

I have created a machine learning model using TensorFlow and Keras with the IAM dataset. How can I load this model as an API to predict an image? When I try to integrate it, I get the following error:
return self.function(inputs, **arguments)
File "test2.py", line 136, in resize_image
return tf.image.resize_images(image,[56,56])
NameError: name 'tf' is not defined
I have loaded the model using from keras.models import load_model and am trying to predict handwriting from an image. low_loss.hdf5 is the model I am trying to integrate.
def testmodel(image_path):
    global model
    # load the pre-trained Keras model
    model = load_model('low_loss.hdf5')
    model.summary()
    img = Image.open(image_path).convert("L")
    img = np.resize(image_path, (28,28,1))
    im2arr = np.array(img)
    im2arr = im2arr.reshape(1,28,28,1)
    y_pred = model.predict_classes(im2arr)
    return y_pred
I want to predict handwritten image data.
Your error is about tf, which has not been imported.
try:
import tensorflow as tf
You were getting the error because you have not imported TensorFlow in your code, or, if you have imported it, you have not given it the tf alias.
import tensorflow as tf
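For context, the NameError is raised inside the resize helper shown in the traceback (which appears to be called from a Lambda layer). A minimal sketch of that helper with the import in place, assuming TensorFlow 1.x (tf.image.resize_images was renamed to tf.image.resize in TensorFlow 2.x):

import tensorflow as tf

# the module that defines resize_image must import tensorflow itself,
# otherwise calling the helper raises NameError: name 'tf' is not defined
def resize_image(image):
    return tf.image.resize_images(image, [56, 56])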

AttributeError: 'Sequential' object has no attribute 'output_names'

I have got a problem with the code below, at the following line:
new_model = load_model('124446.model', custom_objects=None, compile=True)
Here is the code:
import tensorflow as tf
from tensorflow.keras.models import load_model
mnist = tf.keras.datasets.mnist
(x_train,y_train), (x_test,y_test) = mnist.load_data()
x_train = tf.keras.utils.normalize(x_train,axis=1)
x_test = tf.keras.utils.normalize(x_test,axis=1)
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128,activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(128,activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(10,activation=tf.nn.softmax))
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train,y_train,epochs=3)
tf.keras.models.save_model(model,'124446.model')
val_loss, val_acc = model.evaluate(x_test,y_test)
print(val_loss, val_acc)
new_model = load_model('124446.model', custom_objects=None, compile=True)
prediction = new_model.predict([x_test])
print(prediction)
Errors are:
Traceback (most recent call last):
  File "C:/Users/TanveerIslam/PycharmProjects/DeepLearningPractice/1.py", line 32, in <module>
    new_model = load_model('124446.model', custom_objects=None, compile=True)
  File "C:\Users\TanveerIslam\PycharmProjects\DeepLearningPractice\venv\lib\site-packages\tensorflow\python\keras\engine\saving.py", line 262, in load_model
    sample_weight_mode=sample_weight_mode)
  File "C:\Users\TanveerIslam\PycharmProjects\DeepLearningPractice\venv\lib\site-packages\tensorflow\python\training\checkpointable\base.py", line 426, in _method_wrapper
    method(self, *args, **kwargs)
  File "C:\Users\TanveerIslam\PycharmProjects\DeepLearningPractice\venv\lib\site-packages\tensorflow\python\keras\engine\training.py", line 525, in compile
    metrics, self.output_names)
AttributeError: 'Sequential' object has no attribute 'output_names'
So can anyone give me a solution?
Note: I use PyCharm as my IDE.
As @Shinva said, set the compile argument of the load_model function to False.
Then after loading the model, compile it separately.
from tensorflow.keras.models import save_model, load_model
save_model(model,'124446.model')
Then for loading the model again do:
saved_model = load_model('124446.model', compile=False)
saved_model.compile(optimizer='adam',
                    loss='sparse_categorical_crossentropy',
                    metrics=['accuracy'])
saved_model.predict([x_test])
Update: For some unknown reason, I started to get the same errors as the question states. After trying different solutions, it seems that using the "keras" library directly instead of "tensorflow.keras" works properly.
My setup is Windows 10 with Python 3.6.7, TensorFlow 1.11.0 and Keras 2.2.4.
As far as I know, there are three different ways in which you can save and restore your model, provided you have used keras directly to build it.
Option 1:
import json
from keras.models import model_from_json, load_model

# Save Weights + Architecture
model.save_weights('model_weights.h5')
with open('model_architecture.json', 'w') as f:
    f.write(model.to_json())

# Load Weights + Architecture
with open('model_architecture.json', 'r') as f:
    new_model = model_from_json(f.read())
new_model.load_weights('model_weights.h5')
Option 2:
from keras.models import save_model, load_model
# Creates a HDF5 file 'my_model.h5'
save_model(model, 'my_model.h5') # model, [path + "/"] name of model
# Deletes the existing model
del model
# Returns a compiled model identical to the previous one
new_model = load_model('my_model.h5')
Option 3
# using model's methods
model.save("my_model.h5")
# deletes the existing model
del model
# load the saved model back
new_model = load_model('my_model.h5')
Option 1 requires new_model to be compiled before use (see the sketch below).
Options 2 and 3 are almost identical in syntax.
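For example, a minimal sketch of that compile step for Option 1, reusing the optimizer, loss and metrics from the question:

# Option 1 only restores the architecture and weights, so compile before use
new_model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])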
Code used from:
1. Saving & Loading Keras Models
2. https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model
I was able to load the model by setting compile=False in load_model()
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

tf.keras.models.save_model(
    model,
    "epic_num_reader.model",
    overwrite=True,
    include_optimizer=True
)

new_model = tf.keras.models.load_model('epic_num_reader.model', custom_objects=None, compile=False)

predictions = new_model.predict(x_test)
print(predictions)

print(np.argmax(predictions[0]))
plt.imshow(x_test[0], cmap=plt.cm.binary)
plt.show()
If this is run on Windows, then the issue is that toco is currently not supported on Windows - https://github.com/tensorflow/tensorflow/issues/20975
