I'm trying to load a pre-trained Torch model (U2Net, to be precise) saved as ONNX, using cv.dnn.readNetFromONNX.
But I'm receiving this error:
error: OpenCV(4.1.2) /io/opencv/modules/dnn/include/opencv2/dnn/dnn.inl.hpp:349:
error (-204:Requested object was not found) Required argument "starts" not found
into dictionary in function 'get'
This is the code to reproduce the error with Google Colab:
### get U2Net implementation ###
%cd /content
!git clone https://github.com/shreyas-bk/U-2-Net
### download pre-trained model ###
!gdown --id 1ao1ovG1Qtx4b7EoskHXmi2E9rp5CHLcZ -O /content/U-2-Net/u2net.pth
###
%cd /content/U-2-Net
### imports ###
from google.colab import files
from model import U2NET
import torch
import os
### create U2Net model from state dict ###
model_dir = '/content/U-2-Net/u2net.pth'
net = U2NET(3, 1)
net.load_state_dict(torch.load(model_dir, map_location='cpu'))
net.eval()
### pass to it a dummy input and save to onnx ###
img = torch.randn(1, 3, 320, 320, requires_grad=False)
img = img.to(torch.device('cpu'))
output_dir = os.path.join('/content/u2net.onnx')
torch.onnx.export(net, img, output_dir, opset_version=11, verbose=True)
### load the model in OpenCV ###
import cv2 as cv
net = cv.dnn.readNetFromONNX('/content/u2net.onnx')
[ OpenCV => 4.1.2, Platform => Google Colab, Torch => 1.11.0+cu113]
As @berak suggested, the issue was related to the OpenCV version (it was 4.1.2). Updating to 4.5.5 solved the issue.
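In Colab you can check the installed OpenCV version and upgrade it in place; a minimal sketch (restart the runtime after the upgrade so the new wheel is picked up):
import cv2 as cv
print(cv.__version__)             # 4.1.2 here, which cannot parse this ONNX file
!pip install -U opencv-python     # installs a recent release (4.5.5 or newer)
### after restarting the runtime ###
import cv2 as cv
net = cv.dnn.readNetFromONNX('/content/u2net.onnx')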
Using the basic test code from hiddenlayer, I am getting the error in the title:
import torch
import torchvision.models
import hiddenlayer as hl
# VGG16
model = torchvision.models.vgg16()
# Build HiddenLayer graph
# Jupyter Notebook renders it automatically
hl.build_graph(model, torch.zeros([1, 3, 224, 224]))
Versions:
hiddenlayer 0.3
PyTorch 1.13.0+cu117
Python 3.10.6
I followed the recommendation in the error message and changed _optimize_trace to _optimize_graph in pytorch_builder.py, line 71. After that it worked correctly.
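For context, the change amounts to a one-line rename; the snippet below paraphrases hiddenlayer 0.3's pytorch_builder.py around line 71 (the exact argument list may differ in your copy), so treat it as a sketch:
# hiddenlayer/pytorch_builder.py, around line 71 (argument list kept as in the installed file)
# before -- fails on recent PyTorch, where torch.onnx._optimize_trace no longer exists:
#     torch_graph = torch.onnx._optimize_trace(trace, ...)
# after -- the attribute the AttributeError itself suggests:
#     torch_graph = torch.onnx._optimize_graph(trace, ...)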
I'm currently trying to create a model using transfer learning, but I'm getting an error
NameError: name 'scipy' is not defined
I'm following a video tutorial. We have downloaded some datasets to the computer, and I'm trying to convert them into '.json' and '.h5' files. I had to run the code shown in the first part and create the model. There was supposed to be a download like in the video, but instead I got an error that I can't solve.
Here is my code:
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense
from keras.applications.vgg16 import VGG16
import matplotlib.pyplot as plt
from glob import glob
from keras.utils import img_to_array
from keras.utils import load_img
train_path = "/Users/atakansever/Desktop/CNNN/fruits-360_dataset/fruits-360/Training/"
test_path = "/Users/atakansever/Desktop/CNNN/fruits-360_dataset/fruits-360/Test/"
# img = load_img(train_path + "Tangelo/0_100.jpg")
# plt.imshow(img)
# plt.axes("off")
# plt.show()
numberOfClass = len(glob(train_path + "/*"))
# print(numberOfClass)
vgg = VGG16()
# print(vgg.summary())
vgg_layer_list = vgg.layers
# print(vgg_layer_list)
model = Sequential()
for i in range(len(vgg_layer_list)-1):
    model.add(vgg_layer_list[i])
# print(model.summary())
for layers in model.layers:
    layers.trainable = False
model.add(Dense(numberOfClass, activation="softmax"))
# print(model.summary())
model.compile(loss = "categorical_crossentropy",optimizer = "rmsprop",metrics = ["accuracy"])
#train
train_data = ImageDataGenerator().flow_from_directory(train_path, target_size=(224,224))
test_data = ImageDataGenerator().flow_from_directory(test_path, target_size=(224,224))
batch_size = 32
hist = model.fit_generator(train_data,
                           steps_per_epoch=1600//batch_size,
                           epochs=25,
                           validation_data=test_data,
                           validation_steps=800//batch_size)
and here is the error:
atakansever@atakan-Air CNNN % pyenv shell 3.9.7
pyenv: shell integration not enabled. Run `pyenv init' for instructions.
atakansever@atakan-Air CNNN % /Users/atakansever/.pyenv/versions/3.9.7/bin/python /Users/atakansever/Desktop/CNNN/fruits.py
Metal device set to: Apple M1
systemMemory: 8.00 GB
maxCacheSize: 2.67 GB
2022-07-10 11:17:50.428036: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2022-07-10 11:17:50.428259: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
Found 67692 images belonging to 131 classes.
Found 22688 images belonging to 131 classes.
/Users/atakansever/Desktop/CNNN/fruits.py:53: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators.
hist = model.fit_generator(train_data, steps_per_epoch=1600//batch_size,epochs=25,validation_data= test_data,validation_steps=800//batch_size)
Traceback (most recent call last):
  File "/Users/atakansever/Desktop/CNNN/fruits.py", line 53, in <module>
    hist = model.fit_generator(train_data, steps_per_epoch=1600//batch_size,epochs=25,validation_data= test_data,validation_steps=800//batch_size)
  File "/Users/atakansever/.pyenv/versions/3.9.7/lib/python3.9/site-packages/keras/engine/training.py", line 2260, in fit_generator
    return self.fit(
  File "/Users/atakansever/.pyenv/versions/3.9.7/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/Users/atakansever/.pyenv/versions/3.9.7/lib/python3.9/site-packages/keras/preprocessing/image.py", line 2244, in apply_affine_transform
    if scipy is None:
NameError: name 'scipy' is not defined
Try pip install scipy or pip3 install scipy; that should solve the problem.
First, install the scipy package if it isn't already installed:
pip install scipy
and then add scipy to your imports:
import scipy # This is new!
from keras.preprocessing.image import ImageDataGenerator
# ... all your imports
I clicked on the error message and it directed me to the source code.
Comment out those two lines and save the Python script:
# if scipy is None:
#     raise ImportError('Image transformations require SciPy. '
#                       'Install SciPy.')
Then it will work perfectly.
You have to:
Install scipy: pip install scipy
Restart VS Code (or your IDE), or restart the Python kernel, and rerun the code.
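Putting the pieces together, a minimal sketch of the fix, assuming the missing scipy package is the only problem (the deprecation warning in the log also suggests switching from fit_generator to fit):
# after `pip install scipy` in the same pyenv environment
import scipy                       # needed so keras.preprocessing can apply affine transforms

# ... model and ImageDataGenerator setup unchanged from the question ...

hist = model.fit(train_data,       # Model.fit accepts generators; fit_generator is deprecated
                 steps_per_epoch=1600 // batch_size,
                 epochs=25,
                 validation_data=test_data,
                 validation_steps=800 // batch_size)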
I used the TensorFlow 2 Object Detection API and received a saved_model.pb, which is a TensorFlow graph and not a tf.keras model. So it can be loaded with tf.saved_model.load() but not with tf.keras.models.load_model(). The model is saved via tf.saved_model.save() in export_lib_v2.py of the Object Detection API, line 271.
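For reference, loading the exported SavedModel directly works along these lines (the path below is a placeholder for wherever export_lib_v2.py wrote the model):
import tensorflow as tf

# returns a restored tf.Module-like object, not a tf.keras.Model
detect_fn = tf.saved_model.load('/path/to/exported_model/saved_model')
print(list(detect_fn.signatures.keys()))   # usually ['serving_default']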
I tried to build the model from the config file and load the checkpoints, to then save it as a tf.keras model:
import tensorflow as tf
from object_detection.utils import config_util
from object_detection.builders import model_builder
import os
def save_in_tfkeras(save_filepath, label_map_path, config_file_path, checkpoint_path):
    configs = config_util.get_configs_from_pipeline_file(config_file_path)
    model_config = configs['model']
    detection_model = model_builder.build(model_config=model_config, is_training=False)
    # Restore checkpoint
    ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
    ckpt.restore(checkpoint_path).expect_partial()
    detection_model.built(input_shape=(320,320))
    tf.keras.models.save_model(detection_model, save_filepath)
    print('model saved as tf.keras in ' + save_filepath)
if __name__ == "__main__":
    PATH_TO_LABELMAP = './models/face_model/face_label.pbtxt'
    PATH_TO_CONFIG = './models/face_model/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/pipeline.config'
    PATH_TO_CHECKPOINT = './models/face_model/v2_model_50k/ckpt-51'
    save_filepath = './Kmodels/mobileNet_V2'
    if not os.path.exists(save_filepath):
        os.makedirs(save_filepath)
    save_in_tfkeras(save_filepath, PATH_TO_LABELMAP, PATH_TO_CONFIG, PATH_TO_CHECKPOINT)
However, this does not seem to work. There are errors which, in my opinion, originate from mixing the tf and tf.keras models. The last error message is:
ValueError: Weights for model
ssd_mobile_net_v2fpn_keras_feature_extractor_1 have not yet been
created. Weights are created when the Model is first called on inputs
or build() is called with an input_shape.
The model was saved with TensorFlow loaded as tensorflow.compat.v2
Question: Is there a way to build the model, load the checkpoint weights and then save as tf.keras model?
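As an aside, the quoted ValueError describes the standard Keras rule that weights only exist after the model is built or called; a toy illustration with a plain tf.keras model (not the Object Detection API model):
import tensorflow as tf

toy = tf.keras.Sequential([tf.keras.layers.Dense(10)])
print(toy.built)                    # False: no weights exist yet, saving now fails with an error like the one quoted
toy.build(input_shape=(None, 320))  # build(), called with a full input shape, creates the weights
print(len(toy.weights))             # 2: kernel and bias now exist, so the model can be saved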
I have created a machine learning model using TensorFlow and Keras with the IAM dataset. How can I load this model as an API to predict an image? When I was trying to integrate it, it showed this error:
return self.function(inputs, **arguments)
File "test2.py", line 136, in resize_image
return tf.image.resize_images(image,[56,56])
NameError: name 'tf' is not defined
I have loaded the model using from keras.models import load_model and am trying to predict handwritten images. low_loss.hdf5 is the model I am trying to integrate.
def testmodel(image_path):
    global model
    # load the pre-trained Keras model
    model = load_model('low_loss.hdf5')
    model.summary()
    img = Image.open(image_path).convert("L")
    img = np.resize(image_path, (28,28,1))
    im2arr = np.array(img)
    im2arr = im2arr.reshape(1,28,28,1)
    y_pred = model.predict_classes(im2arr)
    return y_pred
I want to predict handwritten image data.
Your error is about tf, which is not imported.
Try:
import tensorflow as tf
You were getting the error because you have not imported TensorFlow in your code, or, if you have imported it, you have not given it the tf alias.
import tensorflow as tf
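Concretely, the module that defines resize_image (test2.py in the traceback) needs the import at its top. A minimal sketch, assuming the helper looks like the line quoted in the traceback:
import tensorflow as tf   # without this, the name 'tf' is undefined inside resize_image

def resize_image(image):
    # same call as in the traceback; tf.image.resize_images is the TF 1.x name,
    # on TF 2.x use tf.image.resize instead
    return tf.image.resize_images(image, [56, 56])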
I've fine-tuned a model (using TF 1.9) from the Object Detection Model Zoo, and right now I am trying to freeze the graph for TensorFlowSharp, using TF 1.9.
import tensorflow as tf
import os
from tensorflow.python.tools import freeze_graph
from tensorflow.core.protobuf import saver_pb2
#print("current tensorflow version: ", tf.version)
sess=tf.Session()
model_path = 'latest_cp/'
saver = tf.train.import_meta_graph('model.ckpt.meta')
saver.restore(sess,tf.train.latest_checkpoint('.')) #current dir of the checkpoint file
tf.train.write_graph(sess.graph_def, '.', 'test.pbtxt') #output in pbtxt format
freeze_graph.freeze_graph(input_graph='test.pbtxt',
                          input_binary=False,
                          input_checkpoint=model_path + 'model.ckpt',
                          output_node_names="num_detections,detection_boxes,detection_scores,detection_classes",
                          output_graph='test.bytes',
                          clear_devices=True, initializer_nodes="", input_saver="",
                          restore_op_name="save/restore_all", filename_tensor_name="save/Const:0")
It worked, but after I imported it into Unity it returned the following error:
TFException: Op type not registered 'NonMaxSuppressionV3' in binary running on AK38713. Make sure the Op and Kernel are registered in the binary running in this process.
I found out that TensorFlowSharp works with TensorFlow 1.4, and when I tried to freeze the graph with 1.4 it returned the same NonMaxSuppressionV3 error.
Do you know any way to solve this issue? Thank you so much for the support.