I have trained a model using YOLOv5 and it is working just fine.
My ultimate goal is to use a model that I have trained on custom data (to detect the hook and bucket) in the OpenVINO framework.
To achieve this, I first exported the best version of the model to the appropriate OpenVINO format, using the following command:
!python export.py --weights runs/train/yolov5s24/weights/best.pt --include openvino --dynamic --simplify
The export completed successfully and generated 3 files: best.xml, best.bin, and best.mapping.
Now I would like to load it using the OpenVINO framework, and to do that I am following this pipeline:
1. Create Core object
1.1. (Optional) Load extensions
2. Read a model from a drive
2.1. (Optional) Perform model preprocessing
3. Load the model to the device
4. Create an inference request
5. Fill input tensors with data
6. Start inference
7. Process the inference results
1. Create Core
import numpy as np
import openvino.inference_engine as ie
core = ie.IECore()
2. Read a model from a drive
path_to_xml_file = 'models/best_openvino_model/best.xml'
path_to_bin_file = 'models/best_openvino_model/best.bin'
network = core.read_network(model=path_to_xml_file, weights=path_to_bin_file)
3. Load the model to the device
# Load network to the device and create infer requests
exec_network = core.load_network(network, "CPU", num_requests=4)
And here I am getting an error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-21-7c9ba5f53484> in <module>
1 # Load network to the device and create infer requests
----> 2 exec_network = core.load_network(network, "CPU", num_requests=4)
ie_api.pyx in openvino.inference_engine.ie_api.IECore.load_network()
ie_api.pyx in openvino.inference_engine.ie_api.IECore.load_network()
RuntimeError: Check 'std::get<0>(valid)' failed at inference/src/ie_core.cpp:1414:
InferenceEngine::Core::LoadNetwork doesn't support inputs having dynamic shapes. Use ov::Core::compile_model API instead. Dynamic inputs are :{ input:'images,images', shape={?,3,?,?}}
I am using the OpenVINO™ Development Tools, release 2022.1.
The files to reproduce the error are here.
This error is expected, as the model contains a dynamic shape. This model can be executed using the ov::Core::compile_model API in OpenVINO 2022.1.
You can refer to the class ov::CompiledModel and Dynamic Shape for more information.
You are using API 1.0, which does not support dynamic shapes.
You need to use API 2.0 for dynamic shapes.
Here is an example of how to run inference on dynamic shapes using API 2.0:
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")
# compile_model is the API 2.0 entry point and supports dynamic input shapes
compiled_model = core.compile_model(model, "CPU")
# input_data: a numpy array whose concrete shape matches the model's input layout
output = compiled_model.infer_new_request({0: input_data})
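To also cover the later steps of the pipeline from the question (create an inference request, fill input tensors, start inference, process results), here is a minimal sketch using the explicit infer-request flow of API 2.0. The model path is taken from the question; the 1x3x640x640 input shape is an assumption based on YOLOv5 defaults:
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("models/best_openvino_model/best.xml")
compiled_model = core.compile_model(model, "CPU")

# Step 4: create an inference request
infer_request = compiled_model.create_infer_request()

# Steps 5-6: fill the input tensor and run inference; a dynamic model still
# needs a concrete shape per request (640x640 is the YOLOv5 default, assumed here)
input_data = np.random.rand(1, 3, 640, 640).astype(np.float32)
infer_request.infer({0: input_data})

# Step 7: process the results (raw detections tensor)
output = infer_request.get_output_tensor(0).data
print(output.shape)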
Related
After successfully converting my detectron2 model to ONNX format, I can't make predictions.
I am getting the following error:
failed: Fatal error: AliasWithName is not a registered function/op
My code:
import onnx
import onnxruntime as ort
import numpy as np
import glob
import cv2
onnx_model = onnx.load("test.onnx")
onnx.checker.check_model(onnx_model)
im = cv2.imread('img.png')
print(im.shape)
ort_sess = ort.InferenceSession('test.onnx', providers=['CPUExecutionProvider'])
outputs = ort_sess.run(None, {'input': im})
print(outputs)
Am I doing something wrong?
In the documentation (https://detectron2.readthedocs.io/en/latest/modules/export.html#detectron2.export.Caffe2Tracer.export_onnx) they say:
"Export the model to ONNX format. Note that the exported model contains custom ops only available in caffe2, therefore it cannot be directly executed by another runtime (such as onnxruntime or TensorRT). Post-processing or transformation passes may be applied on the model to accommodate different runtimes, but we currently do not provide support for them."
What is that "Post-processing or transformation" that I should do?
I am unable to load a saved PyTorch model from the outputs folder in my other scripts.
I am using following lines of code to save the model:
os.makedirs("./outputs/model", exist_ok=True)
torch.save({
    'model_state_dict': copy.deepcopy(model.state_dict()),
    'optimizer_state_dict': optimizer.state_dict()
}, './outputs/model/best-model.pth')
new_run.upload_file("outputs/model/best-model.pth", "outputs/model/best-model.pth")
saved_model = new_run.register_model(model_name='pytorch-model', model_path='outputs/model/best-model.pth')
and using the following code to access it:
global model
best_model_path = 'outputs/model/best-model.pth'
model_checkpoint = torch.load(best_model_path)
model.load_state_dict(model_checkpoint['model_state_dict'], strict=False)
but when I run the above-mentioned code, I get this error: No such file or directory: './outputs/model/best-model.pth'
Also, I want to know: is there a way to get the saved model from Azure Models? I have tried to get it using the following lines of code:
from azureml.core.model import Model
model = Model(ws, "Pytorch-model")
but it returns a Model type object, which raises an error on model.eval() (error: Model has no such attribute eval()).
There is no global outputs folder. If you want to use a model in a new script, you need to either give the script the model as an input, or register the model and download it from the new script.
The Model object from `from azureml.core.model import Model` is not your PyTorch model.
You can use model.register(...) to register your model, and model.download(...) to download it. Then you can use PyTorch to load your model.
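A minimal sketch of that register/download/load flow, assuming ws is your Workspace, the model name 'pytorch-model' from the question, and that model has already been constructed with the same architecture:
from azureml.core.model import Model
import torch

# Register the uploaded file as a model (done once, e.g. from the training script)
Model.register(workspace=ws,
               model_path='outputs/model/best-model.pth',
               model_name='pytorch-model')

# In the other script: download the registered model to the local disk
azure_model = Model(ws, 'pytorch-model')
local_path = azure_model.download(target_dir='.', exist_ok=True)

# Load the checkpoint with PyTorch; `model` is your architecture instance
checkpoint = torch.load(local_path)
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()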
I have successfully trained a Keras model like:
import tensorflow as tf
from keras_segmentation.models.unet import vgg_unet

# initiate the model
model = vgg_unet(n_classes=50, input_height=512, input_width=608)

# train
model.train(
    train_images=train_images,
    train_annotations=train_annotations,
    checkpoints_path="/tmp/vgg_unet_1", epochs=5
)
And saved it in hdf5 format with:
tf.keras.models.save_model(model,'my_model.hdf5')
Then I load my model with
model = tf.keras.models.load_model('my_model.hdf5')
Finally I want to make a segmentation prediction on a new image with
out = model.predict_segmentation(
    inp=image_to_test,
    out_fname="/tmp/out.png"
)
I am getting the following error:
AttributeError: 'Functional' object has no attribute 'predict_segmentation'
What am I doing wrong?
Is it when I am saving my model or when I am loading it?
Thanks !
predict_segmentation isn't a function available in normal Keras models. It looks like it was attached to the model after it was created in the keras_segmentation library, which might be why Keras couldn't load it again.
I think you have 2 options for this.
You could use the line from the code I linked to manually add the function back to the model:
from types import MethodType
import keras_segmentation

model.predict_segmentation = MethodType(keras_segmentation.predict.predict, model)
You could create a new vgg_unet with the same arguments when you reload the model, and transfer the weights from your hdf5 file to that model, as suggested in the Keras documentation:
model = vgg_unet(n_classes=50, input_height=512, input_width=608)
model.load_weights('my_model.hdf5')
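With either option in place, the prediction call from the question should work again, for example:
# After restoring predict_segmentation (option 1) or rebuilding the model
# and loading the weights (option 2)
out = model.predict_segmentation(
    inp=image_to_test,
    out_fname="/tmp/out.png"
)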
I am trying to use the beta Google Custom Prediction Routine in Google's AI Platform to run a live version of my model.
I include in my package predictor.py which contains a Predictor class as such:
import os
import numpy as np
import pickle
import keras
from keras.models import load_model


class Predictor(object):
    """Interface for constructing custom predictors."""

    def __init__(self, model, preprocessor):
        self._model = model
        self._preprocessor = preprocessor

    def predict(self, instances, **kwargs):
        """Performs custom prediction.

        Instances are the decoded values from the request. They have already
        been deserialized from JSON.

        Args:
            instances: A list of prediction input instances.
            **kwargs: A dictionary of keyword args provided as additional
                fields on the predict request body.

        Returns:
            A list of outputs containing the prediction results. This list must
            be JSON serializable.
        """
        # pre-processing
        preprocessed_inputs = self._preprocessor.preprocess(instances[0])
        # predict
        outputs = self._model.predict(preprocessed_inputs)
        # post-processing: flip each output left-right (the original `x_test`
        # here was presumably meant to be `outputs`)
        outputs = np.array([np.fliplr(x) for x in outputs])
        return outputs.tolist()

    @classmethod
    def from_path(cls, model_dir):
        """Creates an instance of Predictor using the given path.

        Loading of the predictor should be done in this method.

        Args:
            model_dir: The local directory that contains the exported model
                file along with any additional files uploaded when creating the
                version resource.

        Returns:
            An instance implementing this Predictor class.
        """
        model_path = os.path.join(model_dir, 'keras.model')
        model = load_model(model_path, compile=False)
        preprocessor_path = os.path.join(model_dir, 'preprocess.pkl')
        with open(preprocessor_path, 'rb') as f:
            preprocessor = pickle.load(f)
        return cls(model, preprocessor)
The full error Create Version failed. Bad model detected with error: "Failed to load model: Unexpected error when loading the model: 'str' object has no attribute 'decode' (Error code: 0)" indicates that the issue is in this script, specifically when loading the model. However, I am able to successfully load the model in my notebook locally with the same code block in predict.py:
from keras.models import load_model
model = load_model('keras.model', compile=False)
I have seen similar posts which suggest setting the version of h5py<3.0.0, but this hasn't helped. I can set versions of modules for my custom prediction routine in a setup.py file, as such:
from setuptools import setup

REQUIRED_PACKAGES = ['keras==2.3.1', 'h5py==2.10.0', 'opencv-python', 'pydicom', 'scikit-image']

setup(
    name='my_custom_code',
    install_requires=REQUIRED_PACKAGES,
    include_package_data=True,
    version='0.23',
    scripts=['predictor.py', 'preprocess.py'])
Unfortunately, I haven't found a good way to debug model deployment in Google's AI Platform, and the troubleshooting guide is unhelpful. Any pointers would be much appreciated. Thanks!
Edit 1:
The h5py module's version is wrong: it is 3.1.0, despite being set to 2.10.0 in setup.py. Does anyone know why? I confirmed that the Keras version and other modules are set properly, however. I've tried 'h5py==2.9.0' and 'h5py<3.0.0' to no avail. More on including PyPi package dependencies here.
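One way to see which versions the serving environment actually resolved is to log them from inside the routine itself; a small debugging sketch (an addition for illustration, not part of the original code):
# Hypothetical debugging aid: report the versions the deployed environment uses
import h5py
import keras

print("h5py:", h5py.__version__)
print("keras:", keras.__version__)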
Edit 2:
So it turns out Google currently does not support this capability.
StackOverflow, enzed01
I have encountered the same problem when using AI Platform with code that was running fine two months ago, when we last trained our models. Indeed, it is due to the dependency on h5py, which fails to load the h5 model out of the blue.
After a while, I was able to make it work with runtime 2.2 and Python version 3.7. I am also using the custom prediction routine, and my model was a simple 2-layer bidirectional LSTM serving classifications.
I had a notebook VM set up with TF == 2.1 and downgraded h5py to <3.0.0 with:
!pip uninstall -y h5py
!pip install 'h5py < 3.0.0'
My setup.py looks like this:
from setuptools import setup

REQUIRED_PACKAGES = ['tensorflow==2.1', 'h5py<3.0.0']

setup(
    name="my_package",
    version="0.1",
    include_package_data=True,
    scripts=["preprocess.py", "model_prediction.py"]
)
I added compile=False to my model load code. Without it, I ran into another problem with deployment, which was giving the following error: Create Version failed. Bad model detected with error: "Failed to load model: Unexpected error when loading the model: 'sample_weight_mode' (Error code: 0)"
The code change from OP:
model = keras.models.load_model(
    os.path.join(model_dir, 'model.h5'), compile=False)
And this made the model deploy as before without a problem. I suspect compile=False might mean slower prediction serving, but I have not noticed anything so far.
Hope this helps anyone stuck and googling these issues!
I want to use this TF Hub asset:
https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/3
Versions:
Version: 1.15.0-dev20190726
Eager mode: False
Hub version: 0.5.0
GPU is available
Code
feature_extractor_url = "https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/3"
feature_extractor_layer = hub.KerasLayer(feature_extractor_url,
                                         input_shape=(HEIGHT, WIDTH, CHANNELS))
I get:
ValueError: Importing a SavedModel with tf.saved_model.load requires a 'tags=' argument if there is more than one MetaGraph. Got 'tags=None', but there are 2 MetaGraphs in the SavedModel with tag sets [[], ['train']]. Pass a 'tags=' argument to load this SavedModel.
I tried:
module = hub.Module("https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/3",
                    tags={"train"})
feature_extractor_layer = hub.KerasLayer(module,
                                         input_shape=(HEIGHT, WIDTH, CHANNELS))
But when I try to save the model I get:
tf.keras.experimental.export_saved_model(model, tf_model_path)
# model.save(h5_model_path) # Same error
NotImplementedError: Can only generate a valid config for `hub.KerasLayer(handle, ...)` that uses a string `handle`.
Got `type(handle)`: <class 'tensorflow_hub.module.Module'>
Tutorial here
It's been a while, but assuming you have migrated to TF2, this can easily be accomplished with the most recent model version as follows:
import tensorflow as tf
import tensorflow_hub as hub

num_classes = 10  # for example

m = tf.keras.Sequential([
    hub.KerasLayer("https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/5",
                   trainable=True),
    tf.keras.layers.Dense(num_classes, activation='softmax')
])
m.build([None, 224, 224, 3])  # batch input shape

# train as needed
m.save("/some/output/path")
Please update this question if that doesn't work for you. I believe your issue arose from mixing hub.Module with hub.KerasLayer. The model version you were using was in TF1 Hub format, so within TF1 it is meant to be used exclusively with hub.Module, and not mixed with hub.KerasLayer. Within TF2, hub.KerasLayer can load TF1 Hub format models directly from their URL for composition in larger models, but they cannot be fine-tuned.
Please refer to this compatibility guide for more information.
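For reference, the TF1-style usage described above looks roughly like this (a sketch assuming TF 1.x and hub 0.5; 224x224 is this module's default input size):
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# TF1 Hub format: use hub.Module exclusively, not hub.KerasLayer
module = hub.Module("https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/3")
images = tf.placeholder(tf.float32, shape=(None, 224, 224, 3))
features = module(images)  # -> [batch_size, 2048] feature vectors

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    batch = np.zeros((1, 224, 224, 3), dtype=np.float32)  # placeholder batch
    vecs = sess.run(features, feed_dict={images: batch})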
You should use tf.keras.models.save_model(model, 'NeuralNetworkModel').
You will get a saved model in a folder that can be used later in your sequential network.
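A minimal sketch of that round trip, assuming model is the trained Keras model from the earlier question:
import tensorflow as tf

# Save in the TF SavedModel (folder) format instead of a single .hdf5 file
tf.keras.models.save_model(model, 'NeuralNetworkModel')

# Reload later; the result is a plain Keras model, so library-specific helpers
# such as predict_segmentation would still need to be re-attached
reloaded = tf.keras.models.load_model('NeuralNetworkModel')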