TensorRT - Unsupported operation _Fill - Python

I am trying to run a ResNet model, originally a TensorFlow .pb, through a TensorRT engine. I converted the .pb to .uff and am now trying to build an engine from it with this code:
import tensorrt.legacy as trt
import tensorflow as tf
import pycuda.driver as cuda
import pycuda.autoinit
import numpy as np
import cv2
from tensorrt.legacy.parsers import uffparser
import graphsurgeon as gs
# Build TensorRT engine
uff_model_path = "model/resnet_model_v1.uff"
engine_path = "model/resnet_model_v1.engine"
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(TRT_LOGGER, '')
trt_runtime = trt.Runtime(TRT_LOGGER)
with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
    builder.max_workspace_size = 1 << 30
    builder.fp16_mode = True
    builder.max_batch_size = 1
    parser.register_input("input_image", (3, 150, 150))
    parser.register_output("embedding_layer/MatMul")
    parser.parse(uff_model_path, network)
    print("Building TensorRT engine, this may take a few minutes...")
    trt_engine = builder.build_cuda_engine(network)
But it fails with this error:
[TensorRT] ERROR: UffParser: Validator error: reshape_4/zeros: Unsupported operation _Fill
Building TensorRT engine, this may take a few minutes...
[TensorRT] ERROR: Network must have at least one output
[TensorRT] ERROR: Network validation failed.
QUESTION
What do I need to do to handle an unsupported operation in TensorRT?
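The unused graphsurgeon import above hints at the usual workflow: edit the frozen graph before UFF conversion so the parser never sees the unsupported Fill op. A rough sketch, assuming the .pb path mirrors the .uff naming above and that reshape_4/zeros is either dead weight that can be removed or should be mapped to a custom plugin ("FillPlugin" is a hypothetical plugin name, not an existing TensorRT plugin):
import graphsurgeon as gs
import uff
# Load the frozen TensorFlow graph that the .uff was converted from
graph = gs.DynamicGraph("model/resnet_model_v1.pb")
# Locate the nodes the UFF validator rejects
fill_nodes = graph.find_nodes_by_op("Fill")
print("Fill nodes:", [n.name for n in fill_nodes])
# Option A: remove the node if nothing downstream actually needs its output
graph.remove(fill_nodes, remove_exclusive_dependencies=True)
# Option B (instead of removal): map it to a custom plugin you implement
# plugin = gs.create_plugin_node(name="reshape_4/zeros", op="FillPlugin")
# graph.collapse_namespaces({"reshape_4/zeros": plugin})
# Re-export to UFF and build the engine as before
uff.from_tensorflow(graph.as_graph_def(),
                    output_nodes=["embedding_layer/MatMul"],
                    output_filename="model/resnet_model_v1_patched.uff")
Removing a node only works when its output is genuinely unused; otherwise a plugin, or re-exporting the model without the op (e.g. with a static reshape), is the safer route.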


NameError: name 'scipy' is not defined when trying to create a model

I'm currently trying to create a model using transfer learning, but I'm getting an error
NameError: name 'scipy' is not defined
I'm following along with a video tutorial. We loaded some datasets onto the computer, and I am trying to convert them into '.json' and '.h5' files. I ran the code shown in the first part of the video to create the model. A download was supposed to start, as in the video, but instead I got an error that I can't solve.
Here is my code:
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense
from keras.applications.vgg16 import VGG16
import matplotlib.pyplot as plt
from glob import glob
from keras.utils import img_to_array
from keras.utils import load_img
train_path = "/Users/atakansever/Desktop/CNNN/fruits-360_dataset/fruits-360/Training/"
test_path = "/Users/atakansever/Desktop/CNNN/fruits-360_dataset/fruits-360/Test/"
# img = load_img(train_path + "Tangelo/0_100.jpg")
# plt.imshow(img)
# plt.axis("off")
# plt.show()
numberOfClass = len(glob(train_path + "/*"))
# print(numberOfClass)
vgg = VGG16()
# print(vgg.summary())
vgg_layer_list = vgg.layers
# print(vgg_layer_list)
model = Sequential()
for i in range(len(vgg_layer_list)-1):
    model.add(vgg_layer_list[i])
# print(model.summary())
for layers in model.layers:
    layers.trainable = False
model.add(Dense(numberOfClass, activation="softmax"))
# print(model.summary())
model.compile(loss="categorical_crossentropy", optimizer="rmsprop", metrics=["accuracy"])
# train
train_data = ImageDataGenerator().flow_from_directory(train_path, target_size=(224, 224))
test_data = ImageDataGenerator().flow_from_directory(test_path, target_size=(224, 224))
batch_size = 32
hist = model.fit_generator(train_data,
                           steps_per_epoch=1600//batch_size,
                           epochs=25,
                           validation_data=test_data,
                           validation_steps=800//batch_size)
and here is the error
atakansever@atakan-Air CNNN % pyenv shell 3.9.7
pyenv: shell integration not enabled. Run `pyenv init' for instructions.
atakansever@atakan-Air CNNN % /Users/atakansever/.pyenv/versions/3.9.7/bin/python /Users/atakansever/Desktop/CNNN/fruits.py
Metal device set to: Apple M1
systemMemory: 8.00 GB
maxCacheSize: 2.67 GB
2022-07-10 11:17:50.428036: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2022-07-10 11:17:50.428259: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
Found 67692 images belonging to 131 classes.
Found 22688 images belonging to 131 classes.
/Users/atakansever/Desktop/CNNN/fruits.py:53: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators.
hist = model.fit_generator(train_data, steps_per_epoch=1600//batch_size,epochs=25,validation_data= test_data,validation_steps=800//batch_size)
Traceback (most recent call last):
  File "/Users/atakansever/Desktop/CNNN/fruits.py", line 53, in <module>
    hist = model.fit_generator(train_data, steps_per_epoch=1600//batch_size,epochs=25,validation_data= test_data,validation_steps=800//batch_size)
  File "/Users/atakansever/.pyenv/versions/3.9.7/lib/python3.9/site-packages/keras/engine/training.py", line 2260, in fit_generator
    return self.fit(
  File "/Users/atakansever/.pyenv/versions/3.9.7/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/Users/atakansever/.pyenv/versions/3.9.7/lib/python3.9/site-packages/keras/preprocessing/image.py", line 2244, in apply_affine_transform
    if scipy is None:
NameError: name 'scipy' is not defined
Running pip install scipy or pip3 install scipy should solve the problem.
First, install the scipy package if it isn't already installed:
pip install scipy
and then add scipy to your imports:
import scipy # This is new!
from keras.preprocessing.image import ImageDataGenerator
# ... all your imports
When I clicked on the error message, it took me to the Keras source code. Comment out these lines and save the Python script:
# if scipy is None:
#     raise ImportError('Image transformations require SciPy. '
#                       'Install SciPy.')
Then it will work.
You have to:
Install scipy: pip install scipy
Restart VS Code (or your IDE), or restart the Python kernel, and rerun the code.
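If the error persists after installing, it is worth confirming that scipy landed in the same interpreter that runs the script; a quick sanity check (the pyenv path below is taken from the traceback above):
import sys
print(sys.executable)  # should be /Users/atakansever/.pyenv/versions/3.9.7/bin/python
import scipy           # fails with ModuleNotFoundError if the install went elsewhere
print(scipy.__version__)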

OpenCV: Error in loading Net from onnx file

I'm trying to load with cv.dnn.readNetFromONNX a pre-trained torch model (U2Net to be precise) saved as onnx.
But I'm receiving the error:
error: OpenCV(4.1.2) /io/opencv/modules/dnn/include/opencv2/dnn/dnn.inl.hpp:349:
error (-204:Requested object was not found) Required argument "starts" not found
into dictionary in function 'get'
This is the code to reproduce the error with Google Colab:
### get U2Net implementation ###
%cd /content
!git clone https://github.com/shreyas-bk/U-2-Net
### download pre-trained model ###
!gdown --id 1ao1ovG1Qtx4b7EoskHXmi2E9rp5CHLcZ -O /content/U-2-Net/u2net.pth
###
%cd /content/U-2-Net
### imports ###
from google.colab import files
from model import U2NET
import torch
import os
### create U2Net model from state dict ###
model_dir = '/content/U-2-Net/u2net.pth'
net = U2NET(3, 1)
net.load_state_dict(torch.load(model_dir, map_location='cpu'))
net.eval()
### pass to it a dummy input and save to onnx ###
img = torch.randn(1, 3, 320, 320, requires_grad=False)
img = img.to(torch.device('cpu'))
output_dir = os.path.join('/content/u2net.onnx')
torch.onnx.export(net, img, output_dir, opset_version=11, verbose=True)
### load the model in OpenCV ###
import cv2 as cv
net = cv.dnn.readNetFromONNX('/content/u2net.onnx')
[ OpenCV => 4.1.2, Platform => Google Colab, Torch => 1.11.0+cu113]
As @berak suggested, the issue was related to the OpenCV version (4.1.2). Updating to 4.5.5 solved the issue.
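In Colab, that update amounts to reinstalling the package and restarting the runtime before retrying readNetFromONNX; a minimal sketch (the version pin just mirrors the 4.5.5 mentioned above):
!pip install -U "opencv-python>=4.5.5"
# Restart the Colab runtime here so the new cv2 is picked up
import cv2 as cv
print(cv.__version__)  # should now report 4.5.5 or later
net = cv.dnn.readNetFromONNX('/content/u2net.onnx')
print('model loaded')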

How to save and use a Tensorflow dataset using the experimental save and load methods?

I wrote two python files create_save.py and load_use.py as shown below.
create_save.py runs fine and saves the tf dataset.
But load_use.py gives the errors shown below.
How can I fix the load_use.py errors?
create_save.py
import os
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.data.experimental import save as tf_save
ds_dir = os.path.join('./', "save_load_tfds_dir")
ds = tf.data.Dataset.range(12)
tf_save(ds, ds_dir)
load_use.py
import os
import numpy as np
import pandas as pd
import tensorflow as tf
ds_dir = os.path.join('./', "save_load_tfds_dir")
new_ds = tf.data.experimental.load(ds_dir)
for elem in new_ds:
    print(elem)
Above load_use.py program is giving following errors:
TypeError                                 Traceback (most recent call last)
----> 1 new_ds = tf.data.experimental.load(ds_dir)
TypeError: load() missing 1 required positional argument: 'element_spec'
How can I fix this error?
To load a previously saved dataset, you need to specify element_spec argument -- a type signature of the elements of the saved dataset, which can be obtained via tf.data.Dataset.element_spec. This requirement exists so that shape inference of the loaded dataset does not need to perform I/O.
import os
import tempfile
import tensorflow as tf
path = os.path.join(tempfile.gettempdir(), "saved_data")
# Save a dataset
dataset = tf.data.Dataset.range(2)
tf.data.experimental.save(dataset, path)
# Pass the element_spec of the saved data when loading
new_dataset = tf.data.experimental.load(
    path, tf.TensorSpec(shape=(), dtype=tf.int64))
for elem in new_dataset:
    print(elem)
When you are creating a tf.data.Dataset, it has the attribute element_spec which is what you should be using while loading your saved file. (Refer: Dataset doc).
In the above example, the element_spec argument in the load() method is given as per the type specification of the data being saved in the code.
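In practice you rarely have to spell the TensorSpec out by hand: capture dataset.element_spec at save time and pass it back to load(). A minimal sketch along those lines (same-process reuse; the map step is only there to show the spec tracks transformations):
import os
import tempfile
import tensorflow as tf
path = os.path.join(tempfile.gettempdir(), "saved_data_spec_demo")
dataset = tf.data.Dataset.range(5).map(lambda x: x * 2)
spec = dataset.element_spec  # TensorSpec(shape=(), dtype=tf.int64, name=None)
tf.data.experimental.save(dataset, path)
# Reuse the captured spec instead of writing tf.TensorSpec(...) manually
new_dataset = tf.data.experimental.load(path, spec)
for elem in new_dataset:
    print(elem.numpy())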
TF Data Load Documentation

Unable to import a keras application

I am trying to use a keras application in pycharm. I start my script off with the following imports:
from keras_vggface.vggface import VGGFace
from keras_vggface.utils import preprocess_input
from keras_vggface.utils import decode_predictions
Upon running this block of code, I get this error:
ImportError: You need to first `import keras` in order to use `keras_applications`. For instance, you can do:
```
import keras
from keras_applications import vgg16
```
Or, preferably, this equivalent formulation:
```
from keras import applications
```
I have tried importing the appropriate keras libraries as suggested, but the problem persists. I have also checked the keras.json file to see whether it contains the correct backend (it does).
How can I resolve this issue?
"edit for clarity"
My full imports go as follows:
from PIL import Image # for extracting image
from numpy import asarray
from numpy import expand_dims
from matplotlib import pyplot
from mtcnn.mtcnn import MTCNN # because i am too lazy to make one myself
import keras
from keras_applications import vgg16
from keras_vggface.vggface import VGGFace
from keras_vggface.utils import preprocess_input
from keras_vggface.utils import decode_predictions
Traceback:
Traceback (most recent call last):
  File "C:/Users/###/PycharmProjects/##/#.py", line 17, in <module>
    from keras_applications import vgg16
  File "C:\Users\###\anaconda3\envs\tensor\lib\site-packages\keras_applications\vgg16.py", line 17, in <module>
    backend = get_keras_submodule('backend')
  File "C:\Users\###\anaconda3\envs\tensor\lib\site-packages\keras_applications\__init__.py", line 39, in get_keras_submodule
    raise ImportError('You need to first `import keras` '
ImportError: You need to first `import keras` in order to use `keras_applications`. For instance, you can do:
```
import keras
from keras_applications import vgg16
```
Or, preferably, this equivalent formulation:
```
from keras import applications
```
Process finished with exit code 1
Are you planning to use the TensorFlow framework for executing the model? If so, I suggest:
import tensorflow as tf
from tensorflow.keras.applications.vgg16 import VGG16
Keras comes built into recent TF releases, so there is no need for an explicit keras import.
Otherwise, if you want to use standalone Keras directly, I believe the code should be:
import keras
from keras.applications.vgg16 import VGG16
vggmodel = VGG16(weights='imagenet', include_top=True)
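For completeness, a minimal inference sketch with the tf.keras variant ('face.jpg' is a placeholder path; the ImageNet weights download on first use):
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image
model = VGG16(weights='imagenet', include_top=True)
img = image.load_img('face.jpg', target_size=(224, 224))  # placeholder image
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])  # top-3 ImageNet labels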

Random Forest on Tensorflow at Google Cloud Datalab restarting kernel (Code not working)

I am using the data from the following Kaggle competition to train Random Forest on Tensorflow - https://www.kaggle.com/c/santander-product-recommendation
The code was working fine a day ago, but now whenever I run the training code for the Random Forest I get the following message (this is not a code error but a message from the Jupyter kernel):
The kernel appears to have died. It will restart automatically.
I am using the following code:
import tensorflow as tf
import numpy as np
import pandas as pd
import math
import os
from glob import glob
import google.datalab.bigquery as bq
print('Libraries Imported')
trainingdata = bq.Query('SELECT * FROM `kagglesantander.training`')
train_dataset = trainingdata.execute(output_options=bq.QueryOutput.dataframe()).result()
print('Train Data Fetched')
X = train_dataset.iloc[:,1:-1]
y = train_dataset.iloc[:,-1]
x_train = X.astype(np.float32).values
y_train = y.astype(np.float32).values
print('Data Prepared')
params = tf.contrib.tensor_forest.python.tensor_forest.ForestHParams(
    num_classes=1, num_features=369, num_trees=10).fill()
print("Params =")
print(vars(params))
# Remove previous checkpoints so that we can re-run this step if necessary.
for f in glob("./ModelTrain/*"):
    os.remove(f)
classifier = tf.contrib.tensor_forest.client.random_forest.TensorForestEstimator(
    params, model_dir="./ModelTrain/")
classifier.fit(x=x_train, y=y_train)
print('Forest Trained')
The error is happening at the line:
classifier.fit(x=x_train, y=y_train)
since the code ran fine when I removed that line.
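A kernel that dies silently inside fit() is often being killed for running out of memory rather than raising a Python error. One way to test that hypothesis (my own suggested diagnostic, not part of the original post) is to train on a small subsample first:
# If a small subsample trains fine but the full table kills the kernel,
# memory pressure is the likely culprit.
sample = train_dataset.sample(n=10000, random_state=42)
x_small = sample.iloc[:, 1:-1].astype(np.float32).values
y_small = sample.iloc[:, -1].astype(np.float32).values
classifier.fit(x=x_small, y=y_small)
print('Subsample trained - consider more memory or batching for the full set')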
