I was following a tutorial about object detection, and it gave me this code:
from imageai.Detection import ObjectDetection
import os
execution_path = os.getcwd()
detector = ObjectDetection()
detector.setModelTypeAsRetinaNet()
detector.setModelPath( os.path.join(execution_path , "resnet50_coco_best_v2.1.0.h5"))
detector.loadModel()
detections = detector.detectObjectsFromImage(input_image=os.path.join(execution_path , "image.jpg"), output_image_path=os.path.join(execution_path , "imagenew.jpg"))
for eachObject in detections:
    print(eachObject["name"], " : ", eachObject["percentage_probability"])
The problem is, it kept giving me an error like this:
ImportError: cannot import name 'BatchNormalization' from 'keras.layers.normalization'
I searched around, and I think it's to do with my TensorFlow version, but I never found a solution that worked.
You have to import BatchNormalization from tensorflow.keras.layers:
import tensorflow as tf
from tensorflow.keras.layers import BatchNormalization
Hope the documentation helps you further.
Happy learning!
When you get the error message, it should include the path to an __init__.py file. Open that file in your IDE and paste this line; it somehow worked for me.
from keras.layers.normalization.batch_normalization import BatchNormalization
I am working with an API for the very first time in Python, and while doing so I got the following error: ValueError: cannot mix country/category param with sources param.
What should I do to solve this error?
This is the code:
import config
from newsapi import NewsApiClient
newsapi = NewsApiClient(api_key=config.api_key)
top_headlines = newsapi.get_top_headlines(q='Neuralink', sources='the-verge', category='technology', language='en', country='us')
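The NewsAPI top-headlines endpoint does not allow sources to be combined with country or category, which is exactly what the ValueError says. A minimal sketch of the two valid call shapes, reusing the query from the question (pick one, not both):
import config
from newsapi import NewsApiClient

newsapi = NewsApiClient(api_key=config.api_key)

# Option 1: filter by country/category, without sources
top_headlines = newsapi.get_top_headlines(
    q='Neuralink', category='technology', language='en', country='us')

# Option 2: filter by sources, without country/category
top_headlines = newsapi.get_top_headlines(
    q='Neuralink', sources='the-verge', language='en')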
I'm trying to load a trained model in FastAPI and ping it from a notebook (to mimic a frontend call), but I keep getting an error saying the model file doesn't exist. I'm very new to this; any advice is welcome.
Training notebook:
model.save('/data/model')
Downloaded the model and put the whole folder in the FastAPI folder.
File structure in FastAPI:
>> API
   >> __pycache__
   >> model
      >> assets
      >> variables
      keras_metadata.pb
      saved_model.pb
   >> pyapi-env
   api.py
api.py
from fastapi import FastAPI
from tensorflow.keras.models import load_model
...
@app.get("/predict")
def predict(test):
    ...
    model = load_model("./model/saved_model.pb")
    ...
Testing notebook:
import requests
url = "http://localhost:8000/predict"
params = {
    "test": "testing",
}
res = requests.get(url, params=params)
res.json()
Error: OSError: SavedModel file does not exist at: ./model/saved_model.pb/{saved_model.pbtxt|saved_model.pb}
I had the same issue and this worked for me:
model = load_model("./model/")
load_model expects the path to the SavedModel directory (the folder containing saved_model.pb), not the path to the .pb file itself. Passing "./model/saved_model.pb" makes TensorFlow treat that file name as a directory and look for saved_model.pb inside it, hence the error.
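For context, a minimal sketch of how api.py might look with the corrected path, reusing the route and parameter names from the question; loading the model once at module import time (instead of inside the handler) is an extra suggestion, not something from the original post:
from fastapi import FastAPI
from tensorflow.keras.models import load_model

app = FastAPI()

# Point load_model at the SavedModel directory, not at saved_model.pb itself
model = load_model("./model/")

@app.get("/predict")
def predict(test: str):
    # ... preprocess `test`, run model.predict(...), and return the result
    return {"received": test}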
I'm trying to run a ResNet model exported from TensorFlow (.pb) with a TensorRT engine (.trt). I converted the .pb to .uff and am now trying to build the engine with this code:
import tensorrt.legacy as trt
import tensorflow as tf
import pycuda.driver as cuda
import pycuda.autoinit
import numpy as np
import cv2
from tensorrt.legacy.parsers import uffparser
import graphsurgeon as gs
# Build TensorRT engine
uff_model_path = "model/resnet_model_v1.uff"
engine_path = "model/resnet_model_v1.engine"
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(TRT_LOGGER, '')
trt_runtime = trt.Runtime(TRT_LOGGER)
with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
    builder.max_workspace_size = 1 << 30
    builder.fp16_mode = True
    builder.max_batch_size = 1
    parser.register_input("input_image", (3, 150, 150))
    parser.register_output("embedding_layer/MatMul")
    parser.parse(uff_model_path, network)
    print("Building TensorRT engine, this may take a few minutes...")
    trt_engine = builder.build_cuda_engine(network)
But it fails with this error:
[TensorRT] ERROR: UffParser: Validator error: reshape_4/zeros: Unsupported operation _Fill
Building TensorRT engine, this may take a few minutes...
[TensorRT] ERROR: Network must have at least one output
[TensorRT] ERROR: Network validation failed.
QUESTION
What do I need to do to handle an unsupported operation in TensorRT?
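One common workaround (a sketch only, not verified on this model) is to patch the frozen TensorFlow graph with graphsurgeon before converting to UFF, either removing the node the parser rejects or collapsing it into a TensorRT plugin node. The frozen-graph file name, the plugin op name, and the namespace below are placeholders, not taken from the question, and the plugin itself would still have to be implemented and registered with TensorRT:
import graphsurgeon as gs
import uff

# Load the frozen TensorFlow graph (placeholder file name)
dynamic_graph = gs.DynamicGraph("model/resnet_frozen.pb")

# List the nodes that use the op the UFF parser rejected
fill_nodes = dynamic_graph.find_nodes_by_op("Fill")
print([node.name for node in fill_nodes])

# Collapse the offending namespace into a plugin node.
# "MyFillPlugin" is a placeholder: a matching TensorRT plugin must exist.
plugin_node = gs.create_plugin_node(name="reshape_4_plugin", op="MyFillPlugin")
dynamic_graph.collapse_namespaces({"reshape_4": plugin_node})

# Re-export the patched graph to UFF
uff.from_tensorflow(dynamic_graph.as_graph_def(),
                    output_nodes=["embedding_layer/MatMul"],
                    output_filename="model/resnet_model_v1_patched.uff")
Also note that the later errors ("Network must have at least one output" / "Network validation failed") are just knock-on effects of parser.parse() returning False, so checking its return value before calling build_cuda_engine makes the real failure easier to spot.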
I have deployed a PyTorch model on AWS with SageMaker, and I'm trying to send a request to test the service. However, I get a very vague error message saying "No module named 'sagemaker'". I have searched online but cannot find posts about a similar message.
My client code:
import numpy as np
from sagemaker.pytorch.model import PyTorchPredictor
ENDPOINT = '<endpoint name>'
predictor = PyTorchPredictor(ENDPOINT)
predictor.predict(np.random.random_sample([1, 3, 224, 224]).tobytes())
Detailed error message:
Traceback (most recent call last):
File "client.py", line 7, in <module>
predictor.predict(np.random.random_sample([1, 3, 224, 224]).tobytes())
File "/Users/jiashenc/Env/py3/lib/python3.7/site-packages/sagemaker/predictor.py", line 110, in predict
response = self.sagemaker_session.sagemaker_runtime_client.invoke_endpoint(**request_args)
File "/Users/jiashenc/Env/py3/lib/python3.7/site-packages/botocore/client.py", line 276, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/jiashenc/Env/py3/lib/python3.7/site-packages/botocore/client.py", line 586, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received server error (500) from model with message "No module named 'sagemaker'". See https://us-east-2.console.aws.amazon.com/cloudwatch/home?region=us-east-2#logEventViewer:group=/aws/sagemaker/Endpoints/<endpoint name> in account xxxxxxxxxxxxxx for more information.
This bug happened because I merged my serving script and my deploy script together; see below:
import os
import torch
import numpy as np
from sagemaker.pytorch.model import PyTorchModel
from torch import cuda
from torchvision.models import resnet50
def model_fn(model_dir):
    device = torch.device('cuda' if cuda.is_available() else 'cpu')
    model = resnet50()
    with open(os.path.join(model_dir, 'model.pth'), 'rb') as f:
        model.load_state_dict(torch.load(f, map_location=device))
    return model.to(device)

def predict_fn(input_data, model):
    device = torch.device('cuda' if cuda.is_available() else 'cpu')
    model.eval()
    with torch.no_grad():
        return model(input_data.to(device))

if __name__ == '__main__':
    pytorch_model = PyTorchModel(model_data='s3://<bucket name>/resnet50/model.tar.gz',
                                 entry_point='serve.py', role='jiashenC-sagemaker',
                                 py_version='py3', framework_version='1.3.1')
    predictor = pytorch_model.deploy(instance_type='ml.t2.medium', initial_instance_count=1)
    print(predictor.predict(np.random.random_sample([1, 3, 224, 224]).astype(np.float32)))
The root cause is the fourth line in my code: it tries to import sagemaker, which is not available inside the serving container.
(edit 2/9/2020 with extra code snippets)
Your serving code tries to use the sagemaker module internally. The sagemaker module (also called the SageMaker Python SDK, one of the numerous orchestration SDKs for SageMaker) is not designed to be used inside model containers, but rather outside of them, to orchestrate their activity (training, deployment, Bayesian tuning, etc.). In your specific example, you shouldn't include the deployment and model-calling code in the server code, as those are actions conducted from outside the server to orchestrate its lifecycle and interact with it. For model deployment with the SageMaker PyTorch container, your entry-point script just needs to contain the required model_fn function for model deserialization, and optionally an input_fn, predict_fn and output_fn, respectively for pre-processing, inference and post-processing (detailed in the documentation). This logic is beautiful :): you don't need anything else to deploy a production-ready deep learning server! (MMS in the case of PyTorch and MXNet, Flask+Gunicorn in the case of sklearn.)
In summary, this is how your code should be split:
An entry_point script serve.py that contains model serving code and looks like this:
import os
import numpy as np
import torch
from torch import cuda
from torchvision.models import resnet50
def model_fn(model_dir):
    # TODO instantiate a model from its artifact stored in model_dir
    return model

def predict_fn(input_data, model):
    # TODO apply model to the input_data, return result of interest
    return result
and some orchestration code to instantiate a SageMaker Model object, deploy it to a server, and query it. This is run from the orchestration runtime of your choice, which could be a SageMaker notebook, your laptop, an AWS Lambda function, an Apache Airflow operator, etc., and with the SDK of your choice; you don't need to use Python for this.
import numpy as np
from sagemaker.pytorch.model import PyTorchModel
pytorch_model = PyTorchModel(
    model_data='s3://<bucket name>/resnet50/model.tar.gz',
    entry_point='serve.py',
    role='jiashenC-sagemaker',
    py_version='py3',
    framework_version='1.3.1')
predictor = pytorch_model.deploy(instance_type='ml.t2.medium', initial_instance_count=1)
print(predictor.predict(np.random.random_sample([1, 3, 224, 224]).astype(np.float32)))