FastAPI POST method ends up giving "Method not allowed" - python

I am trying to deploy an image classification model on a server using FastAPI, and I have two issues related to my code.
The first issue is that in the original code (without FastAPI), I would read an image using OpenCV and then convert it from BGR to RGB. Not doing this conversion gives me inaccurate results at test time.
Using FastAPI, the image is being read as follows:
def read_image(payload):
    stream = BytesIO(payload)
    image = np.asarray(bytearray(stream.read()), dtype="uint8")
    image = cv2.imdecode(image, cv2.IMREAD_COLOR)
    if isinstance(image, np.ndarray):
        img = Image.fromarray(image)
    return img
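Presumably I need to restore that conversion here before handing the array to PIL, since cv2.imdecode returns the pixels in BGR order. A sketch of what I have in mind (the cv2.cvtColor call mirrors my original, non-FastAPI code):
def read_image(payload):
    stream = BytesIO(payload)
    image = np.asarray(bytearray(stream.read()), dtype="uint8")
    image = cv2.imdecode(image, cv2.IMREAD_COLOR)  # decoded in BGR order
    if isinstance(image, np.ndarray):
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # BGR -> RGB, as in the original code
        img = Image.fromarray(image)
    return img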
The second issue I am facing is with the POST method. When I run the server and access the URL
http://127.0.0.1:9999/, the GET method runs and prints the following message:
Welcome to classification server
However, when I execute the POST method shown below:
#app.post("/classify/")
async def classify_image(file:UploadFile=File(...)):
#return "File Uploaded."
image_byte=await file.read()
return classify(image_byte)
and go to the link http://127.0.0.1:9999/classify/, I end up receiving the error:
method not allowed
Any ideas why this is happening, and what can be done to fix the error?
The full code is listed below. If there are any errors that I am missing, please let me know. I am new to FastAPI, so I am really confused about this.
from fastapi import FastAPI, UploadFile, File
import uvicorn
import torch
import torchvision
from torchvision import transforms as T
from PIL import Image
from build_effnet import build_model
import torch.nn.functional as F
import io
from io import BytesIO
import numpy as np
import cv2

app = FastAPI()

class_name = ['F_AF', 'F_AS', 'F_CA', 'F_LA', 'M_AF', 'M_AS', 'M_CA', 'M_LA']
idx_to_class = {i: j for i, j in enumerate(class_name)}
class_to_idx = {value: key for key, value in idx_to_class.items()}

test_transform = T.Compose([
    # T.Resize(size=(224,224)), # Resizing the image to be 224 by 224
    # T.RandomRotation(degrees=(-20,+20)), # No need for validation
    T.ToTensor(),  # convert from the (height, width, channel) layout to PyTorch's (channel, height, width) convention
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])  # normalize with the ImageNet means and std's, one per channel
])

# Load model
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = build_model()
model.load_state_dict(torch.load(
    'CharacterClass_effnet_SGD.pt', map_location='cpu'))
model.eval()
model.to(device)

def read_image(payload):
    stream = BytesIO(payload)
    image = np.asarray(bytearray(stream.read()), dtype="uint8")
    image = cv2.imdecode(image, cv2.IMREAD_COLOR)
    if isinstance(image, np.ndarray):
        img = Image.fromarray(image)
    return img

def classify(payload):
    img = read_image(payload)
    img = test_transform(img)
    with torch.no_grad():
        ps = model(img.unsqueeze(0))
        ps = F.softmax(ps, dim=1)
        topk, topclass = ps.topk(1, dim=1)
    x = topclass.view(-1).cpu().numpy()  # convert the top-class indices to a NumPy array
    return idx_to_class[x[0]]

@app.get("/")
def get():
    return "Welcome to classification server."

@app.post("/classify/")
async def classify_image(file: UploadFile = File(...)):
    # return "File Uploaded."
    image_byte = await file.read()
    return classify(image_byte)

Your code has defined the classify route for POST requests only. Your browser performs a GET request when you navigate to the URL, hence the "Method Not Allowed" error.
If you expect the browser to work, use @app.get("/classify/"). However, you'll then need a different way to provide the file argument, such as a query parameter holding a file path (not recommended, for security reasons).
If you want POST requests to work, test your code with curl or Postman, as in the sketch below.
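For example, a minimal sketch using the requests library (test_image.jpg is a placeholder file name; the port comes from your URL):
import requests

# Send the image as multipart/form-data; the field name "file"
# must match the UploadFile parameter of classify_image.
with open("test_image.jpg", "rb") as f:
    response = requests.post(
        "http://127.0.0.1:9999/classify/",
        files={"file": ("test_image.jpg", f, "image/jpeg")},
    )
print(response.status_code, response.json())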

Related

Sending images as if they were embedded, with FastAPI [duplicate]

Using the Python module FastAPI, I can't figure out how to return an image. In Flask I would do something like this:
@app.route("/vector_image", methods=["POST"])
def image_endpoint():
    # img = ... # Create the image here
    return Response(img, mimetype="image/png")
What's the corresponding call in this module?
If you already have the bytes of the image in memory
Return a fastapi.responses.Response with your custom content and media_type.
You'll also need to muck with the endpoint decorator to get FastAPI to put the correct media type in the OpenAPI specification.
@app.get(
    "/image",
    # Set what the media type will be in the autogenerated OpenAPI specification.
    # fastapi.tiangolo.com/advanced/additional-responses/#additional-media-types-for-the-main-response
    responses={
        200: {
            "content": {"image/png": {}}
        }
    },
    # Prevent FastAPI from adding "application/json" as an additional
    # response media type in the autogenerated OpenAPI specification.
    # https://github.com/tiangolo/fastapi/issues/3258
    response_class=Response,
)
def get_image():
    image_bytes: bytes = generate_cat_picture()
    # media_type here sets the media type of the actual response sent to the client.
    return Response(content=image_bytes, media_type="image/png")
See the Response documentation.
If your image exists only on the filesystem
Return a fastapi.responses.FileResponse.
See the FileResponse documentation.
Be careful with StreamingResponse
Other answers suggest StreamingResponse. StreamingResponse is harder to use correctly, so I don't recommend it unless you're sure you can't use Response or FileResponse.
In particular, code like this is pointless. It will not "stream" the image in any useful way.
#app.get("/image")
def get_image()
image_bytes: bytes = generate_cat_picture()
# ❌ Don't do this.
image_stream = io.BytesIO(image_bytes)
return StreamingResponse(content=image_stream, media_type="image/png")
First of all, StreamingResponse(content=my_iterable) streams by iterating over the chunks provided by my_iterable. But when that iterable is a BytesIO, the chunks will be \n-terminated lines, which won't make sense for a binary image.
And even if the chunk divisions made sense, chunking is pointless here because we had the whole image_bytes bytes object available from the start. We may as well have just passed the whole thing into a Response from the beginning. We don't gain anything by holding data back from FastAPI.
Second, StreamingResponse corresponds to HTTP chunked transfer encoding. (This might depend on your ASGI server, but it's the case for Uvicorn, at least.) And this isn't a good use case for chunked transfer encoding.
Chunked transfer encoding makes sense when you don't know the size of your output ahead of time, and you don't want to wait to collect it all before you start sending it to the client. That can apply to stuff like serving the results of slow database queries (see the sketch after the links below), but it doesn't generally apply to serving images.
Unnecessary chunked transfer encoding can be harmful. For example, it means clients can't show progress bars when they're downloading the file. See:
Content-Length header versus chunked encoding
Is it a good idea to use Transfer-Encoding: chunked on static files?
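For contrast, a minimal sketch of a case where StreamingResponse is appropriate: the body is produced incrementally and its total size isn't known up front (fetch_rows here is a hypothetical slow data source):
from fastapi.responses import StreamingResponse

@app.get("/report")
def get_report():
    def row_generator():
        # Each yielded chunk is sent as soon as it's available;
        # the total size isn't known until the source is exhausted.
        for row in fetch_rows():  # hypothetical slow database query
            yield (",".join(map(str, row)) + "\n").encode()
    return StreamingResponse(row_generator(), media_type="text/csv")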
I had a similar issue, but with a cv2 image. This may be useful for others. It uses StreamingResponse.
import io
from starlette.responses import StreamingResponse

app = FastAPI()

@app.post("/vector_image")
def image_endpoint(*, vector):
    # Returns a cv2 image array from the document vector
    cv2img = my_function(vector)
    res, im_png = cv2.imencode(".png", cv2img)
    return StreamingResponse(io.BytesIO(im_png.tobytes()), media_type="image/png")
All the other answers are on point, but now it's so easy to return an image:
from fastapi.responses import FileResponse

@app.get("/")
async def main():
    return FileResponse("your_image.jpeg")
It's not properly documented yet, but you can use anything from Starlette.
So, you can use a FileResponse if it's a file on disk with a path: https://www.starlette.io/responses/#fileresponse
If it's a file-like object created in your path operation, in the next stable release of Starlette (used internally by FastAPI) you will also be able to return it in a StreamingResponse.
Thanks to @biophetik's answer, with an important reminder that caused me confusion: If you're using BytesIO, especially with PIL/skimage, make sure to also do img.seek(0) before returning!
#app.get("/generate")
def generate(data: str):
img = generate_image(data)
print('img=%s' % (img.shape,))
buf = BytesIO()
imsave(buf, img, format='JPEG', quality=100)
buf.seek(0) # important here!
return StreamingResponse(buf, media_type="image/jpeg",
headers={'Content-Disposition': 'inline; filename="%s.jpg"' %(data,)})
The answer from @SebastiánRamírez pointed me in the right direction, but for those looking to solve the problem, I needed a few lines of code to make it work. I needed to import FileResponse from starlette (not fastAPI?), add CORS support, and return from a temporary file. Perhaps there is a better way, but I couldn't get streaming to work:
from starlette.responses import FileResponse
from starlette.middleware.cors import CORSMiddleware
import tempfile

app = FastAPI()
app.add_middleware(
    CORSMiddleware, allow_origins=["*"], allow_methods=["*"], allow_headers=["*"]
)

@app.post("/vector_image")
def image_endpoint(*, vector):
    # Returns a raw PNG from the document vector (define here)
    img = my_function(vector)
    with tempfile.NamedTemporaryFile(mode="w+b", suffix=".png", delete=False) as FOUT:
        FOUT.write(img)
    return FileResponse(FOUT.name, media_type="image/png")
My needs weren't quite met by the above because my image was built with PIL. My FastAPI endpoint takes an image file name, reads it as a PIL image, and generates a thumbnail JPEG in memory that can be used in HTML like:
<img src="http://localhost:8000/images/thumbnail/bigimage.jpg">
import io
from PIL import Image
from fastapi.responses import StreamingResponse

@app.get('/images/thumbnail/{filename}',
         response_description="Returns a thumbnail image from a larger image",
         response_class=StreamingResponse,
         responses={200: {"description": "an image", "content": {"image/jpeg": {}}}})
def thumbnail_image(filename: str):
    # read the high-res image file
    image = Image.open(filename)
    # create a thumbnail image
    image.thumbnail((100, 100))
    imgio = io.BytesIO()
    image.save(imgio, 'JPEG')
    imgio.seek(0)
    return StreamingResponse(content=imgio, media_type="image/jpeg")
You can use a FileResponse if it's a file on disk with a path:
import os
from fastapi import FastAPI
from fastapi.responses import FileResponse

app = FastAPI()
path = "/path/to/files"

@app.get("/")
def index():
    return {"Hello": "World"}

@app.get("/vector_image", responses={200: {"description": "A picture of a vector image.", "content": {"image/jpeg": {"example": "No example available. Just imagine a picture of a vector image."}}}})
def image_endpoint():
    file_path = os.path.join(path, "files/vector_image.jpg")
    if os.path.exists(file_path):
        return FileResponse(file_path, media_type="image/jpeg", filename="vector_image_for_you.jpg")
    return {"error": "File not found!"}
If, when following the top answer, you attempt to return a BytesIO object like this in your Response:
buffer = BytesIO(my_data)
# Return file
return Response(content=buffer, media_type="image/jpg")
you may receive an error that looks like this (as described in this comment):
AttributeError: '_io.BytesIO' object has no attribute 'encode'
This is caused by the render function in Response, which explicitly checks for a bytes type here. Since a BytesIO is not bytes, it attempts to encode the value and fails.
The solution is to get the bytes value from the BytesIO object with getvalue():
buffer = BytesIO(my_data)
# Return file
return Response(content=buffer.getvalue(), media_type="image/jpg")
You can do something very similar in FastAPI:
from fastapi import FastAPI, Response

app = FastAPI()

@app.post("/vector_image/")
async def image_endpoint():
    # img = ... # Create the image here
    return Response(content=img, media_type="image/png")

Reading Data in Vertex AI Pipelines

This is my first time using Google's Vertex AI Pipelines. I checked this codelab, as well as this post and this post, on top of some links derived from the official documentation. I decided to put all that knowledge to work in a toy example: I planned to build a pipeline consisting of two components: "get-data" (which reads some .csv file stored in Cloud Storage) and "report-data" (which basically returns the shape of the .csv data read in the previous component). Furthermore, I was careful to include some suggestions provided in this forum. The code I currently have goes as follows:
from kfp.v2 import compiler
from kfp.v2.dsl import pipeline, component, Dataset, Input, Output
from google.cloud import aiplatform

# Components section
@component(
    packages_to_install=[
        "google-cloud-storage",
        "pandas",
    ],
    base_image="python:3.9",
    output_component_file="get_data.yaml"
)
def get_data(
    bucket: str,
    url: str,
    dataset: Output[Dataset],
):
    import pandas as pd
    from google.cloud import storage
    storage_client = storage.Client("my-project")
    bucket = storage_client.get_bucket(bucket)
    blob = bucket.blob(url)
    blob.download_to_filename('localdf.csv')
    # path = "gs://my-bucket/program_grouping_data.zip"
    df = pd.read_csv('localdf.csv', compression='zip')
    df['new_skills'] = df['new_skills'].apply(ast.literal_eval)
    df.to_csv(dataset.path + ".csv", index=False, encoding='utf-8-sig')

@component(
    packages_to_install=["pandas"],
    base_image="python:3.9",
    output_component_file="report_data.yaml"
)
def report_data(
    inputd: Input[Dataset],
):
    import pandas as pd
    df = pd.read_csv(inputd.path)
    return df.shape

# Pipeline section
@pipeline(
    # Default pipeline root. You can override it when submitting the pipeline.
    pipeline_root=PIPELINE_ROOT,
    # A name for the pipeline.
    name="my-pipeline",
)
def my_pipeline(
    url: str = "test_vertex/pipeline_root/program_grouping_data.zip",
    bucket: str = "my-bucket"
):
    dataset_task = get_data(bucket, url)
    dimensions = report_data(
        dataset_task.output
    )

# Compilation section
compiler.Compiler().compile(
    pipeline_func=my_pipeline, package_path="pipeline_job.json"
)

# Running and submitting job
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
run1 = aiplatform.PipelineJob(
    display_name="my-pipeline",
    template_path="pipeline_job.json",
    job_id="mlmd-pipeline-small-{0}".format(TIMESTAMP),
    parameter_values={"url": "test_vertex/pipeline_root/program_grouping_data.zip", "bucket": "my-bucket"},
    enable_caching=True,
)
run1.submit()
I was happy to see that the pipeline compiled with no errors, and I managed to submit the job. However, "my happiness lasted short": when I went to Vertex AI Pipelines, I stumbled upon the following error:
The DAG failed because some tasks failed. The failed tasks are: [get-data].; Job (project_id = my-project, job_id = 4290278978419163136) is failed due to the above error.; Failed to handle the job: {project_number = xxxxxxxx, job_id = 4290278978419163136}
I did not find any related info on the web, nor could I find any log or anything similar, and I feel a bit overwhelmed that the solution to this (seemingly) easy example is still eluding me.
Quite obviously, I don't know what I am doing wrong, or where. Any suggestions?
With some suggestions provided in the comments, I think I managed to make my demo pipeline work. I will first include the updated code:
from kfp.v2 import compiler
from kfp.v2.dsl import pipeline, component, Dataset, Input, Output
from datetime import datetime
from google.cloud import aiplatform
from typing import NamedTuple

# Importing 'COMPONENTS' of the 'PIPELINE'
@component(
    packages_to_install=[
        "google-cloud-storage",
        "pandas",
    ],
    base_image="python:3.9",
    output_component_file="get_data.yaml"
)
def get_data(
    bucket: str,
    url: str,
    dataset: Output[Dataset],
):
    """Reads a csv file, from some location in Cloud Storage"""
    import ast
    import pandas as pd
    from google.cloud import storage
    # 'Pulling' demo .csv data from a known location in GCS
    storage_client = storage.Client("my-project")
    bucket = storage_client.get_bucket(bucket)
    blob = bucket.blob(url)
    blob.download_to_filename('localdf.csv')
    # Reading the pulled demo .csv data
    df = pd.read_csv('localdf.csv', compression='zip')
    df['new_skills'] = df['new_skills'].apply(ast.literal_eval)
    df.to_csv(dataset.path + ".csv", index=False, encoding='utf-8-sig')

@component(
    packages_to_install=["pandas"],
    base_image="python:3.9",
    output_component_file="report_data.yaml"
)
def report_data(
    inputd: Input[Dataset],
) -> NamedTuple("output", [("rows", int), ("columns", int)]):
    """From a passed csv file existing in Cloud Storage, returns its dimensions"""
    import pandas as pd
    df = pd.read_csv(inputd.path + ".csv")
    return df.shape

# Building the 'PIPELINE'
@pipeline(
    # i.e. in my case: PIPELINE_ROOT = 'gs://my-bucket/test_vertex/pipeline_root/'
    # Can be overridden when submitting the pipeline
    pipeline_root=PIPELINE_ROOT,
    name="readcsv-pipeline",  # Your own naming for the pipeline.
)
def my_pipeline(
    url: str = "test_vertex/pipeline_root/program_grouping_data.zip",
    bucket: str = "my-bucket"
):
    dataset_task = get_data(bucket, url)
    dimensions = report_data(
        dataset_task.output
    )

# Compiling the 'PIPELINE'
compiler.Compiler().compile(
    pipeline_func=my_pipeline, package_path="pipeline_job.json"
)

# Running the 'PIPELINE'
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
run1 = aiplatform.PipelineJob(
    display_name="my-pipeline",
    template_path="pipeline_job.json",
    job_id="mlmd-pipeline-small-{0}".format(TIMESTAMP),
    parameter_values={
        "url": "test_vertex/pipeline_root/program_grouping_data.zip",
        "bucket": "my-bucket"
    },
    enable_caching=True,
)

# Submitting the 'PIPELINE'
run1.submit()
Now I will add some complementary comments, which, in sum, managed to solve my problem:
First, having the "Logs Viewer" role (roles/logging.viewer) enabled for your user will greatly help you troubleshoot any existing error in your pipeline (note: that role worked for me, however you might want to look for a better-matching role for your own purposes here). Those errors appear as "Logs", which can be accessed by clicking the corresponding button.
NOTE: When the "Logs" are displayed, it might be helpful to carefully check each log (close to the time when you created your pipeline), as generally each of them corresponds with a single warning or error line.
Second, the output of my pipeline was a tuple. In my original approach, I just returned the plain tuple, but it is advised to return a NamedTuple instead. In general, if you need to input / output one or more "small values" (int or str, for any reason), pick a NamedTuple to do so, as in the sketch below.
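For instance, a minimal sketch of the pattern (a toy component; the names are mine):
from typing import NamedTuple

@component(base_image="python:3.9")
def summarize(text: str) -> NamedTuple("output", [("length", int), ("head", str)]):
    # Small int/str outputs returned as a NamedTuple become
    # individually typed outputs of the component.
    return (len(text), text[:10])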
Third, when the connection between your pipeline components is Input[Dataset] or Output[Dataset], adding the file extension is needed (and quite easy to forget). Take for instance the output of the get_data component, and notice how the data is recorded by specifically adding the file extension, i.e. dataset.path + ".csv".
Of course, this is a very tiny example, and projects can easily scale to huge ones; however, as some sort of "Hello Vertex AI Pipelines", it will work well.
Thank you.
Thanks for your writeup. Very helpful! I had the same error, but it turned out to be for a different reason, so I am noting it here...
In my pipeline definition step I have the following parameters...
def my_pipeline(bq_source_project: str = BQ_SOURCE_PROJECT,
                bq_source_dataset: str = BQ_SOURCE_DATASET,
                bq_source_table: str = BQ_SOURCE_TABLE,
                output_data_path: str = "crime_data.csv"):
My error was that when I ran the pipeline, I did not enter these same parameters. Below is the fixed version...
job = pipeline_jobs.PipelineJob(
    project=PROJECT_ID,
    location=LOCATION,
    display_name=PIPELINE_NAME,
    job_id=JOB_ID,
    template_path=FILENAME,
    pipeline_root=PIPELINE_ROOT,
    parameter_values={'bq_source_project': BQ_SOURCE_PROJECT,
                      'bq_source_dataset': BQ_SOURCE_DATASET,
                      'bq_source_table': BQ_SOURCE_TABLE}
)

Tensorflow Serving keeps returning the same output

So, I'm following this tutorial: https://www.youtube.com/watch?v=t6NI0u_lgNo&t=1826s,
and right after the TensorFlow Serving part I have been testing my FastAPI code, which looks like this:
from fastapi import FastAPI, File, UploadFile
from fastapi.middleware.cors import CORSMiddleware
import uvicorn
import numpy as np
from io import BytesIO
from PIL import Image
import tensorflow as tf
import os
import requests

os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

app = FastAPI()

endpoint = "http://localhost:8501/v1/models/plant_model:predict"

CLASS_NAMES = ['Potato___Early_blight',
               'Potato___Late_blight',
               'Potato___healthy',
               'Tomato_Early_blight',
               'Tomato_Late_blight',
               'Tomato_healthy']

@app.get("/ping")
async def ping():
    return "Hello, I am alive"

def read_file_as_image(data) -> np.ndarray:
    image = np.array(Image.open(BytesIO(data)))
    return image

@app.post("/predict")
async def predict(
    file: UploadFile = File(...)
):
    image = read_file_as_image(await file.read())
    img_batch = np.expand_dims(image, 0)
    json_data = {
        "instances": img_batch.tolist()
    }
    response = requests.post(endpoint, json=json_data)
    prediction = np.array(response.json()["predictions"][0])
    predicted_class = CLASS_NAMES[np.argmax(prediction[0])]
    confidence = np.max(prediction[0])
    return {
        'class': predicted_class,
        'confidence': float(confidence)
    }

if __name__ == "__main__":
    uvicorn.run(app, host='localhost', port=8000)
By the way, I'm using Ubuntu 20.04, and I'm passing a 255x255 leaf image to it (my model is made to classify different kinds of diseases for different kinds of vegetable leaves).
But, for some reason, it always gives me the same false output:
"class": "Potato___Early_blight",
"confidence": 0.374938548
}
I also tried it with another leaf image, but it's still the same class, just with a different confidence:
{
    "class": "Potato___Early_blight",
    "confidence": 1.21042137e-06
}
I can't post images here because my rank is too low
Here is the link to the Google Colab notebook I made for the model: https://colab.research.google.com/drive/1i2v_RbZ8lI-e0joE-qBxym6_6xF5rR0g?usp=sharing
So, what am I doing wrong? I have checked other answers, but they go into the specifics of the code instead of giving a general answer.
There's no issue at all in getting a different confidence for different leaf images. The images in each category are different, and the model reports its confidence accordingly.

How to write a sagemaker tensorflow input_handler() that returns a numpy array?

I am posting this intentionally with a caption similar to this question:
How to correctly write a sagemaker tensorflow input_handler() that returns a numpy array?
because the suggested solution doesn't work for me and I have the same problem: my model expects a numpy array, but I have found no way of giving it to the model.
My input handler in inference.py looks like this:
def input_handler(data, context):
    if context.request_content_type == 'application/json':
        decoded_data = data.read().decode('utf-8')
        numpyArr = np.array(decoded_data)
        return json.dumps({"inputs": numpyArr.tolist()})
I get the error message: Type: Invalid argument: JSON Value: "..." String is not of expected type: float
Strangely, it works without an input handler when I invoke it from a boto3 client:
data = np.load("testValues.npy")
payload = json.dumps(data.tolist())
response = client.invoke_endpoint(EndpointName=endpoint_name,
                                  ContentType='application/json',
                                  Body=payload)
result = json.loads(response['Body'].read().decode())
Sagemaker version: 2.16.3.post0
TensorFlow ModelServer: 2.2.0-rc2+dev.sha.no_git
TensorFlow Library: 2.2.0
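For reference, here is a sketch of what I would expect a working handler to look like, assuming the request body is the JSON list produced by json.dumps(data.tolist()) above (unverified):
import json
import numpy as np

def input_handler(data, context):
    if context.request_content_type == 'application/json':
        decoded_data = data.read().decode('utf-8')
        # Parse the JSON body into a Python list first; np.array() on the
        # raw string yields a 0-d string array, and tolist() then hands
        # TF Serving a string where it expects floats.
        parsed = json.loads(decoded_data)
        numpy_arr = np.array(parsed, dtype=np.float32)
        return json.dumps({"inputs": numpy_arr.tolist()})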

How to send a tf.example into a TensorFlow Serving gRPC predict request

I have data in tf.Example form and am attempting to make requests in predict form (using gRPC) to a saved model. I am unable to identify the method call to effect this.
I am starting with the well-known Automobile pricing DNN regression model (https://github.com/tensorflow/models/blob/master/samples/cookbook/regression/dnn_regression.py), which I have already exported and mounted via the TF Serving Docker container.
import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

stub = prediction_service_pb2_grpc.PredictionServiceStub(grpc.insecure_channel("localhost:8500"))

tf_ex = tf.train.Example(
    features=tf.train.Features(
        feature={
            'curb-weight': tf.train.Feature(float_list=tf.train.FloatList(value=[5.1])),
            'highway-mpg': tf.train.Feature(float_list=tf.train.FloatList(value=[3.3])),
            'body-style': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"wagon"])),
            'make': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"Honda"])),
        }
    )
)

request = predict_pb2.PredictRequest()
request.model_spec.name = "regressor_test"

# Tried this:
request.inputs['inputs'].CopyFrom(tf_ex)

# Also tried this:
request.inputs['inputs'].CopyFrom(tf.contrib.util.make_tensor_proto(tf_ex))

# This doesn't work either:
request.input.example_list.examples.extend(tf_ex)

# If it did work, I would like to run inference on it like this:
result = stub.Predict(request, 10.0)

Thanks for any advice.
I assume your SavedModel has a serving_input_receiver_fn that takes a string as input and parses it to a tf.Example (see "Using SavedModel with Estimators"):
def serving_example_input_receiver_fn():
    serialized_tf_example = tf.placeholder(dtype=tf.string)
    receiver_tensors = {'inputs': serialized_tf_example}
    features = tf.parse_example(serialized_tf_example, YOUR_EXAMPLE_SCHEMA)
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
So, serving_input_receiver_fn accepts a string, which means you have to SerializeToString your tf.Example(). Besides, serving_input_receiver_fn works like input_fn in training: data is fed into the model in batches.
The code may change to:
from tensorflow.core.framework import types_pb2  # for DT_STRING

request = predict_pb2.PredictRequest()
request.model_spec.name = "regressor_test"
# Check the signature name with saved_model_cli
request.model_spec.signature_name = 'your_method_signature'
request.inputs['inputs'].CopyFrom(tf.make_tensor_proto([tf_ex.SerializeToString()], dtype=types_pb2.DT_STRING))
@hakunami's answer didn't work for me, but when I modify the last line to
request.inputs['inputs'].CopyFrom(tf.make_tensor_proto([tf_ex.SerializeToString()], dtype=types_pb2.DT_STRING, shape=[1]))
it works. If shape is None, the resulting tensor proto represents the numpy array precisely.
