Sending images as if they were embedded, with FastAPI [duplicate]

Using the Python module FastAPI, I can't figure out how to return an image. In Flask I would do something like this:
@app.route("/vector_image", methods=["POST"])
def image_endpoint():
    # img = ... # Create the image here
    return Response(img, mimetype="image/png")
What's the corresponding call in this module?

If you already have the bytes of the image in memory
Return a fastapi.responses.Response with your custom content and media_type.
You'll also need to muck with the endpoint decorator to get FastAPI to put the correct media type in the OpenAPI specification.
@app.get(
    "/image",
    # Set what the media type will be in the autogenerated OpenAPI specification.
    # fastapi.tiangolo.com/advanced/additional-responses/#additional-media-types-for-the-main-response
    responses={
        200: {
            "content": {"image/png": {}}
        }
    },
    # Prevent FastAPI from adding "application/json" as an additional
    # response media type in the autogenerated OpenAPI specification.
    # https://github.com/tiangolo/fastapi/issues/3258
    response_class=Response,
)
def get_image():
    image_bytes: bytes = generate_cat_picture()
    # media_type here sets the media type of the actual response sent to the client.
    return Response(content=image_bytes, media_type="image/png")
See the Response documentation.
If your image exists only on the filesystem
Return a fastapi.responses.FileResponse.
See the FileResponse documentation.
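For instance, a minimal sketch (the file name cat.png and the route path are just placeholders):
from fastapi import FastAPI
from fastapi.responses import FileResponse

app = FastAPI()

@app.get("/image-from-disk")
def image_from_disk():
    # FileResponse reads the file from disk and sets the
    # Content-Length and Content-Type headers for you.
    return FileResponse("cat.png", media_type="image/png")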
Be careful with StreamingResponse
Other answers suggest StreamingResponse. StreamingResponse is harder to use correctly, so I don't recommend it unless you're sure you can't use Response or FileResponse.
In particular, code like this is pointless. It will not "stream" the image in any useful way.
#app.get("/image")
def get_image()
image_bytes: bytes = generate_cat_picture()
# ❌ Don't do this.
image_stream = io.BytesIO(image_bytes)
return StreamingResponse(content=image_stream, media_type="image/png")
First of all, StreamingResponse(content=my_iterable) streams by iterating over the chunks provided by my_iterable. But when that iterable is a BytesIO, the chunks will be \n-terminated lines, which won't make sense for a binary image.
And even if the chunk divisions made sense, chunking is pointless here because we had the whole image_bytes bytes object available from the start. We may as well have just passed the whole thing into a Response from the beginning. We don't gain anything by holding data back from FastAPI.
Second, StreamingResponse corresponds to HTTP chunked transfer encoding. (This might depend on your ASGI server, but it's the case for Uvicorn, at least.) And this isn't a good use case for chunked transfer encoding.
Chunked transfer encoding makes sense when you don't know the size of your output ahead of time, and you don't want to wait to collect it all to find out before you start sending it to the client. That can apply to stuff like serving the results of slow database queries, but it doesn't generally apply to serving images.
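For contrast, here is a minimal sketch of a case where StreamingResponse is a reasonable fit: output produced incrementally by a generator, with a total size that isn't known up front (the slow loop below just simulates such a source):
import time
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

def generate_csv_rows():
    # Stand-in for a slow data source: rows trickle in and the total
    # size is unknown, so sending chunks as they become available helps.
    for i in range(1000):
        time.sleep(0.01)
        yield f"row,{i}\n"

@app.get("/report")
def get_report():
    # Each yielded string becomes one chunk of the chunked response.
    return StreamingResponse(generate_csv_rows(), media_type="text/csv")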
Unnecessary chunked transfer encoding can be harmful. For example, it means clients can't show progress bars when they're downloading the file. See:
Content-Length header versus chunked encoding
Is it a good idea to use Transfer-Encoding: chunked on static files?

I had a similar issue, but with a cv2 image. This may be useful for others; it uses StreamingResponse.
import io

import cv2
from fastapi import FastAPI
from starlette.responses import StreamingResponse

app = FastAPI()

@app.post("/vector_image")
def image_endpoint(*, vector):
    # Returns a cv2 image array from the document vector
    cv2img = my_function(vector)
    res, im_png = cv2.imencode(".png", cv2img)
    return StreamingResponse(io.BytesIO(im_png.tobytes()), media_type="image/png")

All the other answers are on point, but now it's this easy to return an image:
from fastapi.responses import FileResponse

@app.get("/")
async def main():
    return FileResponse("your_image.jpeg")

It's not properly documented yet, but you can use anything from Starlette.
So, you can use a FileResponse if it's a file on disk with a path: https://www.starlette.io/responses/#fileresponse
If it's a file-like object created in your path operation, in the next stable release of Starlette (used internally by FastAPI) you will also be able to return it in a StreamingResponse.
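For example, a minimal sketch of returning a file-like object opened inside the path operation (the file name is a placeholder):
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.get("/streamed-image")
def streamed_image():
    # Any file-like object works; here it's simply a file opened in binary mode.
    file_like = open("your_image.jpeg", mode="rb")
    return StreamingResponse(file_like, media_type="image/jpeg")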

Thanks to @biophetik's answer, with an important reminder that caused me confusion: If you're using BytesIO, especially with PIL/skimage, make sure to also call buf.seek(0) before returning!
#app.get("/generate")
def generate(data: str):
img = generate_image(data)
print('img=%s' % (img.shape,))
buf = BytesIO()
imsave(buf, img, format='JPEG', quality=100)
buf.seek(0) # important here!
return StreamingResponse(buf, media_type="image/jpeg",
headers={'Content-Disposition': 'inline; filename="%s.jpg"' %(data,)})

The answer from @SebastiánRamírez pointed me in the right direction, but for those looking to solve the problem, I needed a few lines of code to make it work. I needed to import FileResponse from starlette (not fastAPI?), add CORS support, and return from a temporary file. Perhaps there is a better way, but I couldn't get streaming to work:
from fastapi import FastAPI
from starlette.responses import FileResponse
from starlette.middleware.cors import CORSMiddleware
import tempfile

app = FastAPI()
app.add_middleware(
    CORSMiddleware, allow_origins=["*"], allow_methods=["*"], allow_headers=["*"]
)

@app.post("/vector_image")
def image_endpoint(*, vector):
    # Returns a raw PNG from the document vector (define here)
    img = my_function(vector)
    with tempfile.NamedTemporaryFile(mode="w+b", suffix=".png", delete=False) as FOUT:
        FOUT.write(img)
    return FileResponse(FOUT.name, media_type="image/png")
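One caveat: with delete=False the temporary files are never cleaned up. A possible refinement (a sketch, not part of the original answer) is to pass a Starlette BackgroundTask that removes the file after the response has been sent:
import os
import tempfile

from starlette.background import BackgroundTask
from starlette.responses import FileResponse

def png_response(img: bytes) -> FileResponse:
    # Write the bytes to a named temp file, then delete it once the response is sent.
    with tempfile.NamedTemporaryFile(mode="w+b", suffix=".png", delete=False) as FOUT:
        FOUT.write(img)
    return FileResponse(
        FOUT.name,
        media_type="image/png",
        background=BackgroundTask(os.remove, FOUT.name),
    )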

My needs weren't quite met by the above because my image was built with PIL. My FastAPI endpoint takes an image file name, reads it as a PIL image, and generates a thumbnail JPEG in memory that can be used in HTML like:
<img src="http://localhost:8000/images/thumbnail/bigimage.jpg">
import io
from PIL import Image
from fastapi.responses import StreamingResponse

@app.get('/images/thumbnail/{filename}',
         response_description="Returns a thumbnail image from a larger image",
         response_class=StreamingResponse,
         responses={200: {"description": "an image", "content": {"image/jpeg": {}}}})
def thumbnail_image(filename: str):
    # read the high-res image file
    image = Image.open(filename)
    # create a thumbnail image
    image.thumbnail((100, 100))
    imgio = io.BytesIO()
    image.save(imgio, 'JPEG')
    imgio.seek(0)
    return StreamingResponse(content=imgio, media_type="image/jpeg")

You can use a FileResponse if it's a file on disk with a path:
import os
from fastapi import FastAPI
from fastapi.responses import FileResponse

app = FastAPI()

path = "/path/to/files"

@app.get("/")
def index():
    return {"Hello": "World"}

@app.get("/vector_image", responses={200: {"description": "A picture of a vector image.", "content": {"image/jpeg": {"example": "No example available. Just imagine a picture of a vector image."}}}})
def image_endpoint():
    file_path = os.path.join(path, "files/vector_image.jpg")
    if os.path.exists(file_path):
        return FileResponse(file_path, media_type="image/jpeg", filename="vector_image_for_you.jpg")
    return {"error": "File not found!"}

If, when following the top answer, you attempt to return a BytesIO object like this in your Response:
buffer = BytesIO(my_data)
# Return file
return Response(content=buffer, media_type="image/jpg")
You may receive an error that looks like this (as described in this comment)
AttributeError: '_io.BytesIO' object has no attribute 'encode'
This is caused by the render function in Response which explicitly checks for a bytes type here. Since BytesIO != bytes it attempts to encode the value and fails.
The solution is to get the bytes value from the BytesIO object with getvalue()
buffer = BytesIO(my_data)
# Return file
return Response(content=buffer.getvalue(), media_type="image/jpg")

You can do something very similar in FastAPI
from fastapi import FastAPI, Response

app = FastAPI()

@app.post("/vector_image/")
async def image_endpoint():
    # img = ... # Create the image here
    return Response(content=img, media_type="image/png")

Related

FastAPI post method ends up giving "Method not allowed"

I am trying to deploy an image classification model on a server using FastAPI.
I have two issues related to my code.
The first issue is that in the original code (without using FastAPI), I would read an image using OpenCV and then convert it from BGR to RGB. Not doing this conversion would give me inaccurate results at test time.
Using FastAPI, the image is being read as follows:
def read_image(payload):
    stream = BytesIO(payload)
    image = np.asarray(bytearray(stream.read()), dtype="uint8")
    image = cv2.imdecode(image, cv2.IMREAD_COLOR)
    if isinstance(image, np.ndarray):
        img = Image.fromarray(image)
    return img
The second issue I am facing is with the POST method. When I run the server and access the URL
http://127.0.0.1:9999/, the GET method runs and prints the following message:
Welcome to classification server
However, when I execute the POST method shown below:
#app.post("/classify/")
async def classify_image(file:UploadFile=File(...)):
#return "File Uploaded."
image_byte=await file.read()
return classify(image_byte)
When I go to the link http://127.0.0.1:9999/classify/ I end up receiving the error:
method not allowed
Any reasons on why this is happening and what can be done to fix the error?
The full code is listed below. If there are any errors that I am missing in this, please let me know. I am new to FastAPI and as such, I am really confused about this.
from fastapi import FastAPI, UploadFile, File
import uvicorn
import torch
import torchvision
from torchvision import transforms as T
from PIL import Image
from build_effnet import build_model
import torch.nn.functional as F
import io
from io import BytesIO
import numpy as np
import cv2

app = FastAPI()

class_name = ['F_AF', 'F_AS', 'F_CA', 'F_LA', 'M_AF', 'M_AS', 'M_CA', 'M_LA']
idx_to_class = {i: j for i, j in enumerate(class_name)}
class_to_idx = {value: key for key, value in idx_to_class.items()}

test_transform = T.Compose([
    # T.Resize(size=(224, 224)),  # resizing the image to 224 by 224
    # T.RandomRotation(degrees=(-20, +20)),  # no need for validation
    T.ToTensor(),  # convert from (height, width, channel) to PyTorch's (channel, height, width) convention
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])  # normalize with the ImageNet means and standard deviations for the 3 channels
])

# Load model
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = build_model()
model.load_state_dict(torch.load(
    'CharacterClass_effnet_SGD.pt', map_location='cpu'))
model.eval()
model.to(device)

def read_image(payload):
    stream = BytesIO(payload)
    image = np.asarray(bytearray(stream.read()), dtype="uint8")
    image = cv2.imdecode(image, cv2.IMREAD_COLOR)
    if isinstance(image, np.ndarray):
        img = Image.fromarray(image)
    return img

def classify(payload):
    img = read_image(payload)
    img = test_transform(img)
    with torch.no_grad():
        ps = model(img.unsqueeze(0))
        ps = F.softmax(ps, dim=1)
        topk, topclass = ps.topk(1, dim=1)
        x = topclass.view(-1).cpu().numpy()
    return idx_to_class[x[0]]

@app.get("/")
def get():
    return "Welcome to classification server."

@app.post("/classify/")
async def classify_image(file: UploadFile = File(...)):
    # return "File Uploaded."
    image_byte = await file.read()
    return classify(image_byte)
Your code has defined the classify route on POST requests. Your browser will only perform GET requests from the URL.
If you expect the browser to work, use @app.get("/classify/"). However, you'll then need a different way to provide the file argument, such as a query parameter containing a file path (not recommended for security reasons).
If you want POST requests to work, test your code with curl or Postman instead.
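For instance, a quick client-side check with Python requests (the port and file name are taken from the question and are just placeholders):
import requests

# POST an image to the classify endpoint; the field name must match the
# parameter name of the path operation ("file").
with open("test.jpg", "rb") as f:
    resp = requests.post(
        "http://127.0.0.1:9999/classify/",
        files={"file": ("test.jpg", f, "image/jpeg")},
    )
print(resp.status_code, resp.text)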

How to post an image file with a list of strings using FastAPI?

I have tried a lot of things, but it doesn't seem to work. Here is my code:
#app.post("/my-endpoint")
async def my_func(
languages: List[str] = ["en", "hi"], image: UploadFile = File(...)
):
The function works fine when I remove one of the parameters, but with both of the parameters, the retrieved list comes out to be like ["en,hi"], whereas I want it to be ["en", "hi"].
I am not even sure if my approach is correct, hence the broader question: if this approach is not right, then how can I post a list and an image together?
Your function looks just fine. That behaviour, though, has to do with how the FastAPI autodocs (Swagger UI) handle list items; I am assuming you are using them for testing, as I did myself and noticed the exact same behaviour. For some reason, Swagger UI/OpenAPI adds all items as a single item to the list, separated by commas (i.e., ["en, hi, ..."] instead of ["en", "hi", ...]).
Testing the code with Python requests and sending the languages list in the proper way, it works just fine. To work around the behaviour of Swagger UI, or any other tool that might behave the same, you could check the length of the list received in the function, and if it is equal to 1 (meaning the list contains a single item), split that item on the comma delimiter to get a new list with all languages included.
Below is a working example:
app.py
from fastapi import File, UploadFile, FastAPI
from typing import List

app = FastAPI()

@app.post("/submit")
def submit(languages: List[str] = ["en", "hi"], image: UploadFile = File(...)):
    if len(languages) == 1:
        languages = [item.strip() for item in languages[0].split(',')]
    return {"Languages ": languages, "Uploaded filename": image.filename}
test.py
import requests

url = 'http://127.0.0.1:8000/submit'
image = {'image': open('sample.png', 'rb')}
# payload = {"languages": ["en", "hi"]}  # send languages as separate items
payload = {"languages": "en, hi"}  # send languages as a single item
resp = requests.post(url=url, data=payload, files=image)
print(resp.json())
I solved this using Query parameters! This might be helpful for someone, though I think Chris' answer makes much more sense:
#app.post("/my-endpoint")
async def my_func(
languages: List[str] = Query(["en", "hi"]), image: UploadFile = File(...)
):
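With Query(...), each language is expected as a separate query parameter rather than part of the form body, e.g. /my-endpoint?languages=en&languages=hi. A quick client-side sketch (endpoint path and file name are placeholders from the snippets above):
import requests

with open("sample.png", "rb") as f:
    resp = requests.post(
        "http://127.0.0.1:8000/my-endpoint",
        params={"languages": ["en", "hi"]},  # sent as repeated query parameters
        files={"image": ("sample.png", f, "image/png")},
    )
print(resp.json())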

Attempted implicit sequence conversion while serving PIL image using Flask uwsgi

I am serving a PNG or SVG image through Flask. Locally it works fine, but when I run the application inside Docker and send a request (POST), I get the following error:
RuntimeError: Attempted implicit sequence conversion but the response object is in direct passthrough mode.
Below is the code for serving a PIL image through Flask:
def serve_image(image: Image, mime_type: FileFormat, download: bool):
    suffix = mime_type.value.split('/')[-1]
    temp_file = tempfile.TemporaryFile(mode='w+b', suffix=suffix)
    if suffix == 'png':
        image.save(temp_file, suffix)
    else:
        # we can't force the svg extension in PIL
        image.save(temp_file)
    temp_file.seek(0, 0)
    return send_file(temp_file, mimetype=mime_type.value, as_attachment=download,
                     attachment_filename='img.' + suffix)
I have tried using BytesIO, with no luck there either. Setting
Response.implicit_sequence_conversion = False
Response.direct_passthrough = False
or
@app.after_request
def after_request_func(r):
    r.direct_passthrough = False
    r.implicit_sequence_conversion = False
    return r
Did not help either.
The problem was in the openapi-core Flask validation, and it was solved by creating a werkzeug Response with direct_passthrough set to False. Other options, like the attachment disposition and cache timeout, had to be set manually.
res = Response(temp_file, direct_passthrough=False, mimetype=mime_type.value)
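Putting that together with the original serve_image, a possible rewrite might look like the sketch below (the FileFormat enum and the PNG/SVG handling are carried over from the question as-is):
import tempfile
from flask import Response

def serve_image(image, mime_type, download: bool):
    suffix = mime_type.value.split('/')[-1]
    temp_file = tempfile.TemporaryFile(mode='w+b', suffix=suffix)
    if suffix == 'png':
        image.save(temp_file, suffix)
    else:
        # we can't force the svg extension in PIL
        image.save(temp_file)
    temp_file.seek(0, 0)
    # Build the werkzeug/Flask Response directly, with direct_passthrough=False,
    # and set the disposition header manually instead of relying on send_file.
    res = Response(temp_file, direct_passthrough=False, mimetype=mime_type.value)
    disposition = 'attachment' if download else 'inline'
    res.headers['Content-Disposition'] = '%s; filename=img.%s' % (disposition, suffix)
    return res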

embedding resources in python scripts

I'd like to figure out how to embed binary content in a Python script. For instance, I don't want to have any external files around (images, sound, ...); I want all this content living inside my Python scripts.
A little example to clarify: let's say I've got this small snippet:
from StringIO import StringIO
from PIL import Image, ImageFilter
embedded_resource = StringIO(open("Lenna.png", "rb").read())
im = Image.open(embedded_resource)
im.show()
im_sharp = im.filter(ImageFilter.SHARPEN)
im_sharp.show()
As you can see, the example is reading the external file 'Lenna.png'.
Question
How do I embed "Lenna.png" as a resource (variable) in my Python script? What's the fastest way to achieve this simple task using Python?
You might find the following class rather useful for embedding resources in your program. To use it, call the package method with paths to the files that you want to embed. The class will print out a DATA attribute that should be used to replace the one already found in the class. If you want to add files to your pre-built data, use the add method instead. To use the class in your program, make calls to the load method using context manager syntax. The returned value is a Path object that can be used as a filename argument to other functions or for the purpose of directly loading the reconstituted file. See this SMTP Client for example usage.
import base64
import contextlib
import pathlib
import pickle
import pickletools
import sys
import zlib


class Resource:

    """Manager for resources that would normally be held externally."""

    WIDTH = 76
    __CACHE = None
    DATA = b''

    @classmethod
    def package(cls, *paths):
        """Creates a resource string to be copied into the class."""
        cls.__generate_data(paths, {})

    @classmethod
    def add(cls, *paths):
        """Include paths in the pre-generated DATA block up above."""
        cls.__preload()
        cls.__generate_data(paths, cls.__CACHE.copy())

    @classmethod
    def __generate_data(cls, paths, buffer):
        """Load paths into buffer and output DATA code for the class."""
        for path in map(pathlib.Path, paths):
            if not path.is_file():
                raise ValueError('{!r} is not a file'.format(path))
            key = path.name
            if key in buffer:
                raise KeyError('{!r} has already been included'.format(key))
            with path.open('rb') as file:
                buffer[key] = file.read()
        pickled = pickle.dumps(buffer, pickle.HIGHEST_PROTOCOL)
        optimized = pickletools.optimize(pickled)
        compressed = zlib.compress(optimized, zlib.Z_BEST_COMPRESSION)
        encoded = base64.b85encode(compressed)
        cls.__print("    DATA = b'''")
        for offset in range(0, len(encoded), cls.WIDTH):
            cls.__print("\\\n" + encoded[
                slice(offset, offset + cls.WIDTH)].decode('ascii'))
        cls.__print("'''")

    @staticmethod
    def __print(line):
        """Provides alternative printing interface for simplicity."""
        sys.stdout.write(line)
        sys.stdout.flush()

    @classmethod
    @contextlib.contextmanager
    def load(cls, name, delete=True):
        """Dynamically loads resources and makes them usable while needed."""
        cls.__preload()
        if name not in cls.__CACHE:
            raise KeyError('{!r} cannot be found'.format(name))
        path = pathlib.Path(name)
        with path.open('wb') as file:
            file.write(cls.__CACHE[name])
        yield path
        if delete:
            path.unlink()

    @classmethod
    def __preload(cls):
        """Warm up the cache if it does not exist in a ready state yet."""
        if cls.__CACHE is None:
            decoded = base64.b85decode(cls.DATA)
            decompressed = zlib.decompress(decoded)
            cls.__CACHE = pickle.loads(decompressed)

    def __init__(self):
        """Creates an error explaining class was used improperly."""
        raise NotImplementedError('class was not designed for instantiation')
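A short usage sketch, assuming the class above and a local Lenna.png (the runtime part only works once the printed DATA block has been pasted back into the class):
if __name__ == '__main__':
    # One-off step: print a DATA attribute to copy back into the Resource class.
    Resource.package('Lenna.png')

    # At runtime, after DATA has been filled in, use the embedded file:
    # with Resource.load('Lenna.png') as path:
    #     from PIL import Image
    #     Image.open(path).show()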
The best way to go about this is to convert your picture into a Python string and keep it in a separate file called something like resources.py; then you simply parse it.
If you are looking to embed the whole thing inside a single binary, then you're looking at something like py2exe. Here is an example embedding external files.
In the first scenario, you could even use base64 to encode/decode the picture, something like this:
import base64

file = open('yourImage.png', 'rb')
encoded = base64.b64encode(file.read())
data = base64.b64decode(encoded)  # Don't forget to file.close()!
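Put together as a runnable sketch (Python 3 here; Lenna.png is the example file from the question), the one-off encoding step and the runtime decoding step look like this:
import base64
import io
from PIL import Image

# One-off step: encode the file; in the final script this becomes a pasted b'...' literal.
with open('Lenna.png', 'rb') as f:
    EMBEDDED_PNG = base64.b64encode(f.read())

# At runtime, decode the embedded literal and open it like a normal image.
im = Image.open(io.BytesIO(base64.b64decode(EMBEDDED_PNG)))
im.show()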

Django ReportLab: using Drawing object to create PDF and return via Httpresponse

In ReportLab, a Drawing object can be written out through different renderers, e.g.:
d = shapes.Drawing(400, 400)
renderPDF.drawToFile(d, 'test.pdf')
and in Django, a Canvas object can be sent via HttpResponse, e.g.:
response = HttpResponse(mimetype='application/pdf')
response['Content-Disposition'] = 'filename=test.pdf'
c = canvas.Canvas(response)
In my case, the problem is that I have a ReportLab script using a Drawing object which saves to the local file system. I have now put it in a Django view, and I am wondering whether there is a way to not save to the local file system but instead send the result back to the client.
I hope I describe this question clearly.
Thanks for any advice!
Update
It turns out there is a function in renderPDF:
renderPDF.draw(drawing, canvas, x, y)
which can render a Drawing object on the given canvas.
Using ReportLab in Django without saving to disk is actually pretty easy. There are even examples in the DjangoDocs (https://docs.djangoproject.com/en/dev/howto/outputting-pdf)
The trick basically boils down to using a "file like object" instead of an actual file. Most people use StringIO for this.
You could do it pretty simply with
from cStringIO import StringIO

from django.http import HttpResponse
from reportlab.pdfgen import canvas

def some_view(request):
    filename = 'test.pdf'

    # Make your response and prep to attach
    response = HttpResponse(mimetype='application/pdf')
    response['Content-Disposition'] = 'attachment; filename=%s.pdf' % (filename)
    tmp = StringIO()

    # Create a canvas to write on
    p = canvas.Canvas(tmp)
    # With something on it
    p.drawString(100, 100, "Hello world")

    # Close the PDF object cleanly.
    p.showPage()
    p.save()

    # Get the data out and close the buffer cleanly
    pdf = tmp.getvalue()
    tmp.close()

    # Get StringIO's body and write it out to the response.
    response.write(pdf)
    return response
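Combining the renderPDF.draw function mentioned in the question's update with the in-memory buffer approach above, a possible Django view sketch (not from the original thread; content_type replaces the older mimetype argument):
from io import BytesIO

from django.http import HttpResponse
from reportlab.graphics import renderPDF, shapes
from reportlab.pdfgen import canvas

def drawing_pdf_view(request):
    d = shapes.Drawing(400, 400)
    # ... add shapes to the drawing here ...
    buf = BytesIO()
    c = canvas.Canvas(buf)
    renderPDF.draw(d, c, 0, 0)  # place the drawing on the canvas at (0, 0)
    c.showPage()
    c.save()
    response = HttpResponse(buf.getvalue(), content_type='application/pdf')
    response['Content-Disposition'] = 'filename=test.pdf'
    return response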
Drawing has a method called asString with one required argument that represents the desired drawing format, such as 'png', 'gif' or 'jpg'.
so instead of calling
renderPDF.drawToFile(d, 'test.pdf')
You could call
binaryStuff = d.asString('gif')
return HttpResponse(binaryStuff, 'image/gif')
Without the need to save your drawing to the disc.
Check https://code.djangoproject.com/wiki/Charts for full example.
