Using the Python module FastAPI, I can't figure out how to return an image. In Flask I would do something like this:
@app.route("/vector_image", methods=["POST"])
def image_endpoint():
    # img = ... # Create the image here
    return Response(img, mimetype="image/png")
What's the corresponding call in this module?
If you already have the bytes of the image in memory
Return a fastapi.responses.Response with your custom content and media_type.
You'll also need to muck with the endpoint decorator to get FastAPI to put the correct media type in the OpenAPI specification.
@app.get(
    "/image",
    # Set what the media type will be in the autogenerated OpenAPI specification.
    # https://fastapi.tiangolo.com/advanced/additional-responses/#additional-media-types-for-the-main-response
    responses={
        200: {
            "content": {"image/png": {}}
        }
    },
    # Prevent FastAPI from adding "application/json" as an additional
    # response media type in the autogenerated OpenAPI specification.
    # https://github.com/tiangolo/fastapi/issues/3258
    response_class=Response,
)
def get_image():
    image_bytes: bytes = generate_cat_picture()
    # media_type here sets the media type of the actual response sent to the client.
    return Response(content=image_bytes, media_type="image/png")
See the Response documentation.
If your image exists only on the filesystem
Return a fastapi.responses.FileResponse.
See the FileResponse documentation.
Be careful with StreamingResponse
Other answers suggest StreamingResponse. StreamingResponse is harder to use correctly, so I don't recommend it unless you're sure you can't use Response or FileResponse.
In particular, code like this is pointless. It will not "stream" the image in any useful way.
@app.get("/image")
def get_image():
    image_bytes: bytes = generate_cat_picture()
    # ❌ Don't do this.
    image_stream = io.BytesIO(image_bytes)
    return StreamingResponse(content=image_stream, media_type="image/png")
First of all, StreamingResponse(content=my_iterable) streams by iterating over the chunks provided by my_iterable. But when that iterable is a BytesIO, the chunks will be \n-terminated lines, which won't make sense for a binary image.
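The line-chunking behavior is easy to see with the stdlib alone, no FastAPI required:

```python
import io

# Iterating a file-like object yields "lines", split after each b"\n" --
# arbitrary chunk boundaries for binary data such as a PNG.
stream = io.BytesIO(b"\x89PNG\n\x1a\nrest-of-image")
chunks = list(stream)
print(chunks)  # [b'\x89PNG\n', b'\x1a\n', b'rest-of-image']
```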
And even if the chunk divisions made sense, chunking is pointless here because we had the whole image_bytes bytes object available from the start. We may as well have just passed the whole thing into a Response from the beginning. We don't gain anything by holding data back from FastAPI.
Second, StreamingResponse corresponds to HTTP chunked transfer encoding. (This might depend on your ASGI server, but it's the case for Uvicorn, at least.) And this isn't a good use case for chunked transfer encoding.
Chunked transfer encoding makes sense when you don't know the size of your output ahead of time, and you don't want to wait to collect it all to find out before you start sending it to the client. That can apply to stuff like serving the results of slow database queries, but it doesn't generally apply to serving images.
Unnecessary chunked transfer encoding can be harmful. For example, it means clients can't show progress bars when they're downloading the file. See:
Content-Length header versus chunked encoding
Is it a good idea to use Transfer-Encoding: chunked on static files?
I had a similar issue but with a cv2 image. This may be useful for others. It uses StreamingResponse.
import io

import cv2
from fastapi import FastAPI
from starlette.responses import StreamingResponse

app = FastAPI()

@app.post("/vector_image")
def image_endpoint(*, vector):
    # Returns a cv2 image array from the document vector
    cv2img = my_function(vector)
    res, im_png = cv2.imencode(".png", cv2img)
    return StreamingResponse(io.BytesIO(im_png.tobytes()), media_type="image/png")
All the other answers are on point, but now it's so easy to return an image:
from fastapi.responses import FileResponse

@app.get("/")
async def main():
    return FileResponse("your_image.jpeg")
It's not properly documented yet, but you can use anything from Starlette.
So, you can use a FileResponse if it's a file on disk with a path: https://www.starlette.io/responses/#fileresponse
If it's a file-like object created in your path operation, in the next stable release of Starlette (used internally by FastAPI) you will also be able to return it in a StreamingResponse.
Thanks to @biophetik's answer, with an important reminder that caused me confusion: If you're using BytesIO, especially with PIL/skimage, make sure to also do img.seek(0) before returning!
@app.get("/generate")
def generate(data: str):
    img = generate_image(data)
    print('img=%s' % (img.shape,))
    buf = BytesIO()
    imsave(buf, img, format='JPEG', quality=100)
    buf.seek(0)  # important here!
    return StreamingResponse(buf, media_type="image/jpeg",
                             headers={'Content-Disposition': 'inline; filename="%s.jpg"' % (data,)})
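The effect of forgetting seek(0) is easy to reproduce with plain io: after writing, the stream position sits at the end, so any subsequent read (and hence the response body) comes back empty.

```python
import io

buf = io.BytesIO()
buf.write(b"\xff\xd8JPEG-bytes")
print(buf.read())   # b'' -- the position is at the end after writing
buf.seek(0)         # rewind to the start
print(buf.read())   # b'\xff\xd8JPEG-bytes'
```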
The answer from @SebastiánRamírez pointed me in the right direction, but for those looking to solve the problem, I needed a few lines of code to make it work. I needed to import FileResponse from starlette (not fastAPI?), add CORS support, and return from a temporary file. Perhaps there is a better way, but I couldn't get streaming to work:
from starlette.responses import FileResponse
from starlette.middleware.cors import CORSMiddleware
import tempfile
app = FastAPI()
app.add_middleware(
    CORSMiddleware, allow_origins=["*"], allow_methods=["*"], allow_headers=["*"]
)

@app.post("/vector_image")
def image_endpoint(*, vector):
    # Returns a raw PNG from the document vector (define here)
    img = my_function(vector)
    with tempfile.NamedTemporaryFile(mode="w+b", suffix=".png", delete=False) as FOUT:
        FOUT.write(img)
    return FileResponse(FOUT.name, media_type="image/png")
My needs weren't quite met by the above because my image was built with PIL. My FastAPI endpoint takes an image file name, reads it as a PIL image, and generates a thumbnail JPEG in memory that can be used in HTML like:
<img src="http://localhost:8000/images/thumbnail/bigimage.jpg">
import io
from PIL import Image
from fastapi.responses import StreamingResponse
@app.get('/images/thumbnail/{filename}',
         response_description="Returns a thumbnail image from a larger image",
         response_class=StreamingResponse,
         responses={200: {"description": "an image", "content": {"image/jpeg": {}}}})
def thumbnail_image(filename: str):
    # read the high-res image file
    image = Image.open(filename)
    # create a thumbnail image
    image.thumbnail((100, 100))
    imgio = io.BytesIO()
    image.save(imgio, 'JPEG')
    imgio.seek(0)
    return StreamingResponse(content=imgio, media_type="image/jpeg")
You can use a FileResponse if it's a file on disk with a path:
import os
from fastapi import FastAPI
from fastapi.responses import FileResponse
app = FastAPI()
path = "/path/to/files"
@app.get("/")
def index():
    return {"Hello": "World"}

@app.get("/vector_image", responses={200: {"description": "A picture of a vector image.", "content": {"image/jpeg": {"example": "No example available. Just imagine a picture of a vector image."}}}})
def image_endpoint():
    file_path = os.path.join(path, "files/vector_image.jpg")
    if os.path.exists(file_path):
        return FileResponse(file_path, media_type="image/jpeg", filename="vector_image_for_you.jpg")
    return {"error": "File not found!"}
If, when following the top answer, you attempt to return a BytesIO object like this in your Response:
buffer = BytesIO(my_data)
# Return file
return Response(content=buffer, media_type="image/jpeg")
You may receive an error that looks like this (as described in this comment)
AttributeError: '_io.BytesIO' object has no attribute 'encode'
This is caused by the render function in Response which explicitly checks for a bytes type here. Since BytesIO != bytes it attempts to encode the value and fails.
The solution is to get the bytes value from the BytesIO object with getvalue()
buffer = BytesIO(my_data)
# Return file
return Response(content=buffer.getvalue(), media_type="image/jpeg")
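A quick stdlib check of the distinction: getvalue() hands back the raw bytes that Response wants, while the BytesIO wrapper itself has no encode() for Response's render step to fall back on.

```python
import io

buf = io.BytesIO(b"image-data")
assert isinstance(buf.getvalue(), bytes)   # what Response's render() accepts
assert not hasattr(buf, "encode")          # why passing buf itself raises AttributeError
```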
You can do something very similar in FastAPI
from fastapi import FastAPI, Response
app = FastAPI()
@app.post("/vector_image/")
async def image_endpoint():
    # img = ... # Create the image here
    return Response(content=img, media_type="image/png")
I suspect I am facing an issue because my json data is within square brackets ([]).
I am processing Pub/Sub data which is in the below format:
[{"ltime":"2022-04-12T11:33:00.970Z","cnt":199,"fname":"MYFILENAME","data":[{"NAME":"N1","ID":11.4,"DATE":"2005-10-14 00:00:00"},{"NAME":"M1","ID":25.0,"DATE":"2005-10-14 00:00:00"}]}]
I am successfully processing/extracting all the fields except 'data'. I need to create a JSON file in Cloud Storage using the 'data' field.
When I use msg['data'] I get the following:
[{"NAME":"N1","ID":11.4,"DATE":"2005-10-14 00:00:00"},{"NAME":"M1","ID":25.0,"DATE":"2005-10-14 00:00:00"}]
Sample code:
pubsub_msg = base64.b64decode(event['data']).decode('utf-8')
pubsub_data = json.loads(pubsub_msg)
json_file = pubsub_data['fname'] + '.json'
json_content = pubsub_data['data']
try:
    << call function >>
except Exception as e:
    << Error >>
Below is the error I am getting:
2022-04-12T17:35:01.761Zmyapp-pipelineeopcm2kjno5h Error in uploading json document to gcs: [{"NAME":"N1","ID":11.4,"DATE":"2005-10-14 00:00:00"},{"NAME":"M1","ID":25.0,"DATE":"2005-10-14 00:00:00"}] could not be converted to bytes
I am not sure whether the issue is because of the square brackets [].
Correct me if I am wrong, and please help me get the exact JSON data for creating the file.
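The "could not be converted to bytes" message suggests the upload call was handed a Python list (the result of json.loads), not a string or bytes. A minimal sketch of one fix, serializing the 'data' field back to JSON bytes before uploading (upload_to_gcs is a hypothetical stand-in for your upload function):

```python
import json

# Stand-in for the already-decoded Pub/Sub message
pubsub_data = {
    "fname": "MYFILENAME",
    "data": [{"NAME": "N1", "ID": 11.4}, {"NAME": "M1", "ID": 25.0}],
}

# pubsub_data["data"] is a Python list; serialize it back to JSON bytes first.
json_content = json.dumps(pubsub_data["data"]).encode("utf-8")
assert isinstance(json_content, bytes)
# upload_to_gcs(pubsub_data["fname"] + ".json", json_content)  # hypothetical upload call
```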
I am trying to use Google Video API and pass a video which is on my local drive using the "input_content" argument but I get this error: InvalidArgument: 400 Either `input_uri` or `input_content` should be set.
Here is the code based on Google Documentation:
"""Detect labels given a file path."""
video_client = videointelligence.VideoIntelligenceServiceClient()
features = [videointelligence.Feature.LABEL_DETECTION]
cwd = "E:/Google_Video_API/videos/video.mp4"
with io.open(cwd, "rb") as movie:
    input_content = movie.read()

operation = video_client.annotate_video(
    request={"features": features, "input_content": input_content}
)
The video file needs to be Base64-encoded, so try this:
import base64
...
operation = video_client.annotate_video(
    request={"features": features, "input_content": base64.b64encode(input_content)}
)
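A quick stdlib check of what b64encode does to raw binary content: it round-trips losslessly and yields ASCII-safe bytes, which is what text-oriented request fields expect.

```python
import base64

raw = b"\x00\x01\x02binary-video-bytes"
encoded = base64.b64encode(raw)
assert encoded.isascii()                   # safe to embed in a request
assert base64.b64decode(encoded) == raw    # lossless round trip
```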
I am completely stuck. While dabbling in Reddit's API (PRAW), I wanted to learn how to save the number 1 hottest post as an mp4. However, Reddit hosts all of its gifs on Imgur, which converts all gifs to gifv. How would I go about converting the gifv to mp4 so I can read them? By the way, simply renaming the file seems to lead to corruption.
This is my code so far: (details have been xxxx'd for confidentiality)
reddit = praw.Reddit(client_id="xxxx", client_secret="xxxx", username="xxxx", password="xxxx", user_agent="xxxx")

subreddit = reddit.subreddit("dankmemes")
hot_dm = subreddit.hot(limit=1)

for sub in hot_dm:
    print(sub)
    url = sub.url
    print(url)
    print(sub.permalink)
    meme = requests.get(url)
    newF = open("{}.mp4".format(sub), "wb")  # here the file is created but when played is corrupted
    newF.write(meme.content)
    newF.close()
Some posts already have an mp4 conversion inside the preview > variants portion of the json response.
Therefore to download only those posts that have a gif and therefore have an mp4 version you could do something like this:
subreddit = reddit.subreddit("dankmemes")
hot_dm = subreddit.hot(limit=10)
for sub in hot_dm:
    if sub.selftext != "":  # skip self (text) posts; we want link posts (image/video/link)
        continue
    try:  # try to access variants and catch the exception thrown
        has_variants = sub.preview['images'][0]['variants']  # variants contain both gif and mp4 versions (if available)
    except AttributeError:
        continue  # no conversion available as variants doesn't exist
    if 'mp4' not in has_variants:  # check that there is an mp4 conversion available
        continue
    mp4_video = has_variants['mp4']['source']['url']
    print(sub, sub.url, sub.permalink)
    meme = requests.get(mp4_video)
    with open(f"{sub}.mp4", "wb") as newF:
        newF.write(meme.content)
You will most likely want to increase the limit of posts you look through when searching through hot, as the first post may be a pinned post (usually some rules about the subreddit); this is why I initially checked the selftext. In addition, there may be other posts that are only images, so with a small limit you might not return any posts that can be converted to mp4s.
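One more gotcha I'd watch for (an assumption worth checking against your own responses): reddit's JSON API tends to HTML-escape URLs, so the preview URL may contain &amp;amp; where &amp; belongs and need unescaping before being handed to requests.get. The URL below is a made-up example:

```python
from html import unescape

escaped = "https://preview.redd.it/abc.mp4?format=mp4&amp;s=token"
print(unescape(escaped))  # https://preview.redd.it/abc.mp4?format=mp4&s=token
```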
I am trying to decrypt my data using Google Protocol Buffers in Python.
simple.proto file:
syntax = "proto3";

message SimpleMessage {
    string deviceID = 1;
    string timeStamp = 2;
    string data = 3;
}
After that, I generated the Python files using the protoc command:
protoc --proto_path=./ --python_out=./ simple.proto
My Python code is below:
import json
import simple_pb2
import base64
encryptedData = 'iOjEuMCwic2VxIjoxODEsInRtcyI6IjIwMjEtMDEtMjJUMTQ6MDY6MzJaIiwiZGlkIjoiUlFI'
t2 = bytes(encryptedData, encoding='utf8')
print(encryptedData)
data = base64.b64decode(encryptedData)
test = simple_pb2.SimpleMessage()
v1 = test.ParseFromString(data)
While executing the above code I get this error: google.protobuf.message.DecodeError: Wrong wire type in tag
What am I doing wrong? Can anyone help?
Your data is not "encrypted", it's just base64-encoded. If you use your example code and inspect your data variable, then you get:
import base64
data = base64.b64decode(b'eyJ2ZXIiOjEuMCwic2VxIjoxODEsInRtcyI6IjIwMjEtMDEtMjJUMTQ6MDY6MzJaIiwiZGlkIjoiUlFIVlRKRjAwMDExNzY2IiwiZG9wIjoxLjEwMDAwMDAyMzg0MTg1NzksImVyciI6MCwiZXZ0IjoiVE5UIiwiaWdzIjpmYWxzZSwibGF0IjoyMi45OTI0OTc5OSwibG5nIjo3Mi41Mzg3NDgyOTk5OTk5OTUsInNwZCI6MC4wfQo=')
print(data)
> b'{"ver":1.0,"seq":181,"tms":"2021-01-22T14:06:32Z","did":"RQHVTJF00011766","dop":1.1000000238418579,"err":0,"evt":"TNT","igs":false,"lat":22.99249799,"lng":72.538748299999995,"spd":0.0}\n'
Which is evidently a piece of JSON data, not a binary-serialized protocol buffer, which is what ParseFromString expects. Also, looking at the names and types of the fields, it looks like this payload just doesn't match the proto definition you've shown.
There are certainly ways to parse a JSON into a proto, and even to control the field names in that transformation, but not even the number of fields match directly. So you first need to define what you want: what proto message would you expect this JSON object to represent?
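You can confirm the diagnosis with the stdlib alone: the very same decoded bytes that make ParseFromString choke parse cleanly as JSON (using the base64 payload from the example above).

```python
import base64
import json

data = base64.b64decode(b'eyJ2ZXIiOjEuMCwic2VxIjoxODEsInRtcyI6IjIwMjEtMDEtMjJUMTQ6MDY6MzJaIiwiZGlkIjoiUlFIVlRKRjAwMDExNzY2IiwiZG9wIjoxLjEwMDAwMDAyMzg0MTg1NzksImVyciI6MCwiZXZ0IjoiVE5UIiwiaWdzIjpmYWxzZSwibGF0IjoyMi45OTI0OTc5OSwibG5nIjo3Mi41Mzg3NDgyOTk5OTk5OTUsInNwZCI6MC4wfQo=')
# A binary-serialized protobuf would not survive json.loads; this payload does.
payload = json.loads(data)
print(payload["evt"])  # TNT
```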