I'm trying to make my FastAPI docs pretty. I have two endpoints, both POST requests. The first has a single required field, while the second has two required fields.
My endpoints:
# reset session
@app.post("/reset_session/", tags=["Recording"])
async def reset_session(session_id: str = Body(example=SESSION_ID_EXAMPLE, title="Session title bob", description="bob the builder")):
    return database.reset_session(session_id)
# reset session/computer_id
@app.post("/reset_session_computer/", tags=["Recording"])
async def reset_session_computer(
    session_id: str = Body(example=SESSION_ID_EXAMPLE, embed=True, title="sessions are happy"),
    computer_id: str = Body(example=COMPUTER_ID_EXAMPLE, description="computer description", embed=True),
):
    return database.reset_session_for_computer(session_id, computer_id)
I don't want to use a pydantic model.
Is there a way to embed these two required fields into a starlette Request object and still communicate to the generated docs that these two are required fields in the request?
For example,
# reset session/computer_id
@app.post("/reset_session_computer/", tags=["Recording"])
async def reset_session_computer(request: Request):  # What goes in here?
    body = await request.json()
    session_id = body.get("session_id")
    computer_id = body.get("computer_id")
    return database.reset_session_for_computer(session_id, computer_id)
How do I get the default values to appear in the docs? For the first request, it is working as intended. However, for the second request with two params, they are currently showing as {"session_id": "string", "computer_id": "string"} rather than what I specified. What's the best way for me to document my function without a pydantic model? Ironically, the schema generates correctly; it's just the default values that are not...
Here's a screenshot:
working (the example value I provided is shown):
not working (both example values show as "string"):
Related
I am having some issues inserting into MongoDB via FastAPI.
The below code works as expected. Notice how the response variable has not been used in response_to_mongo().
The model is an sklearn ElasticNet model.
app = FastAPI()

def response_to_mongo(r: dict):
    client = pymongo.MongoClient("mongodb://mongo:27017")
    db = client["models"]
    model_collection = db["example-model"]
    model_collection.insert_one(r)

@app.post("/predict")
async def predict_model(features: List[float]):
    prediction = model.predict(
        pd.DataFrame(
            [features],
            columns=model.feature_names_in_,
        )
    )
    response = {"predictions": prediction.tolist()}
    response_to_mongo(
        {"predictions": prediction.tolist()},
    )
    return response
However, when I write predict_model() like this and pass the response variable to response_to_mongo():
@app.post("/predict")
async def predict_model(features: List[float]):
    prediction = model.predict(
        pd.DataFrame(
            [features],
            columns=model.feature_names_in_,
        )
    )
    response = {"predictions": prediction.tolist()}
    response_to_mongo(
        response,
    )
    return response
I get an error stating that:
TypeError: 'ObjectId' object is not iterable
From my reading, it seems that this is due to BSON/JSON issues between FastAPI and Mongo. However, why does it work in the first case when I do not use a variable? Is this due to the asynchronous nature of FastAPI?
As per the documentation:
When a document is inserted a special key, "_id", is automatically added if the document doesn't already contain an "_id" key. The value of "_id" must be unique across the collection. insert_one() returns an instance of InsertOneResult. For more information on "_id", see the documentation on _id.
Thus, in the second case of the example you provided, when you pass the dictionary to the insert_one() function, Pymongo adds to your dictionary the unique identifier (i.e., an ObjectId) necessary to retrieve the data from the database. Hence, when the response is returned from the endpoint, the ObjectId fails to get serialised, since, as described in detail in this answer, FastAPI will by default automatically convert the return value into JSON-compatible data using the jsonable_encoder (to ensure that objects that are not serialisable are converted to str), and then return a JSONResponse, which uses the standard json library to serialise the data.
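The mutation that causes this can be demonstrated without a running MongoDB. The sketch below uses a hypothetical insert_one_like() stand-in for pymongo's insert_one (the "_id" value is a placeholder string, not a real bson.ObjectId):

```python
def insert_one_like(document):
    # pymongo's insert_one mutates the dict it receives in place,
    # adding an "_id" key (a real bson.ObjectId in practice)
    document["_id"] = "ObjectId('...')"  # placeholder

response = {"predictions": [0.42]}
insert_one_like(response)   # same object passed in: "_id" leaks into the response
print("_id" in response)    # → True

fresh = {"predictions": [0.42]}
insert_one_like(dict(fresh))  # a fresh dict literal/copy: the original stays clean
print("_id" in fresh)         # → False
```

This is why the first version of predict_model() works: it passes a fresh dict literal to response_to_mongo(), so the dict that gets mutated is not the one being returned.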
Solution 1
Use the approach demonstrated here, by having the ObjectId converted to str by default, and hence, you can return the response as usual inside your endpoint.
# place these at the top of your .py file
# (note: pydantic.json.ENCODERS_BY_TYPE exists in pydantic v1 only)
import pydantic
from bson import ObjectId

pydantic.json.ENCODERS_BY_TYPE[ObjectId] = str
return response # as usual
Solution 2
Dump the loaded BSON to a valid JSON string and then reload it as a dict, as described here and here.
from bson import json_util
import json
response = json.loads(json_util.dumps(response))
return response
Solution 3
Define a custom JSONEncoder, as described here, to convert the ObjectId into str:
import json
from bson import ObjectId

class JSONEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, ObjectId):
            return str(o)
        return json.JSONEncoder.default(self, o)

response = JSONEncoder().encode(response)
return response
Solution 4
You can have a separate output model without the 'ObjectId' (_id) field, as described in the documentation. You can declare the model used for the response with the parameter response_model in the decorator of your endpoint. Example:
from pydantic import BaseModel

class ResponseBody(BaseModel):
    name: str
    age: int

@app.get('/', response_model=ResponseBody)
def main():
    # response sample
    response = {'_id': ObjectId('53ad61aa06998f07cee687c3'), 'name': 'John', 'age': '25'}
    return response
Solution 5
Remove the "_id" entry from the response dictionary before returning it (see here on how to remove a key from a dict):
response.pop('_id', None)
return response
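For completeness, a further workaround (my own sketch, not part of the solutions above): pass insert_one() a shallow copy of the dict, so pymongo's in-place addition of "_id" never touches the object the endpoint returns. The mutation is simulated here with a placeholder:

```python
response = {"predictions": [0.1, 0.2]}

# calling model_collection.insert_one(dict(response)) would hand pymongo
# its own copy; here the copy is mutated the way insert_one does
copy_for_db = dict(response)
copy_for_db["_id"] = "ObjectId-stand-in"  # placeholder for a real ObjectId

print("_id" in response)  # → False: the returned response stays JSON-serialisable
```

This keeps the endpoint code unchanged apart from the dict() call at the insertion site.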
My db model looks like this:

from pydantic import BaseModel

class Store(BaseModel):
    name: str
    store_code: str

There can be stores with the same name but different store_code values in the db. What I want is to fetch all the information for stores sharing the same name.
For example, if my db contains documents like this:

{ "name": "lg", "store_code": "123" }
{ "name": "lg", "store_code": "456" }

I'd like to see both of those documents.
My Python FastAPI code is like this:

from fastapi import FastAPI, HTTPException
from database import *

app = FastAPI()

@app.get("/api/store{store_name}", response_model=Store)
async def get_store_by_name(store_name):
    response = await fetch_store_by_name(store_name)
    if response:
        return response
    raise HTTPException(status_code=404)
And this is my Mongo query code:

from pymongo import MongoClient
from model import Store

client = MongoClient(host='localhost', port=27017)
database = client.store
collection = database.stores  # assumed; `collection` was not defined in the original snippet

async def fetch_store_by_name(store_name: str):
    document = collection.find({"name": store_name})
    return document
I thought the cursor would eventually yield two documents, but there's always an error like this:

pydantic.error_wrappers.ValidationError: 1 validation error for Store
response
  value is not a valid dict (type=type_error.dict)

Can anyone help me, please?
++++
I just changed my query to this:

async def fetch_store_by_name(store_name: str):
    stores = []
    cursor = collection.find({"name": store_name})
    for document in cursor:
        stores.append(document)
    return stores
This should return the two documents I expected, but it still raises this error:

ValueError: [TypeError("'ObjectId' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')]

I think my FastAPI code has a problem, but I really have no idea what...
async def fetch_store_by_name(store_name: str):
    stores = []  # --- fault in this line ---
    cursor = collection.find({"name": store_name})
    for document in cursor:
        stores.append(document)
    return stores
stores here ends up being a list, but your endpoint declares response_model=Store, so FastAPI tries to validate the return value as a single Store object with the field types you declared (in this case, str values), and a list fails that validation.
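For reference, a sketch of the shape the response could take so that a list of documents validates: strip "_id" from each document and declare response_model=List[Store] on the endpoint instead of response_model=Store. The sample documents and the stand-in "_id" values below are hypothetical:

```python
# documents as a pymongo cursor might yield them (stand-in "_id" values)
docs = [
    {"_id": "ObjectId('...')", "name": "lg", "store_code": "123"},
    {"_id": "ObjectId('...')", "name": "lg", "store_code": "456"},
]

# drop "_id" from each document before it leaves the endpoint, so the
# non-serialisable ObjectId never reaches the response validation
stores = [{k: v for k, v in d.items() if k != "_id"} for d in docs]
print(stores)  # → [{'name': 'lg', 'store_code': '123'}, {'name': 'lg', 'store_code': '456'}]
```

Alternatively, "_id" can be excluded in the query itself with a projection: collection.find({"name": store_name}, {"_id": 0}).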
I want to call a generate() function and send the user a message, but then continue executing the function.
@application.route("/api/v1.0/gen", methods=['POST'])
def generate():
    return "Your id for getting the generated data is 'hgF8_dh4kdsRjdr'"
    main()  # generate the data
    return "Successfully generated something. Use your id to get the data"
I understand that this is not a correct way of returning, but I hope you get the idea of what I am trying to accomplish. Maybe Flask has some built-in method to return multiple times from one API call?
Basically, what you are describing is called Server-Sent Events (aka SSE).
The difference with this format is that an 'eventstream' Response type is returned instead of the usual JSON/plaintext.
And if you want to use it with Python/Flask, you need generators.
Small code example (with GET request):
@application.route("/api/v1.0/gen", methods=['GET'])
def stream():
    def eventStream():
        text = "Your id for getting the generated data is 'hgF8_dh4kdsRjdr'"
        yield str(Message(data=text, type="message"))
        main()
        text = "Successfully generated something. Use your id to get the data"
        yield str(Message(data=text, type="message"))

    resp = Response(eventStream())  # flask.Response; this line was missing in the original snippet
    resp.headers['Content-Type'] = 'text/event-stream'
    resp.headers['Cache-Control'] = 'no-cache'
    resp.headers['Connection'] = 'keep-alive'
    return resp
Message class you can find here: https://gist.github.com/Alveona/b79c6583561a1d8c260de7ba944757a7
And of course, you need specific client that can properly read such responses.
postwoman.io supports SSE at Real-Time tab
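The Message class linked above essentially just renders text into the SSE wire format: an optional "event:" line, a "data:" line, and a blank line terminating the frame. A minimal stand-in (my own sketch, not the gist's actual code) could look like this:

```python
def sse_frame(data, event="message"):
    # one SSE frame: "event:" line, "data:" line, blank-line terminator
    return f"event: {event}\ndata: {data}\n\n"

print(sse_frame("hello"))  # → "event: message\ndata: hello\n\n"
```

Each string yielded by the generator in the endpoint above is one such frame, which the browser's EventSource (or any SSE-aware client) parses as a separate message.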
How do I decode a messages.Message to JSON in Python 2.7 for Google Cloud Endpoints Frameworks, especially when there are nested messages?
Endpoints version :
google-endpoints==2.4.5 and google-endpoints-api-management==1.3.0
from protorpc import messages

# message definition
class GPSCoord(messages.Message):
    """
    GPS data object
    """
    latitude = messages.FloatField(1)
    longitude = messages.FloatField(2)

class Address(messages.Message):
    """
    Address object
    """
    type = messages.StringField(1)
    name = messages.StringField(2)
    number = messages.StringField(3)
    city = messages.StringField(4)
    zip_code = messages.IntegerField(5)
    gps_coord = messages.MessageField(GPSCoord, 6)
I tried to add a "to_json" method to the message definitions, but I got a "MessageDefinitionError: May only use fields in message definitions." exception.
It looks like a rudimentary operation, but it's not that easy; the Python SDK needs a huge improvement for this part.
You should make use of the built-in Endpoints JSON code. This is not exact, but something like this:
from endpoints import protojson
p = protojson.EndpointsProtoJson()
p.decode_message(Address, '{...}')
I finally developed my own solution; here is the code:
def request_to_json(request):
    """
    Take an incoming (POST) request and
    return JSON.
    """
    json_dict = {}
    for field in request.all_fields():
        if field.__class__.__name__ == 'MessageField':
            data = getattr(request, field.name)
            if data:
                if data.__class__.__name__ == 'FieldList':
                    json_dict.update({
                        field.name: [request_to_json(data[i]) for i in range(len(data))]
                    })
                else:
                    json_dict.update({
                        field.name: request_to_json(data)
                    })
        else:
            json_dict.update({
                field.name: getattr(request, field.name)
            })
    return json_dict
It handles nested message fields, list fields and primitive fields.
I tested it on POST requests and it works well.
In order to test a Flask application, I have a Flask test client POSTing requests with files as attachments:
def make_tst_client_service_call1(service_path, method, **kwargs):
    _content_type = kwargs.get('content-type', 'multipart/form-data')
    with app.test_client() as client:
        return client.open(service_path, method=method,
                           content_type=_content_type, buffered=True,
                           follow_redirects=True, **kwargs)

def _publish_a_model(model_name, pom_env):
    service_url = u'/publish/'
    scc.data['modelname'] = model_name
    scc.data['username'] = "BDD Script"
    scc.data['instance'] = "BDD Stub Simulation"
    scc.data['timestamp'] = datetime.now().strftime('%d-%m-%YT%H:%M')
    scc.data['file'] = (open(file_path, 'rb'), file_name)
    scc.response = make_tst_client_service_call1(service_url, method, data=scc.data)
The Flask server endpoint code which handles the above POST request is something like this:
@app.route("/publish/", methods=['GET', 'POST'])
def publish():
    if request.method == 'POST':
        LOG.debug("Publish POST Service is called...")
        upload_files = request.files.getlist("file[]")
        print "Files :\n", request.files
        print "Upload Files:\n", upload_files
        return render_response_template()
I get this output:
Files:
ImmutableMultiDict([('file', <FileStorage: u'Single_XML.xml' ('application/xml')>)])
Upload Files:
[]
If I change
scc.data['file'] = (open(file_path, 'rb'),file_name)
into (thinking that it would handle multiple files)
scc.data['file'] = [(open(file_path, 'rb'),file_name),(open(file_path, 'rb'),file_name1)]
I still get similar output:
Files:
ImmutableMultiDict([('file', <FileStorage: u'Single_XML.xml' ('application/xml')>), ('file', <FileStorage: u'Second_XML.xml' ('application/xml')>)])
Upload Files:
[]
Question:
Why does request.files.getlist("file[]") return an empty list?
How can I post multiple files using the Flask test client, so that they can be retrieved using request.files.getlist("file[]") on the Flask server side?
Note:
I would like to use the Flask test client; I don't want curl or any other client-based solutions.
I don't want to post a single file in multiple requests.
Thanks
Referred these links already:
Flask and Werkzeug: Testing a post request with custom headers
Python - What type is flask.request.files.stream supposed to be?
You send the files as the parameter named file, so you can't look them up with the name file[]. If you want to get all the files named file as a list, you should use this:
upload_files = request.files.getlist("file")
On the other hand, if you really want to read them from file[], then you need to send them like that:
scc.data['file[]'] = # ...
(The file[] syntax is from PHP and it's used only on the client side. When you send the parameters named like that to the server, you still access them using $_FILES['file'].)
Lukas has already addressed this; I'm just providing this info as it may help someone.
The Werkzeug client does some clever stuff by storing request data in a MultiDict:
@native_itermethods(['keys', 'values', 'items', 'lists', 'listvalues'])
class MultiDict(TypeConversionDict):
    """A :class:`MultiDict` is a dictionary subclass customized to deal with
    multiple values for the same key which is for example used by the parsing
    functions in the wrappers. This is necessary because some HTML form
    elements pass multiple values for the same key.

    :class:`MultiDict` implements all standard dictionary methods.
    Internally, it saves all values for a key as a list, but the standard dict
    access methods will only return the first value for a key. If you want to
    gain access to the other values, too, you have to use the `list` methods as
    explained below.
    """
The getlist call looks up a given key in the request's dictionary. If the key doesn't exist, it returns an empty list:
def getlist(self, key, type=None):
    """Return the list of items for a given key. If that key is not in the
    `MultiDict`, the return value will be an empty list. Just as `get`,
    `getlist` accepts a `type` parameter. All items will be converted
    with the callable defined there.

    :param key: The key to be looked up.
    :param type: A callable that is used to cast the value in the
                 :class:`MultiDict`. If a :exc:`ValueError` is raised
                 by this callable the value will be removed from the list.
    :return: a :class:`list` of all the values for the key.
    """
    try:
        rv = dict.__getitem__(self, key)
    except KeyError:
        return []
    if type is None:
        return list(rv)
    result = []
    for item in rv:
        try:
            result.append(type(item))
        except ValueError:
            pass
    return result
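The key/"file[]" mismatch from the question can be reproduced with a toy stand-in for these MultiDict semantics (an illustration only, not werkzeug's actual implementation):

```python
class ToyMultiDict(dict):
    # store every value for a key in a list, like werkzeug's MultiDict does
    def add(self, key, value):
        self.setdefault(key, []).append(value)

    def getlist(self, key):
        # a missing key yields an empty list, matching MultiDict.getlist
        return list(self.get(key, []))

files = ToyMultiDict()
files.add("file", "Single_XML.xml")
files.add("file", "Second_XML.xml")

print(files.getlist("file"))    # → ['Single_XML.xml', 'Second_XML.xml']
print(files.getlist("file[]"))  # → []  (the key posted was "file", not "file[]")
```

This is exactly what happened in the question: the test client posted under the key "file", so getlist("file[]") silently returned an empty list instead of raising an error.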