How to debug FastAPI OpenAPI generation error - python

I spent some time going over this error but had no success.
File "C:\Users\ebara.conda\envs\asci\lib\site-packages\fastapi\openapi\utils.py", line 388, in get_openapi
flat_models=flat_models, model_name_map=model_name_map
File "C:\Users\ebara.conda\envs\asci\lib\site-packages\fastapi\utils.py", line 28, in get_model_definitions
model_name = model_name_map[model]
KeyError: <class 'pydantic.main.Body_login_access_token_api_v1_login_access_token_post'>
The problem is that I'm trying to build a project with user authentication, using the OpenAPI form to create new users in the database.
I've used backend part of this template project https://github.com/tiangolo/full-stack-fastapi-postgresql
Everything works except for the authentication endpoint shown here:

@router.post("/login/access-token", response_model=schemas.Token)
def login_access_token(
        db: Session = Depends(deps.get_db), form_data: OAuth2PasswordRequestForm = Depends()) -> Any:
When I add this part, form_data: OAuth2PasswordRequestForm = Depends(), and go to the /docs page, this error appears: Failed to load API definition. Fetch error. Internal Server Error /openapi.json.
The server itself runs normally, but it can't load the OpenAPI schema. If I remove the aforementioned form_data part, then everything works smoothly, but without authorisation. I tried to debug it, but had no success. I think it might be connected to the dependency graph or some start-up issue, but I have no guess how to trace it back.
Here is a full working example which will reproduce the error. The link points to the code which causes the problem. If you comment out lines 18-39, the docs will open without any problems.
https://github.com/BEEugene/fastapi_error_demo/blob/master/fastapi_service/api/api_v1/endpoints/login.py
Any ideas on how to debug or why this error happens?

You are using the Depends function without an argument; the error in the console may have been provoked by that. You must pass the OAuth2PasswordRequestForm class after importing it from fastapi.security to get the result you were expecting:

from fastapi.security import OAuth2PasswordRequestForm

form_data: OAuth2PasswordRequestForm = Depends(OAuth2PasswordRequestForm)

It might work.

It seems that in my case the main issue was that I was an idiot.
As said, if you comment out lines 18-39, the docs will open without any problems. But you will notice this warning:
UserWarning: Duplicate Operation ID read_users_api_v1_users__get for function read_users at ...\fastapi_error\fastapi_service\api\api_v1\endpoints\users.py
  warnings.warn(message)
I started to compare all the files, and it turned out that I had included the router in the FastAPI app twice:
import logging

from fastapi import FastAPI
from starlette.middleware.cors import CORSMiddleware

from fastapi_service.api.api_v1.api import api_router
from fastapi_service.core.config import settings
from fastapi_service.core.event_handlers import (start_app_handler,
                                                 stop_app_handler)

log = logging.getLogger(__name__)


def get_app(mode="prod") -> FastAPI:
    fast_app = FastAPI(title=settings.PROJECT_NAME,
                       version=settings.APP_VERSION,
                       debug=settings.IS_DEBUG)
                       # openapi_url=f"{settings.API_V1_STR}/openapi.json")
    # first time when I included the router
    fast_app.include_router(api_router, prefix=f"{settings.API_V1_STR}")
    fast_app.mode = mode
    logger = log.getChild("get_app")
    logger.info("adding startup")
    fast_app.add_event_handler("startup", start_app_handler(fast_app))
    logger.info("adding shutdown")
    fast_app.add_event_handler("shutdown", stop_app_handler(fast_app))
    return fast_app


app = get_app()

# Set all CORS enabled origins
if settings.BACKEND_CORS_ORIGINS:
    app.add_middleware(
        CORSMiddleware,
        allow_origins=[str(origin) for origin in settings.BACKEND_CORS_ORIGINS],
        allow_credentials=True,
        allow_methods=["*"],
        allow_headers=["*"],
    )

# second time when I included the router
app.include_router(api_router, prefix=settings.API_V1_STR)
So, if you comment out (or just delete) the second router inclusion, the app will work normally.
It seems the answer to my question on how to debug this error is to find the point in fastapi where the bug appears and compare the values there against a version without the error. In my case, the number of keys in the dictionaries in the function get_model_definitions differed.
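That comparison can be done outside the debugger too. A toy sketch of the idea, where the dict keys stand in for FastAPI's internal model_name_map and the key names are made up for illustration:

```python
# Toy sketch: capture the suspect dict's keys in a working run and in a broken run,
# then diff them. The key present only in the working run is the model whose lookup
# later raises KeyError in get_model_definitions.
working_keys = {"Token", "User", "Body_login_access_token"}  # illustrative names
broken_keys = {"Token", "User"}

missing = working_keys - broken_keys
print(sorted(missing))  # ['Body_login_access_token']
```

A plain set difference like this is often enough to spot which model vanished between the two runs.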

I had the same problem. For me it was because I had code like this:

from pydantic import BaseModel

class A(BaseModel):
    b: B

class B(BaseModel):
    c: int

Instead, class B should have been defined above class A. This fixed it:

from pydantic import BaseModel

class B(BaseModel):
    c: int

class A(BaseModel):
    b: B
More info: https://stackoverflow.com/a/70384637/9439097
Regarding your original question on how to debug these or similar errors:
You probably have your routes defined somewhere. Comment all of your routers/routes out; then the OpenAPI docs should generate (and they should show that you have no routes). Then enable the routes one by one and see which one causes the error. This is how I debugged my situation.
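The one-by-one approach above can be sped up by bisecting the route list. A generic sketch, where find_culprit and its arguments are made up for illustration, assuming a single offending router and a cheap way to rebuild the app from a subset:

```python
def find_culprit(routers, is_broken):
    """Bisect a list of routers to find the single one whose inclusion
    breaks OpenAPI generation. is_broken(subset) should rebuild the app
    with only that subset included and report whether /openapi.json fails."""
    lo, hi = 0, len(routers)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_broken(routers[:mid]):
            hi = mid  # culprit is in the first half
        else:
            lo = mid  # culprit is in the second half
    return routers[lo]

# Toy usage: pretend the "login" router is the one that breaks the docs.
routers = ["users", "items", "login", "utils"]
print(find_culprit(routers, lambda subset: "login" in subset))  # login
```

With N routers this needs about log2(N) rebuilds instead of N.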


Recursive logging issue when using Opencensus with FastAPI

I have a problem with my implementation of Opencensus, logging in Python, and FastAPI. I want to log incoming requests to Application Insights in Azure, so I added a FastAPI middleware to my code following the Microsoft docs and this GitHub post:
propagator = TraceContextPropagator()

@app.middleware('http')
async def middleware_opencensus(request: Request, call_next):
    tracer = Tracer(
        span_context=propagator.from_headers(request.headers),
        exporter=AzureExporter(connection_string=os.environ['APPLICATION_INSIGHTS_CONNECTION_STRING']),
        sampler=AlwaysOnSampler(),
        propagator=propagator)
    with tracer.span('main') as span:
        span.span_kind = SpanKind.SERVER

        tracer.add_attribute_to_current_span(HTTP_HOST, request.url.hostname)
        tracer.add_attribute_to_current_span(HTTP_METHOD, request.method)
        tracer.add_attribute_to_current_span(HTTP_PATH, request.url.path)
        tracer.add_attribute_to_current_span(HTTP_ROUTE, request.url.path)
        tracer.add_attribute_to_current_span(HTTP_URL, str(request.url))

        response = await call_next(request)
        tracer.add_attribute_to_current_span(HTTP_STATUS_CODE, response.status_code)

    return response
This works great when running locally, and all incoming requests to the API are logged to Application Insights. Since implementing Opencensus, however, when deployed in a Container Instance on Azure, after a couple of days (approximately 3) an issue arises that looks like a recursive logging loop (+30,000 logs per second!), stating among other things Queue is full. Dropping telemetry, before finally crashing after a few hours of mad logging.
Our logger.py file where we define our logging handlers is as follows:
import logging.config
import os
import tqdm
from pathlib import Path
from opencensus.ext.azure.log_exporter import AzureLogHandler


class TqdmLoggingHandler(logging.Handler):
    """
    Class for enabling logging during a process with a tqdm progress bar.

    Using this handler logs will be put above the progress bar, pushing the
    progress bar down instead of replacing it.
    """
    def __init__(self, level=logging.NOTSET):
        super().__init__(level)
        self.formatter = logging.Formatter(fmt='%(asctime)s <%(name)s> %(levelname)s: %(message)s',
                                           datefmt='%d-%m-%Y %H:%M:%S')

    def emit(self, record):
        try:
            msg = self.format(record)
            tqdm.tqdm.write(msg)
            self.flush()
        except (KeyboardInterrupt, SystemExit):
            raise
        except:
            self.handleError(record)


logging_conf_path = Path(__file__).parent
logging.config.fileConfig(logging_conf_path / 'logging.conf')
logger = logging.getLogger(__name__)
logger.addHandler(TqdmLoggingHandler(logging.DEBUG))  # Add tqdm handler to root logger to replace the stream handler

if os.getenv('APPLICATION_INSIGHTS_CONNECTION_STRING'):
    logger.addHandler(AzureLogHandler(connection_string=os.environ['APPLICATION_INSIGHTS_CONNECTION_STRING']))

warning_level_loggers = ['urllib3', 'requests']
for lgr in warning_level_loggers:
    logging.getLogger(lgr).setLevel(logging.WARNING)
Does anyone have any idea what could be the cause of this issue, or has anyone encountered something similar? I can't tell what the 'first' error log is because of the vast amount of logging.
Please let me know if additional information is required.
Thanks in advance!
We decided to revisit the problem and found two helpful threads describing similar if not exactly the same behaviour we were seeing:
https://github.com/census-instrumentation/opencensus-python/issues/862
https://github.com/census-instrumentation/opencensus-python/issues/1007
As described in the second thread, it seems that Opencensus attempts to send a trace to AI and, on failure, the failed logs are batched and sent again after 15s (the default). This goes on indefinitely until success, possibly causing the huge and seemingly recursive spam of failure logs.
A solution introduced and proposed by Izchen in this comment is to set the enable_local_storage=False for this issue.
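Per that comment, the flag is passed when constructing the exporter. A minimal sketch, assuming the same connection-string setup as in the middleware above:

```python
# Sketch based on the linked comment: disable Opencensus's local retry storage so
# failed batches are dropped instead of being retried indefinitely.
exporter = AzureExporter(
    connection_string=os.environ['APPLICATION_INSIGHTS_CONNECTION_STRING'],
    enable_local_storage=False,
)
```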
Another solution would be to migrate to OpenTelemetry, which should not contain this potential problem and is the solution we are currently running. Do keep in mind that Opencensus is still the officially supported application monitoring solution from Microsoft and that OpenTelemetry is still very young. OpenTelemetry does seem to have a lot of support, however, and is getting more and more traction.
As for the implementation of OpenTelemetry we did the following to trace our requests:
if os.getenv('APPLICATION_INSIGHTS_CONNECTION_STRING'):
    from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter
    from opentelemetry import trace
    from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
    from opentelemetry.propagate import extract
    from opentelemetry.sdk.resources import SERVICE_NAME, SERVICE_NAMESPACE, SERVICE_INSTANCE_ID, Resource
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor

    provider = TracerProvider()
    processor = BatchSpanProcessor(AzureMonitorTraceExporter.from_connection_string(
        os.environ['APPLICATION_INSIGHTS_CONNECTION_STRING']))
    provider.add_span_processor(processor)
    trace.set_tracer_provider(provider)

    FastAPIInstrumentor.instrument_app(app)
OpenTelemetry supports a lot of custom instrumentors that can be used to create spans for, for example, Requests, PyMongo, Elastic, Redis, etc. => https://opentelemetry.io/registry/.
If you'd want to write your own custom tracers/spans like in the Opencensus example above, you can attempt something like this:

# These still come from Opencensus for convenience
HTTP_HOST = COMMON_ATTRIBUTES['HTTP_HOST']
HTTP_METHOD = COMMON_ATTRIBUTES['HTTP_METHOD']
HTTP_PATH = COMMON_ATTRIBUTES['HTTP_PATH']
HTTP_ROUTE = COMMON_ATTRIBUTES['HTTP_ROUTE']
HTTP_URL = COMMON_ATTRIBUTES['HTTP_URL']
HTTP_STATUS_CODE = COMMON_ATTRIBUTES['HTTP_STATUS_CODE']

provider = TracerProvider()
processor = BatchSpanProcessor(AzureMonitorTraceExporter.from_connection_string(
    os.environ['APPLICATION_INSIGHTS_CONNECTION_STRING']))
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)

@app.middleware('http')
async def middleware_opentelemetry(request: Request, call_next):
    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span('main',
                                      context=extract(request.headers),
                                      kind=trace.SpanKind.SERVER) as span:
        span.set_attributes({
            HTTP_HOST: request.url.hostname,
            HTTP_METHOD: request.method,
            HTTP_PATH: request.url.path,
            HTTP_ROUTE: request.url.path,
            HTTP_URL: str(request.url)
        })
        response = await call_next(request)
        span.set_attribute(HTTP_STATUS_CODE, response.status_code)

    return response
The AzureLogHandler from our logger.py configuration wasn't needed any more with this solution and was thus removed.
Some other sources that might be useful:
https://learn.microsoft.com/en-us/azure/communication-services/quickstarts/telemetry-application-insights?pivots=programming-language-python#setting-up-the-telemetry-tracer-with-communication-identity-sdk-calls
https://learn.microsoft.com/en-us/python/api/overview/azure/monitor-opentelemetry-exporter-readme?view=azure-python-preview
https://learn.microsoft.com/en-us/azure/azure-monitor/app/opentelemetry-enable?tabs=python

Flask REST API get query param not working for me

I have the below Python code, which uses Flask and request:

from flask import request

@app.route('/v1/api/check_current_weather_by_city?city=Tel-aviv')
def check_current_weather_by_city():
    city = request.args.get('city')
but city doesn't get the expected value of Tel-aviv; instead this exception is thrown:

_lookup_req_object
    raise RuntimeError(_request_ctx_err_msg)
RuntimeError: Working outside of request context.
I'm working with this service:
https://openweathermap.org/current
It seems that you have mixed issues on your hands. First of all, your route should not contain ?city=Tel-aviv, just the "static" part of your URL.
Replace this line:

@app.route('/v1/api/check_current_weather_by_city?city=Tel-aviv')

with this line:

@app.route('/v1/api/check_current_weather_by_city')
That issue aside, it shouldn't raise the RuntimeError: Working outside of request context. Be sure to initialize Flask properly. Was the app initialized in this file? You may need a line like this: app = Flask(__name__)
Is this a blueprint? If it is, be sure to register it too. Check the official documentation.
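The underlying point is that the path and the query string are separate parts of a URL: routes match only the path, and the query string is parsed at request time. The standard library makes the split explicit, independent of Flask:

```python
from urllib.parse import urlsplit, parse_qs

url = '/v1/api/check_current_weather_by_city?city=Tel-aviv'
parts = urlsplit(url)

print(parts.path)                     # /v1/api/check_current_weather_by_city
print(parse_qs(parts.query)['city'])  # ['Tel-aviv']
```

Flask routes against the path part only; request.args.get('city') is how you read the query part inside a request.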

Python web application with flask on local machine

I'm new to Python and Flask, and I have a project from the production environment which I'm trying to run on my local machine.
I was able to install all the packages and bring the app up on http://127.0.0.1:5000, but that is the only page that actually works locally. When I try to do authorization, or even a simple POST, nothing happens on my machine (I put some print statements in the other files and none of them fire), so I assume the requests keep going to production, as the project has some APIs as well.
Here is the main page (application.py), which does work on my local machine.
import os
import jwt
import logging
from datetime import datetime, timedelta
from http import HTTPStatus
from pydantic import BaseModel
from passlib.context import CryptContext
from flask import Flask, request, jsonify
from flask_restplus import Api, Resource, fields
from werkzeug.middleware.proxy_fix import ProxyFix
from applicationinsights.flask.ext import AppInsights

app = Flask(__name__)
app.wsgi_app = ProxyFix(app.wsgi_app, x_proto=1, x_host=1)
api = Api(app, doc='/')
ns = api.namespace(name='Room Parsing', path='/')

swaggerTokenParser = api.parser()
swaggerTokenParser.add_argument('username', location='form')
swaggerTokenParser.add_argument('password', location='form')

pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
rtp = RoomTitleParser(room_prototype_XX_path)

ALGORITHM = "HS256"
app.config["SECRET_KEY"] = os.getenv('SECRET_KEY')
app.config["APPINSIGHTS_INSTRUMENTATIONKEY"] = os.getenv('APPINSIGHTS_INSTRUMENTATIONKEY')
appinsights = AppInsights(app)

app.logger.setLevel(level=logging.INFO)
logger = app.logger


@ns.route("/api/room/parser")
class RoomParser(Resource):
    @api.expect(swaggerRoom)
    @api.doc(description='Process a JSON object with a room description and a unique identifier in order to run it through the parser. This will result in a list of keywords which will be extracted from the description. The result will be returned in a JSON format')
    def post(self):
        try:
            room_desc = "deluxe suite queen ocean view"
            room_id = "ID123"
            print('11111111111')
            if not room_desc or not room_id:
                return make_json_error_message("Please send a room with description and id", HTTPStatus.BAD_REQUEST)
            room_dict = dict(Room(description=room_desc, id=room_id))
            parsed = rtp.parse_title(room_dict)
            print(parsed)
            return jsonify(parsed['room'])
        except Exception as e:
            logger.error("Error parsing a room: " + repr(e))
            return make_json_error_message("We have encountered an error. Please try again later", HTTPStatus.BAD_REQUEST)


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
As you can see, I have some print statements, and all of them show up in my local console. But when I track down the code, for example this line:

parsed = rtp.parse_title(room_dict)

and put some print commands inside the parse_title() function, which is located in another file, I do NOT see any output in the console or on the webpage!
Why? I have no idea!!! LOL, and that is why I'm here.
I believe it might be related to the @ns.route("/api/room/parser") that I have on top of the class, but I'm not sure.
Can you guys please drop some knowledge here so I can learn and get this code to work completely on my local machine?
Thanks for your help!
With what you've provided, there doesn't appear to be any reference to the production environment.
The only thing that sticks out to me is
app.wsgi_app = ProxyFix(app.wsgi_app, x_proto=1, x_host=1)
The Werkzeug Documentation states that this middleware can set REMOTE_ADDR, HTTP_HOST from X-Forwarded headers. You might try removing that for a bit and see if that helps. There might be some reference to production in that proxy. I don't know enough about that middleware to know for sure however.
It might be helpful to know of any other configuration information or environment you have setup.
It turned out to be related to my conda env.
I uninstalled Anaconda, installed plain Python, installed PyCharm, and set up a new env in PyCharm. Then it worked like a charm!
Thanks

Webob / Pyramid query string parameters out of order upon receipt

I am running Pyramid as my API server. Recently we started getting query string parameters out of order when they are handed to the RESTful API server. For example, a GET to /v1/finishedGoodRequests?exact=true&id=39&join=OR&exact=false&name=39
is logged by the RESTful API module upon init as request.url:
v1/finishedGoodRequests?join=OR&name=39&exact=true&exact=false&id=39
with request.query_string: join=OR&name=39&exact=true&exact=false&id=39
I process the query params in order to qualify the search, in this case id exactly 39, or 39 anywhere in the name. What kind of server setting or bug could have crept into the server code to cause such a thing? It is still a MultiDict...
As a simple example, this works fine for me, and the MultiDict has always preserved the order, so I suspect something in your stack is rewriting the query string.
from pyramid.config import Configurator
from pyramid.view import view_config
from waitress import serve

@view_config(renderer='json')
def view(request):
    return list(request.GET.items())

config = Configurator()
config.scan(__name__)
app = config.make_wsgi_app()
serve(app, listen='127.0.0.1:8080')

$ curl http://localhost:8080\?join=OR\&name=39\&exact=true\&exact=false\&id=39
[["join", "OR"], ["name", "39"], ["exact", "true"], ["exact", "false"], ["id", "39"]]
Depending on which WSGI server you are using, you can often inspect environ vars to see the original URL, which may be handy. Waitress does not expose it, so instead put something high up in the pipeline (WSGI middleware) that can log out environ['QUERY_STRING'] and check whether it still matches what you see lower down in your stack.
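A minimal sketch of such a middleware, using only the WSGI interface (the QueryStringProbe class name and logger setup are made up for illustration); wrap your Pyramid app with it before any other middleware so it sees the query string first:

```python
import logging

log = logging.getLogger('qs_probe')

class QueryStringProbe:
    """WSGI middleware that records the raw QUERY_STRING before the framework parses it."""
    def __init__(self, app):
        self.app = app
        self.seen = []  # kept for inspection; logging alone is enough in practice

    def __call__(self, environ, start_response):
        qs = environ.get('QUERY_STRING', '')
        self.seen.append(qs)
        log.info('raw query string: %s', qs)
        return self.app(environ, start_response)

# Toy usage against a trivial WSGI app:
def inner_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok']

wrapped = QueryStringProbe(inner_app)
body = wrapped({'QUERY_STRING': 'exact=true&id=39'}, lambda status, headers: None)
print(wrapped.seen)  # ['exact=true&id=39']
```

If the string logged here is already reordered, the rewrite happens before your app; if not, something inside your stack is rebuilding it.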

Mailchimp python wrapper gives error - no session

I'm trying to implement the Mailchimp Python API in a Django project, following their example on GitHub. I was trying to make a connection in a class-based view; however, when I load the view I get this notice:

AttributeError at /
'module' object has no attribute 'session'

It's set up exactly like their example, and the error occurs where I define

m = get_mailchimp_api()
I opened up the mailchimp.py file in my site packages after following the traceback and saw the following:
import requests

class Mailchimp(object):
    root = 'https://api.mailchimp.com/2.0/'

    def __init__(self, apikey=None, debug=False):
        '''Initialize the API client

        Args:
            apikey (str|None): provide your MailChimp API key. If this is left as None, we will attempt to get the API key from the following locations::
                - MAILCHIMP_APIKEY in the environment vars
                - ~/.mailchimp.key for the user executing the script
                - /etc/mailchimp.key
            debug (bool): set to True to log all the request and response information to the "mailchimp" logger at the INFO level. When set to false, it will log at the DEBUG level. By default it will write log entries to STDERR
        '''
        self.session = requests.session()
The traceback ends at the self.session = requests.session() line.
This is my view where I am trying to call Mailchimp
from app.utils import get_mailchimp_api
import mailchimp
from django.views.generic import TemplateView

class HomeView(TemplateView):
    template_name = 'home.html'
    # print requests -- this is undefined
    m = get_mailchimp_api()
Is it because the CBV doesn't have a request parameter? In the GitHub example they show the connection being made in a function-based view, where the function takes a request. If that's the case, how can I pass the request into the CBV? This is the exact example Mailchimp gives on GitHub:
def index(request):
    try:
        m = get_mailchimp_api()
        lists = m.lists.list()
    except mailchimp.Error, e:
        messages.error(request, 'An error occurred: %s - %s' % (e.__class__, e))
        return redirect('/')
Requests doesn't have a session() method... but it does have a Session() object.
Sounds like a bug in the wrapper.
Requests aliases Session() as session(), so that's probably not the issue. It almost sounds like something is up either with your get_mailchimp_api() method or with the imports. Other Stack Overflow questions about similar error messages seem to come from mutual imports, typos, or other such things.
Presumably your app.utils module already imports mailchimp, like MailChimp's does? If not, I'd try that. If so, maybe remove the import mailchimp from this file.
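One common import mix-up that produces exactly this message is a local file shadowing the real library. A self-contained demonstration of the effect (the temporary requests.py file here is created only for the demo):

```python
import os
import sys
import tempfile

# Create an empty requests.py that will shadow the real library on sys.path.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, 'requests.py'), 'w') as f:
    f.write('# empty module shadowing the real requests package\n')

sys.path.insert(0, tmp)
sys.modules.pop('requests', None)  # force a fresh import
import requests                    # picks up the shadow, not the real library

print(hasattr(requests, 'session'))  # False -> "'module' object has no attribute 'session'"
```

So it's worth checking the project for any file or module whose name collides with requests (or mailchimp) before blaming the wrapper.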
