Deploying Flask SQLAlchemy apps on AWS Lambda and API Gateway - python

I am not able to find good resources that can help me understand how I can migrate my Flask and SQLAlchemy apps to AWS Lambda and API Gateway and make them serverless. For instance, below is sample code taken from the flask_sqlalchemy documentation:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////tmp/test.db'
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True, nullable=False)
    email = db.Column(db.String(120), unique=True, nullable=False)

    def __repr__(self):
        return '<User %r>' % self.username
Now how can I migrate this code to AWS Lambda? Is it even possible? For instance, the line app = Flask(__name__) should not be there, right? If there is no app variable, how am I going to initialize the db variable?
Can someone please give me an intro, or a link to a good tutorial, that will clear up these concepts?
Many thanks in advance.

To use a Flask/SQLAlchemy app with Lambda, you need to wrap Flask in the Lambda dispatch model and make sure SQLAlchemy can access its database.
Dispatching Lambda requests to Flask
You can integrate Chalice with Flask like this:
import collections
import re

import chalice
import flask
import requests


class ChaliceWithFlask(chalice.Chalice):
    """
    Subclasses Chalice to host a Flask app, route and proxy requests to it.
    """
    def __init__(self, flask_app, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.flask_app = flask_app
        self.trailing_slash_routes = []
        routes = collections.defaultdict(list)
        for rule in self.flask_app.url_map.iter_rules():
            route = re.sub(r"<(.+?)(:.+?)?>", r"{\1}", rule.rule)
            if route.endswith("/"):
                self.trailing_slash_routes.append(route.rstrip("/"))
            routes[route.rstrip("/")] += rule.methods
        for route, methods in routes.items():
            self.route(route, methods=list(set(methods) - {"OPTIONS"}), cors=True)(self.dispatch)

    def dispatch(self, *args, **kwargs):
        uri_params = self.current_request.uri_params or {}
        path = self.current_request.context["resourcePath"].format(**uri_params)
        if self.current_request.context["resourcePath"] in self.trailing_slash_routes:
            if self.current_request.context["path"].endswith("/"):
                path += "/"
            else:
                return chalice.Response(status_code=requests.codes.found,
                                        headers={"Location": path + "/"},
                                        body="")
        req_body = self.current_request.raw_body if self.current_request._body is not None else None
        base_url = "https://{}".format(self.current_request.headers["host"])
        query_string = self.current_request.query_params or {}
        with self.flask_app.test_request_context(path=path,
                                                 base_url=base_url,
                                                 query_string=list(query_string.items()),
                                                 method=self.current_request.method,
                                                 headers=list(self.current_request.headers.items()),
                                                 data=req_body,
                                                 environ_base=self.current_request.stage_vars):
            flask_res = self.flask_app.full_dispatch_request()
        res_headers = dict(flask_res.headers)
        res_headers.pop("Content-Length", None)
        res_body = b"".join([c for c in flask_res.response])
        return chalice.Response(status_code=flask_res._status_code,
                                headers=res_headers,
                                body=res_body)


app_name = "my-app"  # whatever name your app uses
flask_app = flask.Flask(app_name)
# add routes, etc. to your Flask app here
app = ChaliceWithFlask(app_name, flask_app=flask_app)
Connecting SQLAlchemy to the database
You could access the database directly, but that means opening the database port to the Internet, or placing your Lambda in a VPC (which prevents the Lambda from reaching the Internet unless you also add a NAT gateway). Also, traditional database drivers make assumptions about the persistence of their connections that are not satisfied in Lambda.
AWS recently came out with the perfect solution for this - the AWS Aurora RDS Data API. It's basically an AWS-authenticated SQL-over-HTTP tunnel. I wrote a SQLAlchemy adapter for it: sqlalchemy-aurora-data-api. After installing it, you can do:
import flask
import flask_sqlalchemy

cluster_arn = "arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-serverless-cluster"
secret_arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:MY_DB_CREDENTIALS"

app = flask.Flask(app_name)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql+auroradataapi://:@/my_db_name'
engine_options = dict(connect_args=dict(aurora_cluster_arn=cluster_arn, secret_arn=secret_arn))
db = flask_sqlalchemy.SQLAlchemy(app, engine_options=engine_options)

First of all, in AWS Lambda you don't use Flask for routing anymore. Instead, use AWS API Gateway for routing. An example of the routing setup (originally shown as a screenshot) is described at https://apievangelist.com/2017/10/23/a-simple-api-with-aws-dynamodb-lambda-and-api-gateway/
In that screenshot, the "Lambda" box at the right end shows the name of the Lambda function you have uploaded. For Lambda in Python, see https://docs.aws.amazon.com/lambda/latest/dg/python-programming-model-handler-types.html
Basically, the entry point of a Python Lambda is the handler:

def handler_name(event, context):
    ...
    return some_value
From the event and context you can get everything: path, HTTP method, headers, params, body, etc. (like flask.request). You should also know there are two integration types, LAMBDA and LAMBDA_PROXY (see the Integration Request box in the screenshot).
The short version of the difference:
LAMBDA mode preprocesses the request body automatically and gives your Lambda function a Python object in event.
LAMBDA_PROXY gives you the raw HTTP request; you need to parse the content yourself inside the Lambda function, as sketched below.
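For illustration, here is a minimal sketch of a LAMBDA_PROXY-style handler. The event keys httpMethod, path, queryStringParameters and body are what API Gateway sends for proxy integrations; the rest is illustrative:

import json

def handler(event, context):
    # With LAMBDA_PROXY, API Gateway passes the raw HTTP request through.
    method = event["httpMethod"]
    path = event["path"]
    params = event.get("queryStringParameters") or {}
    body = json.loads(event["body"]) if event.get("body") else None
    # Do your own routing/handling here, then return a proxy-format response.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"method": method, "path": path, "params": params}),
    }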
As for SQLAlchemy, all you need to do is zip the SQLAlchemy library code and its dependencies together with your Lambda function and upload it to the Lambda console; it works without any modification.
Please note that SQLite will not work well in Lambda, as a Lambda function's filesystem is ephemeral (only /tmp is writable, and it does not persist between invocations). You should put the data somewhere else, e.g. Amazon RDS (with MySQL, PostgreSQL, whatever you like), and then make sure the Lambda is attached to the same VPC (Amazon's internal network) as the RDS database.
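If you do go the RDS-in-a-VPC route, a common pattern is to create the SQLAlchemy engine at module scope so that warm Lambda containers reuse it across invocations. A minimal sketch, assuming the database URL is supplied in a DB_URL environment variable and a users table exists (both illustrative):

import os
import sqlalchemy as sa

# Created once per Lambda container and reused across warm invocations.
# pool_pre_ping guards against connections the database has silently dropped.
engine = sa.create_engine(os.environ["DB_URL"], pool_size=1, pool_pre_ping=True)

def handler(event, context):
    with engine.connect() as conn:
        count = conn.execute(sa.text("SELECT count(*) FROM users")).scalar()
    return {"statusCode": 200, "body": str(count)}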

Related

PonyORM/Postgres multi-tenancy options

I have an application running in production that I've built for a single client that I want to convert to support multiple "tenants".
Currently I am using a Postgres database where all my data resides in a single database in the default public schema. I would like to isolate each tenant to a separate Postgres schema. Ideally, my application's UI would make a call to my API using the tenant's subdomain. In before_request I would somehow be able to scope all database queries during the current request context to that tenant's schema. Is this possible?
I envisage an ideal solution to be something similar to this contrived example:
from flask import Flask, request, jsonify, make_response
from pony.orm import Database, Required

app = Flask(__name__)
db = Database(**{<db_connection_dict>})

class User(db.Entity):
    email = Required(str)
    password = Required(str)

    @classmethod
    def login(cls, email: str, password: str) -> str:
        user = cls.get(lambda u: u.email.lower() == email.lower())
        if not user:
            return None
        password_is_valid = <method_to_check_hashed_password>
        if not password_is_valid:
            return None
        return <method_to_generate_jwt>

db.generate_mapping()

@app.before_request
def set_tenant():
    tenant_subdomain = request.host.split(".")[0]
    # MISSING STEP.. set_schema is a fictitious method, does something similar to this exist?
    db.set_schema(schema=tenant_subdomain)??

@app.route("/auth/login", methods=["POST"])
def login_route():
    data = request.get_json()
    jwt = User.login(data["email"], data["password"])
    if not jwt:
        return make_response({}, 403)
    return make_response(jsonify(data=jwt), 200)
I've come across an interesting/simple example using SQLAlchemy. If this isn't possible with PonyORM, I may consider porting my models over to SQLAlchemy, but I would miss the simplicity of Pony :(
I thought about possibly using the Database.on_connect method to do something like the following, but I'm not sure if anyone has other ideas or if this would even work properly in production. I suspect not, because if I had two separate tenants querying the database they would overwrite the search path..
@db.on_connect()
def set_request_context_tenant_schema(db, connection) -> None:
    subdomain = request.host.split(".")[0]
    cursor = connection.cursor()
    cursor.execute(f"SET search_path TO {subdomain}, public;")

Using Flask SQLAlchemy from worker threads

I have a python app that uses Flask RESTful as well as Flask SQLAlchemy. Part of the API I'm writing has the side effect of spinning off Timer objects. When a Timer expires, it executes some database queries. I'm seeing an issue in which code that is supposed to update rows in the database (a sqlite backend) is actually not issuing any UPDATE statements. I have verified this by turning the SQLALCHEMY_ECHO flag on to log the SQL statements. Whether or not the code works seems to be random. About half the time it fails to issue the UPDATE statement. See full example below.
My guess here is that SQLAlchemy Flask does not work properly when called from a worker thread. I think part of the point of Flask SQLAlchemy is to manage the SQLAlchemy sessions for you per API request. Obviously since there are no API requests going on when the Timer expires, I could see where things may not work properly.
Just to test this, I went ahead and wrote a simple data access layer using python's sqlite3 interface and it seems to solve the problem.
I'd really rather not have to rewrite a bunch of data access code though. Is there a way to get Flask SQLAlchemy to work properly in this case?
Sample code
Here's where I set up the flask app and save off the SQLAlchemy db object:
from flask import Flask
from flask_restful import Api
from flask_sqlalchemy import SQLAlchemy
from flask_cors import CORS

import db_conn

flask_app = Flask(__name__)
flask_app.config.from_object('config')
CORS(flask_app)
api = Api(flask_app)
db_conn.db = SQLAlchemy(flask_app)
api.add_resource(SomeClass, '/abc/<some_id>/def')
Here's how I create the ORM models:
import db_conn

db = db_conn.db

class MyTable(db.Model):
    __tablename__ = 'my_table'
    id = db.Column(db.Integer, primary_key=True)
    phase = db.Column(db.Integer, nullable=False, default=0)

    def set_phase(self, phase):
        self.phase = phase
        db.session.commit()
Here's the API handler with timer and the database call that is failing:
from flask_restful import Resource
from threading import Timer

from models import MyTable
import db_conn
import global_store

class SomeClass(Resource):
    def put(self, some_id):
        global_store.saved_id = some_id
        self.timer = Timer(60, self.callback)
        self.timer.start()
        return '', 204

    def callback(self):
        row = MyTable.query.filter_by(id=global_store.saved_id).one()
        # sometimes this works, sometimes it doesn't
        row.set_phase(1)
        db_conn.db.session.commit()
I'm guessing that in your callback you aren't actually changing the value of the object. SQLAlchemy won't issue UPDATE statements if the session state is not dirty. So if the phase is already 1 for some reason, there is nothing to do.
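If you want to keep Flask-SQLAlchemy, one way to make the callback reliable (a sketch, assuming flask_app is importable from the module that created it) is to push an application context inside the worker thread and discard that thread's session afterwards, since the Timer fires outside any request:

from threading import Timer

import db_conn
from app import flask_app  # assumed import path for the Flask app object
from models import MyTable

def callback(saved_id):
    # The Timer fires on its own thread, outside any request, so push an
    # app context to give Flask-SQLAlchemy's scoped session something to bind to.
    with flask_app.app_context():
        row = MyTable.query.filter_by(id=saved_id).one()
        row.set_phase(1)
        # Drop this thread's session so a later timer gets a fresh one.
        db_conn.db.session.remove()

# In the API handler: Timer(60, callback, args=(some_id,)).start()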

How can I test my flask application using unittest?

I'm trying to test my flask application using unittest. I want to refrain from flask-testing because I don't like to get ahead of myself.
I've really been struggling with this unittest thing. It is confusing because there's the request context and the app context, and I don't know which one I need to be in when I call db.create_all().
It seems like when I add to the database, it adds my models to the database specified in my app module (__init__.py) file, but not the database specified in the setUp(self) method.
I have some methods that must populate the database before every test_ method.
How can I point my db to the right path?
def setUp(self):
    #self.db_gd, app.config['DATABASE'] = tempfile.mkstemp()
    app.config['TESTING'] = True
    # app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + app.config['DATABASE']
    basedir = os.path.abspath(os.path.dirname(__file__))
    app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + \
        os.path.join(basedir, 'test.db')
    db = SQLAlchemy(app)
    db.create_all()
    #self.app = app.test_client()
    #self.app.testing = True
    self.create_roles()
    self.create_users()
    self.create_buildings()
    #with app.app_context():
    #    db.create_all()
    #    self.create_roles()
    #    self.create_users()
    #    self.create_buildings()

def tearDown(self):
    #with app.app_context():
    #with app.request_context():
    db.session.remove()
    db.drop_all()
    #os.close(self.db_gd)
    #os.unlink(app.config['DATABASE'])
Here is one of the methods that populates my database:
def create_users(self):
    #raise ValueError(User.query.all())
    new_user = User('Some User Name', 'xxxxx@gmail.com', 'admin')
    new_user.role_id = 1
    new_user.status = 1
    new_user.password = generate_password_hash(new_user.password)
    db.session.add(new_user)
Places I've looked at:
http://kronosapiens.github.io/blog/2014/08/14/understanding-contexts-in-flask.html
http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-xvi-debugging-testing-and-profiling
And the flask documentation:
http://flask.pocoo.org/docs/0.10/testing/
One issue you're hitting is the limitations of Flask contexts. This is the primary reason I think long and hard before including a Flask extension in my project, and flask-sqlalchemy is one of the biggest offenders. I say this because in most cases it is completely unnecessary to depend on the Flask app context when dealing with your database. Sure, it can be nice, especially since flask-sqlalchemy does a lot behind the scenes for you (mainly, you don't have to manually manage your session, metadata, or engine), but those things can easily be done on your own, and in exchange you get unrestricted access to your database with no worry about the Flask context. Here is an example of how to set up your db manually; first I will show the flask-sqlalchemy way, then the manual plain SQLAlchemy way:
the flask-sqlalchemy way:
import flask
from flask_sqlalchemy import SQLAlchemy

app = flask.Flask(__name__)
db = SQLAlchemy(app)
# define your models using db.Model as base class
# and define columns using classes inside of db
# ie: db.Column(db.String(255), nullable=False)
# then create database
db.create_all()  # <-- gives error if not currently running flask app
the standard sqlalchemy way:
import flask
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

# first we need our database engine for the connection
engine = sa.create_engine(MY_DB_URL, echo=True)
# the line above is part of the benefit of using flask-sqlalchemy,
# it passes your database uri to this function using the config value
# SQLALCHEMY_DATABASE_URI, but that config value is one reason we are
# tied to the application context

# now we need our session to create queries with
Session = sa.orm.scoped_session(sa.orm.sessionmaker())
Session.configure(bind=engine)
session = Session()

# now we need a base class for our models to inherit from
Model = declarative_base()
# and we need to tie the engine to our base class
Model.metadata.bind = engine

# now define your models using Model as base class and
# anything that would have come from db, ie: db.Column,
# will be in sa, ie: sa.Column

# then when you're ready, to create your db just call
Model.metadata.create_all()
# no flask context management needed now
If you set your app up like that, any context issues you're having should go away.
As a separate answer: to force what you already have to work, you can use the test_request_context function. In setUp do self.ctx = app.test_request_context(), then activate it with self.ctx.push(), and when you're done get rid of it in tearDown with self.ctx.pop().
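Putting that suggestion together, a minimal sketch of the test case (assuming app and db are importable from your application package, and that no queries have been issued before setUp overrides the database URI):

import unittest

from my_app import app, db  # assumed import path


class MyAppTestCase(unittest.TestCase):
    def setUp(self):
        app.config['TESTING'] = True
        app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite://'  # in-memory database
        # Push a request context so db operations outside a request still work.
        self.ctx = app.test_request_context()
        self.ctx.push()
        db.create_all()

    def tearDown(self):
        db.session.remove()
        db.drop_all()
        self.ctx.pop()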

What is the difference between Session and db.session in SQLAlchemy?

In the mapper-level event docs it says that Session.add() is not supported, but when I tried db.session.add(some_object) inside an after_insert event it worked. Example:
def after_insert_listener(mapper, connection, user):
    global_group = Group.query.filter_by(groupname='global').first()
    a = Association(user, global_group)
    db.session.add(a)

event.listen(User, 'after_insert', after_insert_listener)
Basically, any new user should be part of global_group, so I added it in the after_insert event. I inserted a user, then checked my database, and I found both the user record and the association record.
Let's check the differences:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////Users/dedeco/Documents/tmp/testDb.db'
db = SQLAlchemy(app)

>>> type(db.session)
<class 'sqlalchemy.orm.scoping.scoped_session'>
or
from sqlalchemy import *
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

some_engine = create_engine('sqlite:////Users/dedeco/Documents/tmp/testDb.db')
Session = sessionmaker(bind=some_engine)
session = Session()
Base = declarative_base()

>>> type(session)
<class 'sqlalchemy.orm.session.Session'>
Basically the difference is:
In the first way you are using an API developed for the Flask framework, called Flask-SQLAlchemy. It's the option if you are creating a Flask application, because the scope of the Session can be managed automatically by your application. You get many benefits, like the infrastructure to establish a single Session associated with the request, which is correctly constructed and torn down at the end of the request.
The second way is a pure SQLAlchemy app, so if you are using a library to connect to a particular database, you can use just the SQLAlchemy API, for example for a command-line script, background daemon, GUI application, etc.
Either way you can add records, like:
Using Flask-SQLAlchemy:
class User(db.Model):
    __tablename__ = 'users'
    user_id = db.Column(db.Integer(), primary_key=True)
    user_name = db.Column(db.String(80), unique=True)

    def __init__(self, user_name):
        self.user_name = user_name

>>> db.create_all()
>>> u = User('user1')
>>> db.session.add(u)
>>> db.session.commit()
>>> users = db.session.query(User).all()
>>> for u in users:
...     print(u.user_name)
...
user1
Using just SQLAlchemy:
class User(Base):
    __tablename__ = 'users'
    user_id = Column(Integer(), primary_key=True)
    user_name = Column(String(80), unique=True)

>>> u = User()
>>> u.user_name = 'user2'
>>> session.add(u)
>>> session.commit()
>>> users = session.query(User).all()
>>> for u in users:
...     print(u.user_name)
...
user1
user2
Note that I am connecting to the same database, just to show that you can add records in multiple ways.
server = Flask(__name__)
app = dash.Dash(__name__, server=server, external_stylesheets=[dbc.themes.LITERA], suppress_callback_exceptions=True)
app.server.config["SQLALCHEMY_DATABASE_URI"] = f'postgresql://postgres:.../...'
db = SQLAlchemy(app.server)
I have the same problem of not knowing at what point I should close the database session in my web application. I found this in the link that @GabrielChu shared, so what I understood was: if you are dealing with a web app, the session is torn down when the request ends:
A web application is the easiest case because such an application is already constructed around a single, consistent scope - this is the request, which represents an incoming request from a browser, the processing of that request to formulate a response, and finally the delivery of that response back to the client. Integrating web applications with the Session is then the straightforward task of linking the scope of the Session to that of the request. The Session can be established as the request begins, or using a lazy initialization pattern which establishes one as soon as it is needed. The request then proceeds, with some system in place where application logic can access the current Session in a manner associated with how the actual request object is accessed. As the request ends, the Session is torn down as well, usually through the usage of event hooks provided by the web framework. The transaction used by the Session may also be committed at this point, or alternatively the application may opt for an explicit commit pattern, only committing for those requests where one is warranted, but still always tearing down the Session unconditionally at the end.
Some web frameworks include infrastructure to assist in the task of aligning the lifespan of a Session with that of a web request. This includes products such as Flask-SQLAlchemy, for usage in conjunction with the Flask web framework, and Zope-SQLAlchemy, typically used with the Pyramid framework. SQLAlchemy recommends that these products be used as available.
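In plain SQLAlchemy terms, the request-scoped pattern the quote describes can be wired up by hand roughly like this (a sketch; Flask-SQLAlchemy does the equivalent for you):

from flask import Flask
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

app = Flask(__name__)
engine = create_engine('sqlite:////tmp/example.db')  # illustrative URL
Session = scoped_session(sessionmaker(bind=engine))

@app.teardown_appcontext
def remove_session(exception=None):
    # Tear the Session down unconditionally at the end of each request,
    # as the quoted docs recommend.
    Session.remove()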

Using Flask-SQLAlchemy in Blueprint models without reference to the app [closed]

I'm trying to create a "modular application" in Flask using Blueprints.
When creating models, however, I'm running into the problem of having to reference the app in order to get the db-object provided by Flask-SQLAlchemy. I'd like to be able to use some blueprints with more than one app (similar to how Django apps can be used), so this is not a good solution.*
It's possible to do a switcharoo, and have the Blueprint create the db instance, which the app then imports together with the rest of the blueprint. But then, any other blueprint wishing to create models need to import from that blueprint instead of the app.
My questions are thus:
Is there a way to let Blueprints define models without any awareness of the app they're being used in later -- and have several Blueprints come together? By this, I mean not having to import the app module/package from your Blueprint.
Am I wrong from the outset? Are Blueprints not meant to be independent of the app and be redistributable (à la Django apps)?
If not, then what pattern should you use to create something like that? Flask extensions? Should you simply not do it -- and maybe centralize all models/schemas à la Ruby on Rails?
Edit: I've been thinking about this myself now, and this might be more related to SQLAlchemy than Flask because you have to have the declarative_base() when declaring models. And that's got to come from somewhere, anyway!
Perhaps the best solution is to have your project's schema defined in one place and spread it around, like Ruby on Rails does. Declarative SQLAlchemy class definitions are really more like schema.rb than Django's models.py. I imagine this would also make it easier to use migrations (from alembic or sqlalchemy-migrate).
I was asked to provide an example, so let's do something simple: say I have a blueprint describing "flatpages" -- simple, "static" content stored in the database. It uses a table with just a shortname (for URLs), a title and a body. This is simple_pages/__init__.py:
from flask import Blueprint, render_template
from .models import Page

flat_pages = Blueprint('flat_pages', __name__, template_folder='templates')

@flat_pages.route('/<page>')
def show(page):
    page_object = Page.query.filter_by(name=page).first()
    return render_template('pages/{}.html'.format(page), page=page_object)
Then, it would be nice to let this blueprint define its own model (this is simple_pages/models.py):
# TODO Somehow get ahold of a `db` instance without referencing the app
# I might get used in!
class Page(db.Model):
    name = db.Column(db.String(255), primary_key=True)
    title = db.Column(db.String(255))
    content = db.Column(db.String(255))

    def __init__(self, name, title, content):
        self.name = name
        self.title = title
        self.content = content
This question is related to:
Flask-SQLAlchemy import/context issue
What's your folder layout for a Flask app divided in modules?
And various others, but all replies seem to rely on importing the app's db instance, or doing the reverse. The "Large app how to" wiki page also uses the "import your app in your blueprint" pattern.
* Since the official documentation shows how to create routes, views, templates and assets in a Blueprint without caring about what app it's "in", I've assumed that Blueprints should, in general, be reusable across apps. However, this modularity doesn't seem that useful without also having independent models.
Since Blueprints can be hooked into an app more than once, it might simply be the wrong approach to have models in Blueprints?
I believe the truest answer is that modular blueprints shouldn't concern themselves directly with data access, but instead rely on the application providing a compatible implementation.
So, given your example blueprint:
from flask import current_app, Blueprint, render_template

flat_pages = Blueprint('flat_pages', __name__, template_folder='templates')

@flat_pages.record
def record(state):
    db = state.app.config.get("flat_pages.db")
    if db is None:
        raise Exception("This blueprint expects you to provide "
                        "database access through flat_pages.db")

@flat_pages.route('/<page>')
def show(page):
    db = current_app.config["flat_pages.db"]
    page_object = db.find_page_by_name(page)
    return render_template('pages/{}.html'.format(page), page=page_object)
From this, there is nothing preventing you from providing a default implementation.
def setup_default_flat_pages_db(db):
    class Page(db.Model):
        name = db.Column(db.String(255), primary_key=True)
        title = db.Column(db.String(255))
        content = db.Column(db.String(255))

        def __init__(self, name, title, content):
            self.name = name
            self.title = title
            self.content = content

    class FlatPagesDBO(object):
        def find_page_by_name(self, name):
            return Page.query.filter_by(name=name).first()

    return FlatPagesDBO()
And in your configuration.
app.config["flat_pages.db"] = setup_default_flat_pages_db(db)
The above could be made cleaner by not relying on direct inheritance from db.Model and instead just using a vanilla declarative_base from SQLAlchemy, but this should represent the gist of it.
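A sketch of that cleaner variant, with the model tied to a plain declarative_base and the data-access object owning its own session factory (all names illustrative):

from sqlalchemy import Column, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Page(Base):
    __tablename__ = 'pages'
    name = Column(String(255), primary_key=True)
    title = Column(String(255))
    content = Column(String(255))

class FlatPagesDBO(object):
    # Implements the find_page_by_name contract the blueprint expects.
    def __init__(self, session_factory):
        self.session_factory = session_factory

    def find_page_by_name(self, name):
        session = self.session_factory()
        return session.query(Page).filter_by(name=name).first()

# Wire it up with, e.g., app.config["flat_pages.db"] = FlatPagesDBO(Session)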
I have a similar need: making Blueprints completely modular, with no reference to the app. I came up with a possibly clean solution, but I'm not sure how correct it is and what its limitations are.
The idea is to create a separate db object (db = SQLAlchemy()) inside the blueprint and call the init_app() and create_all() methods from where the root app is created.
Here's some sample code to show how the project is structured:
The app is called jobs and the blueprint is called status and it is stored inside the blueprints folder.
blueprints/status/models.py
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()  # <--- The db object belonging to the blueprint

class Status(db.Model):
    __tablename__ = 'status'
    id = db.Column(db.Integer, primary_key=True)
    job_id = db.Column(db.Integer)
    status = db.Column(db.String(120))
models.py
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()  # <--- The db object belonging to the root app

class Job(db.Model):
    __tablename__ = 'job'
    id = db.Column(db.Integer, primary_key=True)
    state = db.Column(db.String(120))
factory.py
from flask import Flask

from .blueprints.status.models import db as status_db  # blueprint db
from .blueprints.status.routes import status_handler   # blueprint handler
from .models import db as root_db                      # root db

def create_app():
    app = Flask(__name__)
    # Create database resources.
    app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////path/to/app.db'
    root_db.init_app(app)
    status_db.init_app(app)  # <--- Init blueprint db object.
    with app.app_context():
        root_db.create_all()
        status_db.create_all()  # <--- Create blueprint db.
    # Register blueprint routes.
    app.register_blueprint(status_handler, url_prefix="/status")
    return app
I tested it with gunicorn with a gevent worker and it works. I asked a separate question about the robustness of this solution here:
Create one SQLAlchemy instance per blueprint and call create_all multiple times
You asked "Are Blueprints not meant to be independent of the app and be redistributable (à la Django apps)? "
The answer is yes. Blueprints are not similar to Django App.
If you want to use different app/configurations, then you need to use "Application Dispatching" and not blueprints. Read this
[1]: http://flask.pocoo.org/docs/patterns/appdispatch/#app-dispatch [1]
Also, the link here [1] http://flask.pocoo.org/docs/blueprints/#the-concept-of-blueprints [1]
It clearly says and I quote "A blueprint in Flask is not a pluggable app because it is not actually an application – it’s a set of operations which can be registered on an application, even multiple times. Why not have multiple application objects? You can do that (see Application Dispatching), but your applications will have separate configs and will be managed at the WSGI layer."
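For completeness, application dispatching combines fully independent apps at the WSGI layer. A minimal sketch using Werkzeug's DispatcherMiddleware (the imported apps are assumed to exist):

from werkzeug.middleware.dispatcher import DispatcherMiddleware

from frontend_app import app as frontend  # assumed importable Flask apps
from backend_app import app as backend

# Each app keeps its own config; they are combined at the WSGI layer.
application = DispatcherMiddleware(frontend, {'/backend': backend})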
