Use Flask-SQLAlchemy models in a Jupyter notebook

Is there any way I can import my Flask-SQLAlchemy models into a Jupyter notebook? I would like to be able to explore my models and data in the notebook.

I haven't tried this but I believe it can be done, with a little bit of work.
tl;dr
Import the app, db, and the models you want to use. Push the app context before doing a query. If you understood all this, you're done.
In more detail
In the code which sets up your Flask app, you have a Flask-SQLAlchemy object, which is usually defined something like this:
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()
And somewhere else you have your models:
from db_setup import db

class MyThing(db.Model):
    thing_id = db.Column(db.Integer(), primary_key=True)
And elsewhere you have the app:
from flask import Flask
from db_setup import db
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = '...'
db.init_app(app)
Now, in your Jupyter notebook you have to be able to import the last two pieces above:
from app_setup import app
from models import MyThing
To run a query, you have to be in the app context (see http://flask.pocoo.org/docs/1.0/api/#flask.Flask.app_context):
with app.app_context():
    things = MyThing.query.filter(MyThing.thing_id < 100).all()
You should be able to run any query there. If I remember correctly, even outside of the with block the objects in things will still be valid, and you can retrieve their properties, etc.
If you want to explicitly commit, you can import db from where it's defined, and do
db.session.commit()
Just like queries made through the model class, db only works inside an app context.
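For example, a small sketch of modifying a row and committing inside the context (the specific filter value is just illustrative):
from db_setup import db
from models import MyThing

with app.app_context():
    thing = MyThing.query.filter(MyThing.thing_id == 1).first()
    thing.thing_id = 2   # modify the object while the context is active
    db.session.commit()  # commit inside the same context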
Technicalities
Don't worry about this section unless you got the above working but you want to tweak how you did it.
First of all, you might not want to use an app created in exactly the same way as in your Flask code. For example, you might want to use a different config. Instead of importing the module where app is defined, you could just create a new Flask app in your notebook. You still have to import db (to do db.init_app(app)) and MyThing. But those modules probably don't have any configuration code in them, since the configuration is all done at the level of the app.
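A minimal sketch of such a notebook-only app (the database URI here is an assumption for illustration):
from flask import Flask
from db_setup import db
from models import MyThing

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////tmp/exploration.db'  # hypothetical notebook-only config
db.init_app(app)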
Secondly, instead of using with, you could also explicitly do
my_context = app.app_context()
my_context.push()
then your SQLAlchemy code, and then later
my_context.pop()
This has two advantages. You can just push the context once, before using it in multiple notebook cells. The with block only works inside one cell.
Furthermore, storing the context in a variable after creating it means that you can re-use the same context. For the purposes of SQLAlchemy, the context acts a bit like a transaction: changes you make to an object in one context won't be visible in another context unless you committed them to the database. If you store a model object in a Python variable, you won't be able to do anything with it inside a different context.
You could also store the context in a variable, then use it in multiple with blocks (note that you enter the context object itself; push() is not a context manager):
my_context = app.app_context()
with my_context:
    thing1 = MyThing.query.order_by(MyThing.thing_id).first()
# (Maybe in another cell)
with my_context:
    print(thing1.thing_id)
A last consideration is that it might make sense to define your models using vanilla SQLAlchemy instead of Flask-SQLAlchemy. Then you wouldn't need all the context handling above, just a database connection to create a session. This would make it much easier to import the models in non-Flask code, but the tradeoff is that using them in Flask becomes a bit harder.
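For comparison, a rough sketch of the vanilla-SQLAlchemy version (the table name and URI are made up for illustration):
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class MyThing(Base):
    __tablename__ = 'my_thing'  # hypothetical table name
    thing_id = Column(Integer, primary_key=True)

engine = create_engine('sqlite:////tmp/exploration.db')  # illustrative URI
Session = sessionmaker(bind=engine)
session = Session()
things = session.query(MyThing).filter(MyThing.thing_id < 100).all()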

Related

Sharing PonyORM's db session across different python modules

I initially started a small Python project (Python, Tkinter and PonyORM) which grew larger, so I decided to divide the code (which used to be a single file) into several modules (e.g. main, form1, entity, database). Main acts as the main controller; form1, for example, contains a tkinter Frame used as an interface where the user can input data; entity contains the db.Entity mappings; and database holds the pony.Database instance along with its connection details. I think the problem is that during import I'm getting this error: "pony.orm.core.ERDiagramError: Cannot define entity 'EmpInfo': database mapping has already been generated". Can you point me to any existing code showing how this should be done?
You are probably importing your modules in the wrong order. Any module which contains entity definitions should be imported before the db.generate_mapping() call.
I think you should call db.generate_mapping() right before entering tk.mainloop() when all imports are already done.
A good approach to avoid this: rather than having the db.generate_mapping() call happen in a module's top-level code, have the module export a function that calls db.generate_mapping() after all the other modules have been imported.
The pattern I use is to put all of my db.Entity subclasses into a single module named model, and then at the bottom of model.py is:
def setup():
    """ Set up the database """
    db.bind(**database_config, create_db=True)
    db.generate_mapping(create_tables=True)
This function is called by my application's own startup (which is also responsible for setting up database_config). This way the correct import and setup order can be guaranteed.
The db object itself is also owned by this model module; if I need to use it somewhere else I import model and use model.db.
If you want to further separate things out (with different model classes living in different modules) you can have a module that owns db, then your separate model modules, and then a third module that imports db and the models and provides the setup function. For example, your directory structure could look like this:
model/
    __init__.py -- imports all of the model sub-modules and provides a setup function
    db.py -- provides the db object itself and any common entity objects that everyone else needs
    form1.py, form2.py, etc. -- import db and use its database object to define the entities
Then your main app can do something like:
import model
model.setup()
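A hedged sketch of what that split layout could look like (the module and entity names are made up):
# model/db.py -- owns the Database object
from pony.orm import Database
db = Database()

# model/form1.py -- defines entities against the shared db
from pony.orm import Required
from .db import db

class EmpInfo(db.Entity):
    name = Required(str)

# model/__init__.py -- importing the sub-modules registers their entities, then setup() binds and maps
from .db import db
from . import form1

def setup(**database_config):
    db.bind(**database_config, create_db=True)
    db.generate_mapping(create_tables=True)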

Proper code style in python Flask application

I have a Flask app in a single file (app.py) with a large code base of about 6K lines, which I want to modularize by making separate files for each group of route handlers.
Which is the proper approach:
creating a class for similar routes, like user, with member functions like login and register
user.py
class User:
    def login(self):
        pass
    def register(self):
        pass
use it like
user = User()
user.login()
or create a Python file user.py and just drop all the functions into it
user.py
def login():
    pass

def register():
    pass
and use it like
import user
user.login()
Of the above approaches, which one uses memory properly and is more efficient?
You should almost never use classes for Flask routes, as they are inherently static and so not really suited to having instances made of them.
The easiest solution is just to separate related routes into modules, as shown in the second part of your question.
If I were you I would also look into Flask's blueprints, which are specifically designed to group routes together:
http://flask.pocoo.org/docs/1.0/blueprints/
(I would also recommend doing the Flask tutorial available on the Flask website, where you build a small blogging application and blueprints and modularisation are explained: http://flask.pocoo.org/docs/1.0/tutorial/)
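For example, a minimal sketch of a user blueprint (the route bodies are placeholders):
# user.py
from flask import Blueprint

user_bp = Blueprint('user', __name__, url_prefix='/user')

@user_bp.route('/login')
def login():
    return 'login page'

@user_bp.route('/register')
def register():
    return 'register page'

# app.py
# from user import user_bp
# app.register_blueprint(user_bp)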
The latter is Pythonic.
Don't use classes when you don't need instance data; use modules.

How should I create a Flask extension which depends on another extension?

I want to create a Flask extension which depends on another Flask extension. For the sake of argument, say that it's Flask-Foo, and that it needs Flask-Redis to store some specific data in a Redis database.
I know that I can add an install dependency on Flask-Redis. However, I don't understand how I should instantiate and initialize Flask-Redis.
The setup for Flask-Foo sets up the Flask-Redis object. The drawback of this is that it assumes that the app isn't also using Flask-Redis for some other reason, configured explicitly outside of Flask-Foo. If it is, we get two objects which exist side-by-side, which seems wrong.
The user has to themselves instantiate and configure Flask-Redis. Flask-Foo checks that it has been initialized for that app, and complains otherwise. The problem with this is that it seems to impose boilerplate on the user - why should they have to set up Flask-Redis to use Flask-Foo, when they have no other knowledge or interest in the configuration of Flask-Redis? Furthermore, aren't we asking for trouble if this means that Flask-Foo.init_app() always has to be called after Flask-Redis.init_app()?
Don't use Flask-Redis. Use the Redis package directly, and manage the connection in Flask-Foo code. This would probably avoid the above problems, but it seems inelegant: we would basically have to re-solve the problems Flask-Redis already solves. And if Flask-Foo goes on to support an alternative database, it becomes complicated, since we would have to maintain code to manage the different types of connection.
Just to be clear, this is not a question specifically about Flask-Redis or how it works! I just want to understand what is generally the right way to build an extension on top of an extension.
You can pass the dependent extension to init_app. http://flask.pocoo.org/docs/1.0/extensiondev/
flask_foo/__init__.py
class FooManager:
    def __init__(self, app=None, db=None, **kwargs):
        self.app = app
        if app is not None:
            self.init_app(app, db, **kwargs)

    def init_app(self, app, db, **kwargs):
        self.db = db
        app.config.setdefault('xxx', xxx)
        # Bind Flask-Foo to app
        app.foo_manager = self
Now, you can get the foo_manager object from current_app like this:
models.py
from flask import current_app

db = current_app.foo_manager.db

class XXX(db.Model):
    pass
Finally, you may need to register foo inside an app_context():
run.py
with app.app_context():
    FooManager(app, db)  # or: xx = FooManager(); xx.init_app(app, db)
This way, the dependent extension works well for us.
Other tip: https://stackoverflow.com/a/51739367/5204664
A Flask extension has the same structure as a Python module. You should specify all its requirements in the setup.py file.
For example flask-babel
install_requires=[
    'Flask',
    'Babel>=2.3',
    'Jinja2>=2.5'
],
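Applied to the Flask-Foo example, a minimal setup.py might look like this sketch (the name, version, and dependency list are illustrative):
from setuptools import setup

setup(
    name='Flask-Foo',
    version='0.1',
    packages=['flask_foo'],
    install_requires=[
        'Flask',
        'Flask-Redis',  # the dependent extension discussed above
    ],
)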

How to properly initialise the flask-sqlalchemy module?

With flask-sqlalchemy, does anyone know why the second approach of construction in http://pythonhosted.org/Flask-SQLAlchemy/api.html doesn't suggest db.app = app as well? It seems the major difference between the first and second construction methods is simply that the first does db.app = app whilst the second does db.app = None
Thanks!
The two methods of initialization are pretty standard for Flask extensions and follow an implicit convention on how extensions are to be initialized. In this section of the Flask documentation you can find a note that explains it:
As you noticed, init_app does not assign app to self. This is intentional! Class based Flask extensions must only store the application on the object when the application was passed to the constructor. This tells the extension: I am not interested in using multiple applications.
When the extension needs to find the current application and it does not have a reference to it, it must either use the current_app context local or change the API in a way that you can pass the application explicitly.
The idea can be summarized as follows:
If you use the SQLAlchemy(app) constructor then the extension will assume that app is the only application, so it will store a reference to it in self.app.
If you use db.init_app(app) instead, the extension will assume that app is one of possibly many applications, so instead of saving a reference it will rely on current_app to locate the application every time it needs it.
The practical difference between the two ways to initialize extensions is that the first format requires the application to exist, because it must be passed in the constructor. The second format allows the db object to be created before the application exists because you pass nothing to the constructor. In this case you postpone the call to db.init_app(app) until you have an application instance. The typical situation in which the creation of the application instance is delayed is if you use the application factory pattern.
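A minimal sketch of the application factory pattern that motivates the second style (the config value is illustrative):
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()  # created before any application exists

def create_app():
    app = Flask(__name__)
    app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///example.db'  # illustrative
    db.init_app(app)  # bind the extension to this particular app
    return app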

Storing Configuration details

I have a bunch of string and integer constants that I use at various places in my app.
I am planning to put them up in a centralized location, so that it is easier to change them in the future. I could think of the following approaches:
1) Have them as individual variables, stored in the model db.py
settings_title = "My Amazing App"
settings_ver = 2.0
settings_desc = "Moar and Moar cats"
2) Have them as a dict, stored in db.py
settings = { "title": "My Amazing App",
"ver" = 2.0,
"desc" = "Moar and Moar cats"
}
Is it a good idea to use the model db.py? I've heard that it is evaluated for every request. Could putting settings there have a noticeable overhead?
Is there any difference, performance-wise between the two approaches?
Is there a better way of doing this?
You can put your config variables directly in a .py file and import that file in your modules, as Django does with settings.py. If you want to group the variables into sections, you can use ConfigParser, which can read a .cfg file.
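For instance, a small sketch of the ConfigParser approach (the file name and section are made up):
from configparser import ConfigParser  # the module is named ConfigParser on Python 2

config = ConfigParser()
config.read('settings.cfg')  # hypothetical config file

title = config.get('app', 'title')   # assumes an [app] section
ver = config.getfloat('app', 'ver')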
Your db.py model file will be executed on every request in any case, so adding a few setting assignments to the code will add negligible overhead. Instead of putting the settings in db.py, for better organization you might consider creating a separate model file. Note, model files are executed in alphabetical order, so if the settings have to be available in subsequent model files, name the settings file something like 0_settings.py to ensure it is executed before any other model files.
If you prefer, you can instead put the settings in a module (e.g., settings.py) in the application's /modules folder and import the settings object in your application code wherever you need it (in that case, the module will only be loaded once by the interpreter). If any of the settings need to be set dynamically based on the incoming request, though, you are probably better off keeping the settings in a model file.
Finally, rather than a standard dictionary, you might consider using a web2py Storage object, which is like a dictionary but allows you to access values as attributes and returns None rather than a KeyError if you try to access a key/attribute that doesn't exist:
from gluon.storage import Storage
settings = Storage()
settings.title = 'My Amazing App'
or
settings = Storage({'title': 'My Amazing App'})
Note, the web2py request, response, and session objects are all instances of the Storage class.
Django, for instance, uses a file settings.py.
It's not a model, but just a collection of variables of all types, strings/ints/dicts/whatever, and you import settings or from settings import * in every module that needs access to them.
Since it is not a model file, there is no per-request overhead when accessing the settings.
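A tiny sketch of that pattern (the file and variable names are illustrative):
# settings.py
TITLE = "My Amazing App"
VER = 2.0
DESC = "Moar and Moar cats"

# elsewhere in the app:
import settings
print(settings.TITLE)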
