Moving some database logic to its own helper module in Flask-SQLAlchemy?

I am trying to separate some of my database logic into its own helper module. This is because I have several routes that perform the same database functions, and I don't want to keep repeating the same code. I'm a bit confused on the db session scopes.
From the SQLAlchemy docs:
Some web frameworks include infrastructure to assist in the task of aligning the lifespan of a Session with that of a web request. This includes products such as Flask-SQLAlchemy, for usage in conjunction with the Flask web framework...
I think this means my db session scope is contained within a particular route since I'm using Flask and Flask-SQLAlchemy, so I came up with the following:
init.py
app = Flask(__name__)
db = SQLAlchemy(app)
routes.py
from init import db
@app.route('/one')
def one():
    form = MyForm()
    if form.validate_on_submit():
        myhelper.saveStuff1(form.stuff1.data)
        myhelper.saveStuff2(form.stuff2.data)
        db.session.commit()
    return render_template(...)

@app.route('/two')
def two():
    form = MyForm()
    if form.validate_on_submit():
        myhelper.saveStuff1(form.stuff1.data)
        myhelper.saveStuff2(form.stuff2.data)
        myhelper.saveStuff3(form.stuff3.data)
        db.session.commit()
    return render_template(...)
myhelper.py
from init import db
# Add new Item
def saveStuff1(formdata):
    db.session.add(Item(name=formdata))

# Update Item
def saveStuff2(formdata):
    item = Item.query.filter_by(name=formdata).first()
    item.description = 'default'
    db.session.add(item)

# etc...
Would this be the correct way to structure my helpers? I'm worried that from init import db will cause problems with scoping since it's imported in both files, or that this overall pattern will cause other problems.

SQLAlchemy's session scope has nothing to do with Python's variable scope, so importing db in multiple modules as you've shown won't cause problems. As for the session itself, Flask-SQLAlchemy ties it to the request for you, so you can safely ignore the discussion of session scope in the plain SQLAlchemy docs.
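For completeness, here is a minimal sketch of how a route can stay the owner of the transaction while the helpers only stage changes; the rollback handling and the methods argument are additions for illustration, not something your pattern requires:

@app.route('/one', methods=['GET', 'POST'])
def one():
    form = MyForm()
    if form.validate_on_submit():
        try:
            # helpers add/modify objects on the shared db.session but never commit
            myhelper.saveStuff1(form.stuff1.data)
            myhelper.saveStuff2(form.stuff2.data)
            db.session.commit()      # the route decides when the unit of work ends
        except Exception:
            db.session.rollback()    # discard everything the helpers staged
            raise
    return render_template(...)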

Related

Reflecting different databases in Flask factory setup

I'd like to use Flask's application factory mechanism for my application. The problem I have is that the databases I use within some blueprints are located in different places, so I'm using binds to point to them. The tables themselves are in production and already in use, so I need to reflect them in order to use them within my application.
The problem is that I can't get the reflect function working because of the application context. I always get the message that I'm working outside the application context. I understand that and can see that db really is outside it, but I have no idea how to bring it in.
I tried different variations of passing app via current_app to my models.py, but nothing worked.
config.py:
class Config(object):
    # Secret key
    SECRET_KEY = 'my_very_secret_key'
    ITEMS_PER_PAGE = 25
    SQLALCHEMY_BINDS = {
        'mysql_bind': 'mysql+mysqlconnector://localhost:3306/tmpdb'
    }
    SQLALCHEMY_TRACK_MODIFICATIONS = False
main.py:
from webapp import create_app

app = create_app('config.Config')

if __name__ == '__main__':
    app.run(debug=True)
webapp/__init__.py:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

def create_app(config_object):
    app = Flask(__name__)
    app.config.from_object(config_object)
    db.init_app(app)
    from .main import create_module as main_create_module
    main_create_module(app)
    return app
webapp/main/__init__.py:
def create_module(app):
    from .controller import blueprint
    app.register_blueprint(blueprint)
webapp/main/controller.py:
from flask import Blueprint, render_template, current_app as app
from .models import db, MyTable # <-- Problem might be here ...
blueprint = Blueprint('main', __name__)

@blueprint.route('/')
def index():
    resp = db.session.query(MyTable.name, db.func.count(MyTable.versions))\
        .filter(MyTable.versions != '')\
        .group_by(MyTable.name).all()
    if resp:
        return render_template('index.html', resp=resp)
    else:
        return 'Nothing happened'
webapp/main/models.py:
from .. import db  # <-- and here ...

db.reflect(bind='mysql_bind')

class MyTable(db.Model):
    __bind_key__ = 'mysql_bind'
    __table__ = db.metadata.tables['my_table']
Expected result would be to get the reflection working in different blueprints.
Got it working; full solution here:
https://github.com/researcher2/stackoverflow_56885380
I used sqlite3 for the test; run the create_db.py script to set up the db. Run Flask with debug.sh, since in recent versions you can't seem to just call app.run() inside __main__ anymore.
Explanation
As I understand it, a blueprint is just a way to group several views together so they can be reused multiple times in a single app or across multiple apps, with whatever route prefix you desire.
A db object is not associated with a blueprint; it is associated with an app, which provides the configuration information. Once inside the blueprint views you have access to the db object, with the relevant app context automatically available.
Regarding db.reflect, you need to make the call inside create_app and pass it the app object (preferred), or import the app inside the model, which is spaghetti.
Multiple DBs can be accessed using binding as you've shown.
So your blueprints will have access to all tables imported and flask-sqlalchemy knows which db connection to use based on the binding.
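A rough sketch of that create_app arrangement (this only mirrors the idea, not the full linked solution; models.py would then drop its own db.reflect call and just read db.metadata.tables):

# webapp/__init__.py
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

def create_app(config_object):
    app = Flask(__name__)
    app.config.from_object(config_object)
    db.init_app(app)
    # reflect the existing tables for this bind before the models are imported,
    # so db.metadata.tables['my_table'] is already populated
    db.reflect(bind='mysql_bind', app=app)
    from .main import create_module as main_create_module
    main_create_module(app)
    return app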
I'm normally a fan of explicitly defining tables so you have access to the ORM objects and fields in code completion. Do you have lots of tables/fields, or are you maybe creating something that queries table metadata for total automation on any schema, like a schema viewer or something like that?
This might be useful for others coming to this post:
https://flask-sqlalchemy.palletsprojects.com/en/2.x/contexts/
Brilliant! Thank you very much. I got it working too. Your tip gave me a hint for another way:
@blueprint.route('/')
def index():
    # pushing app_context() to import MyTable
    # now I can use db.reflect() also in models.py
    with app.app_context():
        from .models import MyTable
        results = db.session.query(MyTable).all()
        print(results)
        for row in results:
            print(row)
            print(row.versions)
            print(row.name)
    if results:
        return render_template('my_table.html', results=results)
    else:
        return 'Nothing happened'
The reflection can then be done inside models.py. The link you posted is really helpful; I don't know why I didn't stumble over it myself ...
Anyway, I do now have a lot more possibilities than before!
Cheers, mate!

Changes in Instances in 'view.py' won't reflect in pages in Flask

In my 'view.py' (the HTML routing module) I've created a thread responsible for updating the instances that hold data from the DB. The idea is that every 15 seconds the data from the DB is refreshed into these instances, so when the user refreshes the HTML page the data is available.
Code:
# defining the thread function
def set_global_vars():
    global ALL_SERVERS_FROM_DB
    while True:
        time.sleep(15)
        ALL_SERVERS_FROM_DB = ServersManipulator(Server().query_all())

set_vars_thread = threading.Thread(target=set_global_vars)
set_vars_thread.start()

# Redirecting to the servers page using ALL_SERVERS_FROM_DB
@page.route('/servers', methods=['GET', 'POST'])
def servers(server='ALL'):
    return render_template('servers.html', server_man=ALL_SERVERS_FROM_DB, server=server)
The problem is that if I add a new entry that should affect ALL_SERVERS_FROM_DB, it isn't reflected on the HTML page, which uses this instance to populate a table.
Hope I was clear and that someone can help me.
Kind Regards
Well,
it turns out I must recreate my DB session every time I query the DB; otherwise it seems to use some kind of cache.
def query_all(self):
    """
    Purpose:
        Retrieves all entries related to this Class from Database
    Parameters:
    """
    global DBSession
    DBSession = sessionmaker(bind=DBENGINE)()  # <- I've added this to make it work
    result = DBSession.query(type(self)).order_by(type(self).id).all()
    DBSession.close()
    return result
Before, I was only creating the DB session when the object was instantiated.
Regards
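For what it's worth, another common pattern (not what the answer above does) is to keep one scoped_session and remove it after every request, so each request gets a fresh session instead of building a new sessionmaker per query; a rough sketch assuming the same DBENGINE global and a Flask app object:

from sqlalchemy.orm import scoped_session, sessionmaker

# one session registry for the whole app instead of a new sessionmaker per query
DBSession = scoped_session(sessionmaker(bind=DBENGINE))

@app.teardown_appcontext
def remove_session(exception=None):
    # drop the request's session (and its cached objects) when the request ends
    DBSession.remove()

# method on the model class, as in the answer above
def query_all(self):
    # the registry hands each request/thread its own fresh session, so new rows show up
    return DBSession.query(type(self)).order_by(type(self).id).all()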

How do I factor this code into the 'factory' paradigm?

Here's the code I have in my models.py file, sitting at the very bottom, as per this guide:
db.configure_mappers()
db.create_all()
db.session.commit()
This will execute at import time. So now, whenever I import a model, I get the following message:
RuntimeError: application not registered on db instance and no application bound to current context
I'm not sure how to factor this code into the factory paradigm. I tried to wrap the code in a function, and then call it in create_app.
When I do that, this is the error I get:
sqlalchemy.exc.CompileError: (in table 'ad', column 'search_vector'): Compiler <sqlalchemy.dialects.sqlite.base.SQLiteTypeCompiler object at 0x7f81bfc14940> can't render element of type <class 'sqlalchemy.dialects.postgresql.base.TSVECTOR'>
The search_vector column is pretty simple, taken from the sqlalchemy-searchable quickstart guide:
search_vector = db.Column(TSVectorType('title', 'body'))
Your approach is correct: wrap the initializing code in a function:
from flask import Flask
from models import db

def create_app():
    app = Flask(__name__)
    db.init_app(app)
    with app.app_context():
        db.configure_mappers()
        db.create_all()
        db.session.commit()
    return app

app = create_app()
Or, if you want to avoid cluttering your launch file, define a function that takes the application as a parameter (in models.py):
def init_db(app):
    db.init_app(app)
    with app.app_context():
        db.configure_mappers()
        db.create_all()
        db.session.commit()
And use that function to initialize the application:
from models import init_db
app = Flask(__name__)
init_db(app)
But your error is caused by another problem: SQLite does not support TSVECTOR; it is a PostgreSQL-specific feature. All examples for SQLAlchemy-Searchable are written for PostgreSQL databases.
It also looks like SQLite does not have full text search out of the box. Perhaps you need to switch to another DBMS.
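A minimal sketch of that switch, assuming a local PostgreSQL database named myapp and the psycopg2 driver (both hypothetical):

from flask import Flask
from models import db  # the db instance and models defined in models.py

# TSVECTOR only compiles on a PostgreSQL dialect, so point the URI at Postgres
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql+psycopg2://localhost/myapp'  # hypothetical URI
db.init_app(app)

with app.app_context():
    db.configure_mappers()  # sqlalchemy-searchable needs this before create_all
    db.create_all()         # the search_vector TSVECTOR column can now be rendered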

How can I test my flask application using unittest?

I'm trying to test my flask application using unittest. I want to refrain from flask-testing because I don't like to get ahead of myself.
I've really been struggling with this unittest thing. It is confusing because there's the request context and the app context, and I don't know which one I need to be in when I call db.create_all().
It seems like when I add to the database, it adds my models to the database specified in my app module (__init__.py), but not the database specified in the setUp(self) method.
I have some methods that must populate the database before every test_ method.
How can I point my db to the right path?
def setUp(self):
    #self.db_gd, app.config['DATABASE'] = tempfile.mkstemp()
    app.config['TESTING'] = True
    # app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + app.config['DATABASE']
    basedir = os.path.abspath(os.path.dirname(__file__))
    app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + \
        os.path.join(basedir, 'test.db')
    db = SQLAlchemy(app)
    db.create_all()
    #self.app = app.test_client()
    #self.app.testing = True
    self.create_roles()
    self.create_users()
    self.create_buildings()
    #with app.app_context():
    #    db.create_all()
    #    self.create_roles()
    #    self.create_users()
    #    self.create_buildings()

def tearDown(self):
    #with app.app_context():
    #with app.request_context():
    db.session.remove()
    db.drop_all()
    #os.close(self.db_gd)
    #os.unlink(app.config['DATABASE'])
Here is one of the methods that populates my database:
def create_users(self):
    #raise ValueError(User.query.all())
    new_user = User('Some User Name', 'xxxxx@gmail.com', 'admin')
    new_user.role_id = 1
    new_user.status = 1
    new_user.password = generate_password_hash(new_user.password)
    db.session.add(new_user)
Places I've looked at:
http://kronosapiens.github.io/blog/2014/08/14/understanding-contexts-in-flask.html
http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-xvi-debugging-testing-and-profiling
And the flask documentation:
http://flask.pocoo.org/docs/0.10/testing/
One issue you're hitting is the limitations of Flask contexts. This is the primary reason I think long and hard before including a Flask extension in my project, and Flask-SQLAlchemy is one of the biggest offenders. I say this because in most cases it is completely unnecessary to depend on the Flask app context when dealing with your database. Sure, it can be nice, especially since Flask-SQLAlchemy does a lot behind the scenes for you (mainly, you don't have to manually manage your session, metadata, or engine), but those things can easily be done on your own, and in doing so you get unrestricted access to your database with no worry about the Flask context.
Here is an example of how to set up your db manually. First I will show the Flask-SQLAlchemy way, then the manual plain SQLAlchemy way:
the flask-sqlalchemy way:
import flask
from flask_sqlalchemy import SQLAlchemy
app = flask.Flask(__name__)
db = SQLAlchemy(app)
# define your models using db.Model as base class
# and define columns using classes inside of db
# ie: db.Column(db.String(255),nullable=False)
# then create database
db.create_all() # <-- gives error if not currently running flask app
the standard sqlalchemy way:
import flask
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base
# first we need our database engine for the connection
engine = sa.create_engine(MY_DB_URL,echo=True)
# the line above is part of the benefit of using flask-sqlalchemy,
# it passes your database uri to this function using the config value
# SQLALCHEMY_DATABASE_URI, but that config value is one reason we are
# tied to the application context
# now we need our session to create queries with
Session = sa.orm.scoped_session(sa.orm.sessionmaker())
Session.configure(bind=engine)
session = Session()
# now we need a base class for our models to inherit from
Model = declarative_base()
# and we need to tie the engine to our base class
Model.metadata.bind = engine
# now define your models using Model as base class and
# anything that would have come from db, ie: db.Column
# will be in sa, ie: sa.Column
# then when you're ready to create your db, just call
Model.metadata.create_all()
# no flask context management needed now
If you set your app up like that, any context issues you're having should go away.
As a separate answer: to just force what you already have to work, you can use the test_request_context function. In setUp do self.ctx = app.test_request_context(), then activate it with self.ctx.push(), and when you're done get rid of it in tearDown with self.ctx.pop().
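A minimal sketch of that push/pop pattern, assuming your app and db objects are importable from your package (the package name and test database URI are hypothetical):

import unittest
from myapp import app, db  # hypothetical package name

class MyViewTests(unittest.TestCase):
    def setUp(self):
        app.config['TESTING'] = True
        app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///test.db'  # hypothetical test DB
        self.ctx = app.test_request_context()
        self.ctx.push()              # make the app/request context current
        db.create_all()              # create_all now targets the test database
        self.client = app.test_client()

    def tearDown(self):
        db.session.remove()
        db.drop_all()
        self.ctx.pop()               # tear the context back down

if __name__ == '__main__':
    unittest.main()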

Flask and SQLAlchemy and the MetaData object

It's the first time I am using this environment.
The only part of SQLAlchemy I want to use is the one that lets me query the database using Table objects with autoload=True. I am doing this because my tables already exist in the DB (a MySQL server) and were not created by defining Flask models.
I have gone through all the documentation and I can't seem to find an answer. Here is some code:
app = Flask(__name__)
app.config.from_object(__name__)

metadata = None

def connect_db():
    engine = create_engine(app.config['DATABASE_URI'])
    global metadata
    metadata = MetaData(bind=engine)
    return engine.connect()

@app.before_request
def before_request():
    g.db = connect_db()

@app.teardown_request
def teardown_request(exception):
    g.db.close()
Now you could be wondering why I use that global var named metadata. OK, some more code:
@app.route('/test/<int:test_result_id>')
def test(test_result_id):
    testTable = Table('test_table', metadata, autoload=True)
As you can see, I need that object to be global in order to access it from within a function.
Also, I am declaring the same var testTable in each function that needs it. I have the feeling this is not the right approach. I couldn't find any best-practice advice for a case like mine.
Thanks all!
Have you seen this snippet in the SQLAlchemy docs?
Maybe this would work:
# This is fine as a global
metadata = MetaData()

@app.before_first_request
def autoload_tables():
    metadata.reflect(bind=g.db.engine)

@app.route('/')
def index():
    users_table = metadata.tables['users']
That way your tables are reflected only once per process, which is probably what you want. Note that your engine should be a global too, so you needn't create a new engine in @app.before_request; app creation is a more appropriate place.
If your case is very special you might need one engine per request, in which case you should consider the ThreadLocalMetaData class.
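A rough sketch of that arrangement, with the engine created once at app setup and the tables reflected once against it (DATABASE_URI is the config key from the question):

from flask import Flask, g
from sqlalchemy import create_engine, MetaData

app = Flask(__name__)
app.config.from_object(__name__)

# engine and metadata are module-level globals, created once at app setup
engine = create_engine(app.config['DATABASE_URI'])
metadata = MetaData()
metadata.reflect(bind=engine)   # autoload every table once per process

@app.before_request
def before_request():
    g.db = engine.connect()     # only the connection is per-request

@app.teardown_request
def teardown_request(exception):
    g.db.close()

@app.route('/test/<int:test_result_id>')
def test(test_result_id):
    test_table = metadata.tables['test_table']
    # query the reflected table through the per-request connection
    row = g.db.execute(test_table.select().limit(1)).fetchone()
    return str(row)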
