How can I test my Flask application using unittest?

I'm trying to test my Flask application using unittest. I want to avoid Flask-Testing for now because I don't like to get ahead of myself.
I've really been struggling with unittest. It's confusing because there's the request context and the app context, and I don't know which one I need to be in when I call db.create_all().
It seems that when I add to the database, my models are added to the database specified in my app module (__init__.py), but not the database specified in the setUp(self) method.
I have some methods that must populate the database before every test_ method.
How can I point my db to the right path?
def setUp(self):
    # self.db_gd, app.config['DATABASE'] = tempfile.mkstemp()
    app.config['TESTING'] = True
    # app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + app.config['DATABASE']
    basedir = os.path.abspath(os.path.dirname(__file__))
    app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + \
        os.path.join(basedir, 'test.db')
    db = SQLAlchemy(app)
    db.create_all()
    # self.app = app.test_client()
    # self.app.testing = True
    self.create_roles()
    self.create_users()
    self.create_buildings()
    # with app.app_context():
    #     db.create_all()
    #     self.create_roles()
    #     self.create_users()
    #     self.create_buildings()

def tearDown(self):
    # with app.app_context():
    # with app.request_context():
    db.session.remove()
    db.drop_all()
    # os.close(self.db_gd)
    # os.unlink(app.config['DATABASE'])
Here is one of the methods that populates my database:
def create_users(self):
    # raise ValueError(User.query.all())
    new_user = User('Some User Name', 'xxxxx#gmail.com', 'admin')
    new_user.role_id = 1
    new_user.status = 1
    new_user.password = generate_password_hash(new_user.password)
    db.session.add(new_user)
Places I've looked at:
http://kronosapiens.github.io/blog/2014/08/14/understanding-contexts-in-flask.html
http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-xvi-debugging-testing-and-profiling
And the flask documentation:
http://flask.pocoo.org/docs/0.10/testing/

One issue you're hitting is the limitations of Flask contexts. This is the primary reason I think long and hard before including a Flask extension in my project, and Flask-SQLAlchemy is one of the biggest offenders. I say this because in most cases it is completely unnecessary to depend on the Flask app context when dealing with your database. Sure, it can be nice, especially since Flask-SQLAlchemy does a lot behind the scenes for you (mainly, you don't have to manually manage your session, metadata, or engine), but those things can easily be done on your own, and in exchange you get unrestricted access to your database, with no worry about the Flask context. Here is an example of how to set up your db manually; first the Flask-SQLAlchemy way, then the plain SQLAlchemy way.
The Flask-SQLAlchemy way:
import flask
from flask_sqlalchemy import SQLAlchemy

app = flask.Flask(__name__)
db = SQLAlchemy(app)

# define your models using db.Model as the base class,
# and define columns using the classes hung off of db,
# e.g.: db.Column(db.String(255), nullable=False)

# then create the database
db.create_all()  # <-- gives an error if there is no active Flask app context
The standard SQLAlchemy way:
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

# first we need a database engine for the connection
engine = sa.create_engine(MY_DB_URL, echo=True)
# the line above is part of the benefit of using flask-sqlalchemy:
# it passes your database URI to this function using the config value
# SQLALCHEMY_DATABASE_URI, but that config value is one reason we are
# tied to the application context

# now we need a session to create queries with
Session = sa.orm.scoped_session(sa.orm.sessionmaker())
Session.configure(bind=engine)
session = Session()

# now we need a base class for our models to inherit from
Model = declarative_base()
# and we need to tie the engine to our base class
Model.metadata.bind = engine

# now define your models using Model as the base class;
# anything that would have come from db, e.g. db.Column,
# comes from sa instead, e.g. sa.Column

# then, when you're ready, create your db by calling
Model.metadata.create_all()
# no flask context management needed now
If you set your app up like that, any context issues you're having should go away.
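To make the contrast concrete, here is a self-contained sketch of the plain-SQLAlchemy setup above, using an in-memory SQLite URL as a stand-in for MY_DB_URL and a hypothetical User model (both are illustrative, not from the question):

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

# in-memory SQLite stands in for your real database URL
engine = sa.create_engine("sqlite://")
Session = scoped_session(sessionmaker(bind=engine))
Model = declarative_base()

class User(Model):
    __tablename__ = "users"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String(255), nullable=False)

# create tables, insert, and query -- no Flask app or context involved
Model.metadata.create_all(engine)
session = Session()
session.add(User(name="alice"))
session.commit()
print(session.query(User).count())
```

The same session works identically inside or outside a test, which is the point: nothing here ever asks Flask for a context.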

As a separate answer: to force what you currently have to work, you can use the test_request_context function. In setUp, do self.ctx = app.test_request_context() and then activate it with self.ctx.push(); when you're done, get rid of it in tearDown with self.ctx.pop().
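Put together, that push/pop pattern looks like this minimal sketch (the app and the test method are placeholders, not from the question):

```python
import unittest
from flask import Flask

app = Flask(__name__)

class ContextTestCase(unittest.TestCase):
    def setUp(self):
        # push a test request context so code that needs the
        # request/app context (e.g. Flask-SQLAlchemy) works
        self.ctx = app.test_request_context()
        self.ctx.push()

    def tearDown(self):
        # pop the context pushed in setUp
        self.ctx.pop()

    def test_context_active(self):
        from flask import request
        # inside the pushed context, request is available;
        # test_request_context() defaults to the "/" path
        self.assertEqual(request.path, "/")
```

Because the context is pushed in setUp and popped in tearDown, every test_ method runs with a live context without managing it itself.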

Related

How can I share a python Class between applications without installing it?

I have a Flask application that accesses a DB via a class that encapsulates DB access. I need to use this same class outside of the Flask application for some regular jobs that access the same DB.
/databases/database.db
/website/application/myblueprint/views.py
/website/application/myblueprint/db_class.py
/scripts/log_reading.py
Both views.py and log_reading.py need to use db_class.py, but you can't import from above your own package.
I could make db_class.py its own application and install it in each venv, but then every time I edit it I have to reinstall it in each place. Plus there's the overhead of the setup machinery for a single module.
I could put the file on the Python site path, either by moving it or by adding to the path, but that feels wrong and I'm not sure it would work with venvs.
I could symlink it, but that also feels wrong.
I'm not using flask models for the DB, but I don't think that would solve my problem anyway.
If you want to use db or session, then you have to create an application context; outside of the application context you can't use them.
from flask import Flask
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
import os

APP_SETTINGS = os.getenv("APP_SETTINGS", "your config path")

def _create_db_session(connection_string):
    engine = create_engine(connection_string)
    session_factory = sessionmaker(bind=engine)
    session = scoped_session(session_factory)
    return session

app = Flask(__name__)
app.config.from_object(APP_SETTINGS)

db_url = app.config.get("SQLALCHEMY_DATABASE_URI")
session = _create_db_session(db_url)

# then you can use
# session.query to query the database
# session.commit(), session.remove()
# the same operations you normally do with a DB session

Tell Flask-Migrate / Alembic to NOT drop any tables it doesn't know about

I have a database with existing tables that are not used by my Python code. I generated a migration using Flask-Migrate and ran it, and it deleted my existing tables while creating the user table. How can I run migrations without removing any existing tables?
I read the answer to the question "Preserve existing tables in database when running Flask-Migrate", but it doesn't work for me because I do not own the database, and I do not know which tables might exist at the time of deployment, which means I cannot whitelist the tables that should be preserved.
Is there a way to tell Flask-migrate/Alembic not to drop any tables that it doesn't know about?
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_script import Manager
from flask_migrate import Migrate, MigrateCommand

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'my_data'
db = SQLAlchemy(app)
migrate = Migrate(app, db)
manager = Manager(app)
manager.add_command('db', MigrateCommand)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(128))

if __name__ == '__main__':
    manager.run()
You just need this:
# in env.py
def include_object(object, name, type_, reflected, compare_to):
    if type_ == "table" and reflected and compare_to is None:
        return False
    else:
        return True

context.configure(
    # ...
    include_object=include_object,
)
See the documentation here: https://alembic.sqlalchemy.org/en/latest/cookbook.html#don-t-generate-any-drop-table-directives-with-autogenerate
You can use a Rewriter and do an automatic check before deletion by overriding the ops.DropTableOp operation.
If you want, you can also add a provision to only drop tables that you do have control over; these will be the ones that inherit from your Base (in the case of pure Alembic) or db.Model (for Flask).
Example:
from alembic.autogenerate import rewriter
from alembic.operations import ops

writer = rewriter.Rewriter()

@writer.rewrites(ops.DropTableOp)
def drop_table(context, revision, op):
    if op.table_name in Base.metadata.tables.keys():
        return op  # only return the operation when you want it applied
    return []  # we need to return an iterable
Note that you need to pass the writer object to the process_revision_directives kwarg when calling context.configure in your env.py file (see the docs).
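That wiring is a one-line addition to the context.configure call in env.py. This is a sketch assuming a typical autogenerate setup where connection and target_metadata are already defined and writer is the Rewriter instance from above:

```python
# env.py (fragment): run every autogenerated directive through the rewriter
context.configure(
    connection=connection,
    target_metadata=target_metadata,
    process_revision_directives=writer,
)
```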

Reflecting different databases in Flask factory setup

I'd like to use Flask's application factory mechanism for my application. The problem is that the databases I use within some blueprints are located in different places, so I'm using binds to point to them. The tables themselves are in production and already in use, so I need to reflect them in order to use them within my application.
The problem is that I can't get the reflect function working because of the application context. I always get the message that I'm working outside the application context. I fully understand that, and I can see that db really is outside it, but I have no idea anymore how to fix it.
I tried different variations of passing app via current_app to my models.py, but nothing worked.
config.py:
class Config(object):
    # Secret key
    SECRET_KEY = 'my_very_secret_key'
    ITEMS_PER_PAGE = 25
    SQLALCHEMY_BINDS = {
        'mysql_bind': 'mysql+mysqlconnector://localhost:3306/tmpdb'
    }
    SQLALCHEMY_TRACK_MODIFICATIONS = False
main.py:
from webapp import create_app

app = create_app('config.Config')

if __name__ == '__main__':
    app.run(debug=True)
webapp/__init__.py:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

def create_app(config_object):
    app = Flask(__name__)
    app.config.from_object(config_object)
    db.init_app(app)

    from .main import create_module as main_create_module
    main_create_module(app)

    return app
webapp/main/__init__.py:
def create_module(app):
    from .controller import blueprint
    app.register_blueprint(blueprint)
webapp/main/controller.py:
from flask import Blueprint, render_template, current_app as app
from .models import db, MyTable  # <-- Problem might be here ...

blueprint = Blueprint('main', __name__)

@blueprint.route('/')
def index():
    resp = db.session.query(MyTable.name,
                            db.func.count(MyTable.versions))\
        .filter(MyTable.versions != '')\
        .group_by(MyTable.name).all()
    if resp:
        return render_template('index.html', resp=resp)
    else:
        return 'Nothing happened'
webapp/main/models.py:
from .. import db  # <-- and here ...

db.reflect(bind='mysql_bind')

class MyTable(db.Model):
    __bind_key__ = 'mysql_bind'
    __table__ = db.metadata.tables['my_table']
Expected result would be to get the reflection working in different blueprints.
Got it working; the full solution is here:
https://github.com/researcher2/stackoverflow_56885380
I used sqlite3 for the test; run the create_db.py script to set up the db. Run Flask with debug.sh, since in recent versions you can't seem to just call app.run() inside __main__ anymore.
Explanation
As I understand it, a blueprint is just a way to group together several views, for when you need to use them multiple times in a single app or across multiple apps. You can add a different route prefix as desired.
A db object is not associated with a blueprint; it is associated with an app, which provides the configuration information. Once inside the blueprint views, you have access to the db object with the relevant app context automatically available.
Regarding db.reflect: you need to make the call inside create_app and pass it the app object (preferred), or import the app inside the model, which is spaghetti.
Multiple DBs can be accessed using binds, as you've shown.
So your blueprints will have access to all imported tables, and Flask-SQLAlchemy knows which db connection to use based on the bind.
I'm normally a fan of explicitly defining tables, so you have access to the ORM objects and fields in code completion. Do you have lots of tables/fields, or maybe you are creating something to query table metadata for total automation on any schema, like a schema viewer?
This might be useful for others coming to this post:
https://flask-sqlalchemy.palletsprojects.com/en/2.x/contexts/
Brilliant! Thank you very much. I got it working as well; your tip gave me a hint to find another way:
@blueprint.route('/')
def index():
    # pushing app_context() to import MyTable;
    # now I can use db.reflect() in models.py as well
    with app.app_context():
        from .models import MyTable
    results = db.session.query(MyTable).all()
    print(results)
    for row in results:
        print(row)
        print(row.versions)
        print(row.name)
    if results:
        return render_template('my_table.html', results=results)
    else:
        return 'Nothing happened'
This way, the reflection can be done inside models.py. The link you posted is really helpful; I don't know why I didn't stumble on it myself...
Anyway, I now have a lot more possibilities than before!
Cheers, mate!

Is it safe to re-create a SQLAlchemy object while server is running?

Our application uses Flask + Gunicorn. We want to make it able to reload its db configuration while running, meaning it can switch to a new db without restarting the process. With the help of a config center we can dispatch config at runtime, but how can I re-init the global variable db?
db = SQLAlchemy()

def create_app():
    app = Flask(__name__)
    app.config.from_object(dynamic_config)
    db.init_app(app)
And assume that at some point we dispatch a new config: how can db be re-initialized with the new config? Or is it safe to just replace it with a new SQLAlchemy() instance, like this:
from models import set_db  # which will set the global db to a new instance
from app import app

def callback(old, new):
    new_db = SQLAlchemy()
    # re-construct config from old and new
    # now app.config is updated
    new_db.init_app(app)
    set_db(new_db)
Is it OK to do this? My concern is that it could cause thread-safety problems and might break in-flight transactions.
Help me with this, many thanks.
If I were you, I would use the SQLALCHEMY_BINDS config, or different instances of db with different configs, instead of changing the configuration every time.
Also, I don't think it is good practice to change the db structure from the application (in case you are doing that).

Calling flask app configuration inside models.py

OK, I am trying to do something a little bit esoteric with my Flask application.
I want to have some conditional logic in the model structure that is based on information in a configuration file.
At present when I call my Flask App, a configuration object is specified like this:
app = create_app('swarm.settings.DevConfig')
The db object is created in the models.py :
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()
class MyClass(db.Model):
...
I would like the models.py to accommodate a variety of ORM and OGM (not limited to SQLAlchemy and py2neo) so that I can develop a Flask app to be SQL/Graph agnostic.
if __SOME_CONFIG__['db_mapper'] == 'SQLAlchemy':
    from flask_sqlalchemy import SQLAlchemy
    db = SQLAlchemy()

    class MapperModel(db.Model):
        ...

elif __SOME_CONFIG__['db_mapper'] == 'Py2Neo':
    from py2neo import Graph, Node, Relationship
    db = Graph()

    class MapperModel(Node):
        ...

class MyClass(MapperModel):
    ...
I can't see a way to use current_app to achieve this, because I am creating my db object before the code is aware of an app object.
Is there a simple way to load the current configuration object from inside models.py? Or should I just load the configuration in models.py from a separate file, without reference to the app's current configuration object?
Create a function which returns a db object, and initialize that object when you instantiate the Flask application:
def create_dbobject(someconfig):
    if someconfig == 'Py2Neo':
        return Graph()
    # default to SQLAlchemy
    return SQLAlchemy()

app = create_app(...)
db = create_dbobject('someconfig')
That way you no longer have to worry about extension initialization inside models.py, and it is good to keep extension initialization in the place where the app exists.
