I am writing a script that during development should delete the database and populate it with some dummy values. Unfortunately, the drop_all() part of it does not work:
from flask_sqlalchemy import SQLAlchemy
from my_app import create_app
from my_app.models import Block

db = SQLAlchemy()

def main():
    app = create_app()
    db.init_app(app)
    with app.app_context():
        db.session.commit()
        db.drop_all()    # <- I would expect this to drop everything, but it does not
        db.session.commit()
        db.create_all()  # <- This works even if the database is empty
        b1 = Block(name="foo")
        db.session.add(b1)  # <- Every time I run the script, another copy is added
        db.session.commit()
        blocks = Block.query.all()
        for b in blocks:
            print(b)  # <- this should contain exactly one record every time, but keeps getting longer
And my_app/models.py contains:
from . import db

class Block(db.Model):
    __tablename__ = "block"
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(30))
drop_all() apparently does not drop the right tables. Examples I find on SO and other sources tend to feature class definitions based on db.Model within the same file (such as here), which I cannot do. Do I need to somehow bind the imported classes to db? If so, how?
After some more searching I found the answer: instead of db = SQLAlchemy() in the script, I had to import it the same way the models do: from my_app import db. Any explanation of this would be highly appreciated.
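The reason is that each SQLAlchemy() instance carries its own MetaData. Block was defined against the db created in my_app/__init__.py, so its table is registered on that instance's metadata; the fresh db = SQLAlchemy() in the script has empty metadata, and its drop_all()/create_all() therefore touch no tables at all. A minimal sketch of the fixed script, assuming create_app() already calls db.init_app(app):

from my_app import create_app, db   # the same db instance the models use
from my_app.models import Block

def main():
    app = create_app()               # assumed to call db.init_app(app) itself
    with app.app_context():
        db.drop_all()                # now actually drops "block"
        db.create_all()
        db.session.add(Block(name="foo"))
        db.session.commit()
        print(Block.query.all())     # exactly one record on every run

if __name__ == "__main__":
    main()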
Related
I have a database with existing tables that are not used by my Python code. I generated a migration using Flask-Migrate and ran it, and it deleted my existing tables while creating the user table. How can I run migrations without removing any existing tables?
I read the answer to the question "Preserve existing tables in database when running Flask-Migrate", but it doesn't work for me because I do not own the database, and I do not know which tables might exist at the time of deployment, which means I cannot whitelist the tables that should be preserved.
Is there a way to tell Flask-Migrate/Alembic not to drop any tables that it doesn't know about?
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_script import Manager
from flask_migrate import Migrate, MigrateCommand

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'my_data'

db = SQLAlchemy(app)
migrate = Migrate(app, db)
manager = Manager(app)
manager.add_command('db', MigrateCommand)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(128))

if __name__ == '__main__':
    manager.run()
You just need this:
# in env.py
def include_object(object, name, type_, reflected, compare_to):
    if type_ == "table" and reflected and compare_to is None:
        return False
    else:
        return True

context.configure(
    # ...
    include_object=include_object,
)
See the documentation here: https://alembic.sqlalchemy.org/en/latest/cookbook.html#don-t-generate-any-drop-table-directives-with-autogenerate
You can use a Rewriter and do an automatic check before deletion: override the ops.DropTableOp operation.
If you want, you can also add a provision to only drop tables that you do have control over; these will be the ones that inherit from your Base (in the case of plain Alembic) or db.Model (for Flask).
Example:

from alembic.autogenerate import rewriter
from alembic.operations import ops

writer = rewriter.Rewriter()

@writer.rewrites(ops.DropTableOp)
def drop_table(context, revision, op):
    # keep the drop only for tables our own models know about
    if op.table_name in Base.metadata.tables.keys():
        return op  # only return an operation when you want it kept
    return []      # we need to return an iterable; empty means "skip this drop"
Note that you need to pass the writer object to the process_revision_directives kwarg when calling context.configure in your env.py file (see the docs).
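For completeness, a minimal sketch of that env.py hookup; connection and target_metadata are whatever your generated env.py already defines, and the Rewriter instance itself is a valid process_revision_directives callable:

# in env.py
context.configure(
    connection=connection,
    target_metadata=target_metadata,
    process_revision_directives=writer,  # <- the Rewriter from above
)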
I've decided to write a small webapp using Flask, PostgreSQL and Leaflet. I wanted to store coordinates (latitude and longitude) using the PostGIS extension for PostgreSQL. My Flask application uses Flask-SQLAlchemy, blueprints and especially Flask-Migrate for the database migration process.
Here is an excerpt of my database model:
from . import db
from geoalchemy2 import Geometry

class Station(db.Model):
    __tablename__ = 'stations'
    id = db.Column(db.Integer, primary_key=True, unique=True)
    name = db.Column(db.String(255))
    position = db.Column(Geometry('Point', srid=4326))
Here is an excerpt of my app/__init__.py:
import os

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate

from config import config

db = SQLAlchemy()
migrate = Migrate()

def create_app(config_name=None, main=True):
    if config_name is None:
        config_name = os.environ.get('FLASK_CONFIG', 'development')
    app = Flask(__name__)
    app.config.from_object(config[config_name])

    db.init_app(app)
    migrate.init_app(app, db)

    from .home import home as home_blueprint
    app.register_blueprint(home_blueprint)

    from .admin import admin as admin_blueprint
    app.register_blueprint(admin_blueprint, url_prefix='/admin')

    return app
Before trying to use this specific extension, I didn't have any issue adapting my model. Since then, the migration generates fine, but the upgrade doesn't work (python manage.py db upgrade).
Here is the log I obtain from the web server:
sa.Column('position', geoalchemy2.types.Geometry(geometry_type='POINT', srid=4326), nullable=True),
NameError: name 'geoalchemy2' is not defined
I'm a bit lost and I didn't find much help on this specific subject. Any idea on what might cause this issue?
Alembic does not try to determine and render all imports for custom types in the migration scripts. Edit the generated script to include from geoalchemy2.types import Geometry and change the column def to just use Geometry.
You should always review the auto-generated scripts before running them. There are even comments in the script saying so.
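For illustration, here is roughly what the hand-edited migration for the Station model above might look like (hypothetical revision, identifiers omitted; only the import and the position column change):

from alembic import op
import sqlalchemy as sa
from geoalchemy2.types import Geometry  # <- added by hand

def upgrade():
    op.create_table(
        'stations',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('name', sa.String(length=255), nullable=True),
        sa.Column('position', Geometry('POINT', srid=4326), nullable=True),
        sa.PrimaryKeyConstraint('id'),
    )

def downgrade():
    op.drop_table('stations')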
Another way to do this, which doesn't require manually editing the revision, is adding import geoalchemy2 to alembic/script.py.mako; then Alembic adds the module every time.
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}
"""
from alembic import op
import sqlalchemy as sa
import geoalchemy2 # <--- right here
${imports if imports else ""}
# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
branch_labels = ${repr(branch_labels)}
depends_on = ${repr(depends_on)}
def upgrade():
${upgrades if upgrades else "pass"}
def downgrade():
${downgrades if downgrades else "pass"}
I wonder how SQLAlchemy tracks changes that are made outside of SQLAlchemy (a manual change, for example)?
Until now, I used to put db.session.commit() before each read of a value that can be changed outside of SQLAlchemy. Is this bad practice? If yes, is there a better way to make sure I'll have the latest value? I've actually created the small script below to check that, and apparently SQLAlchemy can detect external changes without db.session.commit() being called each time.
Thanks,
P.S.: I really want to understand how all the magic behind SQLAlchemy works. Does anyone have a pointer to some docs explaining the behind-the-scenes workings of SQLAlchemy?
import os

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)

# Use SQLite so this example can be run anywhere.
# On MySQL, the same behaviour is observed.
basedir = os.path.abspath(os.path.dirname(__file__))
db_path = os.path.join(basedir, "app.db")
app.config["SQLALCHEMY_DATABASE_URI"] = 'sqlite:///' + db_path

db = SQLAlchemy(app)

# A small class to use in the test
class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(100))

# Create all the tables and some fake data
db.create_all()
user = User(name="old name")
db.session.add(user)
db.session.commit()

@app.route('/')
def index():
    """The scenario: the first request returns "old name" as expected.
    Then, I modify the name of User:1 to "new name" directly on the database.
    On the next request, "new name" will be returned.

    My question is: how does SQLAlchemy know that the value has been changed?
    """
    # Before, I always used db.session.commit()
    # to make sure that the latest value is fetched.
    # Without db.session.commit(),
    # SQLAlchemy can still pick up the change made to User.name
    # print "refresh db"
    # db.session.commit()
    u = User.query.filter_by(id=1).first()
    return u.name

app.run(debug=True)
The "cache" of a session is a dict in its identity_map (session.identity_map.dict) that only caches objects for the time of "a single business transaction" , as answered here https://stackoverflow.com/a/5869795.
For different server requests, you have different identity_map. It is not a shared object.
In your scenario, you requested the server 2 separated times. The second time, the identity_map is a new one (you can easily check it by printing out its pointer), and has nothing in cache. Consequently the session will request the database and get you the updated answer. It does not "track change" as you might think.
So, to your question, you don't need to do session.commit() before a query if you have not done a query for the same object in the same server request.
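If you do query the same object twice within one request, though, the identity map keeps the first loaded state; expiring the object forces a re-SELECT. A small sketch against the User model above:

u = User.query.filter_by(id=1).first()   # SELECT; object enters the identity map
# ... the row is changed directly in the database here ...
u2 = User.query.filter_by(id=1).first()  # same object back, attributes unchanged
db.session.expire(u)   # mark cached attributes stale; next access re-SELECTs
# (db.session.refresh(u) would re-SELECT immediately instead)
print(u.name)          # now shows the externally updated value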
Hope it helps.
I'm trying to test my flask application using unittest. I want to refrain from flask-testing because I don't like to get ahead of myself.
I've really been struggling with this unittest thing now. It is confusing because there's the request context and the app context and I don't know which one I need to be in when I call db.create_all().
It seems like when I do add to the database, it adds my models to the database specified in my app module (__init__.py) file, but not the database specified in the setUp(self) method.
I have some methods that must populate the database before every test_ method.
How can I point my db to the right path?
def setUp(self):
    # self.db_gd, app.config['DATABASE'] = tempfile.mkstemp()
    app.config['TESTING'] = True
    # app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + app.config['DATABASE']
    basedir = os.path.abspath(os.path.dirname(__file__))
    app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + \
        os.path.join(basedir, 'test.db')
    db = SQLAlchemy(app)
    db.create_all()
    # self.app = app.test_client()
    # self.app.testing = True
    self.create_roles()
    self.create_users()
    self.create_buildings()
    # with app.app_context():
    #     db.create_all()
    #     self.create_roles()
    #     self.create_users()
    #     self.create_buildings()

def tearDown(self):
    # with app.app_context():
    # with app.request_context():
    db.session.remove()
    db.drop_all()
    # os.close(self.db_gd)
    # os.unlink(app.config['DATABASE'])
Here is one of the methods that populates my database:
def create_users(self):
    # raise ValueError(User.query.all())
    new_user = User('Some User Name', 'xxxxx#gmail.com', 'admin')
    new_user.role_id = 1
    new_user.status = 1
    new_user.password = generate_password_hash(new_user.password)
    db.session.add(new_user)
Places I've looked at:
http://kronosapiens.github.io/blog/2014/08/14/understanding-contexts-in-flask.html
http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-xvi-debugging-testing-and-profiling
And the flask documentation:
http://flask.pocoo.org/docs/0.10/testing/
One issue you're hitting is the limitations of Flask contexts; this is the primary reason I think long and hard before including a Flask extension in my project, and Flask-SQLAlchemy is one of the biggest offenders. I say this because in most cases it is completely unnecessary to depend on the Flask app context when dealing with your database. Sure, it can be nice, especially since Flask-SQLAlchemy does a lot behind the scenes for you (mainly, you don't have to manually manage your session, metadata, or engine), but keeping that in mind, those things can easily be done on your own, and in doing so you get the benefit of unrestricted access to your database, with no worry about the Flask context. Here is an example of how to set up your db manually; first I will show the Flask-SQLAlchemy way, then the plain SQLAlchemy way.
the flask-sqlalchemy way:
import flask
from flask_sqlalchemy import SQLAlchemy

app = flask.Flask(__name__)
db = SQLAlchemy(app)

# define your models using db.Model as the base class
# and define columns using classes inside of db,
# ie: db.Column(db.String(255), nullable=False)

# then create the database
db.create_all()  # <-- gives an error if no Flask app is currently running
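(For reference, the standard fix for that error, if you do stay with Flask-SQLAlchemy, is to push an application context around the call:

with app.app_context():
    db.create_all()
)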
the standard sqlalchemy way:
import flask
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

# first we need our database engine for the connection
engine = sa.create_engine(MY_DB_URL, echo=True)
# the line above is part of the benefit of using flask-sqlalchemy:
# it passes your database uri to this function using the config value
# SQLALCHEMY_DATABASE_URI, but that config value is one reason we are
# tied to the application context

# now we need our session to create queries with
Session = sa.orm.scoped_session(sa.orm.sessionmaker())
Session.configure(bind=engine)
session = Session()

# now we need a base class for our models to inherit from
Model = declarative_base()
# and we need to tie the engine to our base class
Model.metadata.bind = engine

# now define your models using Model as the base class, and
# anything that would have come from db, ie: db.Column,
# will be in sa, ie: sa.Column

# then when you're ready, to create your db just call
Model.metadata.create_all()
# no flask context management needed now
If you set your app up like that, any context issues you're having should go away.
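A short usage sketch with the manual setup above (the User model here is hypothetical; passing the engine to create_all avoids relying on bound metadata):

class User(Model):
    __tablename__ = 'users'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String(128))

Model.metadata.create_all(engine)
session.add(User(name='alice'))
session.commit()
print(session.query(User).filter_by(name='alice').one().id)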
As a separate answer: to actually just force what you already have to work, you can use the test_request_context function, i.e. in setUp do self.ctx = app.test_request_context(), then activate it with self.ctx.push(), and when you're done get rid of it in tearDown with self.ctx.pop(), as sketched below.
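A minimal sketch of that approach, assuming app and db are imported from your application package and that Flask-SQLAlchemy creates its engine lazily (as the 2.x releases this question targets do):

import os
import unittest

from my_app import app, db   # hypothetical import path; adjust to your package

class BaseTestCase(unittest.TestCase):
    def setUp(self):
        basedir = os.path.abspath(os.path.dirname(__file__))
        app.config['TESTING'] = True
        app.config['SQLALCHEMY_DATABASE_URI'] = (
            'sqlite:///' + os.path.join(basedir, 'test.db'))
        self.ctx = app.test_request_context()
        self.ctx.push()              # activate the context for this test
        db.create_all()              # uses the same db instance as the app

    def tearDown(self):
        db.session.remove()
        db.drop_all()
        self.ctx.pop()               # deactivate the context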
I'm using a Flask setup from a while back and now I'm trying to install the Flask-Blogging module on it. Current modules:
- Flask-sqlalchemy with postgres
- Flask-login
- Flask-Blogging (new)
My application.py looks like this:
from flask import Flask
from flask import session
from flask_blogging import SQLAStorage, BloggingEngine
from flask_login import LoginManager
from flask_sqlalchemy import SQLAlchemy

'''
The main application setup. The order of things is important
in this file.
'''
app = Flask(__name__)
app.config.from_object('config.base')
app.config.from_envvar('APP_CONFIG_FILE')

'''
Initialize database
'''
db = SQLAlchemy(app)

'''
Initialize blogger
'''
storage = SQLAStorage(db=db)
blog_engine = BloggingEngine(app, storage)
The last two lines are the only new things I added (other than the imports). Suddenly I'm now getting an error about duplicate table names:
sqlalchemy.exc.InvalidRequestError: Table 'customer' is already defined for this MetaData instance. Specify 'extend_existing=True' to redefine options and columns on an existing Table object.
Any ideas what am I doing wrong? I couldn't find much documentation about Flask-Blogging other than:
http://flask-blogging.readthedocs.org/en/latest/
You get this error because in SQLAStorage.__init__ there is this line:
self._metadata.reflect(bind=self._engine)
This will look at your database and create sqlalchemy table things for all the tables currently in your database.
Thus if your database contains a table called 'customer' the line in your code:
storage = SQLAStorage(db=db)
will automatically model an sqlalchemy table called 'customer' for you.
Now... no doubt you have your own database model definitions somewhere, probably in another python module, something like:
class Customer(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    ...
Since this class definition defines a table called 'customer' and since SQLAStorage has already defined a table called 'customer' you get the exception as soon as your class Customer is imported.
Some ways to work around this problem are:
Import your database definition modules before instantiating SQLAStorage
'''
Initialize database
'''
db = SQLAlchemy(app)
import ankit.db_models # import my db models here so SQLAStorage doesn't do it first
'''
Initialize blogger
'''
storage = SQLAStorage(db=db)
blog_engine = BloggingEngine(app, storage)
or
Tell SQLAStorage to use its own metadata
By passing the db param to SQLAStorage.__init__ you are telling it to use your metadata. You can instead just pass the engine parameter and it will create its own metadata.
storage = SQLAStorage(engine=db.engine)
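As far as I can tell from SQLAStorage.__init__, this works because, without the db argument, SQLAStorage reflects the existing tables into its own separate MetaData, so the reflected 'customer' table no longer collides with the one your Customer model registers on db.metadata.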