I'm writing a project that uses a database library (SQLAlchemy). When I use SQLite everything works fine, but when I deploy the project to a server (Heroku with the Postgres add-on), it doesn't work. This is the code for my database:
from sqlalchemy import Column, Integer, String, Sequence

class UserBase(Base):
    __tablename__ = 'users'

    id = Column(Integer, Sequence('user_id_seq'), primary_key=True)
    nickname = Column(String(50), unique=True, nullable=False)
    server = Column(Integer)
    language = Column(String(2))

    def __repr__(self):
        return "<UserBase(nickname='%s', server='%s', language='%s')>" % \
            (self.nickname, self.server, self.language)
I have the engine and Base variables:
engine = create_engine(os.environ.get('DATABASE_URL'))
Base = declarative_base()
DATABASE_URL points to the Heroku Postgres database. To create the tables, I ran this in an interactive Python console:
from user_base import Base, engine
Base.metadata.create_all(engine)
After these steps, the server logs show this:
sqlalchemy.exc.DataError: (psycopg2.errors.NumericValueOutOfRange) integer out of range
How can I fix this problem? If you need more info, just ask.
I changed the type of the server column to String, i.e. declared that the server value is a string, and that fixed it.
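For illustration, a minimal sketch of that change (the length 50 is an arbitrary choice of mine, not something stated in the question):

class UserBase(Base):
    __tablename__ = 'users'

    id = Column(Integer, Sequence('user_id_seq'), primary_key=True)
    nickname = Column(String(50), unique=True, nullable=False)
    # stored as text instead of a 32-bit integer, so large server
    # identifiers no longer overflow Postgres' INTEGER range
    server = Column(String(50))
    language = Column(String(2))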
I have an existing model that I can't change, written in Flask-SQLAlchemy.
I'm writing another app that uses the same model but doesn't need Flask, so I'm working with the regular SQLAlchemy module.
Unfortunately, I'm getting a lot of:
AttributeError: module 'DB' has no attribute 'Model'
for all kinds of attributes, such as Column, Integer, etc.
Is there a way to use Flask-SQLAlchemy with a regular SQLAlchemy app?
Here is an example of one of my model classes:
class Table_name(Model):
    __tablename__ = 'table_name'

    id = db.Column(db.INTEGER, primary_key=True)
    field1 = db.Column(db.INTEGER, db.ForeignKey('table1.id'), nullable=False)
    field2 = db.Column(db.TEXT, nullable=False)
    field3 = db.Column(db.INTEGER, db.ForeignKey('table2.id'), nullable=False)
    time = db.Column(db.TIMESTAMP, nullable=False)
Unfortunately, I can't change them.
I've had the same dilemma. If it's a small project or a one-off, you can hack in a Flask app object without really using it, like so:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

throw_away_app = Flask(__name__)
db = SQLAlchemy(throw_away_app)

with throw_away_app.app_context():
    ...  # perform your db operations here
That works relatively well for simple things like scripts. But if you're sharing a model across multiple projects and you simply cannot alter the existing Flask project, it's unfortunately probably not a good solution.
If that's the case, and you simply cannot alter the existing codebase at all, it probably makes sense to create a new model class and connect to the existing database using vanilla SQLAlchemy.
BTW, for any future programmers wondering how to get some of the benefits of Flask-SQLAlchemy without Flask, consider: sqlalchemy-mixins.
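A rough sketch of what that looks like, based on the sqlalchemy-mixins README (the exact mixin names and methods may differ between versions, so treat this as an assumption rather than a reference):

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy_mixins import AllFeaturesMixin

Base = declarative_base()

class BaseModel(Base, AllFeaturesMixin):
    __abstract__ = True

class User(BaseModel):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(64))

engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# active-record style helpers, similar in spirit to Flask-SQLAlchemy's Model.query
BaseModel.set_session(session)
user = User.create(name='Bob')          # INSERT and commit
print(User.where(name='Bob').all())     # query without touching the session directly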
from sqlalchemy import create_engine
from sqlalchemy import Column, Integer, String, Text, TIMESTAMP, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

# connect to an in-memory SQLite database, or point this at your own database
engine = create_engine('sqlite:///:memory:', echo=True)

# create a session factory bound to the engine
Session = sessionmaker(bind=engine)

class Table_name(Base):
    __tablename__ = 'table_name'

    id = Column(Integer, primary_key=True)
    field1 = Column(Integer, ForeignKey('table1.id'), nullable=False)
    field2 = Column(Text, nullable=False)
    field3 = Column(Integer, ForeignKey('table2.id'), nullable=False)
    time = Column(TIMESTAMP, nullable=False)

# create the tables and open a session
Base.metadata.create_all(engine)
session = Session()

table = Table_name(field1=1, ...)
session.add(table)
session.commit()
Now you can use the ORM as usual, just like with Flask-SQLAlchemy.
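For example (just a sketch, using the session created above), a query roughly equivalent to Flask-SQLAlchemy's Table_name.query.all():

rows = session.query(Table_name).all()
for row in rows:
    print(row.id, row.field2)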
Docs: https://docs.sqlalchemy.org/en/13/orm/tutorial.html
I am currently working on a project with a pre-existing database. The server is a clustered server with multiple catalogs (databases), and in each catalog there are multiple schemas with tables. The table name format for a traditional SQL query would be [Catalog].[Schema].[Table]. This structure works fine with traditional SQL.
The problem comes in when I try to flask db migrate to a SQLite database for testing. I get a number of errors depending on what I try.
I am using
Python 3.7
Flask 1.0.2
Flask-SQLAlchemy 2.4.0
Flask-Migrate 2.4.0
Windows 10 (not ideal, but it's what I have)
I have tried the following with different results:
Schema only method:
class User(db.Model):
    __tablename__ = 'user'
    __table_args__ = (
        db.PrimaryKeyConstraint('userid')
        , db.ForeignKeyConstraint(('manageruserid',), ['CatalogA.SchemaA.userid'])
        , {'schema': 'CatalogA.SchemaA'}
    )

    manager_user_id = db.Column('manageruserid', db.Integer())
    user_id = db.Column('userid', db.Integer(), nullable=False)

class Tool(db.Model):
    __tablename__ = 'tool'
    __table_args__ = (
        db.PrimaryKeyConstraint('toolid')
        , db.ForeignKeyConstraint(('ownerid',), ['CatalogA.SchemaA.user.userid'])
        , {'schema': 'CatalogB.SchemaB'}
    )

    tool_id = db.Column('toolid', db.Integer())
    owner_id = db.Column('ownerid', db.Integer(), nullable=False)
When trying to upgrade, it produces this error:
"sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unknown database "CatalogA.SchemaA" [SQL: CREATE TABLE "CatalogA.SchemaA".user (
manageruserid INTEGER,
userid INTEGER NOT NULL,
PRIMARY KEY (userid),
FOREIGN KEY(manageruserid) REFERENCES user (userid) )"
Bind with schema method (binds are set up correctly):
class User(db.Model):
    __bind_key__ = 'CatalogA'
    __tablename__ = 'user'
    __table_args__ = (
        db.PrimaryKeyConstraint('userid')
        , db.ForeignKeyConstraint(('manageruserid',), ['CatalogA.SchemaA.user.userid'])
        , {'schema': 'SchemaA'}
    )

    manager_user_id = db.Column('manageruserid', db.Integer())
    user_id = db.Column('userid', db.Integer(), nullable=False)

class Tool(db.Model):
    __bind_key__ = 'CatalogB'
    __tablename__ = 'tool'
    __table_args__ = (
        db.PrimaryKeyConstraint('toolid')
        , db.ForeignKeyConstraint(('ownerid',), ['CatalogA.SchemaA.user.userid'])
        , {'schema': 'SchemaB'}
    )

    tool_id = db.Column('toolid', db.Integer())
    owner_id = db.Column('ownerid', db.Integer(), nullable=False)
When trying to migrate, it produces this error:
"sqlalchemy.exc.NoReferencedTableError: Foreign key associated with column 'user.manageruserid' could not find table 'CatalogA.SchemaA.user' with which to generate a foreign key to target column 'userid'"
If I use the Schema only method, I can run queries against the database, but it doesn't correctly set up my test DB.
I've spent multiple hours trying to find a solution and would love some help finding a way forward (if you find a link to another solution, please also tell me what you searched for, to improve my google-fu).
My main questions are:
What is the right way to have a model for this situation?
Was/Is there something in the documentation which I missed for this scenario?
I am using Flask-SQLAlchemy to define my models, and then Flask-Migrate to auto-generate migration scripts for deployment onto a PostgreSQL database. I have defined a number of SQL views on the database that I use in my application, like the one below.
However, Flask-Migrate now generates a migration file for the view as it thinks it's a table. How do I correctly get Flask-Migrate / Alembic to ignore the view during autogenerate?
SQL View name: vw_SampleView with two columns: id and rowcount.
class ViewSampleView(db.Model):
    __tablename__ = 'vw_report_high_level_count'
    info = dict(is_view=True)

    id = db.Column(db.String(), primary_key=True)
    rowcount = db.Column(db.Integer(), nullable=False)
Which means I can now do queries like so:
ViewSampleView.query.all()
I tried following the instructions at http://alembic.zzzcomputing.com/en/latest/cookbook.html and added the info = dict(is_view=True) portion to my model and the following bits to my env.py file, but I don't know where to go from here.
def include_object(object, name, type_, reflected, compare_to):
    """
    Exclude views from Alembic's consideration.
    """
    return not object.info.get('is_view', False)

...

context.configure(url=url, include_object=include_object)
I think (though I haven't tested it) that you can mark your table as a view with the __table_args__ attribute:
class ViewSampleView(db.Model):
    __tablename__ = 'vw_report_high_level_count'
    __table_args__ = {'info': dict(is_view=True)}

    id = db.Column(db.String(), primary_key=True)
    rowcount = db.Column(db.Integer(), nullable=False)
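One caveat (an assumption on my part, not something I've verified with this exact setup): Alembic passes columns, indexes, and other objects through include_object as well, so it may be safer to apply the is_view check only to tables and let everything else through:

def include_object(object, name, type_, reflected, compare_to):
    # only Table objects carry the is_view flag; let columns, indexes, etc. pass
    if type_ == "table":
        return not object.info.get("is_view", False)
    return True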
I'm a beginner with Python/Flask/SQLAlchemy, so sorry if my questions are dumb.
I want to create an API with Flask using Flask-SQLAlchemy, as follows:
one SQLite database for users/passwords:
SQLALCHEMY_DATABASE_URI = 'sqlite:////path/to/users.db'
class User(DB.Model):
    __tablename__ = 'users'

    id = DB.Column(DB.Integer, primary_key=True)
    username = DB.Column(DB.String(64), index=True)
    password = DB.Column(DB.String(128))
Let's say I have multiple "customers" which a user can create using:
$ http POST http://localhost:5000/api/customers/ name=customer1
class Customer(DB.Model):
    __tablename__ = 'customer'

    customer_id = DB.Column(DB.Integer, primary_key=True)
    customer_name = DB.Column(DB.String, unique=True, index=True)
I need to create a separate SQLite file for each "customer":
SQLALCHEMY_BINDS = {
    'customer1': 'sqlite:////path/customer1.db',
    'customer2': 'sqlite:////path/customer2.db',
    ...
}
My questions are:
I do not have a fixed number of "customers", so I cannot create a model class for each one and specify its bind_key. Is it possible to do this with Flask-SQLAlchemy, or do I need to use plain SQLAlchemy?
I have 3 "customers" in data/ as customer1.db, customer2.db and customer3.db.
I would start the application, build the SQLALCHEMY_BINDS dictionary by listing the files in data/, and then call DB.create_all() on a request for a specific "customer".
How can I bind to the right .db file using the Flask-SQLAlchemy DB.session?
I've read Using different binds in the same class in Flask-SQLAlchemy
Why exactly do you want entirely separate DB files for each customer?
In any case, this is easier with straight SQLAlchemy. You can create a getter function which returns a session pointing to your db file.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Base (and your models) are assumed to be defined elsewhere with declarative_base()

def get_session(customer_id):
    sqlite_url = 'sqlite:////path/customer%s.db' % customer_id
    engine = create_engine(sqlite_url)

    # initialize the db if it hasn't yet been initialized
    Base.metadata.create_all(engine)

    Session = sessionmaker(bind=engine)
    session = Session()
    return session
You can then use and close that session.
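A minimal usage sketch, assuming a Customer model declared on the same Base used inside get_session (the model name here is just illustrative):

session = get_session(customer_id=1)
try:
    customers = session.query(Customer).all()
finally:
    session.close()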
But without knowing your specific use case, it is difficult to understand why you would want to do this instead of just using a single SQLite database.
I'm using SQLAlchemy + alembic to manage my database. I had a string field which was 10 characters long and later on found out that it has to be 20. So I updated the model definition.
class Foo(db.Model):
    __tablename__ = 'foos'

    id = db.Column(db.Integer, primary_key=True)
    foo_id = db.Column(db.Integer, db.ForeignKey('users.id'))
    name = db.Column(db.String(80))
When I run alembic revision --autogenerate, this change was not detected. I did read the documentation and suspect that this might not be supported. How do I manage such changes to the DB gracefully?
You need to enable optional column type checking.
See this for notes on what is checked by default
context.configure(
    # ...
    compare_type=True,
)
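For context, a sketch of where this usually lives in env.py. This follows the standard generated Alembic template (names like target_metadata, config, and the overall function come from that template, not from the question), and a Flask-Migrate env.py will look slightly different, so adapt as needed:

from alembic import context
from sqlalchemy import engine_from_config, pool

# target_metadata is normally set earlier in env.py from your models' metadata

def run_migrations_online():
    connectable = engine_from_config(
        context.config.get_section(context.config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool,
    )

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            compare_type=True,  # detect column type changes such as String(10) -> String(20)
        )

        with context.begin_transaction():
            context.run_migrations()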