SQLAlchemy AssociationProxy on a clean POPO?

Using a non-declarative SQLAlchemy setup, can I use an association_proxy without having to modify my model class?

Preliminary hack (in the module where the SQLAlchemy mapping etc. takes place):

from sqlalchemy.ext.associationproxy import association_proxy

import ModelClass
# association_proxy needs the relationship name and the target attribute;
# 'children' and 'name' here are placeholders
ModelClass.associationProperty = association_proxy('children', 'name')
mapper()  # etc.
This works but I am not sure how reliable this is.
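For reference, here is a minimal runnable sketch of that approach; the Parent/Child tables, relationship name, and proxy target are illustrative assumptions, not from the original post. association_proxy is an ordinary descriptor, so attaching it to an already-mapped class from outside the class definition works:

from sqlalchemy import Table, Column, Integer, String, ForeignKey, MetaData, create_engine
from sqlalchemy.orm import mapper, relationship, sessionmaker
from sqlalchemy.ext.associationproxy import association_proxy

metadata = MetaData()

parent_table = Table('parent', metadata,
                     Column('id', Integer, primary_key=True))
child_table = Table('child', metadata,
                    Column('id', Integer, primary_key=True),
                    Column('parent_id', Integer, ForeignKey('parent.id')),
                    Column('name', String(50)))

# Plain old Python objects -- no SQLAlchemy code inside them
class Parent(object):
    pass

class Child(object):
    def __init__(self, name):
        self.name = name

mapper(Child, child_table)
mapper(Parent, parent_table, properties={'children': relationship(Child)})

# Attach the proxy from the outside, after mapping
Parent.child_names = association_proxy('children', 'name')

engine = create_engine('sqlite:///:memory:')
metadata.create_all(engine)
session = sessionmaker(bind=engine)()

p = Parent()
p.child_names.append('alice')  # creates Child('alice') via the default creator
session.add(p)
session.commit()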

Related

Is it a bad practice (anti-pattern) to provide a module as an argument when initialising a class?

I've implemented a BaseScanner inside common/base/scanner.py that is subclassed by Scanner inside stash/scanner.py, jira/scanner.py, etc.
Now, the problem is that BaseScanner needs to access the ORM models in e.g. stash/db/models.py, depending on where it's subclassed (in stash, jira, etc.):
# common package
common/base/scanner.py
# stash package
stash/scanner.py
stash/db/models.py
# jira package
jira/scanner.py
jira/db/models.py
...
Is it an anti-pattern to provide a module as an argument to a class when instantiating it, like I do here in main.py?

import stash, jira
...
if args.command == 'stash':
    import stash.db.models as models
    scanner = stash.Scanner(args, models, ...)
    scanner.run()
and then to access the different ORM models from inside BaseScanner, like self.models.Scan, self.models.Match, etc.?
If it's an anti-pattern, what could an alternative solution be?
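One possible alternative (a sketch, not from the original thread): make the models module a class attribute of each concrete Scanner subclass, so callers never have to pass it in and BaseScanner can still reach it as self.models:

# common/base/scanner.py
class BaseScanner(object):
    models = None  # each subclass points this at its own models module

    def __init__(self, args):
        self.args = args

    def run(self):
        # Base logic can use self.models.Scan, self.models.Match, etc.
        raise NotImplementedError

# stash/scanner.py
from common.base.scanner import BaseScanner
import stash.db.models

class Scanner(BaseScanner):
    models = stash.db.models

# main.py no longer needs to know about the models module:
# scanner = stash.Scanner(args)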

Is there a database-agnostic SQLAlchemy FROM_UNIXTIME() function?

Currently I have a query similar to the one below in Flask-SQLAlchemy:

from sqlalchemy.sql import func

models = (
    Model.query
    .join(ModelTwo)
    .filter(Model.finish_time >= func.from_unixtime(ModelTwo.start_date))
    .all()
)

This works fine with MySQL, which I am running in production; however, when I run tests against the method using an in-memory SQLite database, it fails because from_unixtime is not a SQLite function.
Aside from the issue of testing against the same database as production, and the fact that I have two different representations of dates in the database, is there a database-agnostic method in SQLAlchemy for converting dates to Unix timestamps and vice versa?
For anyone else interested in this, I found a way to create custom functions in SQLAlchemy based on the SQL dialect being used. The following achieves what I need:
from sqlalchemy.sql import expression
from sqlalchemy.ext.compiler import compiles

class convert_timestamp_to_date(expression.FunctionElement):
    name = 'convert_timestamp_to_date'

@compiles(convert_timestamp_to_date)
def mysql_convert_timestamp_to_date(element, compiler, **kwargs):
    # Default compilation, used by MySQL (and any dialect without an override)
    return 'from_unixtime({})'.format(compiler.process(element.clauses))

@compiles(convert_timestamp_to_date, 'sqlite')
def sqlite_convert_timestamp_to_date(element, compiler, **kwargs):
    # SQLite spelling; note the single quotes around 'unixepoch' --
    # double quotes are identifier quotes in SQLite
    return "datetime({}, 'unixepoch')".format(compiler.process(element.clauses))
The query above can now be rewritten as:

models = (
    Model.query
    .join(ModelTwo)
    .filter(Model.finish_time >= convert_timestamp_to_date(ModelTwo.start_date))
    .all()
)
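As a quick sanity check (a sketch; the column name is illustrative), you can compile the construct against each dialect to see the per-dialect SQL it produces:

from sqlalchemy import column
from sqlalchemy.dialects import mysql, sqlite

expr = convert_timestamp_to_date(column('start_date'))
print(expr.compile(dialect=mysql.dialect()))   # from_unixtime(start_date)
print(expr.compile(dialect=sqlite.dialect()))  # datetime(start_date, 'unixepoch')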

Use the SQLAlchemy session in a custom Flask-SQLAlchemy query method

I would like to create a custom method to tell me whether a row exists in my database. The SQLAlchemy exists method returns a subquery but doesn't execute anything, so I wanted to create the method does_exist, which will simply return True or False. Here is my code:
from flask_sqlalchemy import SQLAlchemy, BaseQuery

class CustomBaseQuery(BaseQuery):
    def does_exist(self):
        return db.session.query(self.exists()).scalar()

db = SQLAlchemy(app, query_class=CustomBaseQuery)
This actually does work, but it seems wrong to refer to db.session within the body of the method, which thus depends on later naming the SQLAlchemy instance db. I would like to find a way to reference the eventual db.session object in a more general way.
Full working example here: https://gist.github.com/wbruntra/3db7b630e6ffb86fe792e4ed5a7a9578
Though undocumented, the session used by the Query object is accessible as self.session, so your more generic CustomBaseQuery could look like:

class CustomBaseQuery(BaseQuery):
    def does_exist(self):
        return self.session.query(self.exists()).scalar()
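For illustration, usage might then look like this (the User model is hypothetical):

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(50))

# Returns True or False instead of an unexecuted subquery
User.query.filter_by(name='alice').does_exist()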

How to keep VARCHAR in the DB (MySQL), but ENUM in the SQLAlchemy model

I want to add a new INT column to my MySQL DB, so that in the SQLAlchemy ORM it will be exposed as an Enum.
For example, let's say I have this enum:

from enum import Enum

class EmployeeType(Enum):
    Full_time = 1
    Part_time = 2
    Student = 3

I want to keep those values (1, 2, 3, ...) in the DB, but developers writing code that involves this model should just use the Enum, without having to go through getter and setter functions.
So they will be able to do instance_of_model.employee_type and get an Enum, and new_instance = ModelName(employee_type=EmployeeType.Full_time).
How should I define my SQLAlchemy model so this works? (I've heard of hybrid types, but I'm not sure they apply here.)
Thanks!
Apparently the answer is super simple(!): there is nothing special we need to do, SQLAlchemy supports it by itself.
Meaning, you can keep a plain column type in the DB but declare an Enum on the model, and SQLAlchemy will convert the value by itself when querying; the same goes when inserting into the DB :)
I used it with a declarative enum, meaning its values are strings (and it has a from_string() function).
So once I used VARCHAR columns, it worked like a charm!

SQLAlchemy checks whether a native ENUM type is available in the DB.
If the DB does not support one, the default type used is VARCHAR.
So you can use:

Enum(MyEnum, native_enum=False)
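A minimal sketch of such a model (table and column names are illustrative; with native_enum=False the column is created as VARCHAR storing the enum member names):

import enum
from sqlalchemy import Column, Integer, Enum
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class EmployeeType(enum.Enum):
    Full_time = 1
    Part_time = 2
    Student = 3

class Employee(Base):
    __tablename__ = 'employee'
    id = Column(Integer, primary_key=True)
    # Stored as VARCHAR ('Full_time', ...) in the DB, surfaced as EmployeeType
    employee_type = Column(Enum(EmployeeType, native_enum=False))

# e = Employee(employee_type=EmployeeType.Full_time)
# e.employee_type  -> EmployeeType.Full_time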

I need a sample of Python unit testing a SQLAlchemy model with nose

Can someone show me how to write unit tests with nose for a SQLAlchemy model I created?
I just need one simple example.
Thanks.
You can simply create an in-memory SQLite database and bind your session to it.
Example:

from sqlalchemy import create_engine
from nose.tools import eq_

from db import session  # probably a contextbound (scoped) sessionmaker
from db import model

def setup():
    engine = create_engine('sqlite:///:memory:')
    session.configure(bind=engine)
    # You probably need to create some tables and
    # load some test data; do so here.
    # To create tables, you typically do:
    model.metadata.create_all(engine)

def teardown():
    session.remove()

def test_something():
    instances = session.query(model.SomeObj).all()
    eq_(0, len(instances))
    session.add(model.SomeObj())
    session.flush()
    # ...
Check out the fixture project. We used nose to test it, and it's also a way to declaratively define data to test against; there are some extensive examples there for you to use.
See also the fixture documentation.
