I am having trouble building a Flask-SQLAlchemy query with the like() method, which should build a query using the SQL LIKE operator.
According to the SQLAlchemy docs, the like() method can be called on a column like this:
select([sometable]).where(sometable.c.column.like("%foobar%"))
I have a ModelClass that subclasses the Flask-SQLAlchemy db.Model class, defined like this:
class ModelClass(db.Model):
    # Some other columns ...
    field1 = db.Column(db.Integer(), db.ForeignKey('my_other_class.id'))
    rel1 = db.relationship("MyOtherClass", foreign_keys=[field1])
I then have a loop where I am building up filters dynamically. Outside the loop I use these filters to filter a query. The inside of my loop, slightly modified, looks like this:
search_term = '%{}%'.format(search_string)
my_filter = getattr(ModelClass, field_string).like(search_term)
This raises an error at the line with the like method:
NotImplementedError: <function like_op at 0x101c06668>
It raises this error for any text string. The Python docs for NotImplementedError say:
This exception is derived from RuntimeError. In user defined base
classes, abstract methods should raise this exception when they
require derived classes to override the method.
This isn't an AttributeError, so I think the like method exists, but something else is wrong and I'm not sure what.
Update
Now that I'm looking more closely at the model definition I think the problem might be that I'm doing this on a relationship and not a Column type.
I saw that type(getattr(ModelClass, field_string)) gives:
<sqlalchemy.orm.attributes.InstrumentedAttribute object at 0x102018090>
Since this is not a Column type I looked at the values for field_string and saw that one of the values being passed was actually rel1.
So I guess that's the "answer" but I'm still confused why calling .like() on rel1 didn't raise an AttributeError.
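A minimal guard sketch (the inspect() check is my own addition, using the names above) that skips relationship attributes so like() is only ever called on real columns:

from sqlalchemy import inspect

mapper = inspect(ModelClass)
# Relationship attributes such as rel1 are not in mapper.columns,
# so this only builds LIKE filters for real column attributes.
if field_string in mapper.columns:
    my_filter = getattr(ModelClass, field_string).like(search_term)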
So I've confirmed the issue is that I was trying to apply the .like() method to a relationship attribute instead of a column.
I changed my code to call the child model class directly as opposed to trying to go across the relationship from the parent to access the child class columns. Something like this:
search_term = '%{}%'.format(search_string)
my_filter = getattr(ChildModelClass, field_string).like(search_term)
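For completeness, a hedged sketch (class and relationship names taken from the question; the join shape is an assumption) of applying that filter to the parent query by joining across the relationship:

results = (ModelClass.query
           .join(ModelClass.rel1)   # bring the related table into the query
           .filter(my_filter)
           .all())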
As @ACV said, calling methods such as like(), is_(), is_not(), etc. on relationship attributes raises NotImplementedError. So, to work around this problem, I called the method directly on the real column attribute instead of the relationship. E.g. if I have the following two attributes in a model:
user_id = db.Column(db.Integer, db.ForeignKey('user.id', ondelete='CASCADE'), index=True)
user = db.relationship(
    'User', backref=db.backref('readings', lazy='dynamic', cascade='all, delete-orphan'))
I did the following query to filter the instances whose attribute user IS NOT NULL. (Note that I'm using MyModel.user_id instead of MyModel.user to successfully run the query):
MyModel.query.filter(MyModel.user_id.is_not(None))
Related
Flask-SQLAlchemy: I have a model with columns:
class MyModel(db.Model):
    def my_method1(self, arg1):
        pass

    a = Column(String(), primary_key=True)
Now I have a function which accepts a Column as an argument to retrieve some information from it:
def get_column_info(column):
    if column.primary_key:
        return True
    else:
        return False
Note that this is just an example; get_column_info does much more than that in reality.
Now I want to be able to access the originating model in my get_column_info function. That is, I want to be able to call my_method1() from within get_column_info.
Is there a way I can retrieve the originating model from a column instance?
There is no proper out-of-the-box method for doing this. The Column object has a table attribute, which returns the model's __table__, but you can't get the actual model from it. However (as this answer suggested) you can use the get_class_by_table function from the sqlalchemy_utils package:
from sqlalchemy_utils.functions import get_class_by_table

def get_model(column):
    # A Column exposes its Table via .table; map that back to the model class.
    return get_class_by_table(db.Model, column.table)
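A hypothetical usage sketch, reusing MyModel and its column a from the question:

column = MyModel.__table__.c.a
model = get_model(column)        # -> <class 'MyModel'>
instance = model(a="pk-value")   # hypothetical construction
instance.my_method1("some-arg")  # the model's methods are now reachable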
For example, using Flask-SQLAlchemy and jsontools to serialize to JSON as shown here, and given a model like this:
class Engine(db.Model):
    __tablename__ = "engines"
    id = db.Column(db.Integer, primary_key=True)
    this = db.Column(db.String(10))
    that = db.Column(db.String(10))
    parts = db.relationship("Part")
    schema = ["id", "this", "that", "parts"]

    def __json__(self):
        return self.schema
class Part(db.Model):
    __tablename__ = "parts"
    id = db.Column(db.Integer, primary_key=True)
    engine_id = db.Column(db.Integer, db.ForeignKey("engines.id"))
    code = db.Column(db.String(10))

    def __json__(self):
        return ["id", "code"]
How do I change the schema attribute before the query so that it takes effect on the returned data?
enginelist = db.session.query(Engine).all()
return enginelist
So far, I have succeeded with subclassing and single-table inheritance like so:
class Engine_smallschema(Engine):
    __mapper_args__ = {'polymorphic_identity': 'smallschema'}
    schema = ["id", "this", "that"]
and
enginelist = db.session.query(Engine_smallschema).all()
return enginelist
...but it seems there should be a better way that doesn't require subclassing (I'm not sure this is wise anyway). I've tried various things, such as setting an attribute or calling a method to set an internal variable. The problem is that the query doesn't accept the instance object I give it, and I don't know SQLAlchemy well enough yet to know whether queries can be executed on pre-made instances of these classes.
I can also loop through the returned objects, setting a new schema, and get the JSON I want, but this isn't a solution for me because it launches new queries (I usually request the small dataset first).
Any other ideas?
The JSON serialization takes place in flask, not in SQLAlchemy. Thus, the __json__ function is not consulted until after you return from your view function. This has therefore nothing to do with SQLAlchemy, and instead it has to do with the custom encoding function, which presumably you can change.
I would actually suggest not attempting to do it this way if you have different sets of attributes you want to serialize for a model. Setting a magic attribute on an instance that affects how it's serialized violates the principle of least surprise. Instead, you can, for example, make a Serializer class that you can initialize with the list of fields you want to be serialized, then pass your Engine to it to produce a dict that can be readily converted to JSON.
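A minimal sketch of that suggestion (the Serializer class and its interface are assumptions, not an existing API):

class Serializer(object):
    """Serialize objects to plain dicts from an explicit field list."""
    def __init__(self, fields):
        self.fields = fields

    def to_dict(self, obj):
        return {f: getattr(obj, f) for f in self.fields}

# Pick the fields per endpoint instead of mutating the model.
small = Serializer(["id", "this", "that"])
payload = [small.to_dict(e) for e in enginelist]  # ready for JSON encoding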
If you insist on doing it your way, you can probably just do this:
for e in enginelist:
    e.__json__ = lambda: ["id", "this", "that"]
Of course, you can change __json__ to be a property instead if you want to avoid the lambda.
Is there anything wrong with inheritance in which the child class is only used to present the parent's values in a different way?
Example:
class Parent(db.Model):
    __tablename__ = u'parent'
    parent_entry_id = db.Column(db.Integer, primary_key=True)
    parent_entry_value = db.Column(db.BigInteger)

class Child(Parent):
    __tablename__ = u'child'

    @property
    def extra_value(self):
        return unicode(self.parent_entry_id) + unicode(self.parent_entry_value)
No new values will be added to the Child class, so Joined Table, Single Table, or Concrete Table Inheritance is, as far as I can tell, not needed.
If you're simply changing how you display the data from the class, I'm pretty sure you don't need a __tablename__.
Additionally, though I don't know your exact problem domain, I would just add the property to the original class. You could argue that you're adding some extra behavior to your original class, but that seems like a flimsy argument in this case.
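A sketch of that simpler alternative, reusing the question's model:

class Parent(db.Model):
    __tablename__ = u'parent'
    parent_entry_id = db.Column(db.Integer, primary_key=True)
    parent_entry_value = db.Column(db.BigInteger)

    # Same data, presented differently, without a second class or table.
    @property
    def extra_value(self):
        return unicode(self.parent_entry_id) + unicode(self.parent_entry_value)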
I have two models, for example:
class Parent(models.Model):
    mytext = models.CharField(max_length=250, blank=True)

class Child(Parent):
    mytext_comment = models.CharField(max_length=250)
But in Child I want mytext to be obligatory.
Will it be sufficient to set mytext.blank = False in Child's __init__?
Caution: these are not abstract models, because I want to be able to use the manager on Parent (Parent.objects.all(), for example).
I don't think it's possible. From the Django documentation:
This restriction only applies to attributes which are Field instances.
Normal Python attributes can be overridden if you wish. It also only
applies to the name of the attribute as Python sees it: if you are
manually specifying the database column name, you can have the same
column name appearing in both a child and an ancestor model for
multi-table inheritance (they are columns in two different database
tables).
PS: I tried it like you suggested, but I get an error like 'unicode' object has no attribute 'blank'.
Hmm you can try this solution:
class Parent(models.Model):
    mytext = models.CharField(max_length=250, blank=True)

class Child(Parent):
    mytext_comment = models.CharField(max_length=250)

Child._meta.get_field('mytext').blank = False  # False makes the field required on the child
Can you please let me know if it works?
As the discussion went on, I think the correct answer is:
You don't do it at the model level. This kind of validation should be done at the form level, not in the model. The best places are the form field parameters or the form's clean method.
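A minimal sketch of the form-level approach (the ModelForm and its field list are assumptions):

from django import forms

class ChildForm(forms.ModelForm):
    # Re-declare the inherited field at the form level to make it required.
    mytext = forms.CharField(max_length=250, required=True)

    class Meta:
        model = Child
        fields = ['mytext', 'mytext_comment']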
I'm getting an error I don't understand with AbstractConcreteBase
in my_enum.py
from sqlalchemy.ext.declarative import AbstractConcreteBase

class MyEnum(AbstractConcreteBase, Base):  # Base: the project's declarative base
    pass
in enum1.py
class Enum1(MyEnum):
    years = Column(SmallInteger, default=0)

# class MyEnums1:
#     NONE = Enum1()
#     Y1 = Enum1(years=1)
in enum2.py
class Enum2(MyEnum):
    class_name_python = Column(String(50))
in test.py
from galileo.copernicus.basic_enum.enum1 import Enum1
from galileo.copernicus.basic_enum.enum2 import Enum2
#...
If I uncomment the three lines in enum1.py I get the following error on the second import.
AttributeError: type object 'MyEnum' has no attribute '__table__'
But without MyEnums1 it works fine, and with MyEnums1 in a separate file it works fine. Why would this instantiation affect the import? Is there any way I can keep MyEnums1 in the same file?
The purpose of AbstractConcreteBase is to apply a non-standard order of operations to the standard mapping procedure. Normally, mapping works like this:

1. define a class to be mapped
2. define a Table
3. map the class to the Table using mapper()
Declarative essentially combines these three steps, but that's what it does.
When using an abstract concrete base, we have this totally special step that needs to happen - the base class needs to be mapped to a union of all the tables that the subclasses are mapped to. So if you have enum1 and enum2, the "Base" needs to map to essentially "select * from enum1 UNION ALL select * from enum2".
This mapping to a UNION can't happen piecemeal; the MyEnum base class has to present itself to mapper() with the full UNION of every sub-table at once. So AbstractConcreteBase performs the complex task of rearranging how declarative works such that the base MyEnum is not mapped at all until the mapper configuration occurs, which among other places occurs when you first instantiate a mapped class. It then inserts itself as the mapped base for all the existing mapped subclasses.
So basically, by instantiating an Enum1() object at the class level like that, you're invoking configure_mappers() way too early, such that by the time Enum2() comes along the AbstractConcreteBase is baked and the process fails.
All of that aside, it's not at all correct to be instantiating a mapped class like Enum1() at the class level like that. ORM-mapped objects are the complete opposite of global objects and must always be created local to a specific Session.
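A hedged sketch of the session-local pattern (the engine and session setup are assumed):

from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine)  # 'engine' assumed to exist elsewhere
session = Session()

# Create mapped objects inside a session's scope, not at import time.
y1 = Enum1(years=1)
session.add(y1)
session.commit()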
Edit: also, those classes are supposed to have __mapper_args__ = {"concrete": True} on them, which is part of why you're getting this message. I'm trying to see if the message can be improved.
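A sketch of what that looks like on a subclass (the table name and primary key are assumptions, since each concrete table needs its own):

class Enum1(MyEnum):
    __tablename__ = 'enum1'                 # assumed
    id = Column(Integer, primary_key=True)  # assumed: concrete tables need their own PK
    years = Column(SmallInteger, default=0)
    __mapper_args__ = {'concrete': True}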
Edit 2: yeah, the mechanics here are weird. I've committed something else that skips this particular error message, though it will fail differently now and not much better. Getting this to fail more gracefully would require a little more work.