I googled for a while but could not find a reference on how to retrieve a Colander schema from a config file or from a database. I think this is not difficult to implement, but I might have overlooked something. Maybe somebody has done or seen something like that and can share some insights.
Here is a sample Colander schema:
import colander
from colander import SchemaNode, String

class PageSchema(colander.MappingSchema):
    title = SchemaNode(String(),
                       title='Page title',
                       description='The title of the page',
                       )
    description = SchemaNode(String(),
                             title='A short description',
                             description='Keep it under 60 characters or so',
                             missing=u'',
                             validator=colander.Length(max=79),
                             )
    body = colander.SchemaNode(colander.String(),
                               description='Tell the world',
                               missing=u'')
As Michael said, it might not be supported. If you really need it, here are some pointers.
Save your schema in the database by name, for example "PageSchema", and save each of its records with all the needed parameters.
You'd have to do something like this:
for row in rows:
    attributes[row['name']] = build_attribute(row)

schemas[schema_name] = type(schema_name, (colander.MappingSchema,), attributes)
exec('%s = schemas[schema_name]' % schema_name)
In other words, it loads all the attributes and builds a class using the type built-in. That kind of task is pretty simple and should work as well as the usual class syntax. The exec call is just there to push the name into locals; you could probably use locals()[schema_name] = schema, or even other scopes.
That way you can load schemas from anywhere if needed. You could build yourself a factory like:
schemas.get('PageSchema'), which would return the schema if present, or None otherwise.
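For illustration, a rough sketch of what that loader and factory could look like; build_attribute, the row layout, and the type mapping here are assumptions for the example, not part of the original answer:

import colander

# Hypothetical row layout: each row is a dict with 'name', 'type', 'title'
# and 'description' columns describing one field of the schema.
def build_attribute(row):
    type_map = {'string': colander.String, 'integer': colander.Integer}
    return colander.SchemaNode(
        type_map[row['type']](),
        name=row['name'],
        title=row.get('title', row['name']),
        description=row.get('description', ''),
    )

def load_schemas(rows_by_schema):
    # Build one MappingSchema subclass per schema name from its rows.
    schemas = {}
    for schema_name, rows in rows_by_schema.items():
        attributes = {}
        for row in rows:
            attributes[row['name']] = build_attribute(row)
        schemas[schema_name] = type(schema_name,
                                    (colander.MappingSchema,), attributes)
    return schemas

schemas = load_schemas({'PageSchema': page_rows})  # page_rows comes from your DB
PageSchema = schemas.get('PageSchema')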
That's pretty much it!
This is not supported by colander. The one thing I know of in this area is the "limone" package, which does the opposite: it is able to generate arbitrary Python objects from a colander schema.
ColanderAlchemy may do what you need. It takes SQLAlchemy objects and generates a Colander schema from them. However, generating from an SQLAlchemy object isn't exactly "from a database".
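A minimal sketch of that approach, assuming an existing SQLAlchemy declarative model named Page (the model name and the incoming cstruct are just placeholders):

from colanderalchemy import SQLAlchemySchemaNode

# Build a Colander schema from the mapped class (not from the database itself).
PageSchema = SQLAlchemySchemaNode(Page)
appstruct = PageSchema.deserialize(cstruct)  # cstruct: incoming data to validate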
I have found myself in a situation that does not seem to have an elegant solution. Consider the following (pseudo-ish) REST API code:
bp = Blueprint('Something', __name__, url_prefix='/api')

class SomeOutputSchema(ma.SQLAlchemyAutoSchema):
    class Meta:
        model = MyModel

    @pre_dump(pass_many=False)
    def resolveFilePath(self, ormObject, many):
        # ormObject has a parent via a relationship
        ormObject.filePath = os.path.join(ormObject.parent.rootDir, ormObject.filePath)

@bp.route("/someRoute")
class SomeClass(MethodView):
    def put(self):
        ormObject = MyModel(filePath="/some/relative/path")
        db.session.add(ormObject)
        db.session.flush()

        outputDump = SomeOutputSchema().dump(ormObject)

        # Lots of other code that uses outputDump...

        # Only commit here in case
        # anything goes wrong above
        db.session.commit()
        return jsonify({"data": outputDump}), 201
I have
A PUT endpoint that will create a new resource, then return the dump of that resource.
An ORM object that has a filePath property. This must be stored as a relative path.
A Marshmallow schema. It has a @pre_dump method that resolves the file path using another property (parent.rootDir)
So basically the process is
Create the new resource
Create a schema dump of that resource to use
Commit
Return the schema dump
So finally, the problem is: outputDump's @pre_dump actually alters ormObject, so it holds a fully resolved path by the time db.session.commit() is called. My first instinct was to create a deep copy of ormObject, but that fails with
"Parent instance <MyModel at 0x7f31cdd44240> is not bound to a Session; lazy load operation of attribute 'parent' cannot proceed (Background on this error at: http://sqlalche.me/e/14/bhk3)"
It's not that this is a difficult thing to solve, but it seems to be difficult to solve elegantly with my current knowledge. I need the path to be relative for the database, and resolved otherwise.
My current solution is to tell SomeOutputSchema to skip the @pre_dump in this case, and then resolve the file paths just after the schema dump. But this feels really gross to me.
I would love to hear any ideas on this, as currently my code feels messy and I don't like the idea of just leaving it and pushing on.
Solved by using a @post_dump hook with pass_original=True to get access to the original object:
class SomeOutputSchema(ma.SQLAlchemyAutoSchema):
    class Meta:
        model = MyModel

    @post_dump(pass_original=True)
    def resolveFilePath(self, data, ormObject, many):
        data['filePath'] = os.path.join(ormObject.parent.rootDir, ormObject.filePath)
        return data
The documentation (https://cloud.google.com/appengine/docs/python/ndb/) states that
NDB uses Memcache as a cache service for "hot spots" in the data
I am currently using Memcache only, as follows:
memcache.set(key=(id), value=params, time=0)
That expires (gets flushed automatically) pretty often, so I would like to use the NDB Datastore as well.
I thought I would have to always put the key-value pair in both NDB and Memcache, and then check both.
Is this being done automatically by NDB?
I.e.:
ancestor_key = ndb.Key("Book", guestbook_name or "*notitle*")
greetings = Greeting.query_book(ancestor_key).fetch(20)
Would that implicitly set Memcache?
And when I read from NDB, would it implicitly try a memcache.get(key) first?
Thanks for your patience.
EDIT - What I tried:
As a test I tried something like this:
import webapp2
from google.appengine.ext import ndb

class Book(ndb.Model):
    content = ndb.StringProperty()

class update(webapp2.RequestHandler):
    def post(self):
        p1 = '1'
        p2 = '2'
        p3 = '3'
        p4 = '4'
        p5 = '5'
        id = 'test'
        paramarray = (p1, p2, p3, p4, p5)
        book = Book(name=id, value=paramarray)
        # OR like this - book = Book(ndb.Key(id), value=paramarray)
        book.put()
Both versions error out.
I'm trying to key the entity by the var id, with the values of paramarray stored against it.
EDIT 2 Daniel, Thank you for everything.
Have follow up formatting questions, will ask a new question.
Yes; see the full documentation on ndb caching. Basically, every write is cached both in a request-local in-context cache and in the main Memcache store; a get by key will check both caches before falling back to the real Datastore.
Edit: I can't understand why you think your example would work. You defined a model with a content property, but then try to set name and value properties on it; naturally that will fail.
You should go through the ndb documentation, which gives a good introduction to using the model class.
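For illustration, a minimal sketch of a working version of that handler with ndb; the repeated params property and the 'test' key name are assumptions about how you might store the values:

import webapp2
from google.appengine.ext import ndb

class Book(ndb.Model):
    # the five parameters stored as a repeated (list) property
    params = ndb.StringProperty(repeated=True)

class update(webapp2.RequestHandler):
    def post(self):
        paramarray = ['1', '2', '3', '4', '5']
        # 'test' becomes the entity's key name
        book = Book(id='test', params=paramarray)
        book.put()  # written to the Datastore and to both ndb caches

        # A later get by key checks the in-context cache, then Memcache,
        # then finally the Datastore.
        same_book = ndb.Key(Book, 'test').get()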
I'm using sqlalchemy and am trying to integrate alembic for database migrations.
My database currently exists and has a number of ForeignKeys defined without names. I would like to add a naming convention to allow for migrations that affect ForeignKey columns.
I've added the naming convention given here to the top of my models.py file:
SQLAlchemy Naming Constraints
from sqlalchemy import Column, ForeignKey, Integer, MetaData, String, create_engine
from sqlalchemy.engine.url import URL
from sqlalchemy.ext.declarative import declarative_base

import settings

convention = {
    "ix": "ix_%(column_0_label)s",
    "uq": "uq_%(table_name)s_%(column_0_name)s",
    "ck": "ck_%(table_name)s_%(constraint_name)s",
    "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    "pk": "pk_%(table_name)s"
}

DeclarativeBase = declarative_base()
DeclarativeBase.metadata = MetaData(naming_convention=convention)

def db_connect():
    return create_engine(URL(**settings.DATABASE))

def create_reviews_table(engine):
    DeclarativeBase.metadata.create_all(engine)

class Review(DeclarativeBase):
    __tablename__ = 'reviews'

    id = Column(Integer, primary_key=True)
    review_id = Column('review_id', String, primary_key=True)
    resto_id = Column('resto_id', Integer, ForeignKey('restaurants.id'),
                      nullable=True)
    url = Column('url', String)
    resto_name = Column('resto_name', String)
I've set up alembic/env.py as per the tutorial instructions, feeding my model's metadata into target_metadata.
When I run
$: alembic current
I get the following error:
sqlalchemy.exc.InvalidRequestError: Naming convention including %(constraint_name)s token requires that constraint is explicitly named.
In the docs they say that "This same feature [generating names for columns using a naming convention] takes effect even if we just use the Column.unique flag", so I'm thinking that there shouldn't be a problem (they go on to give an example using a ForeignKey that isn't named, too).
Do I need to go back and give all my constraints explicit names, or is there a way to do it automatically?
Just modify the "ck" entry in convention to "ck": "ck_%(table_name)s_%(column_0_name)s". It works for me.
Refer to the SQLAlchemy docs.
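That is, the convention from the question with only the "ck" entry changed so it no longer requires the %(constraint_name)s token:

convention = {
    "ix": "ix_%(column_0_label)s",
    "uq": "uq_%(table_name)s_%(column_0_name)s",
    "ck": "ck_%(table_name)s_%(column_0_name)s",   # was %(constraint_name)s
    "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    "pk": "pk_%(table_name)s"
}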
What this error message is telling you is that you should name your constraints explicitly. The constraints it is referring to are Boolean, Enum, etc., not foreign keys or primary keys.
So go through your tables, and wherever you have a Boolean or Enum, add a name to it. For example:
is_active = Column(Boolean(name='is_active'))
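And the same idea for a hypothetical Enum column (the column and type names here are just illustrative):

status = Column(Enum('draft', 'published', name='review_status'))  # explicit name, as for Boolean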
That's what you need to do.
This does not aim to be a definitive answer, and it also fails to answer your immediate technical question, but could it be a "philosophical" problem? Either your SQLAlchemy code is the source of truth as far as the database is concerned, or the RDBMS is. Faced with this mixed situation, where each of the two holds part of it, I would see two avenues:
The one that you are exploring: you modify the database's schema to match the SQLAlchemy model and make your Python code the master. This is the most intuitive, but it may not always be possible, for both technical and administrative reasons.
Accepting that the RDBMS has information that SQLAlchemy doesn't have, which is fortunately not relevant for day-to-day work. Your best chance is to use another migration tool (ETL) that will reverse-engineer the database before migrating it. After the migration is complete you could give back control of the new instance to SQLAlchemy (which may require some adjustments to the new DB or to the model).
There is no way to tell in advance which approach will work, since both have their own challenges. But I would give some thought to the second method.
I've had some luck setting naming_convention back to {} in each older migration so that they run with the correct historical context.
I'm still entirely unsure what kind of interesting side effects this might have.
I believe this is trivial, but I am fairly new to Python.
I am trying to create a model using google app engine.
Basically, from an E/R point of view,
I have 2 objects with a join table (the join table captures the point in time of the join)
Something like this
Person       | Idea       | Person_Idea
-------------|------------|--------------
person.key   | idea.key   | person.key
             |            | idea.key
             |            | date_of_idea
My Python code would look like:
from google.appengine.ext import db

class Person(db.Model):
    pass  # some properties here....

class Idea(db.Model):
    pass  # some properties here....

class IdeaCreated(db.Model):
    person = db.ReferenceProperty(Person)
    idea = db.ReferenceProperty(Idea)
    created = db.DateTimeProperty(auto_now_add=True)
What I want is a convenient way to get all the ideas a person has (bypassing the IdeaCreated objects); sometimes I will need the list of ideas directly.
The only way I can think of to do this is to add the following method to the Person class:
def allIdeas(self):
    ideas = []
    for ideacreated in self.ideacreated_set:
        ideas.append(ideacreated.idea)
    return ideas
Is this the only way to do it, or is there a nicer way that I am missing?
I assume I could also use GQL and bypass hydrating the IdeaCreated instances (I'm not sure of the exact syntax), but putting a GQL query there smells wrong to me.
You should use the person as an ancestor/parent of the idea:
idea = Idea(parent=some_person, other_field=field_value).put()
Then you can query all ideas where some_person is the ancestor:
persons_ideas = Idea.all().ancestor(some_person_key).fetch(1000)
The ancestor key will be included in the Idea entity's key, and you won't be able to change the ancestor once the entity is created.
I highly suggest you use ndb instead of db: https://developers.google.com/appengine/docs/python/ndb/
With ndb you could even use StructuredProperty or LocalStructuredProperty:
https://developers.google.com/appengine/docs/python/ndb/properties#structured
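A rough sketch of what that could look like; the model and property names here are illustrative assumptions, not from the original answer:

from google.appengine.ext import ndb

class IdeaRef(ndb.Model):
    # embedded record: which idea, and when the link was created
    idea_key = ndb.KeyProperty(kind='Idea')
    created = ndb.DateTimeProperty()

class Person(ndb.Model):
    # each person carries its own embedded list of idea references
    ideas_created = ndb.StructuredProperty(IdeaRef, repeated=True)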
EDIT:
If you need a many-to-many relationship, look into list properties and store the Person keys in that property. Then you can query for all Ideas with a given key in that property.
class Idea(db.Model):
    person = db.StringListProperty()

idea = Idea(person=[str(person.key())], ...)
idea.put()
Add another person to the idea:
idea.person.append(str(another_person.key()))
idea.put()
ideas = Idea.all().filter('person =', str(person.key())).fetch(1000)
look into https://developers.google.com/appengine/docs/python/datastore/typesandpropertyclasses#ListProperty
I'm fairly new (but not brand new) to Python, SQLAlchemy and PostgreSQL, and I'm trying hard to understand inheritance.
As I am taking over another programmer's code, I need to understand what is necessary, and where, for the inheritance concept to work.
My questions are:
Is it possible to rely only on SQLAlchemy for inheritance? In other words, can SQLAlchemy apply inheritance to PostgreSQL tables that were created without specifying INHERITS?
Is the declarative_base machinery (SQLAlchemy) necessary to use inheritance the proper way? If so, we'll have to rewrite everything, so please don't discourage me.
Assuming we can use Table instances, empty entity classes and mapper(), could you give me a (very simple) example of how to go through the process properly (or a link to an easily understandable tutorial; I have not found one easy enough yet)?
The real world we are working on is real estate objects. So we basically have
- one table immobject(id, createtime)
- one table objectattribute(id, immoobject_id, oatype)
- several attribute tables: oa_attributename(oa_id, attributevalue)
Thanks for your help in advance.
Vincent
Welcome to Stack Overflow: in the future, if you have more than one question, you should provide a separate post for each. Feel free to link them together if it might help provide context.
Table inheritance in Postgres is a very different thing and solves a different set of problems from class inheritance in Python, and SQLAlchemy makes no attempt to combine them.
When you use table inheritance in Postgres, you're doing some trickery at the schema level so that more elaborate constraints can be enforced than might be easy to express in other ways. Once you have designed your schema, applications aren't normally aware of the inheritance; if they insert a row, it just magically appears in the parent table (much like a view). This is useful, for instance, for making some kinds of bulk operations more efficient (you can just drop the table for the month of January).
This is a fundamentally different idea from inheritance as seen in OOP (in Python or otherwise, with relational persistence or otherwise). In that case, the application is aware that two types are related, and that the subtype is a permissible substitute for the supertype: "A holding is an address, a contact has an address, therefore a contact can have a holding."
Which of these (mostly orthogonal) tools you need depends on the application. You might need neither, you might need both.
SQLAlchemy's mechanisms for working with object inheritance are flexible and robust; you should use them in favor of a home-built solution if they are compatible with your particular needs (this should be true for almost all applications).
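For what it's worth, here is a minimal sketch of SQLAlchemy joined-table inheritance, loosely based on the question's tables; the class, table and column names are assumptions for illustration only:

from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class ObjectAttribute(Base):
    __tablename__ = 'objectattribute'
    id = Column(Integer, primary_key=True)
    oatype = Column(String(50))
    # oatype tells SQLAlchemy which subclass each row belongs to
    __mapper_args__ = {'polymorphic_on': oatype,
                       'polymorphic_identity': 'objectattribute'}

class SurfaceAttribute(ObjectAttribute):
    # subtype rows live in their own table, joined to the parent table by key
    __tablename__ = 'oa_surface'
    oa_id = Column(Integer, ForeignKey('objectattribute.id'), primary_key=True)
    attributevalue = Column(String)
    __mapper_args__ = {'polymorphic_identity': 'surface'}

Querying ObjectAttribute then returns instances of the appropriate subclass for each row.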
The declarative extension is a convenience; it allows you to describe the mapped table, the Python class and the mapping between the two in one 'thing' instead of three. It makes your code more DRY. It is, however, only a convenience layered on top of "classic" SQLAlchemy, and it isn't necessary by any measure.
If you find that you need table inheritance that's visible from SQLAlchemy, your mapped classes won't be any different from those not using that feature; tables with inheritance are still normal relations (like tables or views) and can be mapped without knowledge of the inheritance in the Python code.
For your #3, you don't necessarily have to declare empty entity classes to use mapper. If your application doesn't need fancy properties, you can just use introspection and metaclasses to model the existing tables without defining them. Here's what I did:
import sqlalchemy
import sqlalchemy.orm

mymetadata = sqlalchemy.MetaData()
myengine = sqlalchemy.create_engine(...)

def named_table(tablename):
    u"return a sqlalchemy.Table object given a SQL table name"
    return sqlalchemy.Table(tablename, mymetadata, autoload=True, autoload_with=myengine)

def new_bound_class(engine, table):
    u"returns a new ORM class (processed by sqlalchemy.orm.mapper) given a sqlalchemy.Table object"
    fieldnames = table.c.keys()

    def format_attributes(obj, transform):
        attributes = [u'%s=%s' % (x, transform(getattr(obj, x))) for x in fieldnames]
        return u', '.join(attributes)

    class DynamicORMClass(object):
        def __init__(self, **kw):
            u"Keyword arguments may be used to initialize fields/columns"
            for key in kw:
                if key in fieldnames:
                    setattr(self, key, kw[key])
                else:
                    raise KeyError, '%s is not a valid field/column' % (key,)

        def __repr__(self):
            return u'%s(%s)' % (self.__class__.__name__, format_attributes(self, repr))

        def __str__(self):
            return u'%s(%s)' % (str(self.__class__), format_attributes(self, str))

    DynamicORMClass.__doc__ = u"This is a dynamic class created using SQLAlchemy based on table %s" % (table,)
    # map the class to the table and return the class itself, so it can be
    # instantiated directly as in the usage example below
    sqlalchemy.orm.mapper(DynamicORMClass, table)
    return DynamicORMClass

def named_orm_class(table):
    u"returns a new ORM class (processed by sqlalchemy.orm.mapper) given a table name or object"
    if not isinstance(table, sqlalchemy.Table):
        table = named_table(table)
    return new_bound_class(myengine, table)
Example of use:
>>> myclass = named_orm_class('mytable')
>>> session = Session()
>>> obj = myclass(name='Fred', age=25, ...)
>>> session.add(obj)
>>> session.commit()
>>> print str(obj) # will print all column=value pairs
I beefed up my versions of new_bound_class and named_orm_class a little more with decorators, etc. to provide extra capabilities, and you can too. Of course, under the covers, it is declaring an empty entity class. But you don't have to do it, except this one time.
This will tide you over until you decide that you're tired of doing all those joins yourself, and wonder why you can't just have an object attribute that does a lazy select query against related classes whenever you use it. That's when you make the leap to declarative (or Elixir).
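For a taste of that, here is a minimal declarative sketch, loosely modeled on the question's tables (the names are assumptions), where the attributes collection is loaded lazily on first access:

from sqlalchemy import Column, DateTime, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class ImmObject(Base):
    __tablename__ = 'immobject'
    id = Column(Integer, primary_key=True)
    createtime = Column(DateTime)
    # accessing immobject.attributes emits a lazy SELECT the first time
    attributes = relationship('ObjectAttribute', backref='immobject')

class ObjectAttribute(Base):
    __tablename__ = 'objectattribute'
    id = Column(Integer, primary_key=True)
    immoobject_id = Column(Integer, ForeignKey('immobject.id'))
    oatype = Column(String(50))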