Technologies I am using:
python 3.6
postgres 10.4
flask
flask_migrate
flask_sqlalchemy
So far I have relied on the autogenerated migration scripts that result from calling python app.py db migrate. I then apply these migration scripts by calling python app.py db upgrade. However, my most recent change involves modifying an Enum that I have used to create a column. Here is a simplified example of my enum:
class EventType(Enum):
    started = 1
    completed = 2
    aborted = 2
(Note the typo with the repeated value of 2.)
Here is what I am attempting to change the Enum to:
class EventType(Enum):
    started = 1
    completed = 2
    aborted = 3
    failed = 4
My changes were to fix the typo and to add a new value to the enum.
Here is the (simplified) model that makes use of that enum:
class Event(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    type = db.Column(db.Enum(EventType))
A call to python app.py db migrate did not detect any changes, and I have read in the documentation that Alembic (which is used under the hood by flask_migrate) does not automatically detect enum changes.
This question from ~6 years ago seems to indicate there is a better way to handle this problem after Postgres 9.4.
I am looking for the specific steps I need to take to either manually write my own migration script or to get flask_migrate to detect this change and generate the script for me. Please let me know if I need to provide any more information to help with answering this question.
Here was my ultimate solution to my problem:
1) Generate a blank revision file with python app.py db revision --message "new event types"
2) Modify the new file by entering the following lines into the upgrade() method:
op.execute("COMMIT")
op.execute("ALTER TYPE eventtype ADD VALUE 'failed'")
3) Save and apply the usual way with python app.py db upgrade.
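For reference, a rough sketch of what the revision file from step 1 might look like after step 2. The revision identifiers are placeholders that Alembic fills in for you, and the COMMIT is there because Postgres 10 refuses to run ALTER TYPE ... ADD VALUE inside a transaction block:

"""new event types"""
from alembic import op

# Placeholder identifiers; Alembic generates the real values.
revision = '<generated id>'
down_revision = '<previous id>'


def upgrade():
    # ALTER TYPE ... ADD VALUE cannot run inside a transaction block on
    # Postgres 10, so end Alembic's transaction first.
    op.execute("COMMIT")
    op.execute("ALTER TYPE eventtype ADD VALUE 'failed'")


def downgrade():
    # Removing an enum value is not straightforward in Postgres; left as a no-op.
    pass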
Notes:
This does not deal with the typo value. From what I could tell, Postgres does not store the Python enum value anywhere and doesn't care what it is.
This does not remove the values in the downgrade() method. I couldn't find a straightforward way to do that so I just ignored it. In my case I do not think it will matter if the downgrade does not remove those extra values.
I learned the name of my type (eventtype) by reading through the migration file that initially created the table that included that type.
I find myself stuck on this problem, and repeated Googling, checking SO, and reading numerous docs have not helped me get the right answer, so I hope this isn't a bad question.
One entity I want to create is an event taking place during a convention. I'm giving it the property start_time = ndb.TimeProperty(). I also have a property date = messages.DateProperty(), and I'd like to keep the two discrete (in other words, not using DateTimeProperty).
When a user enters information to create an event, I want to specify defaults for any fields they do not enter at creation and I'd like to set the default time as midnight, but I can't seem to format it correctly so the service accepts it (constant 503 Service Unavailable response when I try it using the API explorer).
Right now I've set things up like this (some unnecessary details removed):
event_defaults = {
    ...
    "start_time": 0000,
    ...
}
and then I try looping over my default values to enter them into a dictionary which I'll use to .put() the info on the server.
data = {field.name: getattr(request, field.name) for field in request.all_fields()}
for default in event_defaults:
    if data[default] in (None, []):
        data[default] = event_defaults[default]
        setattr(request, default, event_defaults[default])
In the logs, I see the error Encountered unexpected error from ProtoRPC method implementation: BadValueError (Expected time, got 0). I have also tried using the time and datetime modules, but I must be using them incorrectly, because I still receive errors.
I suppose I could work around this problem by using ndb.StringProperty() instead, and just deal with strings, but then I'd feel like I would be missing out on a chance to learn more about how GAE and NDB work (all of this is for a project on udacity.com, so learning is certainly the point).
So, how can I structure my default time properly for midnight? Sorry for the wall of text.
Link to code on github. The conference.py file contains the code I'm having the trouble with, and models.py contains my definitions for the entities I'm working with.
Update: I'm a dummy. I had my model class using a TimeProperty() and the corresponding message class using a StringField(), but I was never making the proper conversion between the expected types. That's why I could never seem to give it the right thing: the code expected two different types at different points. Issue resolved.
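For anyone who hits the same mismatch, here is a rough sketch of that conversion, assuming the message side is a StringField carrying something like "13:30" and the model side is the ndb.TimeProperty; the names data, 'start_time' and event are carried over from, or invented for, the snippets above.

import datetime

# Message -> model: the StringField arrives as text, but TimeProperty wants datetime.time
time_str = data.get('start_time')          # e.g. "13:30", possibly None or ''
if time_str:
    data['start_time'] = datetime.datetime.strptime(time_str, "%H:%M").time()
else:
    data['start_time'] = datetime.time()   # midnight default

# Model -> message: convert back to a string when copying the entity into the response
# (event is a hypothetical Event entity)
start_time_str = event.start_time.strftime("%H:%M") if event.start_time else ""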
TimeProperty expects a datetime.time value
import datetime
event_defaults = {
    ...
    "start_time": datetime.time(),
    ...
}
More in the docs: https://cloud.google.com/appengine/docs/python/ndb/entity-property-reference#Date_and_Time
Use the datetime module to convert it into a valid ndb TimeProperty value:
if data['time']:
    # the incoming value is a string like "13:30"; "%H:%M" needs only the first 5 characters
    data['time'] = datetime.datetime.strptime(data['time'][:5], "%H:%M").time()
else:
    data['time'] = datetime.datetime.now().time()
PS: don't forget to replace data['time'] with your own field name.
I'm attempting to do a bulk update with sqlalchemy.
What does work is selecting the objects to update and then setting the attributes inside a with session.begin_nested(): block. However, the time it takes to do the actual save is slow.
When I attempt to use a bulk operation instead via session.bulk_save_objects or session.bulk_update_mappings I get the following exception:
A value is required for bind parameter 'schema_table_table_id'
[SQL: u'UPDATE schema.table SET updated_col=%(updated_col)s
WHERE schema.table.table_id = %(schema_table_table_id)s']
[parameters: [{'updated_col': 'some_val'}]]
It looks like bulk_save_objects uses the same logic path as bulk_update_mappings.
I actually don't even understand how bulk_update_mappings is supposed to work, because you provide the updated values and a reference class, but the primary keys associated with those values are missing from your list. That essentially seems to be the problem here. I tried using bulk_update_mappings and provided the generated dictionary key used for the primary key parameter (schema_table_table_id in my example), and it just ended up being ignored. If I used the id attribute name instead, it would update the primary key in the generated SQL but still did not provide the needed parameter in the WHERE clause.
This is using SQLAlchemy 1.0.12, which is the latest version on pip.
My suspicion is that this is a bug.
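For comparison, the documented shape of a bulk_update_mappings call includes the primary-key attribute in every mapping dictionary: keys that are part of the primary key feed the WHERE clause, and the remaining keys feed SET. A minimal sketch with illustrative names (MyTable, table_id, updated_col) rather than the real schema:

# Each mapping must carry the primary-key attribute (table_id here); it is used
# for the WHERE clause, while the remaining keys become the SET clause.
session.bulk_update_mappings(
    MyTable,
    [
        {"table_id": 1, "updated_col": "some_val"},
        {"table_id": 2, "updated_col": "other_val"},
    ],
)
session.commit()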
In the old Google App Engine datastore API, "required" and "default" could be used together in property definitions. Using ndb I get a
ValueError: repeated, required and default are mutally exclusive.
Sample code:
from google.appengine.ext import ndb
from google.appengine.ext import db

class NdbCounter(ndb.Model):
    # raises ValueError
    count = ndb.IntegerProperty(required=True, default=1)

class DbCounter(db.Model):
    # Doesn't raise ValueError
    count = db.IntegerProperty(required=True, default=1)
I want to instantiate a Counter without having to specify a value. I also want to prevent someone from overriding that value with None. The example above is contrived. I could probably live without the required attribute and instead add an increment() method (sketched below). Still, I don't see the reason why required and default are mutually exclusive.
Is it a bug or a feature?
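A minimal sketch of the workaround hinted at above: drop required=True, keep the default, and add an increment() method that guards against the value having been set to None.

from google.appengine.ext import ndb

class NdbCounter(ndb.Model):
    # default alone avoids the ValueError; required=True is dropped
    count = ndb.IntegerProperty(default=1)

    def increment(self, amount=1):
        # Guard against the value having been explicitly set to None
        self.count = (self.count or 0) + amount
        return self.count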
I think you are right. Perhaps I was confused when I wrote that part of the code. It makes sense that "required=True" means "do not allow writing the value None", so it should be possible to combine this with a default value. Please file a feature request in the NDB tracker: http://code.google.com/p/appengine-ndb-experiment/issues/list
Note that for repeated properties things are more complicated: repeated will probably continue to be incompatible with either required or default, even if the above feature is implemented.
I'm not sure what was intended; here is the "explanation" from appengine.ext.ndb.model.py:
The repeated, required and default options are mutually exclusive: a
repeated property cannot be required nor can it specify a default
value (the default is always an empty list and an empty list is always
an allowed value), and a required property cannot have a default.
Beware that ndb has some other really annoying behaviour (Text > 500 bytes is not possible without monkey-patching the Expando model, filtering by .IN([]) raises an exception, ...).
So unless you need the speed improvements from its caching, you might consider staying with ext.db for now.
I have some database structure; as most of it is irrelevant for us, I'll describe just some relevant pieces. Let's take the Item object as an example:
items_table = Table("invtypes", gdata_meta,
                    Column("typeID", Integer, primary_key=True),
                    Column("typeName", String, index=True),
                    Column("marketGroupID", Integer, ForeignKey("invmarketgroups.marketGroupID")),
                    Column("groupID", Integer, ForeignKey("invgroups.groupID"), index=True))

mapper(Item, items_table,
       properties={"group": relation(Group, backref="items"),
                   "_Item__attributes": relation(Attribute, collection_class=attribute_mapped_collection('name')),
                   "effects": relation(Effect, collection_class=attribute_mapped_collection('name')),
                   "metaGroup": relation(MetaType,
                                         primaryjoin=metatypes_table.c.typeID == items_table.c.typeID,
                                         uselist=False),
                   "ID": synonym("typeID"),
                   "name": synonym("typeName")})
I want to achieve some performance improvements in the SQLAlchemy/database layer, and I have a couple of ideas:
1) Requesting the same item twice:
item = session.query(Item).get(11184)
item = None  # reference to item is lost, object is garbage collected
item = session.query(Item).get(11184)
Each request generates and issues an SQL query. To avoid this, I use two custom maps for Item objects:
itemMapId = {}
itemMapName = {}

#cachedQuery(1, "lookfor")
def getItem(lookfor, eager=None):
    if isinstance(lookfor, (int, float)):
        id = int(lookfor)
        if eager is None and id in itemMapId:
            item = itemMapId[id]
        else:
            item = session.query(Item).options(*processEager(eager)).get(id)
            itemMapId[item.ID] = item
            itemMapName[item.name] = item
    elif isinstance(lookfor, basestring):
        if eager is None and lookfor in itemMapName:
            item = itemMapName[lookfor]
        else:
            # Items have unique names, so we can fetch just the first result
            # without ensuring its uniqueness
            item = session.query(Item).options(*processEager(eager)).filter(Item.name == lookfor).first()
            itemMapId[item.ID] = item
            itemMapName[item.name] = item
    return item
I believe SQLAlchemy does similar object tracking, at least by primary key (item.ID). If it does, I can wipe both maps (although wiping the name map will require minor modifications to the application which uses these queries) so as not to duplicate functionality, and use stock methods instead. The actual question is: if such functionality exists in SQLAlchemy, how do I access it?
2) Eager loading of relationships often helps to save a lot of requests to the database. Say I'll definitely need the following set of properties on item = Item():
item.group (Group object, according to groupID of our item)
item.group.items (fetch all items from items list of our group)
item.group.items.metaGroup (metaGroup object/relation for every item in the list)
If I have some item ID and no item is loaded yet, I can request it from the database, eagerly loading everything I need: SQLAlchemy will join the group, its items and the corresponding metaGroups within a single query. If I accessed them with default lazy loading, SQLAlchemy would need to issue 1 query to grab the item + 1 to get the group + 1 per item in the list + 1 per item to get each item's metaGroup, which is wasteful.
2.1) But what if I already have an Item object fetched, and some of the properties which I want to load are already loaded? As far as I understand, when I re-fetch some object from the database, its already loaded relations do not become unloaded, am I correct?
2.2) If I have an Item object fetched and want to access its group, I can just getGroup using item.groupID, applying any eager statements I'll need ("items" and "items.metaGroup"). It should properly load the group and its requested relations without touching item stuff. Will SQLAlchemy properly map this fetched group to item.group, so that when I access item.group it won't fetch anything from the underlying database?
2.3) If I have the following things fetched from the database: the original item, item.group and some portion of the items from the item.group.items list, some of which may have metaGroup loaded, what would be the best strategy for completing the data structure to the same state as the eager list above: re-fetch the group with ("items", "items.metaGroup") eager load, or check each item from the items list individually, and if an item or its metaGroup is not loaded, load them? It seems to depend on the situation, because if everything has already been loaded some time ago, issuing such a heavy query is pointless. Does SQLAlchemy provide a way to track whether some object relation is loaded, with the ability to look deeper than just one level?
As an illustration to 2.3: I can fetch the group with ID 83, eagerly fetching "items" and "items.metaGroup". Is there a way to determine from an item (which has a groupID of 83) whether it has "group", "group.items" and "group.items.metaGroup" loaded or not, using SQLAlchemy tools (in this case all of them should be loaded)?
To force loading lazy attributes just access them. This is the simplest way and it works fine for relations, but it is not as efficient for Columns (you will get a separate SQL query for each column in the same table). You can get a list of all unloaded properties (both relations and columns) from sqlalchemy.orm.attributes.instance_state(obj).unloaded.
You don't use deferred columns in your example, but I'll describe them here for completeness. The typical scenario for handling deferred columns is the following:
Decorate selected columns with deferred(). Combine them into one or several groups by using the group parameter of deferred().
Use the undefer() and undefer_group() options in queries when desired.
Accessing a deferred column that was put in a group will load all columns in that group.
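A small self-contained sketch of that workflow, using an invented notes table and Note class rather than anything from the schema above:

from sqlalchemy import Table, Column, Integer, String, Text
from sqlalchemy.orm import mapper, deferred, undefer, undefer_group

# Purely illustrative table/class, not part of the real schema.
notes_table = Table("notes", gdata_meta,
                    Column("noteID", Integer, primary_key=True),
                    Column("title", String),
                    Column("body", Text))

class Note(object):
    pass

mapper(Note, notes_table, properties={
    # "body" is loaded only on demand, as part of the "bulky" group
    "body": deferred(notes_table.c.body, group="bulky"),
})

# Opting back in per query:
session.query(Note).options(undefer("body")).all()        # one column
session.query(Note).options(undefer_group("bulky")).all() # the whole group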
Unfortunately this doesn't work in reverse: you can combine columns into groups without deferring their loading by default with column_property(Column(…), group=…), but the defer() option won't affect them (it works for Columns only, not column properties, at least in 0.6.7).
To force loading deferred column properties, session.refresh(obj, attribute_names=…) suggested by Nathan Villaescusa is probably the best solution. The only disadvantage I see is that it expires the attributes first, so you have to ensure there are no already-loaded attributes among those passed as the attribute_names argument (e.g. by using the intersection with state.unloaded).
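A brief sketch of that intersection idea, reusing the relation names from the Item mapper above:

from sqlalchemy.orm.attributes import instance_state

# Only pass attributes that are actually unloaded to refresh(), so
# already-loaded data isn't expired and re-queried.
wanted = {"group", "effects", "metaGroup"}
to_refresh = wanted & instance_state(item).unloaded
if to_refresh:
    session.refresh(item, attribute_names=list(to_refresh))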
Update
1) SQLAlchemy does track loaded objects. That's how the ORM works: there must be only one object in the session for each identity. Its internal cache is weak by default (use weak_identity_map=False to change this), so the object is expunged from the cache as soon as there is no reference to it in your code. SQLAlchemy won't issue an SQL request for query.get(pk) when the object is already in the session. But this works for the get() method only, so query.filter_by(id=pk).first() will issue an SQL request and refresh the object in the session with the loaded data.
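A short illustration of that behaviour (the weak_identity_map argument is the one mentioned above; whether it is accepted depends on the SQLAlchemy version):

item = session.query(Item).get(11184)   # first call issues SQL
same = session.query(Item).get(11184)   # no SQL while item is still referenced
assert same is item

# With the default weak map, dropping all references lets the object be
# collected, so a later get() hits the database again. A strong map keeps it:
# session = sessionmaker(bind=engine, weak_identity_map=False)()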
2) Eager loading of relations will lead to fewer requests, but it's not always faster. You have to check this for your database and data.
2.1) Refetching data from the database won't unload objects bound via relations.
2.2) item.group is loaded using the query.get() method, so it won't lead to an SQL request if the object is already in the session.
2.3) Yes, it depends on the situation. In most cases the best thing is to hope SQLAlchemy will use the right strategy :). For an already loaded relation you can check whether the related objects' relations are loaded via state.unloaded, and so on recursively to any depth. But when a relation is not loaded yet, you can't know whether the related objects and their relations are already loaded: even when the relation is not yet loaded, the related object(s) might already be in the session (just imagine you request the first item, load its group and then request another item that has the same group). For your particular example I see no problem with just checking state.unloaded recursively.
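One possible shape for such a recursive check (a sketch only; it assumes collections are lists, sets, tuples or attribute-mapped dicts, and that a loaded scalar relation may be None):

from sqlalchemy.orm.attributes import instance_state

def is_loaded(obj, path):
    """Check whether a dotted relation path such as "group.items.metaGroup"
    is already fully loaded on obj, using state.unloaded recursively."""
    name, _, rest = path.partition(".")
    if name in instance_state(obj).unloaded:
        return False
    if not rest:
        return True
    value = getattr(obj, name)          # safe: this attribute is already loaded
    if value is None:
        return True
    if isinstance(value, dict):         # attribute_mapped_collection
        children = value.values()
    elif isinstance(value, (list, set, tuple)):
        children = value
    else:
        children = [value]
    return all(is_loaded(child, rest) for child in children)

# e.g. is_loaded(item, "group.items.metaGroup")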
1)
From the Session documentation:
[The Session] is somewhat used as a cache, in that it implements the identity map pattern, and stores objects keyed to their primary key. However, it doesn't do any kind of query caching. ... It's only when you say query.get({some primary key}) that the Session doesn't have to issue a query.
2.1) You are correct, relationships are not modified when you refresh an object.
2.2) Yes, the group will be in the identity map.
2.3) I believe your best bet will be to attempt to reload the entire group.items in a single query. From my experience it is usually much quicker to issue one large request than several smaller ones. The only time it would make sense to reload just a specific group.item is if there were exactly one of them that needed to be loaded. Though in that case you are doing one large query instead of one small one, so you don't actually reduce the number of queries.
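A hedged sketch of that single-query reload, assuming SQLAlchemy 0.9+ loader-option chaining (older versions would use joinedload_all("items.metaGroup")) and assuming Group exposes its primary key as ID, mirroring Item:

from sqlalchemy.orm import joinedload

# Re-querying (rather than get()) emits SQL and refreshes the group already in
# the session, pulling in items and each item's metaGroup in one round trip.
group = (session.query(Group)
         .options(joinedload("items").joinedload("metaGroup"))
         .filter(Group.ID == 83)
         .one())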
I have not tried it, but I believe you should be able to use the sqlalchemy.orm.util.identity_key method to determine whether an object is in SQLAlchemy's identity map. I would be interested to find out what calling identity_key(Group, 83) returns.
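An untested sketch of that check (the exact key format returned by identity_key varies between SQLAlchemy versions, e.g. (Group, (83,)) on older releases):

from sqlalchemy.orm.util import identity_key

key = identity_key(Group, 83)
cached_group = session.identity_map.get(key)   # None if not currently in the session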
Initial Question)
If I understand correctly, you have an object that you fetched from the database where some of its relationships were eager-loaded, and you would like to fetch the rest of the relationships with a single query? I believe you may be able to use the Session.refresh() method, passing in the names of the relationships that you want to load.