SQLAlchemy commit changes to object modified through __dict__ - python

I am developing a multiplayer game. When a user uses an object from the inventory, it should update the user's current creature's stats with the values of the object's attributes.
This is my code:
try:
    obj = self._get_obj_by_id(self.query['ObjectID']).first()
    # Get user's current creature
    cur_creature = self.user.get_current_creature()
    # Applying object attributes to user attributes
    for attribute in obj.attributes:
        cur_creature.__dict__[str(attribute.Name)] += attribute.Value
    dbObjs.session.commit()
except (KeyError, AttributeError) as err:
    self.query_failed(err)
Now, this doesn't commit things properly for some reason, so I tried:
cur_creature.Health = 100
logging.warning(cur_creature.Health)
dbObjs.session.commit()
Which works, but is not very convenient (since I would need a big if statement to update the different stats of the creature).
So I tried:
cur_creature.__dict__['Health'] = 100
logging.warning(cur_creature.Health)
dbObjs.session.commit()
I get 100 in logs, but no changes, so I tried:
cur_creature.__dict__['Health'] = 100
cur_creature.Health = cur_creature.__dict__['Health']
logging.warning(cur_creature.Health)
dbObjs.session.commit()
Still '100' in logs, but no changes, so I tried:
cur_creature.__dict__['Health'] = 100
cur_creature.Health = 100
logging.warning(cur_creature.Health)
dbObjs.session.commit()
Which still writes 100 in the logs, but doesn't commit changes to the database.
Now, this is weird, since it differs from the working version only in that it has this line on top:
cur_creature.__dict__['Health'] = 100
Summary: if I modify an attribute directly, commit works fine. If instead I modify an attribute through the instance's __dict__, then, no matter how I modify it afterwards, the changes are not committed to the db.
Any ideas?
Thanks in advance
UPDATE 1:
Also, this updates Health in the db, but not Hunger:
cur_creature.__dict__['Hunger'] = 0
cur_creature.Health = 100
cur_creature.Hunger = 0
logging.warning(cur_creature.Health)
dbObjs.session.commit()
So just accessing the dictionary is not a problem for attributes in general, but modifying an attribute through the dictionary prevents the changes to that attribute from being committed.
Update 2:
As a temporary fix, I've overridden __setitem__ in the class Creatures:
def __setitem__(self, key, value):
    if key == "Health":
        self.Health = value
    elif key == "Hunger":
        self.Hunger = value
So that the new code for 'use object' is:
try:
    obj = self._get_obj_by_id(self.query['ObjectID']).first()
    # Get user's current creature
    cur_creature = self.user.get_current_creature()
    # Applying object attributes to user attributes
    for attribute in obj.attributes:
        cur_creature[str(attribute.Name)] += attribute.Value
    dbObjs.session.commit()
except (KeyError, AttributeError) as err:
    self.query_failed(err)
Update 3:
By having a look at the suggestions in the answers, I settled on this solution:
In Creatures
def __setitem__(self, key, value):
    if key in self.__dict__:
        setattr(self, key, value)
    else:
        raise KeyError(key)
In the other method
# Applying object attributes to user attributes
for attribute in obj.attributes:
    cur_creature[str(attribute.Name)] += attribute.Value
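As a side note on Update 3, cur_creature[...] += attribute.Value reads the value before writing it back, so the class also needs a matching __getitem__. A minimal plain-Python sketch of the pattern (the stats are made up; the real Creatures is an ORM model):

```python
class Creature:
    def __init__(self):
        self.Health = 50
        self.Hunger = 10

    def __getitem__(self, key):
        # Needed by `creature[key] += delta`, which reads before it writes.
        if key in self.__dict__:
            return getattr(self, key)
        raise KeyError(key)

    def __setitem__(self, key, value):
        if key in self.__dict__:
            setattr(self, key, value)  # normal attribute protocol, so an ORM would see it
        else:
            raise KeyError(key)

c = Creature()
for name, delta in [("Health", 25), ("Hunger", -5)]:
    c[name] += delta

print(c.Health, c.Hunger)  # 75 5
```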

The problem does not reside in SQLAlchemy but is due to Python's descriptors mechanism. Every Column attribute is a descriptor: this is how SQLAlchemy 'hooks' the attribute retrieval and modification to produce database requests.
Let's try with a simpler example:
class Desc(object):
    def __get__(self, obj, type=None):
        print '__get__'
    def __set__(self, obj, value):
        print '__set__'

class A(object):
    desc = Desc()

a = A()
a.desc      # prints '__get__'
a.desc = 2  # prints '__set__'
However, if you go through an instance dictionary and set another value for 'desc', you bypass the descriptor protocol (see Invoking Descriptors):
a.__dict__['desc'] = 0 # Does not print anything !
Here, we just created a new instance attribute called 'desc' with a value of 0. The Desc.__set__ method was never called, and in your case SQLAlchemy wouldn't get a chance to 'catch' the assignment.
The solution is to use setattr, which is exactly equivalent to writing a.desc = 1:
setattr(a, 'desc', 1) # Prints '__set__'
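The same bypass can be reproduced with a descriptor that actually stores and tracks values (a plain-Python sketch, no SQLAlchemy; Tracked is a made-up stand-in for an instrumented attribute):

```python
class Tracked:
    """Data descriptor that records every assignment made through it."""
    def __init__(self, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__.get('_' + self.name)

    def __set__(self, obj, value):
        obj.__dict__['_' + self.name] = value
        obj.__dict__.setdefault('_dirty', set()).add(self.name)

class Creature:
    Health = Tracked('Health')

c = Creature()
c.Health = 100               # goes through __set__: the change is recorded
print(c.__dict__['_dirty'])  # {'Health'}

c.__dict__['Health'] = 50    # bypasses the descriptor: nothing is recorded
setattr(c, 'Health', 75)     # same as c.Health = 75: recorded again
print(c.Health)              # 75
```

Since Tracked defines both __get__ and __set__ it is a data descriptor, so reading c.Health still goes through the descriptor even after the stray 'Health' entry was planted in the instance dict; only the write tracking was silently skipped.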

Don't use __dict__. Use getattr and setattr to modify attributes by name:
for attribute in obj.attributes:
    setattr(cur_creature, str(attribute.Name),
            getattr(cur_creature, str(attribute.Name)) + attribute.Value)
More info: setattr, getattr

Related

ZODB transactions for nested objects not working

I know that there is little development on ZODB these days, but it might be useful for someone still using ZODB in 2022, or there might be some obvious thing I'm missing:
When trying to store changes to persistent objects inside a ZODB.DB.transaction with block, they are not stored and no error is raised, while doing the same between transaction.begin() and transaction.commit() calls does work.
That is, the only way to currently use a with block is to change objects directly through conn.root(), which means that every persistent object that wants to store changes on itself must know the full path from the root to itself. That is impractical.
There is also another weird behavior: after storing an object for the first time, retrieving it returns the same object, while the second call and onward return a different object. This trips up tests that check whether something was stored successfully, since it only happens once.
The following code tries to store attributes in a two-level persistent hierarchy (simplified dev code):
import ZODB
import ZODB.FileStorage
from persistent.mapping import PersistentMapping
import transaction

store = ZODB.FileStorage.FileStorage("temp1.db")
db = ZODB.DB(store)

def get_init(name, obj):
    with db.transaction(f"creating root[{name}]") as conn:
        try:
            return conn.root()[name]
        except KeyError:
            conn.root()[name] = obj()
            return conn.root()[name]

class A:
    def __init__(self):
        self.cfg = PersistentMapping()

    def __setitem__(self, key, value) -> None:
        transaction.begin()
        self.cfg[key + ", inside block"] = value
        transaction.commit()
        with db.transaction():
            self.cfg[key + ", inside with"] = value  # does not work
        # these should be equivalent, no?

    def __iter__(self):
        return iter(self.cfg)

class Manager:
    def __init__(self):
        self.a1 = get_init("testing", PersistentMapping)  # set up the db, should only happen once

    def __setitem__(self, name, obj) -> None:
        """Registers in persistent storage"""
        with db.transaction(f"Adding testing:{name}") as conn:
            if name in conn.root()["testing"]:
                print(f"testing with same name {name} already exists in storage")
                return
            conn.root()["testing"][name] = obj

    def __getitem__(self, name: str):
        return db.open().root()["testing"][name]

dm = Manager()
initial = A()       # only relevant for first run
dm['a'] = initial   # only relevant for first run
fromdb1 = dm['a']
fromdb2 = dm['a']
with db.transaction() as conn:
    fromdb1.cfg['updated from outer txn, directly'] = 1  # does not work
    conn.root()['testing']['a'].cfg['updated from outer txn, through conn'] = 1
    # these should be equivalent but only the second one works
initial['new txn updated on initial'] = 1
fromdb1['new txn updated on retrieved 1'] = 1
fromdb2['new txn updated on retrieved 2'] = 1
print(f"initial obj - {initial.cfg}")
print(f"from db obj 1 - {fromdb1.cfg}")
print(f"from db obj 2 - {fromdb2.cfg}")
print(f"\nnew from db obj - {dm['a'].cfg}")
print(f"\nis the initial obj and the first obj from db the same: {initial is fromdb1}")
print(f"is the initial obj and the second obj from db the same: {initial is fromdb2}")
Unless I'm missing something, the expected result is for all of those approaches to work.
Any advice from people using ZODB?

Is it possible to catch empty nested attributes in python?

I'm trying to create a custom object that supports nested attributes.
I need to implement a specific kind of search.
If an attribute doesn't exist at the lowest level, I want to recurse and see if the attribute exists at a higher level.
I've spent all day trying to do this. The closest I've come is being able to print the attribute search path.
class MyDict(dict):
    def __init__(self):
        super(MyDict, self).__init__()

    def __getattr__(self, name):
        return self.__getitem__(name)

    def __getitem__(self, name):
        if name not in self:
            print name
            self[name] = MyDict()
        return super(MyDict, self).__getitem__(name)

config = MyDict()
config.important_key = 'important_value'
print 'important key is: ', config.important_key
print config.random.path.to.important_key
Output:
important key is: important_value
random
path
to
important_key
{}
What I need to happen is instead to see if important_key exists at the lowest level (config.random.path.to), then go up a level (config.random.path) and only return None if it doesn't exist at the top level.
Do you think this is possible?
Thank you so much!
Yes, it's possible. In your search routine, recur to the end of the path, checking for the desired attribute. At the bottom level, return the attribute if found, None otherwise. At each non-terminal level, recur to the next level down.
if end of path  # base case
    if attribute exists here
        return attribute
    else
        return None
else  # some upper level
    exists_lower = search(next level down)
    if exists_lower
        return exists_lower
    else
        if attribute exists here
            return attribute
        else
            return None
Does this pseudo code get you moving toward a solution?
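For what it's worth, the pseudocode above can be turned into a small runnable sketch (NestedDict and find are made-up illustrative names; a __setattr__ is added so assignments land in the dict rather than on the instance):

```python
class NestedDict(dict):
    """Illustrative container: attribute access reads/writes dict items,
    and missing names auto-create nested levels (as in the question)."""
    def __getattr__(self, name):
        if name not in self:
            self[name] = NestedDict()
        return self[name]

    def __setattr__(self, name, value):
        self[name] = value

def find(node, path, attr):
    """Recur to the end of `path`; return the deepest occurrence of
    `attr` at or above that point, or None if it exists nowhere."""
    if not path:                       # base case: lowest level
        return node.get(attr)
    lower = find(node.get(path[0], {}), path[1:], attr)
    if lower is not None:              # found further down: prefer it
        return lower
    return node.get(attr)              # otherwise look at this level

config = NestedDict()
config.important_key = 'important_value'
print(find(config, ['random', 'path', 'to'], 'important_key'))  # important_value

config.random.path.to.important_key = 'deep_value'
print(find(config, ['random', 'path', 'to'], 'important_key'))  # deep_value
```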

Inverse of hasattr in Python

hasattr(obj, attribute) is used to check if an object has the specified attribute, but given an attribute, is there a way to know where (all) it is defined?
Assume that my code is getting the name of an attribute (or a classmethod) as a string and I want to invoke classname.attribute, but I don't have the classname.
One solution that comes to my mind is this
def finder(attr):
    for obj in globals():
        try:
            if globals()[obj].__dict__[attr]:
                return globals()[obj]
        except:
            ...
usage:
class Lime(object):
    @classmethod
    def lfunc(self):
        print('Classic')

getattr(finder('lfunc'), 'lfunc')()  # Runs lfunc method of Lime class
I am quite sure that this is not the best (or even the proper) way to do it. Can someone please provide a better way?
It is always "possible". Whether it is desirable is another story.
A quick and dirty way to do it is to iterate linearly over all classes and check if any define the attribute you have. Of course, that is subject to conflicts, and it will yield the first class that has such a named attribute. If it exists in more than one, it is up to you to decide which you want:
def finder(attr):
    for cls in object.__subclasses__():
        if hasattr(cls, attr):
            return cls
    raise ValueError
Instead of searching in "globals" this searches all subclasses of "object" - thus the classes to be found don't need to be in the namespace of the module where the finder function is.
If your methods are unique in the set of classes you are searching, though, maybe you could just assemble a mapping of all methods and use it to call them instead.
Let's suppose all your classes inherit from a class named "Base":
mapper = {attr_name: getattr(cls, attr_name)
          for cls in Base.__subclasses__()
          for attr_name, obj in cls.__dict__.items()
          if isinstance(obj, classmethod)}
And you call them with mapper['attrname']()
This avoids a linear search at each method call and thus would be much better.
- EDIT -
__subclasses__ just finds the direct subclasses of a class, not the whole inheritance tree - so it won't be useful in "real life" - maybe it is in the specific case the OP has at hand.
If one needs to find things across an inheritance tree, one needs to recurse over each subclass as well.
As for old-style classes: of course this won't work - that is one of the motives for which they are broken by default in new code.
As for non-class attributes: they can only be found by inspecting instances anyway - so another method has to be thought of - that does not seem to be the concern of the O.P. here.
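A possible recursive variant, as a sketch (Base, Mid and Leaf are made-up example classes): walk the whole tree under a root class with __subclasses__, and check cls.__dict__ so that only the class that actually defines the attribute is returned:

```python
def all_subclasses(cls):
    """Yield every subclass of cls, at any depth."""
    for sub in cls.__subclasses__():
        yield sub
        yield from all_subclasses(sub)

def finder(attr, root=object):
    """Return the first class under root that defines attr itself."""
    for cls in all_subclasses(root):
        if attr in cls.__dict__:  # defined here, not merely inherited
            return cls
    raise ValueError(attr)

class Base: pass
class Mid(Base): pass
class Leaf(Mid):
    @classmethod
    def lfunc(cls):
        return 'Classic'

print(getattr(finder('lfunc', Base), 'lfunc')())  # Classic
```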
This might help:
import gc

def checker(checkee, maxdepth=3):
    def onlyDict(ls):
        return filter(lambda x: isinstance(x, dict), ls)

    collection = []
    toBeInspected = {}
    tBI = toBeInspected
    gc.collect()
    for dic in onlyDict(gc.get_referrers(checkee)):
        for item, value in dic.iteritems():
            if value is checkee:
                collection.append(item)
            elif item != "checker":
                tBI[item] = value

    def _auxChecker(checkee, path, collection, checked, current, depth):
        if current in checked:
            return
        checked.append(current)
        gc.collect()
        for dic in onlyDict(gc.get_referents(current)):
            for item, value in dic.iteritems():
                currentPath = path + "." + item
                if value is checkee:
                    collection.append(currentPath)
                elif depth < maxdepth:
                    try:
                        _auxChecker(checkee, currentPath, collection,
                                    checked, value, depth + 1)
                    except TypeError:
                        continue

    checked = []
    for item, value in tBI.iteritems():
        _auxChecker(checkee, item, collection, checked, value, 1)
    return collection
How to use:
referrer = []

class Foo:
    pass

noo = Foo()
bar = noo
import xml
import libxml2
import sys
import os
op = os.path
xml.foo = bar
foobar = noo

for x in checker(foobar, 5):
    try:
        y = eval(x)
        referrer.append(x)
    except:
        continue
del x, y
PS: attributes of the checkee itself are not checked further, to avoid recursive or nested references to the checkee.
This should work in all circumstances, but still needs a lot of testing:
import inspect
import sys

def finder(attr, classes=None):
    result = []
    if classes is None:
        # get all accessible classes
        classes = [obj for name, obj in inspect.getmembers(
            sys.modules[__name__])]
    for a_class in classes:
        if inspect.isclass(a_class):
            if hasattr(a_class, attr):
                result.append(a_class)
            else:
                # we check for instance attributes
                if hasattr(a_class(), attr):
                    result.append(a_class)
            try:
                result += finder(attr, a_class.__subclasses__())
            except:
                # old-style classes (that don't inherit from object) do not
                # have __subclasses__; not the best solution though
                pass
    return list(set(result))  # work around duplicates

def main(attr):
    print finder(attr)
    return 0

if __name__ == "__main__":
    sys.exit(main("some_attr"))

How can I get a SQLAlchemy ORM object's previous state after a db update?

The issue is that I can't figure out how to use SQLAlchemy to notify me when an object goes into a new state.
I'm using SQLAlchemy ORM (Declarative) to update an object:
class Customer(declarative_base()):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)
    status = Column(String)
I want to know when an object enters a state. Particularly after an UPDATE has been issued and when the state changes. E.g. Customer.status == 'registered' and it previously had a different state.
I'm currently doing this with a 'set' attribute event:
from sqlalchemy import event
from model import Customer

def on_set_attribute(target, value, oldvalue, initiator):
    print target.status
    print value
    print oldvalue

event.listen(
    Customer.status,
    'set',
    on_set_attribute,
    propagate=True,
    active_history=True)
My code fires every time 'set' is called on that attribute, and I check if the value and the oldvalue are different. The problem is that the target parameter isn't fully formed so it doesn't have all the attribute values populated yet.
Is there a better way to do this? Thanks!
My solution was to use the 'after_flush' SessionEvent instead of the 'set' AttributeEvent.
Many thanks to agronholm who provided example SessionEvent code that specifically checked an object's value and oldvalue.
The solution below is a modification of his code:
def get_old_value(attribute_state):
    history = attribute_state.history
    return history.deleted[0] if history.deleted else None

def trigger_attribute_change_events(object_):
    for mapper_property in object_mapper(object_).iterate_properties:
        if isinstance(mapper_property, ColumnProperty):
            key = mapper_property.key
            attribute_state = inspect(object_).attrs.get(key)
            history = attribute_state.history
            if history.has_changes():
                value = attribute_state.value
                # old_value is None for new objects and the old value for dirty objects
                old_value = get_old_value(attribute_state)
                handler = registry.get(mapper_property)
                if handler:
                    handler(object_, value, old_value)

def on_after_flush(session, flush_context):
    changed_objects = session.new.union(session.dirty)
    for o in changed_objects:
        trigger_attribute_change_events(o)

event.listen(session, "after_flush", on_after_flush)
The registry is a dictionary whose keys are MapperProperty objects and whose values are event handlers.
session, event, inspect, and object_mapper are all SQLAlchemy classes and functions.
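To make the registry shape concrete, here is a stripped-down, plain-Python sketch (no SQLAlchemy; on_change, fire, and keying by plain attribute name are made-up simplifications, whereas the real registry keys on MapperProperty objects):

```python
# Hypothetical handler registry: map an attribute name to a callback
# fired with (object, new value, old value) on change.
registry = {}

def on_change(prop_name):
    """Decorator registering a handler for changes to `prop_name`."""
    def deco(fn):
        registry[prop_name] = fn
        return fn
    return deco

@on_change('status')
def status_changed(obj, value, old_value):
    obj.log.append(f"status: {old_value!r} -> {value!r}")

class Customer:
    def __init__(self):
        self.log = []

def fire(obj, changes):
    """`changes` maps attribute name -> (new, old), as a flush would report."""
    for key, (value, old_value) in changes.items():
        handler = registry.get(key)
        if handler:
            handler(obj, value, old_value)

c = Customer()
fire(c, {'status': ('registered', None)})
print(c.log)  # ["status: None -> 'registered'"]
```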
Use the before_update event or the before_flush event to intercept the change at a later point in time.

Returning an Object (class) in Parallel Python

I have created a function that takes a value, does some calculations and return the different answers as an object. However when I try to parallelize the code, using pp, I get the following error.
File "trmm.py", line 8, in __getattr__
    return self.header_array[name]
RuntimeError: maximum recursion depth exceeded while calling a Python object
Here is a simple version of what I am trying to do.
class DataObject(object):
    """
    Class to handle data objects with several arrays.
    """
    def __getattr__(self, name):
        try:
            return self.header_array[name]
        except KeyError:
            try:
                return self.line[name]
            except KeyError:
                raise AttributeError("%s instance has no attribute '%s'" % (self.__class__.__name__, name))

    def __setattr__(self, name, value):
        if name in ('header_array', 'line'):
            object.__setattr__(self, name, value)
        elif name in self.line:
            self.line[name] = value
        else:
            self.header_array[name] = value

class TrmmObject(DataObject):
    def __init__(self):
        DataObject.__init__(self)
        self.header_array = {
            'header': None
        }
        self.line = {
            'longitude': None,
            'latitude': None
        }

if __name__ == '__main__':
    import pp
    ppservers = ()
    job_server = pp.Server(2, ppservers=ppservers)

    def get_monthly_values(value):
        tplObj = TrmmObject()
        tplObj.longitude = value
        tplObj.latitude = value * 2
        return tplObj

    job1 = job_server.submit(get_monthly_values, (5,), (DataObject, TrmmObject,), ("numpy",))
    result = job1()
If I change return tplObj to return [tplObj.longitude, tplObj.latitude] there is no problem. However, as I said before this is a simple version, in reality this change would complicate the program a lot.
I am very grateful for any help.
You almost never need to use __getattr__ and __setattr__, and it almost always ends up with something blowing up; infinite recursion is a typical effect of that. I can't really see any reason for using them here either. Be explicit and use the line and header_array dictionaries directly.
If you want a function that looks up a value over all arrays, create a function for that and call it explicitly. Calling the function __getitem__ and using [] is explicit. :-)
(And please don't call a dictionary "header_array", it's confusing).
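For what it's worth, the recursion itself is easy to reproduce without pp. My reading of the traceback (not something the answer spells out): pp returns the object by pickling it, and a rebuilt instance can briefly lack header_array, for example while pickle probes for __setstate__ before the instance dict is restored. A minimal sketch:

```python
class Fragile(object):
    def __getattr__(self, name):
        # Runs only when normal lookup fails. If header_array itself is
        # missing (e.g. __init__ never ran on a freshly rebuilt instance),
        # this line looks header_array up again -> infinite recursion.
        return self.header_array[name]

obj = Fragile()          # note: header_array was never assigned
try:
    obj.anything
except RecursionError:   # "maximum recursion depth exceeded"
    print("recursion detected")

class Robust(object):
    def __getattr__(self, name):
        # Guarded variant: fail fast with AttributeError instead of recursing.
        try:
            return self.__dict__['header_array'][name]
        except KeyError:
            raise AttributeError(name)

print(getattr(Robust(), 'x', 'fallback'))  # fallback
```

In Python 3 the failure surfaces as RecursionError, which is a subclass of the RuntimeError shown in the question's traceback.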
