problems using observer pattern in django - python

I'm working on a website where I sell products (one class Sale, one class Product). Whenever I sell a product, I want to save that action in a History table and I have decided to use the observer pattern to do this.
That is: my Sales class is the subject and the History class is the observer; whenever I call the save_sale() method of the Sales class, I notify the observers. (I've decided to use this pattern because later I'll also send an email, notify the admin, etc.)
This is my subject class (the Sales class extends from this)
class Subject:
    _observers = []

    def attach(self, observer):
        if not observer in self._observers:
            self._observers.append(observer)

    def detach(self, observer):
        try:
            self._observers.remove(observer)
        except ValueError:
            pass

    def notify(self, **kargs):
        for observer in self._observers:
            observer.update(self, **kargs)
In the view I do something like this:
sale = Sale()
sale.user = request.user
sale.product = product
h = History() #here I create the observer
sale.attach(h) #here I add the observer to the subject class
sale.save_sale() #inside this class I will call the notify() method
This is the update method on History
def update(self, subject, **kargs):
    self.action = "sale"
    self.username = subject.user.username
    self.total = subject.product.total
    self.save(force_insert=True)
It works fine the first time, but when I try to make another sale, I get an error saying I can't insert into History because of a primary key constraint.
My guess is that when I call the view the second time, the first observer is still in the Subject class, and now I have two history observers listening to the Sales, but I'm not sure if that's the problem (gosh I miss the print_r from php).
What am I doing wrong? When do I have to "attach" the observer? Or is there a better way of doing this?
BTW: I'm using Django 1.1 and I don't have access to install any plugins.

This may not be an acceptable answer since it's more architecture-related, but have you considered using signals to notify the system of the change? It seems that you are trying to do exactly what signals were designed to do. Django signals have the same end-result functionality as the observer pattern.
http://docs.djangoproject.com/en/1.1/topics/signals/
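For example, here is a minimal sketch under the assumption that Sale and History live in the same models.py and that History has the fields your update() method sets (action, username, total); in Django 1.1 you connect the handler manually:

from django.db.models.signals import post_save

def create_history(sender, instance, created, **kwargs):
    # write a History row whenever a Sale is first saved (sketch; adapt fields as needed)
    if created:
        History.objects.create(
            action="sale",
            username=instance.user.username,
            total=instance.product.total,
        )

# connect once, e.g. at the bottom of models.py
post_save.connect(create_history, sender=Sale, dispatch_uid="create_sale_history")

The handler then fires no matter which view or code path saves the Sale, so there is nothing to attach in the view.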

I think this is because _observers = [] acts like a static, shared field: every instance of Subject mutates the same _observers list, which has unwanted side effects.
Initialize the variable in the constructor instead:
class Subject:
    def __init__(self):
        self._observers = []

@Andrew Sledge's answer indicates a good way of tackling this problem. I would like to suggest an alternate approach.
I had a similar problem and started out using signals. They worked well, but I found that my unit tests became slower because the signals were fired each time I loaded an instance of the associated class from a fixture. This added tens of seconds to the test run. There is a workaround, but I found it clumsy: I defined a custom test runner and disconnected my functions from the signals before loading fixtures, then reconnected them afterwards.
Finally I decided to ditch signals altogether and overrode the appropriate save() methods of the models instead. In my case, whenever an Order is changed, a row is automatically created in an OrderHistory table, among other things. To do this I added a function that creates an instance of OrderHistory and called it from within the Order.save() method. This also made it possible to test save() and the function separately.
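A rough sketch of that shape, assuming an Order/OrderHistory pair as described above (the fields and helper name here are illustrative, not the actual models):

from django.db import models

class Order(models.Model):
    status = models.CharField(max_length=20)

    def save(self, *args, **kwargs):
        super(Order, self).save(*args, **kwargs)
        create_order_history(self)  # kept as a separate helper so it can be tested on its own

class OrderHistory(models.Model):
    order = models.ForeignKey(Order)
    status = models.CharField(max_length=20)
    created_at = models.DateTimeField(auto_now_add=True)

def create_order_history(order):
    # record the current state of the order (illustrative fields)
    OrderHistory.objects.create(order=order, status=order.status)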
Take a look at this SO question. It has a discussion about when to override save() versus when to use signals.

Thank you all for your answers. Reading about signals gave me another perspective, but I don't want to use them, for learning purposes (I wanted to use the observer pattern in web development :P). In the end, I solved it by doing something like this:
class Sales(models.Model, Subject):
    ...
    def __init__(self, *args, **kwargs):
        super(Sales, self).__init__(*args, **kwargs)
        self._observers = []        # reset observers
        self.attach(History())      # attach a History observer
    ...
    def save(self):
        super(Sales, self).save()
        self.notify()  # notify all observers
Now every time I call save(), the observers are notified, and if I need to, I can add or remove an observer.
What do you think? Is this a good way to solve it?

Related

Adding class level permission to methods

So I have a class named Player defined like below:
class Player:
    def insert(score):
        pass  # implementation of score insert

    def retrieve(player_id):
        pass  # implementation of retrieval
Now, I want to modify the insert and retrieve functions so that a username is passed, and only if that username has the admin role should the insert or retrieve be allowed (I have a user class). So I modified the methods like below:
def insert(score, username):
    u = UserModel()
    if not u.is_admin_user(username):
        print('User does not have access to retrieve data')
        return False
    # implementation of score insert
This works fine; however, I have various other models with many different methods that all need this restriction, and I would need to add this snippet everywhere. So is there something in Python OOP I can use for this? I am thinking in terms of an interface or a decorator or something like that. I am not very strong with OOP...
Any help is much appreciated.
I think the best you can do without too much of a redesign is a simple function definition for your check.
def check_permissions(username):
    u = UserModel()
    if not u.is_admin_user(username):
        raise PermissionError('User does not have access to retrieve data')


class Player:
    def insert(self, score, username):
        check_permissions(username)
        # implementation of score insert

    def retrieve(self, player_id, username):
        check_permissions(username)
        # implementation of retrieval
This way you don't need to repeat the whole authentication logic in every place you want the check; you only need to add an argument and a single line of code in each place that needs it.
That being said, I feel like your structure could use a bit of redesign, moving the authentication to a different place, to avoid breaking good code practices.
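Since you mentioned decorators: the same check can also be packaged as one, so each method only needs a one-line decoration. Here is a sketch (UserModel and is_admin_user come from your snippet; everything else is illustrative, and it assumes callers pass username as a keyword argument):

import functools

def require_admin(func):
    # only run the wrapped method if `username` belongs to an admin user
    @functools.wraps(func)
    def wrapper(self, *args, username=None, **kwargs):
        u = UserModel()
        if not u.is_admin_user(username):
            raise PermissionError('User does not have access to this operation')
        return func(self, *args, **kwargs)
    return wrapper

class Player:
    @require_admin
    def insert(self, score):
        pass  # implementation of score insert

    @require_admin
    def retrieve(self, player_id):
        pass  # implementation of retrieval

# usage: Player().insert(42, username='alice')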

Run code when "foreign" object is added to set

I have a foreign key relationship in my Django (v3) models:
class Example(models.Model):
    title = models.CharField(max_length=200)  # this is irrelevant for the question here
    not_before = models.DateTimeField(auto_now_add=True)
    ...

class ExampleItem(models.Model):
    myParent = models.ForeignKey(Example, on_delete=models.CASCADE)
    execution_date = models.DateTimeField(auto_now_add=True)
    ...
Can I have code running/triggered whenever an ExampleItem is "added to the list of items in an Example instance"? What I would like to do is run some checks and, depending on the concrete Example instance possibly alter the ExampleItem before saving it.
To illustrate
Let's say the Example class's not_before date dictates that the ExampleItem's execution_date must not be before not_before. I would like to check whether the "to be saved" ExampleItem's execution_date violates this condition. If so, I would want to either change the execution_date to make it "valid" or throw an exception (whichever is easier). The same goes for a duplicate execution_date (i.e. if the respective Example already has an ExampleItem with the same execution_date).
So, in a view, I have code like the following:
def doit(request, example_id):
    # get the relevant `Example` object
    example = get_object_or_404(Example, pk=example_id)
    # create a new `ExampleItem`
    itm = ExampleItem()
    # set the item's parent
    itm.myParent = example  # <- this should trigger my validation code!
    itm.save()              # <- (or this???)
The thing is, this view is not the only way to create new ExampleItems; I also have an API, for example, that can do the same (let alone that a user could potentially add ExampleItems manually via the REPL). Preferably, the validation code should not have to be duplicated in all the places where new ExampleItems can be created.
I was looking into signals (Django docs), specifically pre_save and post_save (of ExampleItem), but I think pre_save is too early while post_save is too late... Also m2m_changed looks interesting, but I do not have a many-to-many relationship.
What would be the best/correct way to handle these requirements? They seem to be rather common, I imagine. Do I have to restructure my model?
The obvious solution here is to put this code in the ExampleItem.save() method; just beware that Model.save() is not invoked by some queryset bulk operations.
Using signal handlers on your own app's models is actually an antipattern: the goal of signals is to allow your app to hook into other apps' lifecycles without having to change those other apps' code.
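A sketch of what that save() override could look like for the not_before check described above (same models as in the question, with only the save() override added; note that with auto_now_add the execution_date is filled in at save time, so the check only matters when you set it explicitly):

from django.core.exceptions import ValidationError
from django.db import models

class ExampleItem(models.Model):
    myParent = models.ForeignKey(Example, on_delete=models.CASCADE)
    execution_date = models.DateTimeField(auto_now_add=True)

    def save(self, *args, **kwargs):
        # runs for the view, the API and the REPL alike (but not for bulk operations)
        if self.execution_date is not None and self.execution_date < self.myParent.not_before:
            raise ValidationError("execution_date must not be before the parent's not_before")
        super().save(*args, **kwargs)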
Also (unrelated, but): you can populate your newly created model instances directly via their initializers, i.e.:
itm = ExampleItem(myParent=example)
itm.save()
and you can even save them directly:
# creates a new instance, populates it AND saves it
itm = ExampleItem.objects.create(myParent=example)
This will still invoke your model's save method so it's safe for your use case.

Detecting app engine datastore model attribute changes

How to trigger a method call every time a datastore entity attribute changes?
One way I looked into was monkeypatching db.Model.put, which involved overriding the put method. While that allows me to react to every put(), it wasn't clear how I would detect whether the address attribute had changed, since self.address would already be set at the beginning of .put().
Elaboration:
I have users and each user has a physical address.
class User(db.Model):
    ...
    address = db.StringProperty()  # for example "2 Macquarie Street, Sydney"
    ...
I would like to verify that the entered addresses are correct. For this I have an expensive address checking function (it contacts a remote API) and a boolean field.
class User(db.Model):
    ...
    address = db.StringProperty()
    address_is_valid = db.BooleanProperty(default=False)

    def address_has_changed(self):
        self.address_is_valid = False
        Task(
            url="/check_address",  # this would later set .address_is_valid
            params={
                'user': self.key()
            }
        ).add()
    ...
But how can I get the address_has_changed method to trigger every time the address changes, without having to explicitly call it everywhere?
# It should work when changing an address
some_user = User.all().get()
some_user.address = "Santa Claus Main Post Office, FI-96930 Arctic Circle"
some_user.put()
# It should also work when multiple models are changed
...
db.put([some_user, another_user, yet_another_user])
# It should even work when creating a user
sherlock = User(address='221 B Baker St, London, England')
sherlock.put() # this should trigger address_has_changed
What about a Hook?
NDB offers a lightweight hooking mechanism. By defining a hook, an application can run some code before or after some type of operations; for example, a Model might run some function before every get().
from google.appengine.ext import ndb

class Friend(ndb.Model):
    name = ndb.StringProperty()

    def _pre_put_hook(self):
        pass  # inform someone they have a new friend

    @classmethod
    def _post_delete_hook(cls, key, future):
        pass  # inform someone they have lost a friend

f = Friend()
f.name = 'Carole King'
f.put()                      # _pre_put_hook is called
fut = f.key.delete_async()   # _post_delete_hook not yet called
fut.get_result()             # _post_delete_hook is called
You could build in some further logic so that the original and new versions of the address are checked, and if they differ then run the expensive operation, otherwise just save.
Alright, this might be two years too late, but here it is anyway: you can always create instance variables in the __init__() method that store the old values, and then compare against them when the put method is called.
class User(db.Model):
    address = db.StringProperty()

    def __init__(self, *args, **kwargs):
        super(User, self).__init__(*args, **kwargs)
        self._old_address = self.address

    def put(self, **kwargs):
        if self._old_address != self.address:
            # ...
            # do your thing
            # ...
            pass
        super(User, self).put(**kwargs)
Use a python property. This makes it easy to call address_has_changed whenever it is actually changed.
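For example, a sketch of that idea with the old db API: keep the datastore property under a different attribute name and expose address as a plain Python property whose setter does the comparison. The attribute names here are assumptions, and entities loaded from the datastore populate address_value directly, so loading does not trigger the check:

from google.appengine.ext import db

class User(db.Model):
    # the real datastore property lives under another attribute name,
    # but is still stored as "address" in the datastore
    address_value = db.StringProperty(name='address')
    address_is_valid = db.BooleanProperty(default=False)

    def _get_address(self):
        return self.address_value

    def _set_address(self, value):
        if value != self.address_value:
            self.address_has_changed()
        self.address_value = value

    address = property(_get_address, _set_address)

    def address_has_changed(self):
        self.address_is_valid = False
        # enqueue the expensive /check_address task here, as in the question

# usage: assigning through the property triggers the check
# some_user.address = "Santa Claus Main Post Office, FI-96930 Arctic Circle"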
Neither Nick's article you refer to nor the ndb hooks solve the problem of tracking explicit changes in entities; they just make it easier to solve.
You would normally call your address_has_changed method inside the pre-put hook rather than all over the code base wherever you call put().
I have code in place that uses these hook strategies to create audit trails of every change to a record.
However, your code doesn't actually detect a change to the address.
You should consider changing to ndb, then use a post-get hook (to squirrel away the original property values you wish to check, for instance in a session or request object), then use a pre-put hook to compare the current properties with the original values, see whether you should take any action, and then call your address_has_changed method. You can use this strategy with db (by following Nick's article), but then you have to do a lot more heavy lifting yourself.
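A rough sketch of that strategy with ndb hooks (property names follow the question; here the original value is simply stashed on the instance right after it is fetched, rather than in a session or request object):

from google.appengine.ext import ndb

class User(ndb.Model):
    address = ndb.StringProperty()
    address_is_valid = ndb.BooleanProperty(default=False)

    @classmethod
    def _post_get_hook(cls, key, future):
        # squirrel away the original value right after the entity is loaded
        entity = future.get_result()
        if entity is not None:
            entity._original_address = entity.address

    def _pre_put_hook(self):
        # compare the current value against the stashed original before every put()
        if self.address != getattr(self, '_original_address', None):
            self.address_has_changed()

    def address_has_changed(self):
        self.address_is_valid = False
        # enqueue the expensive address check task here

Note that a freshly created entity has no stashed original, so creating a user with an address also triggers the check, which matches the behaviour asked for in the question.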

Google App Engine base and subclass gets

I want to have a base class called MBUser that has some predefined properties, ones that I don't want to be changed. If the client wants to add properties to MBUser, it is advised that MBUser be subclassed, and any additional properties be put in there.
The API code won't know if the client actually subclasses MBUser or not, but it shouldn't matter. The thinking went that we could just get MBUser by id. So I expected this to work:
def test_CreateNSUser_FetchMBUser(self):
    from nsuser import NSUser
    id = create_unique_id()
    user = NSUser(id=id)
    user.put()
    # changing MBUser.get.. to NSUser.get makes this test succeed
    get_user = MBUser.get_by_id(id)
    self.assertIsNotNone(get_user)
Here NSUser is a subclass of MBUser. The test fails.
Why can't I do this?
What's a work around?
Models are defined by their "kind", and a subclass is a different kind, even if it seems the same.
The point of subclassing is not to share values, but to share the "schema" you've created for a given "kind".
A kind map is created on base class ndb.Model (it seems like you're using ndb since you mentioned get_by_id) and each kind is looked up when you do queries like this.
For subclasses, the kind is just defined as the class name:
@classmethod
def _get_kind(cls):
    return cls.__name__
I just discovered GAE has a solution for this. It's called the PolyModel:
https://developers.google.com/appengine/docs/python/ndb/polymodelclass
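A minimal sketch of that (assuming ndb, since get_by_id was mentioned; the property names are illustrative): all subclasses share the root class's kind, so a get through the base class returns the subclass instance:

from google.appengine.ext import ndb
from google.appengine.ext.ndb import polymodel

class MBUser(polymodel.PolyModel):
    name = ndb.StringProperty()

class NSUser(MBUser):
    extra = ndb.StringProperty()

NSUser(id='some-id').put()          # stored under the root kind "MBUser"
user = MBUser.get_by_id('some-id')  # returns the NSUser instance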

Pyramid resource: In plain English

I've been reading on the ways to implement authorization (and authentication) to my newly created Pyramid application. I keep bumping into the concept called "Resource". I am using python-couchdb in my application and not using RDBMS at all, hence no SQLAlchemy. If I create a Product object like so:
class Product(mapping.Document):
    item = mapping.TextField()
    name = mapping.TextField()
    sizes = mapping.ListField()
Can someone please tell me if this is also called a resource? I've been reading the entire documentation of Pyramid, but nowhere does it explain the term resource in plain, simple English (maybe I'm just stupid). If this is the resource, does this mean I just stick my ACL stuff in here like so:
class Product(mapping.Document):
    __acl__ = [(Allow, AUTHENTICATED, 'view')]
    item = mapping.TextField()
    name = mapping.TextField()
    sizes = mapping.ListField()

    def __getitem__(self, key):
        return <something>
If I were to also use traversal, does this mean I add the __getitem__ method to my python-couchdb Product class/resource?
Sorry, it's just really confusing with all the new terms (I came from Pylons 0.9.7).
Thanks in advance.
I think the piece you are missing is the traversal part. Is Product the resource? Well, it depends on what your traversal produces; it could produce products.
Perhaps it might be best to walk this through from the view back to how it gets configured when the application is created.
Here's a typical view.
@view_config(context=Product, permission="view")
def view_product(context, request):
    pass  # would do stuff
So this view gets called when context is an instance of Product, AND if the __acl__ attribute of that instance has the "view" permission. So how would an instance of Product become the context?
This is where the magic of traversal comes in. The very logic of traversal is simply a dictionary of dictionaries. So one way that this could work for you is if you had a url like
/product/1
Somehow, some resource needs to be traversed by the segments of the url to determine a context so that a view can be determined. What if we had something like...
class ProductContainer(object):
    """
    container = ProductContainer()
    container[1]
    >>> <Product(1)>
    """
    def __init__(self, request, name="product", parent=None):
        self.__name__ = name
        self.__parent__ = parent
        self._request = request

    def __getitem__(self, key):
        p = db.get_product(id=key)

        if not p:
            raise KeyError(key)
        else:
            p.__acl__ = [(Allow, Everyone, "view")]
            p.__name__ = key
            p.__parent__ = self
            return p
Now this is covered in the documentation, and I'm attempting to boil it down to the basics you need to know. The ProductContainer is an object that behaves like a dictionary. The __name__ and __parent__ attributes are required by pyramid in order for the url generation methods to work right.
So now we have a resource that can be traversed. How do we tell pyramid to traverse ProductContainer? We do that through the Configurator object.
config = Configurator()
config.add_route(name="product",
                 path="/product/*traverse",
                 factory=ProductContainer)
config.scan()
application = config.make_wsgi_app()
The factory parameter expects a callable, and it hands it the current request. It just so happens that ProductContainer.__init__ will do that just fine.
This might seem a little much for such a simple example, but hopefully you can imagine the possibilities. This pattern allows for very granular permission models.
If you don't want/need a very granular permission model such as row-level ACLs, you probably don't need traversal; instead you can use routes with a single root factory.
class RootFactory(object):
    def __init__(self, request):
        self._request = request
        self.__acl__ = [(Allow, Everyone, "view")]  # todo: add more acls


@view_config(permission="view", route_name="orders")
def view_product(context, request):
    order_id, product_id = request.matchdict["order_id"], request.matchdict["product_id"]
    pass  # do what you need to with the input, the security check already happened


config = Configurator(root_factory=RootFactory)
config.add_route(name="orders",
                 path="/order/{order_id}/products/{product_id}")
config.scan()
application = config.make_wsgi_app()
Note: I did the code examples from memory; obviously you need all the necessary imports, etc. In other words, this isn't going to work as a straight copy/paste.
Have you worked through http://michael.merickel.org/projects/pyramid_auth_demo/ ? If not, I suspect it may help. The last section http://michael.merickel.org/projects/pyramid_auth_demo/object_security.html implements the pattern you're after (note the example "model" classes inherit from nothing more complex than object).
