post_save.disconnect does not work at all - python

I'm trying to make Django not send a signal in one case. When adding a new instance of the Delivery model (right after creating a Job) as an attribute of the Job model, I don't want the signal to be sent, because that signal alerts the admin that the Job has been edited.
Unfortunately I can't make it work.
@receiver(post_save, sender=Job)  # When Job is created or edited
def alert_admin(sender, instance, created, **kwargs):
    if created:
        email.AdminNotifications.new_order(instance)
    else:
        email.AdminNotifications.edited_order(instance)

@receiver(post_save, sender=Job)  # When a Job is created, I want to create a Delivery object as an attribute of the Job
def create_delivery(sender, instance, created, **kwargs):
    if created:
        delivery, created_delivery = Delivery.objects.get_or_create(job=instance)
        instance.delivery = delivery
        delivery.save()

        post_save.disconnect(alert_admin)
        instance.save()  # I DON'T WANT TO SEND THE SIGNAL IN THIS CASE
        post_save.connect(alert_admin)
Where is the problem? I did this but I still receive two alerts: New Order and Edited Order.

The problem is that you are listening to the same signal twice.
@receiver(post_save, sender=Job)  # When Job is created or edited
def alert_admin(sender, instance, created, **kwargs):
    ###

@receiver(post_save, sender=Job)
def create_delivery(sender, instance, created, **kwargs):
    ###
You are assuming that create_delivery will be called first, but that does not seem to happen: alert_admin appears to be called first, so whatever signal disabling you do in create_delivery goes to waste.
Django does not provide any guarantees or controls over the order in which signal receivers are called (see: what's the order of post_save receiver in django?).
You can add a simple flag to your instance to tell the signal processor that this signal does not need further processing.
if hasattr(instance, 'signal_processed'):
    return
else:
    # do whatever processing
    instance.signal_processed = True
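One possible way to wire that flag into the two receivers from the question is sketched below; the models and the email helper are the ones from the question, and the flag only suppresses the "edited" alert triggered by the nested instance.save():
from django.db.models.signals import post_save
from django.dispatch import receiver

# Job, Delivery and the email helper are the ones defined in the question.

@receiver(post_save, sender=Job)
def alert_admin(sender, instance, created, **kwargs):
    if created:
        email.AdminNotifications.new_order(instance)
    elif not getattr(instance, 'signal_processed', False):
        # The nested save() performed by create_delivery sets the flag,
        # so no "edited" alert goes out for it.
        email.AdminNotifications.edited_order(instance)

@receiver(post_save, sender=Job)
def create_delivery(sender, instance, created, **kwargs):
    if created:
        delivery, _ = Delivery.objects.get_or_create(job=instance)
        instance.delivery = delivery
        delivery.save()

        instance.signal_processed = True  # mark before the extra save()
        instance.save()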

Related

Python waiting until condition is met

I've created a GUI that asks the user for a username/password. The class that creates the GUI calls another class that creates a web browser and tries to log in to a website using a method of that class. If the login is successful, a variable of the GUI object becomes True.
My main file is the following:
from AskUserPassword import AskGUI
from MainInterfaceGUI import MainGUI

Ask = AskGUI()
Ask = Ask.show()
MainInterface = MainGUI()

if Ask.LoginSuccesful == True:
    Ask.close()
    MainInterface.show()
If the login is successful, I would like to hide the user/password GUI and show the main GUI. The code above clearly doesn't work.
How can I make Python wait until a condition of this type is met?
Instead of constantly checking for the condition to be met, why not supply what you want to do upon login as a callback to your AskGUI object, and then have the AskGUI object call the callback when login is attempted? Something like:
def on_login(ask_gui):
    if ask_gui.LoginSuccesful:
        ask_gui.close()
        MainInterface.show()

Ask = AskGUI()
Ask.login_callback = on_login
Then, in AskGUI, when the login button is clicked and you check the credentials, you do:
def on_login_click(self):
    ...
    # Check login credentials.
    self.LoginSuccessful = True
    # Try to get self.login_callback, return None if it doesn't exist.
    callback_function = getattr(self, 'login_callback', None)
    if callback_function is not None:
        callback_function(self)
Re.
I prefer to have all the structure in the main file. This is a reduced example, but if I start to trigger things from a method inside a class that is itself inside another class... it's going to be hard to understand.
I recommend this way because all the code that handles what happens upon login is included in the class that does the logging in. The code that handles which UI elements to display (on_login()) is included in the class that handles that.
You don't need anything in the background that keeps checking whether Ask.LoginSuccessful has changed.
When you use a decent IDE, it's pretty easy to keep track of where each function is defined.
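For completeness, here is a minimal, GUI-free sketch of the same wiring; the class and attribute names mirror the question, but the credential check is a stand-in:
class AskGUI:
    """Stand-in for the real user/password dialog."""
    def __init__(self):
        self.LoginSuccesful = False
        self.login_callback = None  # set by the caller

    def on_login_click(self, user, password):
        # Stand-in credential check; the real class would drive the web browser.
        self.LoginSuccesful = bool(user and password)
        callback = getattr(self, 'login_callback', None)
        if callback is not None:
            callback(self)

    def close(self):
        print("closing login window")

class MainGUI:
    def show(self):
        print("showing main window")

main_interface = MainGUI()

def on_login(ask_gui):
    if ask_gui.LoginSuccesful:
        ask_gui.close()
        main_interface.show()

ask = AskGUI()
ask.login_callback = on_login
ask.on_login_click("alice", "secret")  # simulate a click on the login button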

possible django race condition

@receiver(post_save, sender=MyRequestLog)
def steptwo_launcher(sender, instance, **kwargs):
    GeneralLogging(calledBy='MyRequestLog', logmsg='enter steptwo_launcher').save()  # remember to remove this line
    if instance.stepCode == 100:
        GeneralLogging(calledBy='MyRequestLog', logmsg='step code 100 found. launch next step').save()
        nextStep.delay(instance.requestId, False)
I think I just witnessed my code losing a race condition. The backend of my application updates the status of task one, and it writes a stepCode of 100 to the log when the next task should be started. The front end of the application polls to report the current step back to the end user.
It appears that after the backend created an entry with stepCode 100, the front-end request came in so soon afterwards that the if instance.stepCode == 100: check was never found to be True. GeneralLogging only shows one entry at the time of the suspected collision, and the next step is not launched.
My questions are: 1) confirm this is possible, which I already suspect, and 2) find a way to fix this so nextStep is not skipped due to the race condition.
This question lacks a bunch of potentially useful information (e.g. missing code, missing output), but any code of the form
if state == x:
    change_state
has a potential race condition when multiple control paths hit this code.
Two of the most common ways to handle this problem are (1) locks:
with some_lock:
    if state:
        change_state
i.e. stop everyone else from hitting this code until we're done, and (2) queues:
queue.push(item_to_be_processed)
# somewhere else
item_to_be_processed = queue.pop()
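As a purely in-process illustration of those two shapes (standard-library threading and queue; a sketch only, not the multi-process Django fix below):
import queue
import threading

state_lock = threading.Lock()
state = {"stepCode": 100, "launched": False}

def launch_next_step_once():
    # Lock form: observing and changing the state happen atomically.
    with state_lock:
        if state["stepCode"] == 100 and not state["launched"]:
            state["launched"] = True
            # ... launch the next step here ...

work_queue = queue.Queue()

def producer(item):
    # Queue form: writers only enqueue; a single consumer changes state.
    work_queue.put(item)

def consumer():
    while True:
        item = work_queue.get()
        # ... process item; no other consumer will see it ...
        work_queue.task_done()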
A queue/lock implementation in the database could use select_for_update together with an extra processed field, i.e. let the "writer" process save the model with processed = False and have the "reader" process do:
from django.db import transaction

...

with transaction.atomic():
    items = MyRequestLog.objects.select_for_update(skip_locked=True).filter(
        stepCode=100,
        processed=False,
    )
    for item in items:
        do_something_useful(item)  # you might want to pull this outside of the atomic block if your application allows (so you don't keep rows locked for an extended period)
        item.processed = True
        item.save()
ps: check your database for support (https://docs.djangoproject.com/en/2.0/ref/models/querysets/#select-for-update)
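If holding the row locks while do_something_useful() runs is a concern (as the comment in the block above notes), one possible variant is to only claim the rows inside the transaction and do the slow work afterwards; a sketch:
from django.db import transaction

with transaction.atomic():
    # Claim the rows: lock them, mark them processed, release the locks on commit.
    items = list(
        MyRequestLog.objects.select_for_update(skip_locked=True).filter(
            stepCode=100,
            processed=False,
        )
    )
    for item in items:
        item.processed = True
        item.save()

# The locks are released here; now do the slow work outside the transaction.
for item in items:
    do_something_useful(item)
The trade-off is that a row already marked processed stays marked even if do_something_useful() later fails, so this variant only fits if you handle retries yourself.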

Execute some code when an SQLAlchemy object's deletion is actually committed

I have a SQLAlchemy model that represents a file and thus contains the path to an actual file. Since deleting the database row and the file should go together (so no orphaned files are left and no rows point to deleted files), I added a delete() method to my model class:
def delete(self):
    if os.path.exists(self.path):
        os.remove(self.path)
    db.session.delete(self)
This works fine but has one huge disadvantage: the file is deleted immediately, i.e. before the transaction containing the database deletion is committed.
One option would be committing in the delete() method - but I don't want to do this since I might not be finished with the current transaction. So I'm looking for a way to delay the deletion of the physical file until the transaction deleting the row is actually committed.
SQLAlchemy has an after_delete event but according to the docs this is triggered when the SQL is emitted (i.e. on flush) which is too early. It also has an after_commit event but at this point everything deleted in the transaction has probably been deleted from SA.
When using SQLAlchemy in a Flask app with Flask-SQLAlchemy, the extension provides a models_committed signal which receives a list of (model, operation) tuples. Using this signal, doing what I'm looking for is extremely easy:
@models_committed.connect_via(app)
def on_models_committed(sender, changes):
    for obj, change in changes:
        if change == 'delete' and hasattr(obj, '__commit_delete__'):
            obj.__commit_delete__()
With this generic function, every model that needs on-delete-commit code now simply needs a __commit_delete__(self) method and can do whatever it needs to do in that method.
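For example, the file-backed model from the question could then look roughly like this (a sketch; the class name and column definitions are placeholders, and db is your Flask-SQLAlchemy instance):
import os

class StoredFile(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    path = db.Column(db.String(255), nullable=False)

    def __commit_delete__(self):
        # Runs only after the DELETE has actually been committed.
        if os.path.exists(self.path):
            os.remove(self.path)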
It can also be done without Flask-SQLAlchemy; however, in this case it needs some more code:
A deletion needs to be recorded when it is performed. This is done using the after_delete event.
Any recorded deletions need to be handled when a COMMIT is successful. This is done using the after_commit event.
In case the transaction fails or is manually rolled back, the recorded deletions also need to be cleared. This is done using the after_rollback() event.
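A minimal sketch of that three-event approach, assuming a mapped class FileModel whose instances carry a path attribute (both names are placeholders):
import os

from sqlalchemy import event
from sqlalchemy.orm import Session

# Paths waiting to be removed, keyed by the owning session.
_pending_file_deletions = {}

@event.listens_for(FileModel, 'after_delete')  # FileModel: your mapped class
def _record_deletion(mapper, connection, target):
    # Record the path at flush time; the file is only removed after COMMIT.
    session = Session.object_session(target)
    _pending_file_deletions.setdefault(session, []).append(target.path)

@event.listens_for(Session, 'after_commit')
def _delete_committed_files(session):
    for path in _pending_file_deletions.pop(session, []):
        if os.path.exists(path):
            os.remove(path)

@event.listens_for(Session, 'after_rollback')
def _discard_recorded_deletions(session):
    _pending_file_deletions.pop(session, None)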
This follows along with the other event-based answers, but I thought I'd post this code, since I wrote it to solve pretty much your exact problem:
The code (below) registers a SessionExtension class that accumulates all new, changed, and deleted objects as flushes occur, then clears or evaluates the queue when the session is actually committed or rolled back. For the classes which have an external file attached, I then implemented obj.after_db_new(session), obj.after_db_update(session), and/or obj.after_db_delete(session) methods which the SessionExtension invokes as appropriate; you can then populate those methods to take care of creating / saving / deleting the external files.
Note: I'm almost positive this could be rewritten in a cleaner manner using SQLAlchemy's new event system, and it has a few other flaws, but it's in production and working, so I haven't updated it :)
import logging; log = logging.getLogger(__name__)

from sqlalchemy.orm.session import SessionExtension

class TrackerExtension(SessionExtension):

    def __init__(self):
        self.new = set()
        self.deleted = set()
        self.dirty = set()

    def after_flush(self, session, flush_context):
        # NOTE: requires >= SA 0.5
        self.new.update(obj for obj in session.new
                        if hasattr(obj, "after_db_new"))
        self.deleted.update(obj for obj in session.deleted
                            if hasattr(obj, "after_db_delete"))
        self.dirty.update(obj for obj in session.dirty
                          if hasattr(obj, "after_db_update"))

    def after_commit(self, session):
        # NOTE: this is rather hackneyed, in that it hides errors until
        # the end, just so it can commit as many objects as possible.
        # FIXME: could integrate this w/ twophase to make everything safer in case the methods fail.
        log.debug("after commit: new=%r deleted=%r dirty=%r",
                  self.new, self.deleted, self.dirty)
        ecount = 0

        if self.new:
            for obj in self.new:
                try:
                    obj.after_db_new(session)
                except:
                    ecount += 1
                    log.critical("error occurred in after_db_new: obj=%r",
                                 obj, exc_info=True)
            self.new.clear()

        if self.deleted:
            for obj in self.deleted:
                try:
                    obj.after_db_delete(session)
                except:
                    ecount += 1
                    log.critical("error occurred in after_db_delete: obj=%r",
                                 obj, exc_info=True)
            self.deleted.clear()

        if self.dirty:
            for obj in self.dirty:
                try:
                    obj.after_db_update(session)
                except:
                    ecount += 1
                    log.critical("error occurred in after_db_update: obj=%r",
                                 obj, exc_info=True)
            self.dirty.clear()

        if ecount:
            raise RuntimeError("%r object error during after_commit() ... "
                               "see traceback for more" % ecount)

    def after_rollback(self, session):
        self.new.clear()
        self.deleted.clear()
        self.dirty.clear()

# then add "extension=TrackerExtension()" to the Session constructor
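For reference, a possible way to attach it (this assumes an older SQLAlchemy release that still ships SessionExtension and the extension= keyword; newer versions replace both with the event API):
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite:///app.db")  # placeholder URL
Session = sessionmaker(bind=engine, extension=TrackerExtension())
session = Session()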
This seems to be a bit challenging. I'm curious whether a SQL trigger AFTER DELETE might be the best route for this, granted it won't be DRY and I'm not sure the SQL database you are using supports it. Still, AFAIK SQLAlchemy pushes transactions to the db but it really doesn't know when they have been committed, if I'm interpreting this comment correctly:
it's the database server itself that maintains all "pending" data in an ongoing transaction. The changes aren't persisted permanently to disk, and revealed publicly to other transactions, until the database receives a COMMIT command, which is what Session.commit() sends.
Taken from SQLAlchemy: What's the difference between flush() and commit()?, by the creator of SQLAlchemy...
If your SQLAlchemy backend supports it, enable two-phase commit. You will need to use (or write) a transaction model for the filesystem that:
checks permissions, etc. to ensure that the file exists and can be deleted during the first commit phase
actually deletes the file during the second commit phase.
That's probably as good as it's going to get. Unix filesystems, as far as I know, do not natively support XA or other two-phase transactional systems, so you will have to live with the small exposure from having a second-phase filesystem delete fail unexpectedly.
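As an illustration only, a filesystem-side "transaction model" for a single deletion might look like the sketch below; the names are invented, and it does not implement real XA, it just separates the check phase from the destructive phase:
import os

class FileDeletion:
    """Two-phase-style deletion of one file: prepare() checks, commit() deletes."""

    def __init__(self, path):
        self.path = path

    def prepare(self):
        # Phase one: fail early if the delete cannot possibly succeed later.
        if not os.path.isfile(self.path):
            raise RuntimeError("no such file: %s" % self.path)
        parent = os.path.dirname(self.path) or "."
        if not os.access(parent, os.W_OK):
            raise RuntimeError("directory not writable: %s" % parent)

    def commit(self):
        # Phase two: the destructive step; a failure here can no longer be rolled back.
        os.remove(self.path)

    def rollback(self):
        # Nothing to undo before commit(); the file is untouched until then.
        pass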

Writing a custom Django BaseCommand class that logs the command details

I have a number of management commands in my Django application. I'd like an entry to be logged to one of my models every time a management command is run. The model contains four fields:
started - the time when the command was started.
ended - the time when the command was ended/stopped.
success - a boolean value indicating whether the command ran successfully or not.
name - the name of the command.
I thought the best way to implement this would be to write a custom class called LoggedBaseCommand which derives from django.core.management.base.BaseCommand. My management commands would derive from this class, which would contain the simple logging functionality.
I know some bits and pieces on how to implement this but I can't seem to figure out how to glue this together.
Any help?
Thanks
import datetime

from django.core.management.base import BaseCommand

from commandlog.models import CommandLogEntry

class LoggedBaseCommand(BaseCommand):

    def handle(self, *args, **options):
        # 'started' ought to be automatic; 'ended' and 'success' not yet determined.
        c = CommandLogEntry(name=__file__)
        result = "FAIL"
        try:
            result = self.handle_and_log(*args, **options)
        except:
            pass
        c.success = result
        c.ended = datetime.datetime.now()
        c.save()

    def handle_and_log(self, *args, **options):
        # All of your child classes implement this instead of handle().
        raise NotImplementedError
You might have to fiddle with the __file__ entry, perhaps using re, to strip out paths and the terminating '.py', but since every command is keyed by its file name in the hierarchy, this ought to do the trick. Note that I assume failure and only record success if the handle_and_log() function passes back a success string (change to suit your needs). If it throws an exception, failure is recorded. Again, change to suit your needs (i.e. record the exception string).
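A child command would then look something like this (a sketch; the command name, import path, and return value are assumptions based on the question):
# myapp/management/commands/refresh_cache.py
from myproject.management import LoggedBaseCommand  # wherever LoggedBaseCommand lives

class Command(LoggedBaseCommand):
    help = "Example command whose runs are logged to CommandLogEntry"

    def handle_and_log(self, *args, **options):
        # ... do the actual work here ...
        return "SUCCESS"  # recorded in CommandLogEntry.success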

Setting a timeout function in django

So I'm creating a Django app that allows a user to add a new line of text to an existing group of text lines. However, I don't want multiple users adding lines to the same group of text lines concurrently, so I created a BooleanField isBeingEdited that is set to True once a user decides to append to a specific group. Once the flag is True, no one else can append to the group until the edit is submitted, whereupon the flag is set to False again. This works all right, unless someone decides to make an edit and then changes their mind or forgets about it, etc. I want isBeingEdited to flip back to False after 10 minutes or so. Is this a job for cron, or is there something easier out there? Any suggestions?
Change the boolean to a "lock time":
To lock the model, set the lock time to the current time.
To unlock the model, set the lock time to None.
Add an is_locked() method. That method returns "not locked" if the current time is more than 10 minutes after the lock time.
This gives you your timeout without cron and without regularly hitting the DB to check flags and unset them. Instead, the time is only checked when you are interested in whether the model is locked. A cron job would likely have to check all models.
from django.db import models
from datetime import datetime, timedelta

# Create your models here.
class yourTextLineGroup(models.Model):
    # fields go here
    lock_time = models.DateTimeField(null=True)
    locked_by = models.ForeignKey()  # Point me to your user model

    def lock(self):
        if self.is_locked():  # and code here to see if current user is not locked_by user
            # exception / bad return value here
            pass
        self.lock_time = datetime.now()

    def unlock(self):
        self.lock_time = None

    def is_locked(self):
        return self.lock_time and datetime.now() - self.lock_time < timedelta(minutes=10)
The code above assumes that the caller will call the save method after calling lock or unlock.
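A possible usage sketch from a view (assuming the model above, that locked_by points at your user model, and that a 409 response is acceptable when the group is taken):
from django.http import HttpResponse

def start_editing(request, group_id):
    group = yourTextLineGroup.objects.get(pk=group_id)
    if group.is_locked():
        # Someone else grabbed the group less than 10 minutes ago.
        return HttpResponse("This group is currently being edited.", status=409)
    group.lock()
    group.locked_by = request.user
    group.save()  # the model methods above do not call save() themselves
    # ... render the edit form ...
    return HttpResponse("You now hold the edit lock.")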
