Writing a custom Django BaseCommand class that logs the command details - python

I have a number of management commands in my Django application. I'd like an entry to be logged to one of my models every time a management command is run. The model contains four fields:
started - the time when the command was started.
ended - the time when the command was ended/stopped.
success - a boolean value indicating whether the command ran successfully or not.
name - the name of the command.
I thought the best way to implement this would be to write a custom class called LoggedBaseCommand which derives from django.core.management.base.BaseCommand. My management commands would derive from this class, which would contain the simple logging functionality.
I know bits and pieces of how to implement this, but I can't seem to figure out how to glue it all together.
Any help?
Thanks

import datetime

from django.core.management.base import BaseCommand

from commandlog.models import CommandLogEntry

class LoggedBaseCommand(BaseCommand):
    def handle(self, *args, **options):
        # 'started' ought to be automatic; 'ended' and 'success' not yet determined.
        c = CommandLogEntry(name=__file__)
        result = "FAIL"
        try:
            result = self.handle_and_log(*args, **options)
        except Exception:
            pass
        c.success = (result != "FAIL")
        c.ended = datetime.datetime.now()
        c.save()

    def handle_and_log(self, *args, **options):
        # All of your child classes override this.
        raise NotImplementedError
You might have to fiddle with the __file__ entry, perhaps using re, to strip out paths and the terminating '.py', but since every command is keyed on its file name in the hierarchy, this ought to do the trick. Note that I assume failure, and only record success if the handle_and_log() function passes back a success string (change to suit your needs). If it throws an exception, failure is recorded. Again, change to suit your needs (e.g. record the exception string).
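To see how the pieces glue together without a Django project, here is a minimal pure-Python sketch of the same wrap-and-log pattern. The class names are illustrative and a plain dict stands in for the CommandLogEntry model:

```python
import datetime

class LoggedBase:
    """Toy stand-in for LoggedBaseCommand: wraps handle() with logging,
    recording into a dict instead of a Django model."""
    def __init__(self):
        self.log = {}

    def handle(self, *args, **options):
        self.log["name"] = type(self).__name__
        self.log["started"] = datetime.datetime.now()
        success = False
        try:
            self.handle_and_log(*args, **options)
            success = True
        except Exception:
            pass
        self.log["success"] = success
        self.log["ended"] = datetime.datetime.now()

    def handle_and_log(self, *args, **options):
        raise NotImplementedError

class GoodCommand(LoggedBase):
    def handle_and_log(self, *args, **options):
        return "did some work"

class BadCommand(LoggedBase):
    def handle_and_log(self, *args, **options):
        raise RuntimeError("boom")
```

Each child class only implements handle_and_log(); the base class records the outcome whether the work succeeds or throws.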


Python waiting until condition is met

I've created a GUI that asks the user for a username/password. The class that creates the GUI calls another class that creates a web browser and tries to log in to a website using a method of that class. If the login is successful, a variable of the GUI object becomes True.
My main file is the following:
from AskUserPassword import AskGUI
from MainInterfaceGUI import MainGUI

Ask = AskGUI()
Ask = Ask.show()
MainInterface = MainGUI()

if Ask.LoginSuccesful == True:
    Ask.close()
    MainInterface.show()
If the login is successful, I would like to hide the user/password GUI and show the main GUI. The code above clearly doesn't work.
How can I make Python wait until a condition of this type is met?
Instead of constantly checking for the condition to be met, why not supply what you want to do upon login as a callback to your AskGUI object, and have the AskGUI object call the callback when login is attempted? Something like:
def on_login(ask_gui):
    if ask_gui.LoginSuccesful:
        ask_gui.close()
        MainInterface.show()

Ask = AskGUI()
Ask.login_callback = on_login
Then, in AskGUI, when the login button is clicked and you check the credentials, you do:
def on_login_click(self):
    ...
    # Check login credentials.
    self.LoginSuccessful = True
    # Try to get self.login_callback; return None if it doesn't exist.
    callback_function = getattr(self, 'login_callback', None)
    if callback_function is not None:
        callback_function(self)
Re:
I prefer to have all the structure in the main file. This is a reduced example, but if I start to trigger from a method inside a class that is also inside another class... it's going to be hard to understand.
I recommend this way because all the code to handle something that happens upon login is included in the class that does the logging in. The code that decides which UI elements to display (on_login()) lives in the class that handles that.
You don't need anything in the background that keeps checking to see if Ask.LoginSuccessful has changed.
When you use a decent IDE, it's pretty easy to keep track of where each function is defined.
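For illustration, here is a runnable sketch of the callback pattern with plain objects standing in for the real GUI. The AskGUI stub below is hypothetical (no actual widgets), and the events list stands in for showing MainInterface:

```python
class AskGUI:
    """Hypothetical stand-in for the real AskGUI class (no actual GUI)."""
    def __init__(self):
        self.LoginSuccessful = False
        self.closed = False

    def close(self):
        self.closed = True

    def on_login_click(self):
        # Pretend the credentials checked out.
        self.LoginSuccessful = True
        # Invoke the callback if one was supplied.
        callback = getattr(self, 'login_callback', None)
        if callback is not None:
            callback(self)

events = []  # records what the main file would display

def on_login(ask_gui):
    if ask_gui.LoginSuccessful:
        ask_gui.close()
        events.append('main_shown')  # stand-in for MainInterface.show()

ask = AskGUI()
ask.login_callback = on_login
ask.on_login_click()  # closes the dialog and "shows" the main window
```

Nothing polls in the background; the dialog itself pushes the event to whoever registered interest.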

Robot Framework: Do something after the execution ends using a library as a listener

I am trying to gather the files: output.xml, report.html and log.html when the whole execution ends.
I can use a listener for this purpose, but I want to avoid writing a line like:
robot --listener "SomeListener.py" YourTestSuite.robot
Then, I looked at the documentation and I found that I can use a Test Library as a Listener and import it in my Test Suite, for example:
class SomeLibrary(object):
    ROBOT_LISTENER_API_VERSION = 2  # or 3
    ROBOT_LIBRARY_SCOPE = "GLOBAL"  # or "TEST SUITE"

    def __init__(self):
        self.ROBOT_LIBRARY_LISTENER = self

    def start_suite(self, data, result):
        pass

    def close(self):
        pass
My problem is that the close method is called when the library goes out of scope. So, I cannot gather the report files because they don't exist at that moment, and I receive an error. I also tried the method:
def report_file(self, path):
    pass
But nothing happens. I guess this could be because a library used as a listener cannot use these methods from the listener API, or because the file is still not created at that point.
Any ideas on how I can gather those files using a library as a listener?
I am open to ideas or suggestions.
Thanks.
The listener specification actually defines a number of methods, several of which are triggered when the output files are generated:
startSuite
endSuite
startTest
endTest
startKeyword
endKeyword
logMessage
message
outputFile
logFile
reportFile
debugFile
Have a look at the last four, which should be helpful. For more details, look at the listener API specification, section 4.3.3 "Listener interface methods".
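As a sketch (assuming the v2 listener API, where the hooks are spelled output_file, log_file, and report_file and each receives the path of the file just written), a library-as-listener that collects the generated files could look like this. The class name and destination directory are illustrative:

```python
import os
import shutil

class ReportCollector(object):
    """Hypothetical library-as-listener that copies each generated
    file to a destination directory as soon as Robot Framework
    reports that the file has been written."""
    ROBOT_LISTENER_API_VERSION = 2
    ROBOT_LIBRARY_SCOPE = "GLOBAL"

    def __init__(self, dest="collected_reports"):
        self.ROBOT_LIBRARY_LISTENER = self
        self.dest = dest

    def _collect(self, path):
        # Copy the file if it exists; these hooks fire after writing.
        if path and os.path.exists(path):
            if not os.path.isdir(self.dest):
                os.makedirs(self.dest)
            shutil.copy(path, self.dest)

    def output_file(self, path):
        self._collect(path)

    def log_file(self, path):
        self._collect(path)

    def report_file(self, path):
        self._collect(path)
```

Importing this as a library in the test suite should then collect output.xml, log.html, and report.html without a --listener flag on the command line.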

Execute some code when an SQLAlchemy object's deletion is actually committed

I have a SQLAlchemy model that represents a file and thus contains the path to an actual file. Since deletion of the database row and file should go along (so no orphaned files are left and no rows point to deleted files) I added a delete() method to my model class:
def delete(self):
    if os.path.exists(self.path):
        os.remove(self.path)
    db.session.delete(self)
This works fine but has one huge disadvantage: the file is deleted immediately, before the transaction containing the database deletion is committed.
One option would be committing in the delete() method - but I don't want to do this since I might not be finished with the current transaction. So I'm looking for a way to delay the deletion of the physical file until the transaction deleting the row is actually committed.
SQLAlchemy has an after_delete event but according to the docs this is triggered when the SQL is emitted (i.e. on flush) which is too early. It also has an after_commit event but at this point everything deleted in the transaction has probably been deleted from SA.
When using SQLAlchemy in a Flask app with Flask-SQLAlchemy, it provides a models_committed signal which receives a list of (model, operation) tuples. Using this signal, doing what I'm looking for is extremely easy:
from flask_sqlalchemy import models_committed

@models_committed.connect_via(app)
def on_models_committed(sender, changes):
    for obj, change in changes:
        if change == 'delete' and hasattr(obj, '__commit_delete__'):
            obj.__commit_delete__()
With this generic function every model that needs on-delete-commit code now simply needs to have a method __commit_delete__(self) and do whatever it needs to do in that method.
It can also be done without Flask-SQLAlchemy; however, in this case it needs some more code:
A deletion needs to be recorded when it's performed. This is done using the after_delete event.
Any recorded deletions need to be handled when a COMMIT is successful. This is done using the after_commit event.
In case the transaction fails or is manually rolled back, the recorded changes also need to be cleared. This is done using the after_rollback() event.
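The three steps can be sketched in a library-agnostic way. The toy session below is only a stand-in for SQLAlchemy's (its method names are invented), but it shows the record-on-flush, act-on-commit, clear-on-rollback flow:

```python
import os

class ToySession:
    """Toy stand-in for an SQLAlchemy session, illustrating the pattern:
    record file deletions when the row deletion is flushed, physically
    delete on commit, and forget the recorded paths on rollback."""
    def __init__(self):
        self.pending_file_deletes = []

    def delete_file_row(self, path):
        # Analogue of the after_delete event at flush time:
        # only remember the path, do not touch the file yet.
        self.pending_file_deletes.append(path)

    def commit(self):
        # Analogue of after_commit: the row deletion is now durable,
        # so it is safe to remove the physical files.
        for path in self.pending_file_deletes:
            if os.path.exists(path):
                os.remove(path)
        self.pending_file_deletes = []

    def rollback(self):
        # Analogue of after_rollback: discard the recorded deletions.
        self.pending_file_deletes = []
```

The key property is that a rollback leaves the file untouched, while a commit removes it exactly once.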
This follows along with the other event-based answers, but I thought I'd post this code, since I wrote it to solve pretty much your exact problem:
The code (below) registers a SessionExtension class that accumulates all new, changed, and deleted objects as flushes occur, then clears or evaluates the queue when the session is actually committed or rolled back. For the classes which have an external file attached, I then implemented obj.after_db_new(session), obj.after_db_update(session), and/or obj.after_db_delete(session) methods which the SessionExtension invokes as appropriate; you can then populate those methods to take care of creating / saving / deleting the external files.
Note: I'm almost positive this could be rewritten in a cleaner manner using SqlAlchemy's new event system, and it has a few other flaws, but it's in production and working, so I haven't updated it :)
import logging; log = logging.getLogger(__name__)
from sqlalchemy.orm.session import SessionExtension

class TrackerExtension(SessionExtension):
    def __init__(self):
        self.new = set()
        self.deleted = set()
        self.dirty = set()

    def after_flush(self, session, flush_context):
        # NOTE: requires >= SA 0.5
        self.new.update(obj for obj in session.new
                        if hasattr(obj, "after_db_new"))
        self.deleted.update(obj for obj in session.deleted
                            if hasattr(obj, "after_db_delete"))
        self.dirty.update(obj for obj in session.dirty
                          if hasattr(obj, "after_db_update"))

    def after_commit(self, session):
        # NOTE: this is rather hackneyed, in that it hides errors until
        # the end, just so it can commit as many objects as possible.
        # FIXME: could integrate this w/ twophase to make everything
        # safer in case the methods fail.
        log.debug("after commit: new=%r deleted=%r dirty=%r",
                  self.new, self.deleted, self.dirty)
        ecount = 0
        if self.new:
            for obj in self.new:
                try:
                    obj.after_db_new(session)
                except:
                    ecount += 1
                    log.critical("error occurred in after_db_new: obj=%r",
                                 obj, exc_info=True)
            self.new.clear()
        if self.deleted:
            for obj in self.deleted:
                try:
                    obj.after_db_delete(session)
                except:
                    ecount += 1
                    log.critical("error occurred in after_db_delete: obj=%r",
                                 obj, exc_info=True)
            self.deleted.clear()
        if self.dirty:
            for obj in self.dirty:
                try:
                    obj.after_db_update(session)
                except:
                    ecount += 1
                    log.critical("error occurred in after_db_update: obj=%r",
                                 obj, exc_info=True)
            self.dirty.clear()
        if ecount:
            raise RuntimeError("%r object errors during after_commit() ... "
                               "see traceback for more" % ecount)

    def after_rollback(self, session):
        self.new.clear()
        self.deleted.clear()
        self.dirty.clear()

# then add "extension=TrackerExtension()" to the Session constructor
This seems to be a bit challenging. I'm curious whether an SQL AFTER DELETE trigger might be the best route for this, granted it won't be DRY and I'm not sure the SQL database you are using supports it. Still, AFAIK SQLAlchemy pushes transactions to the DB but doesn't really know when they have been committed, if I'm interpreting this comment correctly:

it's the database server itself that maintains all "pending" data in an ongoing transaction. The changes aren't persisted permanently to disk, and revealed publicly to other transactions, until the database receives a COMMIT command, which is what Session.commit() sends.

taken from "SQLAlchemy: What's the difference between flush() and commit()?" by the creator of SQLAlchemy...
If your SQLAlchemy backend supports it, enable two-phase commit. You will need to use (or write) a transaction model for the filesystem that:
checks permissions, etc. to ensure that the file exists and can be deleted during the first commit phase
actually deletes the file during the second commit phase.
That's probably as good as it's going to get. Unix filesystems, as far as I know, do not natively support XA or other two-phase transactional systems, so you will have to live with the small exposure from having a second-phase filesystem delete fail unexpectedly.

convention to represent the exit status and actual result in XMLRPC

In the C world, a function can return an error code to represent the exit status, and use an INOUT/OUT parameter to carry the actual fruit of the process. When it comes to XML-RPC there are no INOUT/OUT parameters, so is there any best practice/convention for representing the exit status and the actual result?
The context is that I am trying to write an agent/daemon (Python SimpleXMLRPCServer) running on the server, and I want to design the "protocol" for interacting with it.
Any advice is appreciated.
EDIT:
Per S.Lott's comment, to make the problem clearer:
It is more about OS convention than C convention. I agree with that.
The job of the agent is more or less to run some command on the server, which inherently comes with an exit code/result idiom.
One simple way to implement this in Python is with a tuple: have your function return a tuple of (status, result), where the status can be numeric or a string, and the result can be any Python data structure you fancy.
Here's an example, adapted from the module documentation. Server code:
from SimpleXMLRPCServer import SimpleXMLRPCServer
from SimpleXMLRPCServer import SimpleXMLRPCRequestHandler

# Restrict to a particular path.
class RequestHandler(SimpleXMLRPCRequestHandler):
    rpc_paths = ('/RPC2',)

# Create server
server = SimpleXMLRPCServer(("localhost", 8000),
                            requestHandler=RequestHandler)

def myfunction(x, y):
    status = 1
    result = [5, 6, [4, 5]]
    return (status, result)

server.register_function(myfunction)

# Run the server's main loop
server.serve_forever()
Client code:
import xmlrpclib

s = xmlrpclib.ServerProxy('http://localhost:8000')
print s.myfunction(2, 4)
The server function returns a tuple, which the client receives as a list (XML-RPC has no tuple type, only arrays).
"in the C world, a function can return error code to represent the exit status, and use INOUT/OUT parameter to carry the actual fruit of the process"
Consider an exit status to be a hack. It's not a C-ism, it's a Linux-ism. C functions return exactly one value. C doesn't have exceptions, so there are several ways to indicate failure, all pretty bad.
Exception handling is what's needed. Python and Java have this, and they don't need exit status.
OSes, however, still depend on exit status because shell scripting is still very primitive and some languages (like C) can't produce exceptions.
Consider in/out variables also to be a hack. This is a terrible hack because the function has multiple side-effects in addition to returning a value.
Both of these "features" aren't really the best design patterns to follow.
Ideally, a function is "idempotent" -- no matter how many times you call it, you get the same results. In/Out variables break idempotency in obscure, hard-to-debug ways.
You don't really need either of these features, that's why you don't see many best practices for implementing them.
The best practice is to return a value or raise an exception. If you need to return multiple values you return a tuple. If things didn't work, you don't return an exit status, you raise an exception.
Update: since the remote process is basically RSH running a remote command, you should do what remctl does.
You need to mimic http://linux.die.net/man/1/remctl precisely. You have to write a Python client and server. The server returns a message with a status code (and any other summary, like run time). The client exits with that same status code.
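As an illustration of the return-a-value-or-raise convention, here is a small Python 3 sketch (the module names differ from the Python 2 example above): the server function simply raises on failure, and the XML-RPC layer delivers that to the client as a Fault it can catch. The run_command function and its commands are invented for the example.

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def run_command(cmd):
    """Return the result on success; raise on failure. The XML-RPC
    layer marshals the exception into a Fault for the client."""
    if cmd == "ok":
        return [5, 6, [4, 5]]
    raise ValueError("unknown command: %r" % cmd)

# Port 0 lets the OS pick a free port.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
port = server.server_address[1]
server.register_function(run_command)
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = xmlrpc.client.ServerProxy("http://localhost:%d" % port)
result = proxy.run_command("ok")  # the tuple/list result, no status field needed
try:
    proxy.run_command("bad")
    error = None
except xmlrpc.client.Fault as fault:
    error = fault.faultString  # carries the server-side exception text
```

The happy path carries only the result; the failure path carries the error, so neither call site has to unpack and check a status code.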

Setting a timeout function in django

So I'm creating a Django app that allows a user to add a new line of text to an existing group of text lines. However, I don't want multiple users adding lines to the same group of text lines concurrently. So I created a BoolField isBeingEdited that is set to True once a user decides to append a specific group. Once the bool is True, no one else can append the group until the edit is submitted, whereupon the bool is set to False again. This works all right, unless someone decides to make an edit and then changes their mind or forgets about it, etc. I want isBeingEdited to flip back to False after 10 minutes or so. Is this a job for cron, or is there something easier out there? Any suggestions?
Change the boolean to a "lock time":
To lock the model, set the lock time to the current time.
To unlock the model, set the lock time to None.
Add an is_locked method that returns "not locked" if the current time is more than 10 minutes after the lock time.
This gives you your timeout without cron and without regular hits to the DB to check flags and unset them. Instead, the time is only checked if you are interested in whether this model is locked. A cron job would likely have to check all models.
from django.db import models
from datetime import datetime, timedelta

# Create your models here.
class yourTextLineGroup(models.Model):
    # fields go here
    lock_time = models.DateTimeField(null=True)
    locked_by = models.ForeignKey()  # Point me to your user model

    def lock(self):
        if self.is_locked():  # and code here to see if current user is not locked_by user
            # exception / bad return value here
            pass
        self.lock_time = datetime.now()

    def unlock(self):
        self.lock_time = None

    def is_locked(self):
        return self.lock_time and datetime.now() - self.lock_time < timedelta(minutes=10)
Code above assumes that the caller will call the save method after calling lock or unlock.
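The expiry logic itself can be sketched (and exercised) without Django. The class below is a plain-Python stand-in for the model, with an injectable clock so the 10-minute window can be tested deterministically:

```python
from datetime import datetime, timedelta

LOCK_TIMEOUT = timedelta(minutes=10)  # same 10-minute window as above

class LockableGroup:
    """Plain-Python sketch of the lock-time pattern: no cron, no flag
    resets; the lock simply reads as expired once the window passes."""
    def __init__(self):
        self.lock_time = None

    def lock(self, now=None):
        self.lock_time = now or datetime.now()

    def unlock(self):
        self.lock_time = None

    def is_locked(self, now=None):
        now = now or datetime.now()
        return self.lock_time is not None and now - self.lock_time < LOCK_TIMEOUT
```

Nothing ever flips the stale lock back; it just stops counting as locked once 10 minutes have elapsed, which is exactly why no background job is needed.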
