So, I'm pulling my hair out here, and maybe someone has an insight.
I have a cronjob that loops over all my Link objects, does some stuff, might change properties on the object and does a save(). That's it.
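Roughly this shape, with made-up field names (the real per-link work is snipped):

from myapp.models import Link  # assumed app/model names

def refresh_links():
    for link in Link.objects.all():
        # "does some stuff" -- stand-in for the real per-link work
        link.visits = link.visits + 1  # might change properties
        link.save()                    # the only write performed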
Every so often (around once an hour), one of my rows just disappears. Poof. Nothing in the logs.
So, I'm trying to add debugging statements everywhere, but are there any glaring reasons for an entry to disappear? Is the only way to remove an entry by calling delete()?
Just any general directions to go would be wonderful, thank you.
Some ideas I've had:
- a git push while the cronjob is running
- some cascading delete is wiping them out
- some Django method is calling delete() on an exception
You could override the delete method on your Link class and dump a stack trace or log a message to see if it's indeed happening from within your Django application.
import sys
import traceback

def delete(self):
    super(Link, self).delete()
    # print_stack() dumps the full call stack that led to this delete()
    traceback.print_stack(file=sys.stdout)
There may be a better way to get and log a stack trace, but that's the first thing that came to mind.
You could use django-logging with LOGGING_LOG_SQL = True to log all the SQL, so you can see if any DELETEs are occurring.
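If django-logging isn't an option, newer Django versions can log the SQL through the standard logging framework instead; a minimal settings.py sketch (note Django only records queries while DEBUG = True):

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        # Django's SQL logger; every query, including DELETEs, passes through here
        'django.db.backends': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
    },
}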
Related
I'm not sure exactly how to ask what I'm asking, and I don't know if any sample code would really be relevant here, so if clarification is necessary just ask.
I have a non-trivial program in Python that does a couple of things. It reads from some SQL Server database tables, executes some DDL in SQL Server (both of these with pyodbc), analyzes some data, and has a GUI to orchestrate everything so users in the future besides me can automate the process.
Everything is functioning as it should, but obviously I don't expect future users to always play by the rules. I can explicitly flag wrong input (i.e. fields left empty), but there are quite a few things that can go wrong. Try/except structures everywhere are out of the question: they cause a few issues in the web of things happening in my program (some of which are embedded in the resources I'm using), and I feel it's probably not good form to have them everywhere anyway.
That being said, I'm wondering if there's a way to cause an event (likely just a dialog box saying that an error occurred), so that a user without a view of the output window would know something had gone wrong. It would be nice if I could also grab the error message itself, but that isn't necessary. I'm alright with the error still occurring so long as the program can keep going if the issue is corrected.
I'm not sure if this is possible, or if it is possible what form it would take, like something that monitors the output or listens for errors, but I'm open to all information about this.
Thank you
You can wrap your code with a try/except block and name the Exception to print it in the dialog, for example:
try:
    ...  # your code
except Exception as e:
    print(e)  # change this to show the error in your dialog
This way you don't scatter try/except blocks across many places, and you catch basically any Exception. The other way is to raise custom exceptions for each error and catch them with except, as sketched below.
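For instance, a sketch of the custom-exception approach (ValidationError and run_job are made-up names for illustration):

class ValidationError(Exception):
    """Raised when user input fails validation."""

def run_job():
    raise ValidationError("the 'server' field is empty")

try:
    run_job()
except ValidationError as e:
    print("Input problem:", e)     # swap for your dialog box
except Exception as e:
    print("Unexpected error:", e)  # catch-all for everything else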
If you still don't want to use try/except at all, you could start a thread that keeps checking certain variables (in a loop); whenever you want to trigger an error event, you set the corresponding variable to True and the thread starts the error dialog. For instance:
import time
import threading

test1 = False
test2 = False

def errorCheck():
    while True:
        if test1:
            pass  # start error dialog for error1
        if test2:
            pass  # start error dialog for error2
        time.sleep(0.1)

if __name__ == '__main__':
    t = threading.Thread(target=errorCheck)
    t.daemon = True  # don't let this thread keep the process alive
    t.start()
    # start your app
However, I recommend using try/except instead.
Is it possible to add a custom message globally to runtime errors? I would like to include a timestamp, as this would help figure out whether a file was eventually written by that execution process.
Replacing sys.excepthook with an appropriate function will allow you to do whatever you like upon every occurrence of an uncaught exception.
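For example, a minimal sketch (the hook name and timestamp format are my own):

import sys
import time

def timestamped_excepthook(exc_type, exc_value, exc_tb):
    # Write a timestamped marker, then defer to the default handler.
    sys.stderr.write("[%s] uncaught exception:\n" % time.strftime("%Y-%m-%d %H:%M:%S"))
    sys.__excepthook__(exc_type, exc_value, exc_tb)

sys.excepthook = timestamped_excepthook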
Take a look at the Python docs (2, 3) on handling exceptions. You can catch the RuntimeError and print the original message along with a custom timestamp. For more information on accessing the stack trace and exception messages, check out this question.
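A rough sketch of that idea (do_work is a stand-in for whatever might raise):

import time

def do_work():
    raise RuntimeError("file could not be written")  # stand-in failure

try:
    do_work()
except RuntimeError as e:
    # Prepend a timestamp to the original message and re-raise.
    raise RuntimeError("[%s] %s" % (time.strftime("%Y-%m-%d %H:%M:%S"), e))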
Ignacio's solution is also great if you'd like to set the message globally.
Is there a way that I can handle some sort of "catch-all" error handling in a Pyramid web app? I currently have implemented exception logging to a database (via the docs at http://docs.pylonsproject.org/projects/pyramid_cookbook/en/latest/logging/sqlalchemy_logger.html) and I'll return messages to my views to put a "friendly" face on what happened.
But is there something I can implement that would show some sort of generic "Oops, you ran into a problem and we're looking into it" for anything else I'm not explicitly catching, and I could use the above error handler behind the scenes to log whatever to the database? Or, what sort of thing should I be looking for in searches?
Thanks,
edit, since I can't fit it all into a comment:
Thanks, that seems to be exactly what I'm looking for!
One thing I'm running into, I don't know if it's related or not....
So I'm implementing the SQL logger as above like so:
import logging
import traceback

class SQLAlchemyHandler(logging.Handler):
    # A very basic logger that commits a LogRecord to the SQL DB
    def emit(self, record):
        trace = None
        if record.exc_info:
            # format_exception() wants the (type, value, traceback) tuple unpacked
            trace = ''.join(traceback.format_exception(*record.exc_info))
        log = Log(
            logger=record.name,
            level=record.levelname,
            trace=trace,
            msg=record.getMessage(),
        )
        DBSession.add(log)
        DBSession.flush()
        # transaction.commit()
I had to take out the transaction.commit() call and instead use .flush(), because I was getting a SQLAlchemy DetachedInstanceError exception when using transaction. I think it's because I'm playing some games with passing a request to a helper function, and that's where it seems to be thrown. So it works by flushing the session. Buuuut, what happens is that if I have a log.error() statement in my exception view and an exception is actually thrown, the view catches it (great!) but the log statement in the view doesn't get committed. The debugging logs in Pyramid show it being written, but it is never committed.
If I change the logging handler back to transaction.commit, then the exceptions do get committed, but I'm back at my original problem. I think I need to focus back on what I'm doing in my helper function that's causing it in the first place, but I'm still learning SQLAlchemy in general, too. Sometimes it can be a little strange.
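One idea I haven't tried yet (just a sketch, and it assumes DBSession is bound to a single engine): give the handler its own session, so log rows commit independently of the request's transaction:

import logging
import traceback
from sqlalchemy.orm import sessionmaker

# Dedicated session factory bound to the same engine as DBSession.
LogSession = sessionmaker(bind=DBSession.get_bind())

class IndependentSQLAlchemyHandler(logging.Handler):
    def emit(self, record):
        trace = None
        if record.exc_info:
            trace = ''.join(traceback.format_exception(*record.exc_info))
        session = LogSession()
        try:
            session.add(Log(logger=record.name,
                            level=record.levelname,
                            trace=trace,
                            msg=record.getMessage()))
            session.commit()  # commits even if the request transaction is rolled back
        finally:
            session.close()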
You can set up an exception view. For example:
from pyramid.view import view_config
from pyramid.response import Response

@view_config(context=Exception)
def error_view(exc, request):
    # log or do other stuff with exc...
    return Response("Sorry, there was an error")
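(For the decorator to be picked up, your Pyramid configuration needs to scan the module it lives in, e.g. config.scan(); alternatively, register the view imperatively with config.add_view(error_view, context=Exception).)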
I'm thinking about where to write the log record around an operation. Here are two different styles. The first one writes the log before the operation.
Before:
log.info("Perform operation XXX")
operation()
And here is a different style that writes the log after the operation.
After:
operation()
log.info("Operation XXX is done.")
With the before-style, the logging records say what the program is about to do. The pro of this style is that when something goes wrong, developers can detect it easily, because they know what the program was doing at the time. The con is that you can't be sure the operation finished correctly; if something goes wrong inside the operation, for example a function call blocks and never returns, you can never tell from the logging records. With the after-style, you are sure the operation is done.
Of course, we can mix the two styles together
Both:
log.info("Perform operation XXX")
operation()
log.info("Operation XXX is done.")
But I feel that is kind of verbose, and it doubles the number of log records. So here is my question: what is a good logging style? I would like to know what you think.
I'd typically use two different log levels.
The first one I put on a "debug" level, and the second one on an "info" level. That way typical production machines would only log what's being done, but I can turn on the debug logging and see what it tries to do before it errors out.
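Something like this, assuming log is a standard logging logger and operation() is the call from the question:

import logging

log = logging.getLogger(__name__)

log.debug("Performing operation XXX")  # hidden unless debug logging is enabled
operation()                            # the operation from the question
log.info("Operation XXX is done.")     # still logged on production machines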
It all depends on what you want to log. If you're interested in knowing that the code reached the point where it's about to do an operation, log before. If you want to make sure the operation succeeded, log after. If you want both, do both.
Maybe you could use something like a try/catch? Here's a naive Python example:
try:
    operation()
    log.info("Operation XXX is done.")
except Exception:
    log.error("Operation XXX failed.")
    raise  # optional: re-raise if you want to propagate the failure and/or crash eventually
The operation will be launched.
If it doesn't fail (no exception raised), you get a success statement in the logs.
If it fails (by raising an exception, like the disk being full or whatever your operation touches), the exception is caught and you get a failure statement.
The log is more meaningful: you keep the verbosity to a one-liner and still know whether the operation succeeded. Best of all choices.
Oh, and you get a hook point where you can add some code to be executed in case of failure.
I hope it helps.
There's another style that I've seen used in Linux boot scripts and in strace. It's got the advantages of your combined style with less verbosity, but you've got to make sure that your logging facility isn't doing any buffering. I don't know log.info, so here's a rough example with print:
print "Doing XXX... ", # Note lack of newline :)
operation()
print "Done."
(Since print output is usually buffered, using this example verbatim won't work properly: you won't see "Doing XXX... " until the "Done." appears. Passing flush=True to print() avoids that. But you get the general idea.)
The other disadvantage of this style is that things can get mixed up if you have multiple threads writing to the same log.
this one is hard to explain!
I am writing a Python application to be run through mod_python. At each request, the returned output differs, even though the logic is 'fixed'.
I have two classes, ClassA and ClassB, such that:
class ClassA:
    def __init__(self, req):
        req.write("In classA __init__")
        objB = ClassB()
        objB.methodB(req)
        req.write("End of __init__")

class ClassB:
    def methodB(self, req):
        req.write("In methodB")
        return None
Which is a heavily snipped version of what I have, but the stuff I have snipped doesn't change the control flow. There is only one place where methodB() is called: from __init__() in ClassA.
You would expect the following output:
In classA __init__
In methodB
End of __init__
However, seemingly at random, I either get the above correct output or:
In classA __init__
In methodB
End of __init__
In methodB
The stack trace shows that methodB is being called the second time from __init__. methodB should only be called once. If it were called a second time, you would expect the other logic in __init__ to be done twice too. But nothing before or after methodB executes a second time, and there is no recursion.
I wouldn't usually resort to using SO for my debugging, but I have been scratching my head for a while on this.
Version: 2.5.2 r252:60911
thanks in advance
Edit
Some clues that the problem might be elsewhere: the above changes to the snippet produce the weird output about once in every 250 hits, which is odd.
The more output there is prior to printing "In methodB", the more often it is subsequently printed incorrectly ... on average, not in direct ratio. It even does it in Lynx.
I'm going back to the drawing board.
:(
In response to the answer
It seems mod_python and Apache are having marital problems. After a restart, things are fine for a few requests; then it all goes increasingly pear-shaped. When issuing
/etc/rc.d/init.d/httpd stop
It takes a weirdly long time. Also, RAM is getting eaten up by requests. I am not that familiar with Apache's internals, but it feels like (thanks to Nadia) threads are staying alive and randomly butting in on requests. Which is plain bonkers.
Moving to mod_wsgi as S.Lott and Nadia suggested
thanks again!!
I've seen similar behaviour with mod_python before. Usually it is because Apache is running multiple threads and one of them is running an older version of the code. When you refresh the page, chances are the thread with the older code serves it. I usually fix this by stopping Apache and then restarting it:
sudo /etc/init.d/apache stop
sudo /etc/init.d/apache restart
Restart on its own doesn't always work, and sometimes even stopping and restarting doesn't! That might sound strange, but my last resort in those rare cases where nothing else works is to add a raise Exception() statement on the first line of the handler, refresh the page, restart Apache, and then refresh the page again. That works every time. There must be a better solution, but that's what worked for me. mod_python can drive one crazy for sure!
I hope this might help.
I don't really know, but constructors aren't supposed to return anything, so remove the return None. (Even if they could return something, None is automatically returned when a function doesn't explicitly return anything.)
And I think you need a self argument in methodB.
EDIT: Could you show more code? This snippet works fine for me.