I'm not sure exactly how to ask what I'm asking, and I don't know if any sample code would really be relevant here, so if clarification is necessary just ask.
I have a non-trivial program in Python that does a couple of things. It reads from some SQL Server database tables, executes some DDL in SQL Server (both of these with pyodbc), analyzes some data, and has a GUI to orchestrate everything so users in the future besides me can automate the process.
Everything is functioning as it should, but obviously I don't expect future users to always play by the rules. I can explicitly indicate which input is wrong (e.g. fields left empty), but there are quite a few things that can go wrong. Try/except structures are out of the question because they cause a few issues in the web of things happening in my program, some of which are embedded in the resources I'm using; not to mention, I feel it's probably not good form to have them everywhere.
That being said, I'm wondering if there's a way to cause an event (likely just a dialog box saying that an error occurred), so that a user without a view of the output window would know something had gone wrong. It would be nice if I could also grab the error message itself, but that isn't necessary. I'm alright with the error still occurring so long as the program can keep going if the issue is corrected.
I'm not sure if this is possible, or if it is possible what form it would take, like something that monitors the output or listens for errors, but I'm open to all information about this.
Thank you
You can wrap your code in a try/except block and bind the exception to a name so you can show it in the dialog, for example:
try:
    # your code
except Exception as e:
    print(e)  # change this to display the message in your dialog
This way you don't scatter try/except blocks across many different places, and you catch essentially any exception. The other approach is to raise a custom exception for each error and catch it with a matching except clause.
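A minimal sketch of the custom-exception approach; `EmptyFieldError` and `validate` are made-up names for illustration, not from the question:

```python
# A custom exception per error condition; the names here are hypothetical.
class EmptyFieldError(Exception):
    """Raised when a required input field is left blank."""

def validate(field_value):
    if not field_value:
        raise EmptyFieldError("a required field was left empty")
    return field_value

try:
    validate("")
except EmptyFieldError as e:
    message = str(e)  # this is the text you would show in the dialog
    print(message)
```

Because the except clause names `EmptyFieldError` specifically, unrelated errors still propagate normally instead of being silently swallowed.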
If you still don't want to use try/except at all, you could start a thread that keeps checking certain flag variables in a loop; whenever you want to trigger an error event, you set the corresponding variable to True and the thread opens the error dialog. For instance:
import time
import threading

test1 = False
test2 = False

def error_check():
    while True:
        if test1:
            pass  # start error dialog for error 1
        if test2:
            pass  # start error dialog for error 2
        time.sleep(0.1)

if __name__ == '__main__':
    t = threading.Thread(target=error_check)
    t.daemon = True
    t.start()
    # start your app
However, I recommend using try/except instead.
Hello, this question is about Python.
Is there a way to ignore all kinds of exceptions? I know I could just put the whole code in one huge try/except, but I want it to continue even if one part fails and, as a result, some other parts fail too. One way to achieve this would be to put every single line in a try/except statement. But is there another, more elegant way to do this?
Well, you can:
1 - Put every separate part in a try/except:
try:
    # something
except:
    pass
2 - Put everything in one bigger try/except:
try:
    # do something
    # do something else
    # do yet another thing
except:
    pass
Or you can use contextlib.suppress (https://docs.python.org/3/library/contextlib.html#contextlib.suppress), as Random Davis suggested, to ignore certain types of exceptions.
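A short sketch of contextlib.suppress; the file name is a made-up example:

```python
import os
from contextlib import suppress

# suppress ignores only the listed exception types; anything else still propagates
with suppress(FileNotFoundError):
    os.remove("definitely-missing-file.tmp")  # would raise FileNotFoundError otherwise

reached = True  # execution continues past the suppressed error
print("still running")
```

Unlike a bare except, suppress forces you to name exactly which exception types you are willing to ignore.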
But ignoring all exceptions is a really bad idea; instead you should do:
try:
    # something
except:
    # something else
As far as I know, there is no other "elegant" way to ignore exceptions.
(This could have been a comment but I lack rep to post comments)
Besides using try/except, my Python professor will generally comment out the functions he isn't testing so he can test one specific function. If part of your code isn't working, try adding a breakpoint before debugging; then you can run your code up until the breakpoint to see whether a certain line is doing what you want it to. In Visual Studio Code you can do this by clicking in the empty space just to the left of the line numbers, and you will see a red dot on the line if done correctly.
If you mean that a function isn't doing what you want it to do, employing these methods will help you find the error in your ways: track the input, function by function, all the way until you get the output you want.
If you mean that your code is simply too broken to run correctly: your code will always run until it hits an error, and if the error occurs early in the process of converting an input to an output, there can be many reasons for it that also stop the functions that are supposed to run afterwards from working correctly. If that is the case, comment out the later functions until the first one is working, and keep working chronologically to debug the remaining errors. In the future, I highly recommend posting your exact code when asking a coding question; otherwise it can be hard for others to extract the information they need to answer your question effectively. Good luck!
In reference to Global-Occult's answer: although you can basically try something and except the extraneous information, you really don't want to code like that, because in higher-level programs that extra information will no longer be extraneous; in fact, it could be very important data that lets you develop the program much further.
I created a big program that does a lot of different stuff. In this program, I added some error management but I would like to add management for critical errors which should start the critical_error_function().
So basically, I've used:
try:
    # some fabulous code
except:
    critical_error_function(error_type)
But I am here to ask if there is a better way to do this...
In Python, exceptions are the intended way of error handling. Assuming you wrap your whole program in one try/except block, a better way would be to
only try/except-wrap the lines that can raise exceptions, instead of your complete program
catch them with a specific exception class such as ValueError, or even your own custom exception, instead of a bare except statement
handle them appropriately. Handling could mean skipping the value, logging the error, or calling your critical_error_function.
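Those three points might look like the following sketch; `parse_quantity` is a hypothetical helper, not code from the question:

```python
def parse_quantity(raw):
    # only the one line that can raise is wrapped, not the whole program
    try:
        return int(raw)
    # a specific exception class, not a bare except
    except ValueError:
        # handle appropriately: here we log the bad value and skip it
        print("skipping bad value: %r" % (raw,))
        return None

good = parse_quantity("3")
bad = parse_quantity("oops")  # logs the value, returns None
```

A truly critical error (say, a lost database connection) would instead be caught as its own exception type and routed to your critical_error_function.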
I have a program developed in Python (2.7 & 3.2) that reads three files and generates some code based on those files. In the code, I have several input file checks to capture any input errors by the user. If the program catches an input error, I use the os.sys.exit() command to stop processing and issue an error message. I was primarily using IDLE for this process and it worked fine.
Now I have developed a GUI for the program for deployment using PyQt4. The user uses the GUI to input all the necessary input files and conditions, and then the GUI calls the earlier code I wrote with the necessary arguments.
However, I am finding that if the user makes an error in the input files, then when the earlier code catches those errors and os.sys.exit() is executed, the GUI itself is shut down completely, which is not good.
I introduced the same checks on the input files into the GUI, so if those are caught, they are handled within the GUI and not by the code. But there are certain processing checks that happen inside the code that the GUI does not have access to.
The Question: Is there a way to make the called code stop running and print an error message (to a log file, for example, which I already use) without causing the GUI to quit altogether?
Thanks,
note: The code is too large at this point for me to integrate it into the GUI as a class.
I assume you cannot, or prefer not to, change your CLI programs, and instead wish to catch the exception raised by sys.exit in the GUI. sys.exit raises SystemExit, which you can catch like any other exception:
import os

try:
    os.sys.exit()
except SystemExit as err:
    print('Caught ya')
Have you tried handling exceptions in Python?
try:
    # some code here
except Exception:
    print('Something bad happened')
Better yet, try catching specific exceptions.
List of built-in exceptions http://docs.python.org/library/exceptions.html#bltin-exceptions
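For instance, catching one specific built-in exception rather than the catch-all Exception:

```python
# int("abc") raises ValueError, a specific built-in exception
try:
    count = int("abc")
except ValueError as e:
    error_text = str(e)  # the message you could log or display
    print("Bad input:", error_text)
```

A KeyError, IOError, or any other unrelated failure would still propagate here, which is usually what you want.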
I'm thinking about where to write the log record around an operation. Here are two different styles. The first one, write log before the operation.
Before:
log.info("Perform operation XXX")
operation()
And here is a different style, write the log after the operation.
After:
operation()
log.info("Operation XXX is done.")
With the before-style, the logging records say what the program is about to do. The pro of this style is that when something goes wrong, the developer can detect it easily, because they know what the program was doing at that moment. The con is that you can't be sure the operation finished correctly: if something goes wrong inside the operation, for example a function call blocks and never returns, you can never know it from reading the logging records. With the after-style, you are sure the operation is done.
Of course, we can mix those two styles together.
Both:
log.info("Perform operation XXX")
operation()
log.info("Operation XXX is done.")
But I feel that is kind of verbose, since it doubles the logging records. So here is my question: what is a good logging style? I would like to know what you think.
I'd typically use two different log levels.
The first one I put at the "debug" level, and the second one at the "info" level. That way typical production machines only log what's been done, but I can turn on debug logging and see what the program tries to do before it errors out.
It all depends on what you want to log. If you're interested in knowing that the code reached the point where it's about to do an operation, log before. If you want to make sure the operation succeeded, log after. If you want both, do both.
Maybe you could use something like a try/except? Here's a naive Python example:
try:
    operation()
    log.info("Operation XXX is done.")
except Exception:
    log.info("Operation XXX failed")
    raise  # optional: re-raise if you want to propagate the failure to another handler and/or crash eventually
The operation will be launched.
If it doesn't fail (no exception raised), you get a success statement in the logs.
If it fails (by raising an exception, like disk full or whatever you are trying to do), the exception is caught and you get a failure statement.
The log is more meaningful: you keep the verbosity to one line per operation and you still know whether it succeeded. Best of all the choices.
Oh, and you get a hook point where you can add some code to be executed in case of failure.
I hope it helps.
There's another style that I've seen used in Linux boot scripts and in strace. It's got the advantages of your combined style with less verbosity, but you've got to make sure that your logging facility isn't doing any buffering. I don't know log.info, so here's a rough example with print:
print("Doing XXX... ", end="")  # note the lack of a newline :)
operation()
print("Done.")
(Since in most cases print uses buffering, using this example verbatim won't work properly. You won't see "Doing XXX" until you see the "Done". But you get the general idea.)
The other disadvantage of this style is that things can get mixed up if you have multiple threads writing to the same log.
For my python/django site I need to build a "dashboard" that will update me on the status of dozens of error/heartbeat/unexpected events going on.
There are a few types of "events" that I'm currently tracking by having the Django site send emails to the admin accounts:
1) Something that normally should happen goes wrong. We synch files to different services and other machines every few hours and I send error emails when this goes wrong.
2) When something that should happen actually happens. Sometimes the events in item #1 fail so badly that they don't even send emails (a try/except around an event should always work, but things can get deleted from the crontab, the system configuration can get knocked askew so things won't run, etc.). In those cases I won't even get an error email, and the lack of a success/heartbeat email is what lets me know that something that should have happened didn't.
3) When anything unexpected happens. We've made a lot of assumptions on how backend operations will run and if any of these assumptions are violated (e.g. we find two users who have the same email address) we want to know about it. These events aren't necessarily errors, more like warnings to investigate.
So I want to build a dashboard that I can easily update from Python/Django to give me a bird's-eye view of all of these types of activity, so I can stop sending hundreds of emails per week (which is already unmanageable).
Sounds like you want to create a basic logging system that outputs to a web page.
So you could write a simple app called, say, systemevents, that creates an Event record each time something happens on the site. You'd add a signal hook so that anywhere else in the site you could write something like:
from systemevents.signals import record_event
...
try:
    # code goes here
except Exception as inst:
    record_event("Error occurred while taunting %s: %s" % (obj, inst), type="Error")
else:
    record_event("Successfully taunted %s" % (obj,), type="Success")
Then you can pretty easily create a view that lists these events.
However, keep in mind that this is adding a layer of complexity that is highly problematic. What if the error lies in your database? Then each time you try to record an error event, another error occurs!
Far better to use something like the built-in logging module to create a text-based log file, then whip up something that imports that text file and lays it out in a more readable fashion.
One more tip: to change how Django handles exceptions, you have to write a custom view for 500 errors. If you're using systemevents, you'd write something like:
from django.views.defaults import server_error

def custom_error_view(request):
    try:
        import sys
        exc_type, value, tb = sys.exc_info()
        error_message = ""  # create an error message from the values above
        record_event("Error occurred: %s" % (error_message,), type="Error")
    except Exception:
        pass
    return server_error(request)
Note that none of this code has been tested for correctness. It's just meant as a guide.
Have you tried looking at django-sentry?
http://dcramer.github.com/django-sentry/