python - Selective handling of exception traceback

I'm trying to build an exception-handling mechanism for my application, with several layers of information to display to the user, using Python's logging module.
In the application, the logging module has 2 handlers: a file handler for keeping DEBUG information and a stream handler for keeping INFO information. By default, the logging level is set to INFO. What I'm trying to achieve is a setup where, if any exception occurs, the user is shown a simple error message without any traceback by default. If the logging level is set to DEBUG, the user should still get only the simple message, but this time the exception traceback is logged to a log file through the file handler.
Is it possible to achieve this?
I tried using logger.exception(e), but it always prints the traceback onto the console.
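For reference, a minimal sketch of the setup described above (the logger name, log file name, and run() call are illustrative):
import logging

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)                    # default level is INFO

file_handler = logging.FileHandler("app.log")    # keeps DEBUG information
file_handler.setLevel(logging.DEBUG)

console_handler = logging.StreamHandler()        # keeps INFO information
console_handler.setLevel(logging.INFO)

logger.addHandler(file_handler)
logger.addHandler(console_handler)

try:
    run()                                        # hypothetical application entry point
except Exception as e:
    logger.exception(e)                          # logs at ERROR level, so the traceback hits the console too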

The traceback module may help you. At the top level of your application, you should put a catch-all statement:
import traceback

setup_log_and_other_basic_services()
try:
    run_your_app()
except Exception as e:
    if is_debug():
        traceback.print_exc()                     # full traceback to the console
    else:
        traceback.print_exc(file=get_log_file())  # full traceback to the log file only
    print(e)                                      # simple message for the user
The code outside the try/except block should not be allowed to crash.

Write your own exception-handling function and use it every time you would otherwise write an except block.
In this function, check which mode is active (INFO or DEBUG), then extract the information about the exception and feed it to the logger manually as needed.
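A minimal sketch of such a helper (the logger name and the risky_operation() call are placeholders):
import logging
import traceback

logger = logging.getLogger("app")

def handle_exception(exc):
    # Always show the short message; log the full traceback only in DEBUG mode.
    logger.error("Error: %s", exc)
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug("".join(traceback.format_exception(type(exc), exc, exc.__traceback__)))

try:
    risky_operation()        # placeholder for the code that may fail
except Exception as e:
    handle_exception(e)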

Related

Exception handling in AWS - Explicitly ignore some errors?

I have a Lambda function that's used to verify the integrity of a file, but for simplicity's sake let's just assume that it copies a file from one bucket into another upon trigger (the Lambda gets triggered when it detects file ingestion).
The problem I am currently facing is that whenever I ingest a lot of files, the function gets triggered as many times as there are files, which leads to unnecessary invocations. One invocation can handle multiple files, so the earlier invocations typically process more than one file, and the later ones, realising that there are no more files to process, yield a NoSuchKey error.
So I added try-except logic to log the NoSuchKey error as an info log, so that it does not trigger another function that catches the ERROR keyword and alerts the prod support team.
To do that, I used this code, which is taken from the AWS documentation:
import botocore
import boto3
import logging

# Set up our logger
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()

client = boto3.client('s3')

try:
    logger.info('Executing copy statement')
    # main logic
except botocore.exceptions.ClientError as error:
    if error.response['Error']['Code'] == 'NoSuchKey':
        logger.info('No object found - File has been moved')
    else:
        raise error
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/error-handling.html
Despite that, I still get the NoSuchKey error. From CloudWatch, it seems that the logic in the "try" section gets executed, followed by an ERROR statement and then the info log 'No object found - File has been moved' that's written in my "except" section.
I thought either "try" or "except" gets executed, but in this case both seemed to run.
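For what it's worth, a minimal sketch (no AWS involved) of how a try block behaves: every statement before the one that raises runs normally, and only then does control jump to the except block, so output from both sections is expected:
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()

try:
    logger.info('Executing copy statement')   # runs first and is logged
    raise KeyError('NoSuchKey')               # the failure happens here
    logger.info('never reached')              # anything after the raise is skipped
except KeyError:
    logger.info('No object found - File has been moved')  # the handler runs last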
Any pros willing to shed some light?

W0703: Catching too general exception Exception (broad-except)

I'm using pylint to review one of my .py scripts and the below warning shows:
W0703: Catching too general exception Exception (broad-except)
This is an extract of the code.
try:
    # Execute function1
    function1()
    logger.info('Function 1 successfully executed')
except Exception as e:
    send_mail_error(str(e))
    logger.error(str(e))
I would like to understand what pylint is suggesting here in order to improve the code.
It is good practice to catch specific exceptions; read here.
In your case you want to log unknown exceptions and send an email. In such cases I use an SMTPHandler and a custom sys.excepthook, like this.
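A minimal sketch of that combination, with the mail host and addresses as placeholders:
import logging
import logging.handlers
import sys

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)

# E-mail every ERROR record (all SMTP values below are placeholders).
smtp_handler = logging.handlers.SMTPHandler(
    mailhost=("smtp.example.com", 587),
    fromaddr="app@example.com",
    toaddrs=["ops@example.com"],
    subject="Unhandled exception",
)
smtp_handler.setLevel(logging.ERROR)
logger.addHandler(smtp_handler)

def log_uncaught(exc_type, exc_value, exc_traceback):
    # Anything not caught explicitly ends up here and is logged (and e-mailed).
    logger.error("Uncaught exception", exc_info=(exc_type, exc_value, exc_traceback))

sys.excepthook = log_uncaught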

Python exception handling of logging itself

While I use logging to record exceptions, it occurred to me that the methods of the logging library itself can throw exceptions.
For example, if a "bad" path is set for the log file, like
import logging
logging.basicConfig(filename='/bad/path/foo.log')
a FileNotFoundError will be thrown.
Suppose my goal of handling these logging exceptions is to keep the program running (and not exit, which the program would otherwise do). The first thing that comes to mind is
try:
    logging.basicConfig(filename='/bad/path/foo.log')
except Exception:
    pass
But this is considered by some to be an antipattern. It's also quite ugly to wrap every logging.error call in try/except blocks.
What's a good pattern to handle exceptions thrown by logging methods like basicConfig(), and possibly by debug(), error(), etc.?
Unless you re-initialise your logger mid-way through your code, why not just check whether the file exists during logger initialisation:
import logging
import os

if os.path.isfile(filename):
    # Initialise the logger
    logging.basicConfig(filename=filename)
This should work normally, unless of course some later part of the code attempts to delete the file, but I hope that is not the case.
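Since a FileNotFoundError from basicConfig() in this situation usually means the directory in the path is missing, a variant of the same idea (my own assumption, not part of the answer above) is to check the parent directory and fall back to console logging:
import logging
import os

log_path = "/bad/path/foo.log"

if os.path.isdir(os.path.dirname(log_path)):
    logging.basicConfig(filename=log_path, level=logging.DEBUG)
else:
    # Keep the program running: log to the console instead of the missing file.
    logging.basicConfig(level=logging.DEBUG)
    logging.warning("Log directory does not exist, logging to console: %s", log_path)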

How to log full call-stack with context in raven/sentry?

When a raised exception is caught at the root of the call stack, I can see the whole context at every level of the call stack in Sentry.
But when I use captureMessage() I can't see any context in Sentry.
If I use captureException() as in the code below, I can see only the top of the call stack.
try:
    raise Exception('Breakpoint!')
except:
    raven_client.captureException()
In other words I want to see in Sentry a logged message with full stacktrace and context.
The Python SDK has the ability to capture arbitrary stacktraces by passing stack=True to captureMessage:
raven_client.captureMessage('hello world', stack=True)
There is additionally an auto_log_stacks value that can be turned on when configuring the Client:
raven_client = Client(..., auto_log_stacks=True)
Caveat: Automatically logging stacks is useful, but it's not guaranteed accurate in some common situations. It's also a performance hit, albeit a minor one, as it has to constantly call out to inspect.stack().

How should I use try...except while defining a function?

I'm confused about when I don't need to use try..except. For the last few days I have used it in almost every function I defined, which I think may be a bad practice. For example:
class mongodb(object):
    def getRecords(self, tname, conditions=''):
        try:
            col = eval("self.db.%s" % tname)
            recs = col.find(conditions)
            return recs
        except Exception as e:
            # here make some error log with e.message
            pass
What I thought is that exceptions may be raised everywhere, so I have to use try to catch them.
My question is: is it good practice to use it everywhere when defining functions? If not, are there any principles for it? Help would be appreciated!
Regards
That may not be the best thing to do. The whole point of exceptions is that you can catch them at a very different level from the one where they are raised. It's best to handle them in the place where you have enough information to do something useful with them (and that is very application- and context-dependent).
For example, the code below can throw IOError("[Errno 2] No such file or directory"):
def read_data(filename):
    return open(filename).read()
Inside that function you don't have enough information to do anything with it, but in the place where you actually use the function you may decide, in case of such an exception, to try a different filename, display an error to the user, or something else:
try:
    data = read_data('data-file.txt')
except IOError:
    data = read_data('another-data-file.txt')
    # or
    show_error_message("Data file was not found.")
    # or something else
This (catching all possible exceptions very broadly) is indeed considered bad practice. You'll mask the real reason for the exception.
Catch only explicitly named types of exceptions (which you expect to happen and can/will handle gracefully). Let the rest (the unexpected ones) bubble up as they should.
You can log these (uncaught) exceptions (globally) by overriding sys.excepthook:
import sys
import traceback

def my_uncaught_exception_hook(exc_type, exc_value, exc_traceback):
    msg_exc = "".join(
        traceback.format_exception(exc_type, exc_value, exc_traceback))
    # ... log msg_exc here ...

sys.excepthook = my_uncaught_exception_hook  # our uncaught exception hook
You must find a balance between several goals:
1. An application should recover from as many errors as possible by itself.
2. An application should report all unrecoverable errors with enough detail to fix the cause of the problem.
3. Errors can happen everywhere, but you don't want to pollute your code with all the error handling code.
4. Applications shouldn't crash.
To solve #3, you can use an exception hook. All unhandled exceptions will cause the current transaction to abort. Catch them at the highest level, roll back the transaction (so the database doesn't become inconsistent) and either throw them again or swallow them (so the app doesn't crash). You should use decorators for this. This solves #4 and #1.
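A sketch of the decorator idea (the rollback step is left as a comment because the transaction API is application-specific, and handle_request is a placeholder):
import functools
import logging

logger = logging.getLogger("app")

def top_level_handler(func):
    # Catch everything at the highest level, log it, and keep the app alive.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            # roll back the current transaction here, if there is one
            logger.exception("Unhandled error in %s", func.__name__)
            return None   # swallow; re-raise instead if the error is fatal
    return wrapper

@top_level_handler
def handle_request(request):
    ...   # application code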
The solution for #2 is experience. You will learn with time what information you need to solve problems. The hard part is to still have the information when an error happens. One solution is to add debug logging calls in the low level methods.
Another solution is a dictionary per thread in which you can store some bits and which you dump when an error happens.
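A sketch of the per-thread dictionary idea, using threading.local() (the function names are illustrative):
import logging
import threading

logger = logging.getLogger("app")
_context = threading.local()   # one bag of debug bits per thread

def remember(**bits):
    # Store context cheaply; it is only dumped if an error happens later.
    if not hasattr(_context, "bits"):
        _context.bits = {}
    _context.bits.update(bits)

def report_error(exc):
    logger.error("Error: %s, context: %r", exc, getattr(_context, "bits", {}))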
Another option is to wrap a large section of code in a try/except (for instance, in a web application, one specific GUI page) and then use sys.exc_info() to print out the error and also the stack where it occurred:
import sys
import traceback

try:
    # some buggy code
    x = 1 / 0   # stand-in for whatever actually fails
except:
    print(sys.exc_info()[0])   # prints the exception class
    print(sys.exc_info()[1])   # prints the error message
    print(repr(traceback.format_tb(sys.exc_info()[2])))   # prints the stack
