Some IO operations can raise any of a set of errors; importantly, it is not a single exception but a whole set. There is one set for socket errors and another for file IO. How can I handle the group of exceptions for one kind of IO operation without it overlapping with the group for another?
For example, OSError covers file IO errors and some (?) socket errors.
The only solution I have is to wrap the IO operations in try/except and raise a user-defined exception:
def foo():
    try:
        # some file io
    except:
        raise MyFileIOException(reason=sys.exc_info())

    try:
        # some socket io
    except:
        raise MySocketIOException(reason=sys.exc_info())
def bar():
    try:
        foo()
    except MyFileIOException as exc:
        # handle
    except MySocketIOException as exc:
        # handle
Is there a better, more elegant solution?
I think you have written a decent exception handler. That said, it depends on the scenario. For example, if you are writing a function in a library, it's advisable not to introduce any application/domain-specific exception types there; simply let the raw/native exception types propagate. When application code uses that function, it is its responsibility to declare its own set of exception types and wrap the exceptions raised by the library in those types.
For example, the library's code:
def WriteToFile(strFile, strCnt):
    objFile = open(strFile, "w")
    objFile.write(strCnt)
    objFile.close()
And the StudentRecord application code:
import library  # import our library, in which WriteToFile is defined
import sys

def SaveStudent(objStudent):
    try:
        library.WriteToFile("studentRec.txt", str(objStudent))
    except IOError as e:
        # Python 2 re-raise syntax, preserving the original traceback
        raise StudentSaveFailedException, None, sys.exc_info()[2]
Now let this exception be handled by the application's exception-handling component, which can handle it meaningfully and take the necessary corrective actions.
I know this is a naive explanation, but it is the strategy used in larger applications that rely on several in-house and third-party libraries for proper exception handling.
Hope it helps you.
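On Python 3 the three-argument raise syntax used above no longer exists; a minimal sketch of the same idea using raise ... from, where StudentSaveFailedException is assumed to be an application-defined exception:

import library  # the library module sketched above

class StudentSaveFailedException(Exception):
    pass

def SaveStudent(objStudent):
    try:
        library.WriteToFile("studentRec.txt", str(objStudent))
    except IOError as e:
        # chain the library exception so the original traceback is preserved
        raise StudentSaveFailedException("could not save student record") from e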
I ended up with:
def decorator(f):
    @functools.wraps(f)
    @asyncio.coroutine
    def wrapper(*args, **kwargs):
        try:
            return (yield from f(*args, **kwargs))
        except asyncio.CancelledError:
            raise
        except:
            raise CustomError(reason=sys.exc_info())
    return wrapper
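On Python 3.5+ the same wrapper can be written with async/await instead of the generator-based coroutine. A minimal sketch; CustomError here is just a stand-in for the user-defined exception used above:

import asyncio
import functools
import sys

class CustomError(Exception):
    def __init__(self, reason):
        super().__init__(reason)
        self.reason = reason

def decorator(f):
    @functools.wraps(f)
    async def wrapper(*args, **kwargs):
        try:
            return await f(*args, **kwargs)
        except asyncio.CancelledError:
            raise  # never convert cancellation into a custom error
        except Exception:
            raise CustomError(reason=sys.exc_info())
    return wrapper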
I have multiple functions that perform different actions. The error handling is pretty similar across them, with slight variations.
ErrorA and ErrorB are handled in every function. I would like to refactor this to avoid repeating the except clauses for ErrorA and ErrorB everywhere. Is there a way to achieve this in Python? I do not want to change the code's behavior or nest try/except blocks. Your answers are very welcome!
def func_a():
    try:
        do_action_a()
    except ErrorA:
        handle_error_a()
    except ErrorB:
        handle_error_b()
    except ErrorC:
        handle_error_c()

def func_b():
    try:
        do_action_b()
    except ErrorA:
        handle_error_a()
    except ErrorB:
        handle_error_b()
    except ErrorD:
        handle_error_d()

def func_c():
    try:
        do_action_c()
    except ErrorA:
        handle_error_a()
    except ErrorB:
        handle_error_b()
    except Exception:
        handle_general_exception()
So, the most straightforward way would be to refactor the handling of ErrorA and ErrorB into its own function, something like:
def execute_with_a_b_handling(func, *args, **kwargs):
    try:
        return func(*args, **kwargs)
    except ErrorA:
        handle_error_a()
    except ErrorB:
        handle_error_b()

def func_a():
    try:
        execute_with_a_b_handling(do_action_a)
    except ErrorC:
        handle_error_c()

def func_b():
    try:
        execute_with_a_b_handling(do_action_b)
    except ErrorD:
        handle_error_d()

def func_c():
    try:
        execute_with_a_b_handling(do_action_c)
    except Exception:
        handle_general_exception()
Of course, with a better name.
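If the call sites all look alike, the same handling can also be packaged as a decorator rather than a wrapper function. A sketch under the same assumptions (ErrorA/ErrorB and the handlers are the names from the question; handle_a_b is a hypothetical name):

import functools

def handle_a_b(func):
    """Apply the shared ErrorA/ErrorB handling around func."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except ErrorA:
            handle_error_a()
        except ErrorB:
            handle_error_b()
    return wrapper

@handle_a_b
def func_a():
    do_action_a()

Functions that need extra handling (ErrorC, ErrorD, a general Exception) can still wrap the decorated call in their own try/except.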
Personally, I quite like using context managers in this situation. They are best used where there is a point in the code from which it is worth handling an error, and another point where that handler should be removed again.
from contextlib import contextmanager

global_errors = {}

@contextmanager
def error_handler_context(error, function):
    # Code to acquire resource, e.g.:
    global_errors[error] = function
    try:
        yield
    finally:
        # Code to release resource, e.g.:
        del global_errors[error]

def handle_errors(function):
    try:
        function()
    except Exception as e:
        try:
            global_errors[type(e)]()
        except Exception:
            raise e

def error_1():
    print("here")

def value_error_raise():
    raise ValueError("Test")

def exception_raise():
    raise Exception("test Error")

with error_handler_context(ValueError, error_1):
    handle_errors(value_error_raise)
    handle_errors(exception_raise)
This is not a perfect solution, and there are definitely a lot of cases where this should not be used. So use caution.
def create_spreadsheet_with_api(connection, filename):
    try:
        connection.open(filename)
        # if no exception was raised: raise "file already exists"
        # if an exception was raised: connection.create(filename)
Using the pygsheets library (which uses the Google API), I'm trying to create a spreadsheet with a given name, but only if it does not already exist.
When it does not exist, I receive the exception pygsheets.exceptions.SpreadsheetNotFound.
So I need something like a "reverse" exception handler; if there is a better practice for doing this in Python, your advice would be highly appreciated.
The try statement has an else clause, which is executed only if no exception is raised (similarly named, but otherwise unrelated to the well-known if/else). So:
def create_spreadsheet_with_api(connection, filename):
    try:
        connection.open(filename)
    except FileNotFoundError:
        connection.create(filename)
    else:
        raise FileAlreadyExistsError
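Adapted to the exception actually named in the question, a sketch could look like this (connection.open/connection.create are taken from the question's pseudocode; the built-in FileExistsError is used here just as an example):

import pygsheets

def create_spreadsheet_with_api(connection, filename):
    try:
        connection.open(filename)
    except pygsheets.exceptions.SpreadsheetNotFound:
        # the spreadsheet is missing, so create it
        connection.create(filename)
    else:
        # open() succeeded, so a spreadsheet with this name already exists
        raise FileExistsError(filename)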
Is it possible to tell if there was an exception once you're in the finally clause? Something like:
try:
    funky code
finally:
    if ???:
        print('the funky code raised')
I'm looking to make something like this more DRY:
try:
    funky code
except HandleThis:
    # handle it
    raised = True
except DontHandleThis:
    raised = True
    raise
else:
    raised = False
finally:
    logger.info('funky code raised %s', raised)
I don't like that this requires catching an exception you don't intend to handle, just to set a flag.
Since some comments are asking for less "M" in the MCVE, here is some more background on the use-case. The actual problem is about escalation of logging levels.
The funky code is third party and can't be changed.
The failure exception and stack trace does not contain any useful diagnostic information, so using logger.exception in an except block is not helpful here.
If the funky code raised then some information which I need to see has already been logged, at level DEBUG. We do not and can not handle the error, but want to escalate the DEBUG logging because the information needed is in there.
The funky code does not raise, most of the time. I don't want to escalate logging levels for the general case, because it is too verbose.
Hence, the code runs under a log capture context (which sets up custom handlers to intercept log records) and some debug info gets re-logged retrospectively:
try:
    with LogCapture() as log:
        funky_code()  # <-- third party badness
finally:
    # log events are buffered in memory. if there was an exception,
    # emit everything that was captured at a WARNING level
    for record in log.captured:
        if <there was an exception>:
            log_fn = mylogger.warning
        else:
            log_fn = getattr(mylogger, record.levelname.lower())
        log_fn(record.msg, record.args)
Using a contextmanager
You could use a custom contextmanager, for example:
class DidWeRaise:
    __slots__ = ('exception_happened', )  # instances will take less memory

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # If no exception happened the `exc_type` is None
        self.exception_happened = exc_type is not None
And then use that inside the try:
try:
    with DidWeRaise() as error_state:
        # funky code
finally:
    if error_state.exception_happened:
        print('the funky code raised')
It's still an additional variable but it's probably a lot easier to reuse if you want to use it in multiple places. And you don't need to toggle it yourself.
Using a variable
In case you don't want the context manager, I would reverse the logic of the trigger and toggle it only if no exception has happened. That way you don't need an except clause for exceptions you don't want to handle. The most appropriate place is the else clause, which is entered when the try block didn't throw an exception:
exception_happened = True
try:
    # funky code
except HandleThis:
    # handle this kind of exception
else:
    exception_happened = False
finally:
    if exception_happened:
        print('the funky code raised')
And, as already pointed out, instead of having a "toggle" variable you could replace it (in this case) with the desired logging function:
mylog = mylogger.warning
try:
    with LogCapture() as log:
        funky_code()
except HandleThis:
    # handle this kind of exception
else:
    # In case absolutely no exception was thrown in the try we can log on debug level
    mylog = mylogger.debug
finally:
    for record in log.captured:
        mylog(record.msg, record.args)
Of course it would also work if you put it at the end of your try (as other answers here suggested) but I prefer the else clause because it has more meaning ("that code is meant to be executed only if there was no exception in the try block") and may be easier to maintain in the long run. Although it's still more to maintain than the context manager because the variable is set and toggled in different places.
Using sys.exc_info (works only for unhandled exceptions)
The last approach I want to mention is probably not useful for you but maybe useful for future readers who only want to know if there's an unhandled exception (an exception that was not caught in any except block or has been raised inside an except block). In that case you can use sys.exc_info:
import sys

try:
    # funky code
except HandleThis:
    pass
finally:
    if sys.exc_info()[0] is not None:
        # only entered if there's an *unhandled* exception, e.g. NOT a HandleThis exception
        print('funky code raised')
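To illustrate the difference, here is a small self-contained check (Python 3, just a demonstration): a handled exception leaves sys.exc_info() empty in the finally block, while a still-propagating one does not:

import sys

def raise_value_error():
    raise ValueError("handled below")

def raise_key_error():
    raise KeyError("not handled below")

def run(func):
    try:
        try:
            func()
        except ValueError:
            pass                      # ValueError is handled here
        finally:
            print(sys.exc_info()[0])  # None unless an exception is still propagating
    except KeyError:
        pass                          # catch the demo KeyError so the script keeps running

run(raise_value_error)  # prints: None
run(raise_key_error)    # prints: <class 'KeyError'>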
raised = True
try:
    funky code
    raised = False
except HandleThis:
    # handle it
finally:
    logger.info('funky code raised %s', raised)
Given the additional background information added to the question about selecting a log level, this seems very easily adapted to the intended use-case:
mylog = WARNING
try:
    funky code
    mylog = DEBUG
except HandleThis:
    # handle it
finally:
    mylog(...)
You can easily assign your caught exception to a variable and use it in the finally block, eg:
>>> x = 1
>>> error = None
>>> try:
...     x.foo()
... except Exception as e:
...     error = e
... finally:
...     if error is not None:
...         print(error)
...
'int' object has no attribute 'foo'
Okay, so what it sounds like you actually just want to either modify your existing context manager, or use a similar approach: logbook actually has something called a FingersCrossedHandler that would do exactly what you want. But you could do it yourself, like:
@contextmanager
def LogCapture():
    # your existing buffer code here
    level = logging.WARN
    try:
        yield
    except UselessException:
        level = logging.DEBUG
        raise  # Or don't, if you just want it to go away
    finally:
        # emit logs here
Original Response
You're thinking about this a bit sideways.
You do intend to handle the exception - you're handling it by setting a flag. Maybe you don't care about anything else (which seems like a bad idea), but if you care about doing something when an exception is raised, then you want to be explicit about it.
The fact that you're setting a variable, but you want the exception to continue on means that what you really want is to raise your own specific exception, from the exception that was raised:
class MyPkgException(Exception): pass
class MyError(MyPkgException): pass  # If there's another exception type, you can also inherit from that

def do_the_badness():
    try:
        raise FileNotFoundError('Or some other code that raises an error')
    except FileNotFoundError as e:
        raise MyError('File was not found, doh!') from e
    finally:
        do_some_cleanup()

try:
    do_the_badness()
except MyError as e:
    print('The error? Yeah, it happened')
This solves:
Explicitly handling the exception(s) that you're looking to handle
Making the stack traces and original exceptions available
Allowing your code that's going to handle the original exception somewhere else to handle your exception that's thrown
Allowing some top-level exception handling code to just catch MyPkgException to catch all of your exceptions so it can log something and exit with a nice status instead of an ugly stack trace
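A minimal sketch of that last point, reusing the classes defined above:

try:
    do_the_badness()
except MyPkgException as e:
    # one place to log and exit cleanly instead of dumping a raw traceback
    print('something went wrong:', e)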
If it was me, I'd do a little re-ordering of your code.
raised = False
try:
    # funky code
except HandleThis:
    # handle it
    raised = True
except Exception as ex:
    # Don't Handle This
    raise ex
finally:
    if raised:
        logger.info('funky code was raised')
I've placed the raised boolean assignment outside of the try statement to ensure scope and made the final except statement a general exception handler for exceptions that you don't want to handle.
This style determines whether your code failed. Another approach would be to determine when your code succeeds.
success = False
try:
    # funky code
    success = True
except HandleThis:
    # handle it
    pass
except Exception as ex:
    # Don't Handle This
    raise ex
finally:
    if success:
        logger.info('funky code was successful')
    else:
        logger.info('funky code was raised')
If an exception happened --> put this logic in the except block(s).
If no exception happened --> put this logic in the try block, after the point in the code where the exception can occur.
Finally blocks should be reserved for "cleanup actions," according to the Python language reference. When a finally clause is present and an exception occurs, the interpreter proceeds as follows: the exception is saved, the finally block is executed first, and then the exception is re-raised.
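A tiny illustration of that ordering (just a demonstration, not taken from any answer above):

def demo():
    try:
        raise ValueError("boom")
    finally:
        print("cleanup runs first")   # executed before the exception leaves demo()

try:
    demo()
except ValueError:
    print("the exception is re-raised to the caller afterwards")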
I am trying to wrap a .NET library in nice pythonic wrappers for use in IronPython.
A pattern used often in this library is a PersistenceBlock to make database CRUD operations clean and 'all or nothing':
try:
    Persistence.BeginNewTransaction()
    # do stuff here
    Persistence.CommitTransaction()
except Exception, e:
    Persistence.AbortTransaction()
    Log.Error(e)
finally:
    Persistence.CloseTransaction()
I would like to wrap this in a class that allows this kind of code:
with PersistenceBlock:
    # do stuff here
This is what I've come up with:
class PersistenceBlock():
    def __init__(self):
        pass

    def __enter__(self):
        return self

    def __exit__(self, exctype, excinst, exctb):
        try:
            Persistence.BeginNewTransaction()
            yield
            Persistence.CommitTransaction()
        except:
            Persistence.AbortTransaction()
            Log.Error(e)
        finally:
            Persistence.CloseTransaction()
Is this a proper implementation of PEP343? What might I be missing?
The main thing that is throwing me is that Persistence is a static .NET class, so there is no 'instance' to manage in the normal sense.
I have tried searching, but the word 'with' overwhelms the results :(
You can find the docs via searching for context manager protocol - that's the protocol all objects supposed to work with the with statement should implement.
Context managers (i.e. the __enter__ method) do not need to return anything unless you want to use the with ... as ... syntax. In your __exit__ method you'll have to do proper error checks: re-raise the exception if there is one and commit if there isn't. Maybe like this:
class PersistenceContext():
    def __enter__(self):
        # when opening the block, open a transaction
        Persistence.BeginNewTransaction()

    def __exit__(self, exctype, excinst, exctb):
        if excinst is None:
            # all went well - commit & close
            Persistence.CommitTransaction()
            Persistence.CloseTransaction()
        else:
            # something went wrong - abort, close and raise the error
            Persistence.AbortTransaction()
            Persistence.CloseTransaction()
            raise exctype, excinst, exctb
For completeness, you could also use the contextmanager decorator to implement your context using a simple generator:
import contextlib

@contextlib.contextmanager
def PersistenceContext():
    try:
        yield Persistence.BeginNewTransaction()
    except Exception:
        Persistence.AbortTransaction()
        raise
    else:
        Persistence.CommitTransaction()
    finally:
        Persistence.CloseTransaction()
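Either version would then be used the way the question asks; a sketch, where do_stuff is a hypothetical placeholder for the CRUD operations:

with PersistenceContext():
    # committed if this block finishes normally,
    # aborted (and then closed) if any exception escapes it
    do_stuff()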
Firstly, I'm not sure if my approach is proper, so I'm open to a variety of suggestions.
If try/except statements are frequently repeated in code, are there any good ways to shorten them or avoid fully writing them out?
try:
    # Do similar thing
    os.remove('/my/file')
except OSError, e:
    # Same exception handling
    pass

try:
    # Do similar thing
    os.chmod('/other/file', 0700)
except OSError, e:
    # Same exception handling
    pass
For example, for one-line actions you could define an exception-handling wrapper and then pass a lambda function:
def may_exist(func):
    """Work with a file which you are not sure exists."""
    try:
        func()
    except OSError, e:
        # Same exception handling
        pass

may_exist(lambda: os.remove('/my/file'))
may_exist(lambda: os.chmod('/other/file', 0700))
Does this 'solution' just make things less clear? Should I just fully write out all the try/except statements?
The best way to abstract exception handling is with a context manager:
from contextlib import contextmanager

@contextmanager
def common_handling():
    try:
        yield
    except OSError:
        # whatever your common handling is
        pass
then:
with common_handling():
    os.remove('/my/file')

with common_handling():
    os.chmod('/other/file', 0700)
This has the advantage that you can put full statements, and more than one of them, in each common_handling block.
Keep in mind though, your need to use the same handling over and over again feels a lot like over-handling exceptions. Are you sure you need to do this much?
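As a side note, on Python 3.4+ the standard library already provides contextlib.suppress for the special case where the common handling is simply to ignore a particular exception type:

import os
from contextlib import suppress  # available since Python 3.4

with suppress(OSError):
    os.remove('/my/file')

with suppress(OSError):
    os.chmod('/other/file', 0o700)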
It would probably be cleaner to make may_exist a decorator:
from functools import wraps

def may_exist(func):
    @wraps(func)
    def wrapper(*args, **kwds):
        try:
            return func(*args, **kwds)
        except OSError:
            pass
    return wrapper
Then you can either do:
may_exist(os.remove)('/my/file')
may_exist(os.chmod)('/other/file', 0700)
for a one-off call, or:
remove_if_exists = may_exist(os.remove)
...
remove_if_exists('somefile')
if you use it a lot.
I think your generic solution is ok, but I wouldn't use those lambdas at the bottom. I'd recommend passing the function and arguments like this
def may_exist(func, *args):
    """Work with a file which you are not sure exists."""
    try:
        func(*args)
    except OSError, e:
        # Same exception handling
        pass

may_exist(os.remove, '/my/file')
may_exist(os.chmod, '/other/file', 0700)
Would something like this work:
def may_exist(func, *func_args):
    try:
        func(*func_args)
    except OSError as e:
        pass
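Usage would then mirror the earlier examples (0o700 is the Python 3 spelling of the octal mode):

may_exist(os.remove, '/my/file')
may_exist(os.chmod, '/other/file', 0o700)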