Given a file of unknown type, I'd like to open it with one of a number of handlers. Each handler raises an exception if it cannot open the file.
I would like to try them all and, if none succeeds, raise an exception.
The design I came up with is:
filename = 'something.something'
try:
    content = open_data_file(filename)
    handle_data_content(content)
except IOError:
    try:
        content = open_sound_file(filename)
        handle_sound_content(content)
    except IOError:
        try:
            content = open_image_file(filename)
            handle_image_content(content)
        except IOError:
            ...
This cascade doesn't seem to be the right way to do it.
Any suggestions?
Maybe you can group all the handlers and try them in a for loop, raising an exception at the end if none succeeded. You can also hang on to the raised exception to get some of the information back out of it, as shown here:
filename = 'something.something'
handlers = [(open_data_file, handle_data_content),
            (open_sound_file, handle_sound_content),
            (open_image_file, handle_image_content)]

for opener, handler in handlers:
    try:
        content = opener(filename)
        handler(content)
        break
    except IOError as e:
        err = e  # keep a reference; Python 3 clears `e` at the end of the except block
else:
    # Re-raise the last exception we got within the loop. Also preserves sub-class information.
    raise err
Is checking entirely out of the question?
>>> import urllib
>>> from mimetypes import MimeTypes
>>> guess = MimeTypes()
>>> path = urllib.pathname2url(target_file)
>>> opener = guess.guess_type(path)
>>> opener
('audio/ogg', None)
I know try/except and EAFP are really popular in Python, but there are times when a foolish consistency will only interfere with the task at hand.
Additionally, a try/except loop may not break for the reasons you expect, and as others have pointed out you're going to need to report the errors in a meaningful way if you want to see what's really happening as you iterate over file openers until you either succeed or fail. Either way, introspective code gets written: digging into the try/excepts to get a meaningful error, reading the file path and using a type checker, or even just splitting the file name to get the extension... in the face of ambiguity, refuse the temptation to guess.
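If you go the checking route, dispatching on the guessed type could look roughly like the sketch below. The opener/handler names are the ones from the question; the mapping from major MIME types to handlers and the dispatch_on_type name are assumptions for illustration only.
# A hedged sketch: pick the opener/handler pair from the guessed MIME type
# instead of trying each opener in turn. The 'audio'/'image' keys are assumed.
import mimetypes

handlers_by_type = {
    'audio': (open_sound_file, handle_sound_content),
    'image': (open_image_file, handle_image_content),
}

def dispatch_on_type(filename):
    mime, _ = mimetypes.guess_type(filename)
    major = mime.split('/')[0] if mime else None
    # fall back to the generic data handler when the type is unknown
    opener, handler = handlers_by_type.get(major, (open_data_file, handle_data_content))
    return handler(opener(filename))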
Like the others, I also recommend using a loop, but with tighter try/except scopes.
Plus, it's always better to re-raise the original exception, in order to preserve extra info about the failure, including traceback.
openers_handlers = [(open_data_file, handle_data_content)]

def open_and_handle(filename):
    for i, (opener, handler) in enumerate(openers_handlers):
        try:
            f = opener(filename)
        except IOError:
            if i >= len(openers_handlers) - 1:
                # all failed. re-raise the original exception
                raise
            else:
                # try next
                continue
        else:
            # successfully opened. handle:
            return handler(f)
You can use Context Managers:
class ContextManager(object):
    def __init__(self, x, failure_handling):
        self.x = x
        self.failure_handling = failure_handling

    def __enter__(self):
        return self.x

    def __exit__(self, exctype, excinst, exctb):
        if exctype == IOError:
            if self.failure_handling:
                fn = self.failure_handling.pop(0)
                with ContextManager(fn(filename), self.failure_handling) as context:
                    handle_data_content(context)
                return True

filename = 'something.something'
with ContextManager(open_data_file(filename), [open_sound_file, open_image_file]) as content:
    handle_data_content(content)
Related
Is it possible to tell if there was an exception once you're in the finally clause? Something like:
try:
    funky code
finally:
    if ???:
        print('the funky code raised')
I'm looking to make something like this more DRY:
try:
    funky code
except HandleThis:
    # handle it
    raised = True
except DontHandleThis:
    raised = True
    raise
else:
    raised = False
finally:
    logger.info('funky code raised %s', raised)
I don't like that it requires catching an exception you don't intend to handle, just to set a flag.
Since some comments are asking for less "M" in the MCVE, here is some more background on the use-case. The actual problem is about escalation of logging levels.
The funky code is third party and can't be changed.
The failure exception and stack trace do not contain any useful diagnostic information, so using logger.exception in an except block is not helpful here.
If the funky code raised then some information which I need to see has already been logged, at level DEBUG. We do not and can not handle the error, but want to escalate the DEBUG logging because the information needed is in there.
The funky code does not raise, most of the time. I don't want to escalate logging levels for the general case, because it is too verbose.
Hence, the code runs under a log capture context (which sets up custom handlers to intercept log records) and some debug info gets re-logged retrospectively:
try:
    with LogCapture() as log:
        funky_code()  # <-- third party badness
finally:
    # log events are buffered in memory. if there was an exception,
    # emit everything that was captured at a WARNING level
    for record in log.captured:
        if <there was an exception>:
            log_fn = mylogger.warning
        else:
            log_fn = getattr(mylogger, record.levelname.lower())
        log_fn(record.msg, record.args)
Using a contextmanager
You could use a custom contextmanager, for example:
class DidWeRaise:
    __slots__ = ('exception_happened', )  # instances will take less memory

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # If no exception happened the `exc_type` is None
        self.exception_happened = exc_type is not None
And then use that inside the try:
try:
    with DidWeRaise() as error_state:
        # funky code
finally:
    if error_state.exception_happened:
        print('the funky code raised')
It's still an additional variable but it's probably a lot easier to reuse if you want to use it in multiple places. And you don't need to toggle it yourself.
Using a variable
In case you don't want the contextmanager, I would reverse the logic of the trigger and toggle it only in case no exception has happened. That way you don't need an except case for exceptions that you don't want to handle. The most appropriate place would be the else clause, which is entered in case the try didn't throw an exception:
exception_happened = True
try:
    # funky code
except HandleThis:
    # handle this kind of exception
else:
    exception_happened = False
finally:
    if exception_happened:
        print('the funky code raised')
And as already pointed out instead of having a "toggle" variable you could replace it (in this case) with the desired logging function:
mylog = mylogger.warning
try:
    with LogCapture() as log:
        funky_code()
except HandleThis:
    # handle this kind of exception
else:
    # In case absolutely no exception was thrown in the try we can log on debug level
    mylog = mylogger.debug
finally:
    for record in log.captured:
        mylog(record.msg, record.args)
Of course it would also work if you put it at the end of your try (as other answers here suggested) but I prefer the else clause because it has more meaning ("that code is meant to be executed only if there was no exception in the try block") and may be easier to maintain in the long run. Although it's still more to maintain than the context manager because the variable is set and toggled in different places.
Using sys.exc_info (works only for unhandled exceptions)
The last approach I want to mention is probably not useful for you but maybe useful for future readers who only want to know if there's an unhandled exception (an exception that was not caught in any except block or has been raised inside an except block). In that case you can use sys.exc_info:
import sys

try:
    # funky code
except HandleThis:
    pass
finally:
    if sys.exc_info()[0] is not None:
        # only entered if there's an *unhandled* exception, e.g. NOT a HandleThis exception
        print('funky code raised')
raised = True
try:
    funky code
    raised = False
except HandleThis:
    # handle it
finally:
    logger.info('funky code raised %s', raised)
Given the additional background information added to the question about selecting a log level, this seems very easily adapted to the intended use-case:
mylog = mylogger.warning
try:
    funky code
    mylog = mylogger.debug
except HandleThis:
    # handle it
finally:
    mylog(...)
You can easily assign your caught exception to a variable and use it in the finally block, eg:
>>> x = 1
>>> error = None
>>> try:
...     x.foo()
... except Exception as e:
...     error = e
... finally:
...     if error is not None:
...         print(error)
...
'int' object has no attribute 'foo'
Okay, so what it sounds like you actually just want to either modify your existing context manager, or use a similar approach: logbook actually has something called a FingersCrossedHandler that would do exactly what you want. But you could do it yourself, like:
from contextlib import contextmanager
import logging

@contextmanager
def LogCapture():
    # your existing buffer code here
    level = logging.WARN
    try:
        yield
    except UselessException:
        level = logging.DEBUG
        raise  # Or don't, if you just want it to go away
    finally:
        # emit logs here
        pass
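For completeness, here is one way the buffering and replaying pieces could be filled in. This is only a sketch, not logbook's actual API, and it assumes everything of interest is logged through a single logger whose name ('mypkg') is made up for the example.
# A self-contained sketch of capture-and-replay: records are buffered by a
# throwaway handler; if the wrapped block raised, everything captured is
# replayed at WARNING, otherwise at each record's original level.
import logging
from contextlib import contextmanager

mylogger = logging.getLogger('mypkg')        # assumed logger name
replay = logging.getLogger('mypkg.capture')  # separate logger used for replaying

@contextmanager
def log_capture():
    buffered = []
    handler = logging.Handler(level=logging.DEBUG)
    handler.emit = buffered.append           # stash records instead of writing them
    old_level, old_propagate = mylogger.level, mylogger.propagate
    mylogger.addHandler(handler)
    mylogger.setLevel(logging.DEBUG)         # make sure DEBUG records reach the handler
    mylogger.propagate = False               # don't also emit through normal handlers now
    escalate = False
    try:
        yield
    except Exception:
        escalate = True
        raise
    finally:
        mylogger.removeHandler(handler)
        mylogger.setLevel(old_level)
        mylogger.propagate = old_propagate
        for record in buffered:
            level = logging.WARNING if escalate else record.levelno
            replay.log(level, record.getMessage())
Because the replay happens inside the context manager's own finally, the outer try/finally from the question is no longer needed; `with log_capture(): funky_code()` is enough.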
Original Response
You're thinking about this a bit sideways.
You do intend to handle the exception - you're handling it by setting a flag. Maybe you don't care about anything else (which seems like a bad idea), but if you care about doing something when an exception is raised, then you want to be explicit about it.
The fact that you're setting a variable, but you want the exception to continue on means that what you really want is to raise your own specific exception, from the exception that was raised:
class MyPkgException(Exception): pass
class MyError(MyPkgException): pass  # If there's another exception type, you can also inherit from that


def do_the_badness():
    try:
        raise FileNotFoundError('Or some other code that raises an error')
    except FileNotFoundError as e:
        raise MyError('File was not found, doh!') from e
    finally:
        do_some_cleanup()


try:
    do_the_badness()
except MyError as e:
    print('The error? Yeah, it happened')
This solves:
Explicitly handling the exception(s) that you're looking to handle
Making the stack traces and original exceptions available
Allowing your code that's going to handle the original exception somewhere else to handle your exception that's thrown
Allowing some top-level exception handling code to just catch MyPkgException to catch all of your exceptions so it can log something and exit with a nice status instead of an ugly stack trace
If it was me, I'd do a little re-ordering of your code.
raised = False
try:
    # funky code
except HandleThis:
    # handle it
    raised = True
except Exception as ex:
    # Don't Handle This
    raise ex
finally:
    if raised:
        logger.info('funky code was raised')
I've placed the raised boolean assignment outside of the try statement to ensure scope and made the final except statement a general exception handler for exceptions that you don't want to handle.
This style determines if your code failed. Another approach might be to determine when your code succeeds.
success = False
try:
    # funky code
    success = True
except HandleThis:
    # handle it
    pass
except Exception as ex:
    # Don't Handle This
    raise ex
finally:
    if success:
        logger.info('funky code was successful')
    else:
        logger.info('funky code was raised')
If exception happened --> Put this logic in the exception block(s).
If exception did not happen --> Put this logic in the try block after the point in code where the exception can occur.
Finally blocks should be reserved for "cleanup actions," according to the Python language reference. When a finally clause is present and an exception occurs, the interpreter saves the exception, executes the finally block first, and then re-raises the saved exception.
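A tiny sketch of that ordering, just to make it concrete:
# Minimal sketch: the finally block runs before the saved exception propagates,
# so "cleaning up" is printed before the caller sees the ValueError.
def demo():
    try:
        raise ValueError('boom')
    finally:
        print('cleaning up')

try:
    demo()
except ValueError as e:
    print('caught:', e)   # printed second: caught: boom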
I am writing a Python script to back up WordPress. As part of the script I wrote a function to fetch database details from the config.php file.
How my function works
The function takes the WordPress installation location as an argument and uses regexes to match db_name, db_host, db_user and db_password in that file. The function will exit if it cannot find "config.php". I am using sys.exit(1) to exit from the function; is that the proper way to exit from a function? I am pasting my function code snippet below.
def parsing_db_info(location):
    config_path = os.path.normpath(location+'/config.php')
    if os.path.exists(config_path):
        try:
            regex_db = r'define\(\s*?\'DB_NAME\'\s*?,\s*?\'(.*?)\'\s*?'.group(1)
            regex_user = r'define\(\s*?\'DB_USER\'\s*?,\s*?\'(.*?)\'\s*?'.group(1)
            regex_pass = r'define\(\s*?\'DB_PASSWORD\'\s*?,\s*?\'(.*?)\'\s*?'.group(1)
            regex_host = r'define\(\s*?\'DB_HOST\'\s*?,\s*?\'(.*?)\'\s*?'.group(1)

            db_name = re.match(regex_db,config_path).group(1)
            db_user = re.match(regex_user,config_path).group(1)
            db_pass = re.match(regex_pass,config_path).group(1)
            db_host = re.match(regex_host,config_path).group(1)

            return {'dbname':db_name , 'dbuser':db_user , 'dbpass':db_pass , 'dbhost':db_host}

        except exception as ERROR:
            print(ERROR)
            sys.exit(1)
    else:
        print('Not Found:',config_path)
        sys.exit(1)
AFTER EDITING
def parsing_db_info(location):
    config_path = os.path.normpath(location+'/wp-config.php')
    try:
        with open(config_path) as fh:
            content = fh.read()

        regex_db = r'define\(\s*?\'DB_NAME\'\s*?,\s*?\'(.*?)\'\s*?'
        regex_user = r'define\(\s*?\'DB_USER\'\s*?,\s*?\'(.*?)\'\s*?'
        regex_pass = r'define\(\s*?\'DB_PASSWORD\'\s*?,\s*?\'(.*?)\'\s*?'
        regex_host = r'define\(\s*?\'DB_HOST\'\s*?,\s*?\'(.*?)\'\s*?'

        db_name = re.search(regex_db,content).group(1)
        db_user = re.search(regex_user,content).group(1)
        db_pass = re.search(regex_pass,content).group(1)
        db_host = re.search(regex_host,content).group(1)

        return {'dbname':db_name , 'dbuser':db_user , 'dbpass':db_pass , 'dbhost':db_host}

    except FileNotFoundError:
        print('File Not Found,',config_path)
        sys.exit(1)
    except PermissionError:
        print('Unable To read Permission Denied,',config_path)
        sys.exit(1)
    except AttributeError:
        print('Parsing Error wp-config seems to be corrupt,')
        sys.exit(1)
To answer your question, you shouldn't normally use sys.exit inside a function like that. Rather, get it to raise an exception in the case where it fails. Preferably, it should be an exception detailing what went wrong, or you could just let the existing exceptions propagate.
The normal rule in Python is this: deal with exceptions at the place you know how to deal with them.
In your code, you catch an exception, and then don't know what to do, so call sys.exit. Instead of this, you should:
let an exception propagate up to a top-level function which can catch it, and then call sys.exit if appropriate
wrap the exception in something more specific, and re-raise, so that a higher level function will have a specific exception to catch. For example, your function might raise a custom ConfigFileNotFound exception or ConfigFileUnparseable exception.
Also, you have put except exception, you probably mean except Exception. However, this is extremely broad, and will mask other programming errors. Instead, catch the specific exception class you expect.
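A rough sketch of what that could look like, using the exception names suggested above. The structure (and the single DB_NAME regex shown) is illustrative only, not a drop-in replacement for the full function.
# Hedged sketch: the parsing function raises specific exceptions and only the
# top level decides to print a message and exit.
import re
import sys

class ConfigFileNotFound(Exception): pass
class ConfigFileUnparseable(Exception): pass

def parsing_db_info(location):
    config_path = location + '/wp-config.php'
    try:
        with open(config_path) as fh:
            content = fh.read()
    except OSError as e:                      # covers FileNotFoundError, PermissionError, ...
        raise ConfigFileNotFound(config_path) from e
    match = re.search(r"define\(\s*'DB_NAME'\s*,\s*'(.*?)'", content)
    if match is None:
        raise ConfigFileUnparseable(config_path)
    return {'dbname': match.group(1)}         # repeat for DB_USER, DB_PASSWORD, DB_HOST

def main():
    try:
        print(parsing_db_info('/var/www/wordpress'))
    except (ConfigFileNotFound, ConfigFileUnparseable) as err:
        print('Error:', err)
        sys.exit(1)                           # exiting is decided here, at the top level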
I have a function which looks for a special element in project files:
def csproj_tag_finder(mod_proj_file):
    """Looking for 'BuildType' element in each module's csproj file passed in `mod_proj_file`
    and return its value (CLOUD, MAIN, NGUI, NONE)"""
    try:
        tree = ET.ElementTree(file=mod_proj_file)
        root = tree.getroot()
        for element in root.iterfind('.//'):
            if ('BuildType') in element.tag:
                return element.text
    except IOError as e:
        # print 'WARNING: cant find file: %s' % e
If no file is found, it prints 'WARNING: cant find file: %s' % e.
This function is called from another one:
def parser(modename, mod_proj_file):
    ...
    # module's tag's from project file in <BuildType> elements, looks like CLOUD
    mod_tag_from_csproj = csproj_tag_finder(mod_proj_file)
    if not mod_tag_from_csproj:
        print('WARNING: module %s have not <BuildType> elements in file %s!' % (modename, mod_proj_file))
    ...
So, when the file isn't found, csproj_tag_finder() returns None and prints its WARNING. The second function, parser(), then finds mod_tag_from_csproj empty and also prints a WARNING. This is harmless, but I want to make csproj_tag_finder() raise a special exception so that parser() can catch it and pass the check, instead of printing text.
I tried adding something like:
...
except IOError as e:
    # print 'WARNING: cant find file: %s' % e
    raise Exception('NoFile')
to csproj_tag_finder() so I could catch it later in parser(), but it interrupts the next steps immediately.
P.S. Later, if not mod_tag_from_csproj: will call another function to add a new element. This task could be solved by just returning 'NoFile' and then catching it with if/else, but it seems to me that raise is the more correct way here. Or not?
raise interrupting the next steps immediately is exactly what it's supposed to do. In fact, that's the whole point of exceptions.
But then return also interrupts the next steps immediately, because returning early is also the whole point of return.
If you want to save an error until later, continue doing some other work, and then raise it at the end, you have to do that explicitly. For example:
def spam():
    error = None
    try:
        do_some_stuff()
    except IOError as e:
        print('WARNING: cant find file %s' % e)
        error = Exception('NoFile')
    try:
        do_some_more_stuff()
    except OtherError as e:
        print('WARNING: cant frob the glotz %s' % e)
        error = Exception('NoGlotz')
    # etc.
    if error:
        raise error
Now, as long as there's no unexpected exception that you forgot to handle, whatever failed last will be in error, and it'll be raised at the end.
As a side note, instead of raising Exception('NoFile'), then using == to test the exception string later, you probably want to create a NoFileException subclass; then you don't need to test it, you can just handle it with except NoFileException:. And that means you can carry some other useful information (the actual exception, the filename, etc.) in your exception without it getting in the way, too. If this sounds scary to implement, it's not. It's literally a one-liner:
class NoFileException(Exception): pass
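And a sketch of how the two functions from the question could use it. The add_build_type_element() helper mentioned in the comment is hypothetical, just to show where the "add a new element" step would go.
# Sketch: csproj_tag_finder raises the specific exception; parser catches it
# and decides what to do.
import xml.etree.ElementTree as ET

class NoFileException(Exception): pass

def csproj_tag_finder(mod_proj_file):
    try:
        tree = ET.ElementTree(file=mod_proj_file)
    except IOError as e:
        raise NoFileException(mod_proj_file) from e
    for element in tree.getroot().iterfind('.//'):
        if 'BuildType' in element.tag:
            return element.text

def parser(modename, mod_proj_file):
    try:
        mod_tag_from_csproj = csproj_tag_finder(mod_proj_file)
    except NoFileException:
        mod_tag_from_csproj = None            # e.g. add_build_type_element(mod_proj_file)
    return mod_tag_from_csproj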
Dive into Python -
This is a small snippet from fileinfo.py used in the book. It opens an MP3 file and reads the last 128 bytes to fetch, and later parse, the metadata.
try:
    fsock = open(filename, "rb", 0)
    try:
        fsock.seek(-128, 2)
        tagdata = fsock.read(128)
    finally:
        fsock.close()
    .
    . # process tagdata: will NEVER raise IOError though
    .
except IOError:
    pass
This can be refactored as:
try:
    fsock = open(filename, "rb", 0)
    try:
        fsock.seek(-128, 2)
        tagdata = fsock.read(128)
    except IOError:
        pass
    finally:
        fsock.close()
.
. # process tagdata
.
I even had this question back when I was learning Java. Should we keep only the logic that can actually raise an exception inside the try..except block, or, for the sake of keeping the code that does one particular job in ONE place, also keep the other code that will NEVER raise an exception inside the try...except?
The try/finally clause's primary purpose is to close the file regardless of what happens; it doesn't make sense to move it to the outer try/except, as I assume you are trying to do:
try:
    fsock = open(filename, "rb", 0)
    try:
        fsock.seek(-128, 2)
        tagdata = fsock.read(128)
    except:
        pass
except IOError:
    pass
finally:
    fsock.close()
The reason is that if IOError is actually raised, calling fsock.close() would raise another exception, since fsock would not have been assigned. Instead of either, it'd be preferable to use the with statement, which will automatically close the file for you:
try:
    with open(filename, 'rb') as fsock:
        fsock.seek(-128, 2)
        tagdata = fsock.read(128)
except IOError:
    pass
The most widely accepted practice is to put as little code in the try..except as possible. The reasoning is that you don't know what the other code will raise; if there's a ton of code in a try, it becomes really messy.
You can see lots of good styling information in PEP 8, amongst which is:
Additionally, for all try/except clauses, limit the 'try' clause to the absolute minimum amount of code necessary. Again, this avoids masking bugs.

Yes:

try:
    value = collection[key]
except KeyError:
    return key_not_found(key)
else:
    return handle_value(value)

No:

try:
    # Too broad!
    return handle_value(collection[key])
except KeyError:
    # Will also catch KeyError raised by handle_value()
    return key_not_found(key)
The second piece of code is syntactically invalid, so you should prefer the first form.
If you'd make it syntactically valid by adding an except or finally clause, it would be semantically invalid: if the open fails, you'd still be trying to close fsock, which would not be assigned.
If the open fails then you can't assign to tagdata, so you should not allow the code to reach the point where you process tagdata. Often the best way to handle this is to process the IOError at a higher level (i.e. wrap this in a function and handle it in the calling context).
BTW, in modern Python we don't need to use finally for this sort of thing - we have a more powerful idiom: the with statement. We also have an else clause that can be attached to try/except blocks and is executed only if the exception handlers are not invoked.
So we get something like:
def get_data():
    with open(filename, "rb", 0) as fsock:
        fsock.seek(-128, 2)
        return fsock.read(128)

def do_processing():
    try: tagdata = get_data()
    except IOError: handle_error()
    else: process(tagdata)
Sometimes I need the following pattern within a for loop. At times more than once in the same loop:
try:
    # attempt to do something that may diversely fail
except Exception as e:
    logging.error(e)
    continue
Now I don't see a nice way to wrap this in a function, as it cannot return continue:
def attempt(x):
    try:
        raise random.choice((ValueError, IndexError, TypeError))
    except Exception as e:
        logging.error(e)
        # continue  # syntax error: continue not properly in loop
        # return continue  # invalid syntax
        return None  # this sort of works
If I return None, then I could:
a = attempt('to do something that may diversely fail')
if not a:
    continue
But I don't feel that does it justice. I want to tell the for loop to continue (or fake it) from within the attempt function.
Python already has a very nice construct for doing just this and it doesn't use continue:
for i in range(10):
    try:
        r = 1.0 / (i % 2)
    except Exception as e:
        print(e)
    else:
        print(r)
I wouldn't nest any more than this, though, or your code will soon get very ugly.
In your case I would probably do something more like this as it is far easier to unit test the individual functions and flat is better than nested:
#!/usr/bin/env python

def something_that_may_raise(i):
    return 1.0 / (i % 2)

def handle(e):
    print("Exception: " + str(e))

def do_something_with(result):
    print("No exception: " + str(result))

def wrap_process(i):
    try:
        result = something_that_may_raise(i)
    except ZeroDivisionError as e:
        handle(e)
    except OverflowError as e:
        handle(e)  # Realistically, this will be a different handler...
    else:
        do_something_with(result)

for i in range(10):
    wrap_process(i)
Remember to always catch specific exceptions. If you were not expecting a specific exception to be thrown, it is probably not safe to continue with your processing loop.
Edit following comments:
If you really don't want to handle the exceptions, which I still think is a bad idea, then catch all exceptions (except:) and instead of handle(e), just pass. At this point wrap_process() will end, skipping the else:-block where the real work is done, and you'll go to the next iteration of your for-loop.
Bear in mind, Errors should never pass silently.
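Spelled out, that swallow-everything variant of wrap_process() would look something like the sketch below, reusing the names from the snippet above. As the answer itself warns, this silently ignores every failure, so use it only if you truly don't care why something broke.
# The swallow-everything variant described above.
def wrap_process(i):
    try:
        result = something_that_may_raise(i)
    except:                      # catch absolutely everything
        pass                     # the else block below is skipped on failure
    else:
        do_something_with(result)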
The whole idea of exceptions is that they work across multiple levels of indirection, i.e., if you have an error (or any other exceptional state) deep inside your call hierarchy, you can still catch it on a higher level and handle it properly.
In your case, say you have a function attempt() which calls the functions attempt2() and attempt3() down the call hierarchy, and attempt3() may encounter an exceptional state which should cause the main loop to move on to the next iteration:
class JustContinueException(Exception):
    pass

for i in range(0, 99):
    try:
        var = attempt()  # calls attempt2() and attempt3() in turn
    except JustContinueException:
        continue  # we don't need to log anything here
    except Exception as e:
        log(e)
        continue

    foo(bar)
def attempt3():
    try:
        # do something
    except Exception as e:
        # do something with e, if needed
        raise  # reraise exception, so we catch it downstream
You can even throw a dummy exception yourself that just causes the loop to move on to the next iteration, and wouldn't even be logged.
def attempt3():
    raise JustContinueException()
Apart from the context, I just want to answer the question briefly: no, a function cannot continue a loop it may be called in, because it has no information about that context. It would also raise a whole new class of questions, like what should happen if that function is called without a surrounding loop to handle that continue?
BUT a function can signal by various means that it wants the caller to continue whatever loop it is currently performing. One means, of course, is the return value: return False or None to signal this, for example. Another way of signaling it is to raise a special exception:
class ContinuePlease(Exception): pass

def f():
    raise ContinuePlease()

for i in range(10):
    try:
        f()
    except ContinuePlease:
        continue
Maybe you want to do continuations? You could go and look at how Eric Lippert explains them (if you are ready to have your mind blown), but in Python it could look a bit like this:
def attempt(operation, continuation):
    try:
        operation()
    except:
        log('operation failed!')
    else:
        continuation()
Inside your loop you could do:
attempt(attempt_something, lambda: foo(bar)) # attempt_something is a function
You could use this:
for l in loop:
    attempt() and foo(bar)
but you should make sure attempt() returns True or False.
Really, though, Johnsyweb's answer is probably better.
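For example, attempt() could be written to return a boolean, so that `attempt() and foo(bar)` only runs foo(bar) when nothing was raised. In this sketch, do_something() is just a placeholder for whatever risky operation you have.
# Sketch: return True on success, False on failure.
import logging

def attempt():
    try:
        do_something()           # placeholder for the risky operation
    except Exception as e:
        logging.error(e)
        return False
    return True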
Think of it as mapping foo over all items where attempt worked. So attempt acts as a filter, and it's easy to write this as a generator:
def attempted(items):
    for item in items:
        try:
            yield attempt(item)
        except Exception as e:
            log(e)

print([foo(bar) for bar in attempted(items)])
I wouldn't normally post a second answer, but this is an alternative approach if you really don't like my first answer.
Remember that a function can return a tuple.
#!/usr/bin/env python

def something_that_may_fail(i):
    failed = False
    result = None
    try:
        result = 1.0 / (i % 4)
    except:
        failed = True  # But we don't care
    return failed, result

for i in range(20):
    failed, result = something_that_may_fail(i)
    if failed:
        continue
    for rah in ['rah'] * 3:
        print(rah)
    print(result)
I maintain that try ... except ... else is the way to go, and you shouldn't silently ignore errors though. Caveat emptor and all that.
Try the for loop outside the try, except block
This answer had Python 3.4 in mind; however, there are better ways in newer versions. Here is my suggestion:
import sys

if '3.4' in sys.version:
    from termcolor import colored

def list_attributes(module_name):
    '''Import the module before calling this func on it.'''
    for index, method in enumerate(dir(module_name)):
        try:
            method = str(method)
            module = 'email'
            expression = module + '.' + method
            print('*' * len(expression), '\n')
            print(str(index).upper() + '. ', colored(expression.upper(), 'red'),
                  ' ', eval(expression).__dir__(), '...', '\n' * 2)
            print('_' * len(expression), '\n')
            print(eval(expression + '.__doc__'), '\n' * 4,
                  'END OF DESCRIPTION FOR: ' + expression.upper(), '\n' * 4)
        except (AttributeError, NameError):
            continue
        else:
            pass
        finally:
            pass
Edit: Removed all that stupidity I said...
The final answer was to rewrite the whole thing, so that I don't need to code like that.