This problem is partly due to my incomplete understanding of scoping in Python, so I'll need to review that. Either way, here is a seriously trivial piece of code that keeps crashing in my Django test app.
Here's a snippet:
@login_required
def someview(request):
    try:
        usergroup = request.user.groups.all()[0].name
    except:
        HttpResponseRedirect('/accounts/login')
    if 'client' in usergroup:
        stafflist = ProxyUserModel.objects.filter(groups__name='staff')
No brain surgery here; the problem is that I get an error such as the following:
File "/usr/local/django/myapp/views.py", line 18, in someview
if 'client' in usergroup:
UnboundLocalError: local variable 'usergroup' referenced before assignment
My question here is, why is usergroup unbound? If it's unbound, that means the try statement had an exception thrown at which point an HttpResponseRedirect should happen, but it never does. Instead I get an HTTP 500 error back, which is slightly confusing.
Yes, I can write smarter code and ensure that the user logging in definitely has a group associated with them. But this isn't a production app; I'm just trying to understand and learn Python/Django. Why exactly does the above happen when a user that isn't associated with any group logs in, instead of redirecting to the login page?
In this case I'm intentionally logging in as a user that isn't part of a group. That means that the above code should throw an IndexError exception like the following:
>>> somelist = []
>>> print somelist[0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: list index out of range
HttpResponseRedirect('/accounts/login')
You're creating it but not returning it. Flow continues to the next line, which references usergroup despite it never having been assigned due to the exception.
The except is also troublesome. In general you should never catch ‘everything’ (except: or except Exception:) as there are lots of odd conditions in there you could be throwing away, making debugging very difficult. Either catch the specific exception subclass that you think is going to happen when the user isn't logged on, or, better, use an if test to see if they're logged on. (It's not really an exceptional condition.)
eg. in Django normally:
if not request.user.is_authenticated():
    return HttpResponseRedirect('/accounts/login')
or if your concern is the user isn't in any groups (making the [0] fail):
groups = request.user.groups.all()
if len(groups) == 0:
    return HttpResponseRedirect('/accounts/login')
usergroup = groups[0].name
Try moving your if 'client' part inside your try block. Either that or define usergroup = None right above the try.
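A sketch of the first suggestion, using the names from the question (with the redirect returned so execution actually stops when the lookup fails):
try:
    usergroup = request.user.groups.all()[0].name
    if 'client' in usergroup:
        stafflist = ProxyUserModel.objects.filter(groups__name='staff')
except IndexError:
    return HttpResponseRedirect('/accounts/login')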
In cases where you have a try…except suite and you want some code to run only if no exceptions have occurred, it's a good habit to write the code as follows:
try:
    # code that could fail
except Exception1:
    # handle exception1
except Exception2:
    # handle exception2
else:  # the code-that-could-fail didn't
    # here runs the code that depends
    # on the success of the try clause
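Applied to the view from the question, that pattern might look roughly like this (a sketch; the redirect is returned so the function exits when no group is found):
@login_required
def someview(request):
    try:
        usergroup = request.user.groups.all()[0].name
    except IndexError:
        return HttpResponseRedirect('/accounts/login')
    else:
        # only runs if the group lookup above succeeded
        if 'client' in usergroup:
            stafflist = ProxyUserModel.objects.filter(groups__name='staff')
        # ... rest of the view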
Edit: in retrospect, this is not a well thought-out question, but I'm leaving it here for future readers who may have the same misunderstanding as me.
There may be a lapse in my understanding of how try/except/finally work in Python, but I would expect the following to work as described in comments.
from sys import argv

try:
    x = argv[1]  # Set x to the first argument if one is passed
finally:
    x = 'default'  # If no argument is passed (throwing an exception above) set x to 'default'

print(x)
I would expect the file above (foo.py) to print default when run as python .\foo.py and to print bar when run as python .\foo.py bar.
The bar functionality works as expected; however, the default behaviour does not. If I run python .\foo.py, I get an IndexError:
Traceback (most recent call last):
File ".\foo.py", line 4, in <module>
x = argv[1]
IndexError: list index out of range
As a result, I have two questions:
Is this a bug, or is it expected behaviour of a try-finally block?
Should I just never use try-finally without an except clause?
This is expected behaviour. try:...finally:... alone doesn't catch exceptions. Only the except clause of a try:...except:... does.
try:...finally:... only guarantees that the statements under finally are always executed, whatever happens in the try section, whether the block succeeds or is exited because of a break, continue, return or an exception. So try:...finally:... is great for cleaning up resources; you get to run code no matter what happens in the block (but note that the with statement and context managers let you encapsulate cleanup behaviour too). If you want to see examples, then the Python standard library has hundreds.
If you need to handle an IndexError exception in a try block, then you must use an except clause. You can still use a finally clause as well, it'll be called after the except suite has run.
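For example, the snippet from the question does what was intended once the fallback moves into an except clause (a minimal sketch):
from sys import argv

try:
    x = argv[1]    # use the first argument if one was passed
except IndexError:
    x = 'default'  # no argument passed: fall back to the default

print(x)
A finally clause could still be added after the except clause if there were cleanup that has to run in every case.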
And if you ever get to work with much older Python code, you'll see that in code that must run with Python 2.4 or older try:....finally:... and try:...except:... are never used together. That's because only as of Python 2.5 have the two forms been unified.
A parser I created reads recorded chess games from a file. The API is used like this:
import chess.pgn
pgn_file = open("games.pgn")
first_game = chess.pgn.read_game(pgn_file)
second_game = chess.pgn.read_game(pgn_file)
# ...
Sometimes illegal moves (or other problems) are encountered. What is a good Pythonic way to handle them?
Raising exceptions as soon as the error is encountered. However, this makes every problem fatal, in that execution stops. Often, there is still useful data that has been parsed and could be returned. Also, you cannot simply continue parsing the next data set, because we are still in the middle of some half-read data.
Accumulating exceptions and raising them at the end of the game. This makes the error fatal again, but at least you can catch it and continue parsing the next game.
Introduce an optional argument like this:
game = chess.pgn.read_game(pgn_file, parser_info)
if parser_info.error:
    # This appears to be quite verbose.
    # Now you can at least make the best of the successfully parsed parts.
    # ...
Are some of these or other methods used in the wild?
The most Pythonic way is the logging module. It has been mentioned in the comments, but unfortunately without enough emphasis. There are many reasons it's preferable to warnings:
The warnings module is intended to report warnings about potential code issues, not bad user data.
The first reason is actually enough. :-)
The logging module provides adjustable message severity: not only warnings, but anything from debug messages to critical errors can be reported.
You can fully control the output of the logging module. Messages can be filtered by their source, contents and severity, formatted in any way you wish, sent to different output targets (console, pipes, files, memory etc.)...
The logging module separates the actual error/warning/message reporting from the output: your code can generate messages of the appropriate type and doesn't have to bother with how they're presented to the end user.
The logging module is the de facto standard for Python code. Everyone everywhere is using it. So if your code uses it, combining it with third-party code (which is likely using logging too) will be a breeze. Well, maybe something stronger than a breeze, but definitely not a category 5 hurricane. :-)
A basic use case for logging module would look like:
import logging
logger = logging.getLogger(__name__) # module-level logger
# (tons of code)
logger.warning('illegal move: %s in file %s', move, file_name)
# (more tons of code)
This will print messages like:
WARNING:chess_parser:illegal move: a2-b7 in file parties.pgn
(assuming your module is named chess_parser.py)
The most important thing is that you don't need to do anything else in your parser module. You declare that you're using the logging system, you're using a logger with a specific name (the same as your parser module's name in this example), and you're sending warning-level messages to it. Your module doesn't have to know how these messages are processed, formatted and reported to the user, or whether they're reported at all. For example, you can configure the logging module (usually at the very start of your program) to use a different format and dump the messages to a file:
logging.basicConfig(filename='parser.log', format='%(name)s [%(levelname)s] %(message)s')
And suddenly, without any changes to your module code, your warning messages are saved to a file with a different format instead of being printed to screen:
chess_parser [WARNING] illegal move: a2-b7 in file parties.pgn
Or you can suppress warnings if you wish:
logging.basicConfig(level=logging.ERROR)
And your module's warnings will be ignored completely, while any ERROR or higher-level messages from your module will still be processed.
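And if you only want to raise the threshold for this particular parser rather than for everything, you can adjust that one logger by name instead (assuming the module is named chess_parser, as above):
import logging

# Only messages of ERROR severity or higher from the parser will get through.
logging.getLogger('chess_parser').setLevel(logging.ERROR)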
Actually, those are fatal errors -- at least as far as being able to reproduce a correct game goes; on the other hand, maybe the player actually did make the illegal move and nobody noticed at the time (which would make it a warning, not a fatal error).
Given the possibility of both fatal errors (the file is corrupted) and warnings (an illegal move was made, but subsequent moves are consistent with it -- in other words, user error that nobody caught at the time), I recommend a combination of the first and second options:
raise an exception when continued parsing isn't an option
collect any errors/warnings that don't preclude further parsing until the end
If you don't encounter a fatal error then you can return the game, plus any warnings/non-fatal errors, at the end:
return game, warnings, errors
But what if you do hit a fatal error?
No problem: create a custom exception to which you can attach the usable portion of the game and any other warnings/non-fatal errors:
raise ParsingError(
    'error explanation here',
    game=game,
    warnings=warnings,
    errors=errors,
)
then when you catch the error you can access the recoverable portion of the game, along with the warnings and errors.
The custom error might be:
class ParsingError(Exception):
    def __init__(self, msg, game, warnings, errors):
        super().__init__(msg)
        self.game = game
        self.warnings = warnings
        self.errors = errors
and in use:
try:
    first_game, warnings, errors = chess.pgn.read_game(pgn_file)
except chess.pgn.ParsingError as err:
    first_game = err.game
    warnings = err.warnings
    errors = err.errors
    # whatever else you want to do to handle the exception
This is similar to how the subprocess module handles errors.
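As a small illustration of that analogy: a failed command raises CalledProcessError, and the output captured up to that point is still available on the exception object:
import subprocess

try:
    subprocess.run(['ls', 'no_such_dir'], capture_output=True, text=True, check=True)
except subprocess.CalledProcessError as err:
    print(err.returncode)  # non-zero exit status of the command
    print(err.stderr)      # captured output is attached to the exception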
To be able to retrieve and parse subsequent games after a fatal error in one game, I would suggest a change to your API:
have a game iterator that simply returns the raw data for each game (it only has to know how to tell when one game ends and the next begins)
have the parser take that raw game data and parse it (so it's no longer in charge of where in the file you happen to be)
This way if you have a five-game file and game two dies, you can still attempt to parse games 3, 4, and 5.
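A rough sketch of that split (read_raw_games and parse_game are hypothetical names, not part of the existing API, and the game-boundary detection is deliberately simplistic):
def read_raw_games(pgn_file):
    # Yield the raw text of one game at a time; this only needs to
    # recognise where one game ends and the next begins.
    game_lines = []
    for line in pgn_file:
        if line.startswith('[Event ') and game_lines:
            yield ''.join(game_lines)
            game_lines = []
        game_lines.append(line)
    if game_lines:
        yield ''.join(game_lines)

for raw_game in read_raw_games(pgn_file):
    try:
        game = parse_game(raw_game)   # hypothetical: parses one game's text
    except ParsingError as err:
        game = err.game               # keep whatever was recoverable
    # a failure in one game no longer prevents reading the remaining games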
I offered the bounty because I'd like to know if this is really the best way to do it. However, I'm also writing a parser and so I need this functionality, and this is what I've come up with.
The warnings module is exactly what you want.
What turned me away from it at first was that every example warning used in the docs looks like this:
Traceback (most recent call last):
File "warnings_warn_raise.py", line 15, in <module>
warnings.warn('This is a warning message')
UserWarning: This is a warning message
...which is undesirable because I don't want it to be a UserWarning, I want my own custom warning name.
Here's the solution to that:
import warnings

class AmbiguousStatementWarning(Warning):
    pass

def x():
    warnings.warn("unable to parse statement syntax",
                  AmbiguousStatementWarning, stacklevel=3)
    print("after warning")

def x_caller():
    x()

x_caller()
which gives:
$ python3 warntest.py
warntest.py:12: AmbiguousStatementWarning: unable to parse statement syntax
x_caller()
after warning
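A nice side effect of having a named warning class is that consumers can filter on it; for example, a caller could promote just these warnings to errors (reusing the AmbiguousStatementWarning defined above):
import warnings

# Turn AmbiguousStatementWarning into an exception, leaving other warnings alone.
warnings.simplefilter('error', AmbiguousStatementWarning)

try:
    x_caller()
except AmbiguousStatementWarning as warn:
    print('refusing to continue:', warn)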
I'm not sure if the solution is Pythonic or not, but I use it rather often with slight modifications: a parser does its job within a generator and yields results and a status code. The receiving code decides what to do with failed items:
def process_items(items):
    for item in items:
        try:
            # process item
            yield processed_item, None
        except Exception as err:
            yield None, (SOME_ERROR_CODE, str(err), item)

for processed, err in process_items(items):
    if err:
        # process and log err, collect failed items, etc.
        continue
    # further process processed
A more general approach is to use design patterns. A simplified version of Observer (where you register callbacks for specific errors) or a kind of Visitor (where the visitor has methods for processing specific errors; see the SAX parser for insights) might be a clear and well-understood solution.
Without libraries, it is difficult to do this cleanly, but still possible.
There are different methods of handling this, depending on the situation.
Method 1:
Put all the contents of your while loop inside the following:
while 1:
    try:
        # codecodecode
    except Exception as detail:
        print(detail)
Method 2:
Same as Method 1, except with multiple try/except blocks, so it doesn't skip too much code and you know the exact location of the error.
Sorry, in a rush, hope this helps!
I'm starting to use Behave to implement some tests. I would like to replace some of my existing unit tests (which are really more like feature tests). Some of these use assertRaises to check that certain calls to the back-end service raise the errors they should. Is it possible to have something similar in Behave (or rather in Gherkin)?
The following unit test calls my back-end service; since a guest has logged on, it is not able to perform the admin task (do_admin_task) and should raise an exception.
def test_mycall(self):
    service = myservice('guest', 'pwd')
    self.assertRaises(NoPermission, service.do_admin_task, some_param)
In my feature file, how would I create my scenario? Like this?
Scenario: test guest can't do an admin task
    Given I log on to my service as guest / pwd
    When I try to perform my admin task
    Then it should fail saying NoPermission
I believe that this will already raise an exception in the when step, so it won't even get to the then step.
One potential way around this that I could imagine is to create a specific step that performs both of these steps and does the exception handling. If, however, I want to mock errors in lower-level calls, then I would have to rewrite many of these steps, which is exactly what I'm hoping to avoid by switching to Behave in the first place.
How should I approach this?
When thinking on the Gherkin level, the exception is an expected outcome of the when step. So the step definition should have a try block and store the result/exception on the context. The then step can then check this result/exception.
from behave import when, then

@when(u'I try to perform my admin task')
def step_impl(context):
    try:
        # 'password' rather than 'pass', which is a reserved word in Python
        context.admintaskresult = myservice(context.user, context.password)
        context.admintaskexception = None
    except Exception as ex:
        context.admintaskresult = None
        context.admintaskexception = ex

@then(u'it should fail saying NoPermission')
def step_impl(context):
    assert isinstance(context.admintaskexception, NoPermissionException)
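For completeness, a matching given step might look something like this (a sketch; the credentials are simply stored on the context for the later steps):
from behave import given

@given(u'I log on to my service as guest / pwd')
def step_impl(context):
    # remember the credentials so the 'when' step can use them
    context.user = 'guest'
    context.password = 'pwd'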
The Python standard library and other libraries I use (e.g. PyQt) sometimes use exceptions for non-error conditions. Look at the following excerpt from the function os.get_exec_path(). It uses multiple try statements to catch exceptions that are thrown while trying to find some environment data.
try:
    path_list = env.get('PATH')
except TypeError:
    path_list = None

if supports_bytes_environ:
    try:
        path_listb = env[b'PATH']
    except (KeyError, TypeError):
        pass
    else:
        if path_list is not None:
            raise ValueError(
                "env cannot contain 'PATH' and b'PATH' keys")
        path_list = path_listb

    if path_list is not None and isinstance(path_list, bytes):
        path_list = fsdecode(path_list)
These exceptions do not signify an error and are thrown under normal conditions. When using exception breakpoints for one of these exceptions, the debugger will also break in these library functions.
Is there a way in PyCharm or in Python in general to have the debugger not break on exceptions that are thrown and caught inside a library without any involvement of my code?
In PyCharm, go to Run --> View Breakpoints, and check "On raise" and "Ignore library files".
The first option makes the debugger stop whenever an exception is raised, instead of just when the program terminates, and the second option tells PyCharm to ignore library files, so it breaks mainly in your own code.
The solution was found thanks to CrazyCoder's link to the feature request, which has since been added.
For a while I had a complicated scheme which involved something like the following:
try( Closeable ignore = Debugger.newBreakSuppression() )
{
    ... library call which may throw ...
} // <-- exception looks like it is thrown here
This allowed me to never be bothered by exceptions that were thrown and swallowed within library calls. If an exception was thrown by a library call and was not caught, then it would appear as if it occurred at the closing curly bracket.
The way it worked was as follows:
Closeable is an interface which extends AutoCloseable without declaring any checked exceptions.
ignore is just a name that tells IntelliJ IDEA not to complain about the unused variable, and it is necessary because silly Java does not support try( Debugger.newBreakSuppression() ).
Debugger is my own class with debugging-related helper methods.
newBreakSuppression() was a method which would create a thread-local instance of some BreakSuppression class which would take note of the fact that we want break-on-exception to be temporarily suspended.
Then I had an exception breakpoint with a break condition that would invoke my Debugger class to ask whether it is okay to break, and the Debugger class would respond with a "no" if any BreakSuppression objects were instantiated.
That was extremely complicated, because the VM throws exceptions before my code has loaded, so the filter could not be evaluated during program startup, and the debugger would pop up a dialog complaining about that instead of ignoring it. (I am not complaining about that; I hate silent errors.) So, I had to have a terrible, horrible, do-not-try-this-at-home hack where the break condition would look like this: java.lang.System.err.equals( this ). Normally, this would never return true, because System.err is not equal to a thrown exception, so the debugger would never break. However, when my Debugger class was initialized, it would replace System.err with a class of its own, which provided an implementation of equals(Object) that returned true if the debugger should break. So, essentially, I was using System.err as an eternal global variable.
Eventually I ditched this whole scheme because it was overly complicated and performed very badly: exceptions apparently get thrown very often in the Java software ecosystem, so evaluating an expression every time an exception is thrown slows everything down tremendously.
This feature is not implemented yet; you can vote for it:
add ability to break (add breakpoint) on exceptions only for my files
There is another SO answer with a solution:
Debugging with pycharm, how to step into project, without entering django libraries
It is working for me, except I still go into the "_pydev_execfile.py" file, but I haven't stepped into other files after adding them to the exclusion in the linked answer.