print() output is never displayed inside a Python try/except

I'm working on a Django app.
Somewhere in my code, I use a try/except like this:
for tag in category.get("tags"):
    try:
        newTag, created = MyObject.objects.update_or_create(title=tag.get("title"))
        print("HAHAHA", newTag)
    except:
        pass
It works well, newTag is saved, but print("HAHAHA", newTag) is never rendered. I don't know why.
Please help

Since you're in a Django app, try
import logging
...
logging.debug('HAHA {}'.format(newTag))
and watch your server logs.
There's lots of additional info on logging in the Django docs: https://docs.djangoproject.com/en/2.0/topics/logging/
Addendum: it's also a good idea to log exceptions, to flush issues out of hiding.
...
except Exception as ex:
    logging.warning("Caught {!r}".format(ex))
The logging module has exception() to help with that, but I prefer having more control over the level at which an exception is logged.
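Putting both suggestions together, a minimal sketch of the original loop using a logger instead of print (model and field names are taken from the question; the logger setup itself is an assumption):
import logging

logger = logging.getLogger(__name__)

for tag in category.get("tags"):
    try:
        newTag, created = MyObject.objects.update_or_create(title=tag.get("title"))
        logger.debug("update_or_create returned %s (created=%s)", newTag, created)
    except Exception as ex:
        # A bare `except: pass` hides errors; log them instead.
        logger.warning("Caught %r while saving tag %r", ex, tag)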

Related

Matching query does not exist error

A bunch of people on this site have had questions similar to mine, but I can't seem to find one that has solved it the way I want.
I'm trying to get a password out of a database so that I can send an email automatically. To do this, I am calling Credential.objects.get(username='foo').password.
This is where I get the error.
What's really weird about this is I already had access to this database in a python console, and I've created a row in it. I have no clue why it's not showing up!
Other people seem to recommend doing the following:
try:
    Credential.objects.get(username='foo').password
except Exception as e:
    return None
All this does is make sure the program doesn't raise an error. That doesn't help me, because this call cannot be allowed to fail: an email must be sent, and this call has to fetch the password for it.
I'm almost certain I'm calling the right function for this, but I've been wrong before. Any help on this would be greatly appreciated.
credential.py (not putting up max_length for security reasons):
import d_models as models

class Credential(models.Model):
    username = models.CharField()
    password = EncryptedCharField()
The following may clarify things:
from django.core.exceptions import ObjectDoesNotExist

try:
    Credential.objects.get(username='foo').password
except ObjectDoesNotExist:
    # If no user with username='foo' exists in the database, you end up here.
    # If there are multiple entries with username='foo', get() raises a different
    # exception (MultipleObjectsReturned), which you would need to catch separately.
    return "Whatever you want to return on the exception"
Or else you can try:
python manage.py shell
for user in Credential.objects.all():
    print user.username
    print user.password
and check which users are available in your database (the one configured in settings.py).
After two hours of looking at my code, I realized that I am an idiot.
One thing that I left out was that I ran this function in a test case, which spins up separate blank test databases. The function call wouldn't work because it was trying to get at the wrong database!
To fix this I just called: ./manage.py dumpdata --format json --indent 4 -o ./full/path/to/file/credential.json and put fixtures = ['./full/path/to/file/credential.json'] at the top of the test class. Thanks to everyone for the suggestions! Hopefully I didn't waste too much of your time!
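For reference, a minimal sketch of what the fixed test class might look like (the class name, test body and import path are illustrative, not from the original code):
from django.test import TestCase

from myapp.models import Credential  # assumed import path for the model above


class EmailSendingTests(TestCase):
    # Load the dumped credentials into the otherwise empty test database.
    fixtures = ['./full/path/to/file/credential.json']

    def test_password_is_available(self):
        password = Credential.objects.get(username='foo').password
        self.assertIsNotNone(password)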

Being pythonic with errors [duplicate]

A parser I created reads recorded chess games from a file. The API is used like this:
import chess.pgn
pgn_file = open("games.pgn")
first_game = chess.pgn.read_game(pgn_file)
second_game = chess.pgn.read_game(pgn_file)
# ...
Sometimes illegal moves (or other problems) are encountered. What is a good Pythonic way to handle them?
Raising exceptions as soon as the error is encountered. However, this makes every problem fatal, in that execution stops. Often there is still useful data that has been parsed and could be returned. Also, you cannot simply continue parsing the next data set, because you are still in the middle of some half-read data.
Accumulating exceptions and raising them at the end of the game. This makes the error fatal again, but at least you can catch it and continue parsing the next game.
Introduce an optional argument like this:
game = chess.pgn.read_game(pgn_file, parser_info)
if parser_info.error:
    # This appears to be quite verbose.
    # Now you can at least make the best of the successfully parsed parts.
    # ...
Are some of these or other methods used in the wild?
The most Pythonic way is the logging module. It has been mentioned in comments, but unfortunately without being stressed hard enough. There are many reasons it's preferable to warnings:
The warnings module is intended to report potential code issues, not bad user data.
The first reason is actually enough. :-)
The logging module provides adjustable message severity: not only warnings, but anything from debug messages to critical errors can be reported.
You have full control over the logging module's output. Messages can be filtered by their source, contents and severity, formatted in any way you wish, sent to different output targets (console, pipes, files, memory, etc.)...
The logging module separates message reporting from output: your code can generate messages of the appropriate type and doesn't have to bother with how they're presented to the end user.
The logging module is the de facto standard for Python code. Everyone everywhere is using it. So if your code uses it, combining it with third-party code (which is likely using logging too) will be a breeze. Well, maybe something stronger than a breeze, but definitely not a category 5 hurricane. :-)
A basic use case for logging module would look like:
import logging
logger = logging.getLogger(__name__) # module-level logger
# (tons of code)
logger.warning('illegal move: %s in file %s', move, file_name)
# (more tons of code)
This will print messages like:
WARNING:chess_parser:illegal move: a2-b7 in file parties.pgn
(assuming your module is named chess_parser.py)
The most important thing is that you don't need to do anything else in your parser module. You declare that you're using the logging system, you use a logger with a specific name (the same as your parser module's name in this example), and you send warning-level messages to it. Your module doesn't have to know how these messages are processed, formatted and reported to the user, or whether they're reported at all. For example, you can configure the logging module (usually at the very start of your program) to use a different format and dump everything to a file:
logging.basicConfig(filename='parser.log', format='%(name)s [%(levelname)s] %(message)s')
And suddenly, without any changes to your module code, your warning messages are saved to a file with a different format instead of being printed to screen:
chess_parser [WARNING] illegal move: a2-b7 in file parties.pgn
Or you can suppress warnings if you wish:
logging.basicConfig(level=logging.ERROR)
And your module's warnings will be ignored completely, while any ERROR or higher-level messages from your module will still be processed.
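As a concrete example of filtering by source, a consumer of the parser could raise the threshold for just the parser's logger while leaving its own messages untouched (this is standard logging behaviour; the logger name chess_parser matches the example above):
import logging

logging.basicConfig(format='%(name)s [%(levelname)s] %(message)s')

# Silence the parser's warnings only; every other logger keeps its default level.
logging.getLogger('chess_parser').setLevel(logging.ERROR)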
Actually, those are fatal errors -- at least, as far as being able to reproduce a correct game; on the other hand, maybe the player actually did make the illegal move and nobody noticed at the time (which would make it a warning, not a fatal error).
Given the possibility of both fatal errors (file is corrupted) and warnings (an illegal move was made, but subsequent moves show consistency with that move (in other words, user error and nobody caught it at the time)) I recommend a combination of the first and second options:
raise an exception when continued parsing isn't an option
collect any errors/warnings that don't preclude further parsing until the end
If you don't encounter a fatal error then you can return the game, plus any warnings/non-fatal errors, at the end:
return game, warnings, errors
But what if you do hit a fatal error?
No problem: create a custom exception to which you can attach the usable portion of the game and any other warnings/non-fatal errors to:
raise ParsingError(
    'error explanation here',
    game=game,
    warnings=warnings,
    errors=errors,
)
then when you catch the error you can access the recoverable portion of the game, along with the warnings and errors.
The custom error might be:
class ParsingError(Exception):
    def __init__(self, msg, game, warnings, errors):
        super().__init__(msg)
        self.game = game
        self.warnings = warnings
        self.errors = errors
and in use:
try:
    first_game, warnings, errors = chess.pgn.read_game(pgn_file)
except chess.pgn.ParsingError as err:
    first_game = err.game
    warnings = err.warnings
    errors = err.errors
    # whatever else you want to do to handle the exception
This is similar to how the subprocess module handles errors.
For the ability to retrieve and parse subsequent games after a fatal error in one game, I would suggest a change to your API:
have a game iterator that simply returns the raw data for each game (it only has to know how to tell when one game ends and the next begins)
have the parser take that raw game data and parse it (so it's no longer in charge of where in the file you happen to be)
This way if you have a five-game file and game two dies, you can still attempt to parse games 3, 4, and 5.
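A minimal sketch of that split, assuming each PGN game starts with an [Event "..."] tag line; it reuses the read_game API from the question by feeding it one game's text at a time (iter_raw_games is an illustrative name, not part of the existing API):
import io

import chess.pgn


def iter_raw_games(pgn_file):
    """Yield the raw text of each game without parsing it."""
    current = []
    for line in pgn_file:
        # A new game conventionally starts with an [Event "..."] tag line.
        if line.startswith("[Event ") and current:
            yield "".join(current)
            current = []
        current.append(line)
    if current:
        yield "".join(current)


with open("games.pgn") as pgn_file:
    for raw_game in iter_raw_games(pgn_file):
        try:
            # Parse each game from its own buffer, so a fatal error in one
            # game cannot leave the file position stuck inside the next one.
            game = chess.pgn.read_game(io.StringIO(raw_game))
        except Exception:  # e.g. the ParsingError suggested above
            continue       # skip the broken game, keep going with the rest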
I offered the bounty because I'd like to know if this is really the best way to do it. However, I'm also writing a parser and so I need this functionality, and this is what I've come up with.
The warnings module is exactly what you want.
What turned me away from it at first was that every example warning used in the docs looks like these:
Traceback (most recent call last):
  File "warnings_warn_raise.py", line 15, in <module>
    warnings.warn('This is a warning message')
UserWarning: This is a warning message
...which is undesirable because I don't want it to be a UserWarning; I want my own custom warning name.
Here's the solution to that:
import warnings

class AmbiguousStatementWarning(Warning):
    pass

def x():
    warnings.warn("unable to parse statement syntax",
                  AmbiguousStatementWarning, stacklevel=3)
    print("after warning")

def x_caller():
    x()

x_caller()
which gives:
$ python3 warntest.py
warntest.py:12: AmbiguousStatementWarning: unable to parse statement syntax
  x_caller()
after warning
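Callers can then filter on that specific warning class; for example, the standard warnings machinery can turn only these warnings into exceptions (this reuses AmbiguousStatementWarning and x_caller from the snippet above):
import warnings

# Raise AmbiguousStatementWarning as an exception instead of printing it.
warnings.simplefilter("error", AmbiguousStatementWarning)

try:
    x_caller()
except AmbiguousStatementWarning as warn:
    print("treated as an error:", warn)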
I'm not sure whether the solution is Pythonic or not, but I use it rather often, with slight modifications: the parser does its job within a generator and yields results together with a status code. The receiving code decides what to do with the failed items:
def process_items(items):
    for item in items:
        try:
            # ... process item into processed_item ...
            yield processed_item, None
        except Exception as err:  # StandardError in Python 2, where this was written
            yield None, (SOME_ERROR_CODE, str(err), item)

for processed, err in process_items(items):
    if err:
        # process and log err, collect failed items, etc.
        continue
    # further process `processed`
A more general approach is to use design patterns. A simplified version of Observer (where you register callbacks for specific errors) or a kind of Visitor (where the visitor has methods for processing specific errors; see the SAX parser for insights) might be a clear and well-understood solution.
Without libraries, it is difficult to do this cleanly, but still possible.
There are different methods of handling this, depending on the situation.
Method 1:
Put all the contents of the while loop inside the following:
while True:
    try:
        ...  # code code code
    except Exception as detail:
        print(detail)
Method 2:
Same as Method 1, but with multiple try/except blocks, so it doesn't skip too much code and you know the exact location of the error.
Sorry, in a rush, hope this helps!

Ignore exceptions thrown and caught inside a library

The Python standard library and other libraries I use (e.g. PyQt) sometimes use exceptions for non-error conditions. Look at the following excerpt from the function os.get_exec_path(). It uses multiple try statements to catch exceptions that are thrown while trying to find some environment data.
try:
    path_list = env.get('PATH')
except TypeError:
    path_list = None

if supports_bytes_environ:
    try:
        path_listb = env[b'PATH']
    except (KeyError, TypeError):
        pass
    else:
        if path_list is not None:
            raise ValueError(
                "env cannot contain 'PATH' and b'PATH' keys")
        path_list = path_listb

    if path_list is not None and isinstance(path_list, bytes):
        path_list = fsdecode(path_list)
These exceptions do not signify an error and are thrown under normal conditions. When using exception breakpoints for one of these exceptions, the debugger will also break in these library functions.
Is there a way in PyCharm or in Python in general to have the debugger not break on exceptions that are thrown and caught inside a library without any involvement of my code?
In PyCharm, go to Run --> View Breakpoints, and check "On raise" and "Ignore library files".
The first option makes the debugger stop whenever an exception is raised, instead of only when the program terminates, and the second option tells PyCharm to ignore exceptions raised and handled entirely inside library files, so it breaks mainly in your own code.
The solution was found thanks to CrazyCoder's link to the feature request, which has since been added.
For a while I had a complicated scheme which involved something like the following:
try( Closeable ignore = Debugger.newBreakSuppression() )
{
    ... library call which may throw ...
} // <-- exception looks like it is thrown here
This allowed me to never be bothered by exceptions that were thrown and swallowed within library calls. If an exception was thrown by a library call and was not caught, then it would appear as if it occurred at the closing curly bracket.
The way it worked was as follows:
Closeable is an interface which extends AutoCloseable without declaring any checked exceptions.
ignore is just a name that tells IntelliJ IDEA to not complain about the unused variable, and it is necessary because silly java does not support try( Debugger.newBreakSuppression() ).
Debugger is my own class with debugging-related helper methods.
newBreakSuppression() was a method which would create a thread-local instance of some BreakSuppression class which would take note of the fact that we want break-on-exception to be temporarily suspended.
Then I had an exception breakpoint with a break condition that would invoke my Debugger class to ask whether it is okay to break, and the Debugger class would respond with a "no" if any BreakSuppression objects were instantiated.
That was extremely complicated, because the VM throws exceptions before my code has loaded, so the filter could not be evaluated during program startup, and the debugger would pop up a dialog complaining about that instead of ignoring it. (I am not complaining about that, I hate silent errors.) So, I had to have a terrible, horrible, do-not-try-this-at-home hack where the break condition would look like this: java.lang.System.err.equals( this ) Normally, this would never return
true, because System.err is not equal to a thrown exception, therefore the debugger would never break. However, when my Debugger class would get initialized, it would replace System.err with a class of its own,
which provided an implementation for equals(Object) and returned true if the debugger should break. So, essentially, I was using System.err as an eternal global variable.
Eventually I ditched this whole scheme because it was overly complicated and it performed very badly: exceptions apparently get thrown very often in the Java software ecosystem, so evaluating an expression every time an exception is thrown slows everything down tremendously.
This feature is not implemented yet; you can vote for it:
add ability to break (add breakpoint) on exceptions only for my files
There is another SO answer with a solution:
Debugging with pycharm, how to step into project, without entering django libraries
It is working for me, except I still go into the "_pydev_execfile.py" file, but I haven't stepped into other files after adding them to the exclusion in the linked answer.

How do I use Django's logger to log a traceback when I tell it to?

import traceback

try:
    print blah
except KeyError:
    traceback.print_exc()
I used to debug like this. I'd print to the console. Now, I want to log everything instead of print, since Apache doesn't allow printing. So, how do I log this entire traceback?
You can use python's logging mechanism:
import logging
...
logger = logging.getLogger("blabla")
...
try:
    print blah  # You can use logger.debug("blah") instead of print
except KeyError:
    logger.exception("An error occurred")
This will print the stack trace and will work with apache.
If you're running Django's trunk version (or 1.3 when it's released), there are a number of default logging configurations built in which are integrated with Python's standard logging module. For that, all you need to do is import logging, call logger = logging.getLogger(__name__) and then call logger.exception(msg) and you'll get both your message and a stack trace. Docs for Django's logging functionality and Python's logger.exception method would be handy references.
If you don't want to use Python's logging module, you can also import sys and write to sys.stderr rather than using print. On the command line this goes to the screen, when running under Apache it'll go into your error logs.
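A minimal sketch of that sys.stderr approach, combined with the traceback module from the original snippet (the dictionary lookup stands in for whatever code raises the KeyError):
import sys
import traceback

data = {}

try:
    print(data["blah"])  # stand-in for the code that raises KeyError
except KeyError:
    sys.stderr.write("Caught KeyError:\n")
    traceback.print_exc(file=sys.stderr)  # under Apache this ends up in the error log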
