Hide the "Failed to load" message when loading an invalid image, wxpython - python

bmp = wx.Image(r"C:\User\Desktop\cool.bmp", wx.BITMAP_TYPE_ANY).ConvertToBitmap()
If I run this, it automatically shows an error message saying that it failed to load the image. How can I stop my program from doing this?

If all you're after is to stop the exception from being raised, you can enclose it in a try/except block:
try:
    bmp = wx.Image(r"C:\User\Desktop\cool.py", wx.BITMAP_TYPE_ANY).ConvertToBitmap()
except:
    pass
Bear in mind, it's good practice to only ignore specific exceptions and to do something when one occurs (i.e. tell the user to pick another image):
try:
    bmp = wx.Image(r"C:\User\Desktop\cool.py", wx.BITMAP_TYPE_ANY).ConvertToBitmap()
except <SpecificException> as e:
    doSomething()  # Handle exception
Since it's an actual popup message, you can use wx.Log_EnableLogging(False) to disable error logging in your application.
To stop the stderr redirection, you can create your app with wx.App(redirect=False).
Or, to send the error log to a file instead of the screen, you can use:
wx.App(redirect=True, filename='error_log')

For wxPython version 4+, I was able to disable the popup message by calling
wx.Log.EnableLogging(False)
or by calling
wx.Log.SetLogLevel(wx.LOG_Error)
Relevant docs here
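For example, a minimal sketch (assuming wxPython 4 / Phoenix; the file name is only an example) of suppressing the popup before attempting a load:
import wx

app = wx.App(redirect=False)   # keep messages in the console instead of a popup window
wx.Log.EnableLogging(False)    # turn off wx logging entirely

img = wx.Image("cool.bmp", wx.BITMAP_TYPE_ANY)
bmp = img.ConvertToBitmap() if img.IsOk() else None  # IsOk() guards against a failed load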

An alternative to wx.Log_EnableLogging(False) is wx.LogNull. From the docs:
# There will normally be a log message if a non-existent file is
# loaded into a wx.Bitmap. It can be suppressed with wx.LogNull
noLog = wx.LogNull()
bmp = wx.Bitmap('bogus.png')
# when noLog is destroyed the old log sink is restored
del noLog

I can't even get my wxPython code to run if I pass it an invalid image. This is probably related to the fact that wxPython is a light wrapper around a C++ library though. See http://wiki.wxpython.org/C%2B%2B%20%26%20Python%20Sandwich for an interesting explanation.
The best way around issues like this is to actually use Python's os module, like this:
import os

if os.path.exists(path):
    ...  # then create the widget
I do this sort of thing for config files and other things. If the file doesn't exist, I either create it myself, don't create the widget, or show a message so I know to fix it.
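As a concrete illustration, here is a minimal sketch of that idea (load_bitmap and the None fallback are my own, and it assumes a wx.App already exists):
import os
import wx

def load_bitmap(path):
    # only attempt the load if the file actually exists on disk
    if os.path.exists(path):
        img = wx.Image(path, wx.BITMAP_TYPE_ANY)
        if img.IsOk():  # the file may exist but still not be a valid image
            return img.ConvertToBitmap()
    return None  # caller decides: placeholder bitmap, message dialog, etc.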

Related

Being pythonic with errors [duplicate]

A parser I created reads recorded chess games from a file. The API is used like this:
import chess.pgn
pgn_file = open("games.pgn")
first_game = chess.pgn.read_game(pgn_file)
second_game = chess.pgn.read_game(pgn_file)
# ...
Sometimes illegal moves (or other problems) are encountered. What is a good Pythonic way to handle them?
Raising exceptions as soon as the error is encountered. However, this makes every problem fatal, in that execution stops. Often, there is still useful data that has been parsed and could be returned. Also, you cannot simply continue parsing the next data set, because we are still in the middle of some half-read data.
Accumulating exceptions and raising them at the end of the game. This makes the error fatal again, but at least you can catch it and continue parsing the next game.
Introduce an optional argument like this:
game = chess.pgn.read_game(pgn_file, parser_info)
if parser_info.error:
    # This appears to be quite verbose.
    # Now you can at least make the best of the successfully parsed parts.
    # ...
Are some of these or other methods used in the wild?
The most Pythonic way is the logging module. It has been mentioned in comments, but unfortunately without enough emphasis. There are many reasons it's preferable to warnings:
The warnings module is intended to report warnings about potential code issues, not bad user data.
The first reason is actually enough. :-)
The logging module provides adjustable message severity: not only warnings, but anything from debug messages to critical errors can be reported.
You can fully control the output of the logging module. Messages can be filtered by their source, contents and severity, formatted in any way you wish, sent to different output targets (console, pipes, files, memory etc.)...
The logging module separates actual error/warning/message reporting from output: your code can generate messages of the appropriate type and doesn't have to care how they're presented to the end user.
The logging module is the de facto standard for Python code. Everyone everywhere is using it. So if your code is using it, combining it with 3rd party code (which is likely using logging too) will be a breeze. Well, maybe something stronger than a breeze, but definitely not a category 5 hurricane. :-)
A basic use case for logging module would look like:
import logging
logger = logging.getLogger(__name__) # module-level logger
# (tons of code)
logger.warning('illegal move: %s in file %s', move, file_name)
# (more tons of code)
This will print messages like:
WARNING:chess_parser:illegal move: a2-b7 in file parties.pgn
(assuming your module is named chess_parser.py)
The most important thing is that you don't need to do anything else in your parser module. You declare that you're using the logging system, you're using a logger with a specific name (the same as your parser module name in this example), and you're sending warning-level messages to it. Your module doesn't have to know how these messages are processed, formatted and reported to the user, or whether they're reported at all. For example, you can configure the logging module (usually at the very start of your program) to use a different format and dump it to a file:
logging.basicConfig(filename = 'parser.log', format = '%(name)s [%(levelname)s] %(message)s')
And suddenly, without any changes to your module code, your warning messages are saved to a file with a different format instead of being printed to screen:
chess_parser [WARNING] illegal move: a2-b7 in file parties.pgn
Or you can suppress warnings if you wish:
logging.basicConfig(level = logging.ERROR)
And your module's warnings will be ignored completely, while any ERROR or higher-level messages from your module will still be processed.
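And because messages carry the logger's name, the consuming application can also silence just this parser rather than everything. A one-line sketch (assuming the logger is named 'chess_parser' as above):
import logging
logging.getLogger('chess_parser').setLevel(logging.ERROR)  # drop this parser's warnings, keep its errors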
Actually, those are fatal errors -- at least, as far as being able to reproduce a correct game; on the other hand, maybe the player actually did make the illegal move and nobody noticed at the time (which would make it a warning, not a fatal error).
Given the possibility of both fatal errors (file is corrupted) and warnings (an illegal move was made, but subsequent moves show consistency with that move (in other words, user error and nobody caught it at the time)) I recommend a combination of the first and second options:
raise an exception when continued parsing isn't an option
collect any errors/warnings that don't preclude further parsing until the end
If you don't encounter a fatal error then you can return the game, plus any warnings/non-fatal errors, at the end:
return game, warnings, errors
But what if you do hit a fatal error?
No problem: create a custom exception to which you can attach the usable portion of the game and any other warnings/non-fatal errors:
raise ParsingError(
    'error explanation here',
    game=game,
    warnings=warnings,
    errors=errors,
)
then when you catch the error you can access the recoverable portion of the game, along with the warnings and errors.
The custom error might be:
class ParsingError(Exception):
    def __init__(self, msg, game, warnings, errors):
        super().__init__(msg)
        self.game = game
        self.warnings = warnings
        self.errors = errors
and in use:
try:
    first_game, warnings, errors = chess.pgn.read_game(pgn_file)
except chess.pgn.ParsingError as err:
    first_game = err.game
    warnings = err.warnings
    errors = err.errors
    # whatever else you want to do to handle the exception
This is similar to how the subprocess module handles errors.
For the ability to retrieve and parse subsequent games after a fatal error in one game, I would suggest a change in your API:
have a game iterator that simply returns the raw data for each game (it only has to know how to tell when one game ends and the next begins)
have the parser take that raw game data and parse it (so it's no longer in charge of where in the file you happen to be)
This way if you have a five-game file and game two dies, you can still attempt to parse games 3, 4, and 5.
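A rough sketch of that two-piece API (the '[Event' splitting heuristic and the parse_game entry point are illustrative assumptions, not a real PGN implementation):
def iter_raw_games(pgn_file):
    # sketch only: treat each '[Event ...' header as the start of a new game;
    # real PGN splitting needs more care, this just shows the shape of the API
    current = []
    for line in pgn_file:
        if line.startswith("[Event ") and current:
            yield "".join(current)
            current = []
        current.append(line)
    if current:
        yield "".join(current)

for raw_game in iter_raw_games(pgn_file):
    try:
        game, warnings, errors = parse_game(raw_game)  # hypothetical per-game parser
    except ParsingError as err:
        game = err.game  # salvage whatever was parsed before the fatal error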
I offered the bounty because I'd like to know if this is really the best way to do it. However, I'm also writing a parser and so I need this functionality, and this is what I've come up with.
The warnings module is exactly what you want.
What turned me away from it at first was that every example warning used in the docs looks like this:
Traceback (most recent call last):
  File "warnings_warn_raise.py", line 15, in <module>
    warnings.warn('This is a warning message')
UserWarning: This is a warning message
...which is undesirable because I don't want it to be a UserWarning, I want my own custom warning name.
Here's the solution to that:
import warnings

class AmbiguousStatementWarning(Warning):
    pass

def x():
    warnings.warn("unable to parse statement syntax",
                  AmbiguousStatementWarning, stacklevel=3)
    print("after warning")

def x_caller():
    x()

x_caller()
which gives:
$ python3 warntest.py
warntest.py:12: AmbiguousStatementWarning: unable to parse statement syntax
x_caller()
after warning
I'm not sure if the solution is pythonic or not, but I use it rather often with slight modifications: a parser does its job within a generator and yields results and a status code. The receiving code decides what to do with failed items:
def process_items(items):
    for item in items:
        try:
            # process item
            yield processed_item, None
        except Exception as err:
            yield None, (SOME_ERROR_CODE, str(err), item)

for processed, err in process_items(items):
    if err:
        # process and log err, collect failed items, etc.
        continue
    # further process processed
A more general approach is to practice using design patterns. A simplified version of Observer (where you register callbacks for specific errors) or a kind of Visitor (where the visitor has methods for processing specific errors; see SAX parsers for insight) might be a clear and well-understood solution, as in the sketch below.
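For example, a minimal Observer-style sketch (the class and event names are made up) where the caller registers callbacks for specific error kinds:
class ParserEvents:
    # minimal observer: callers register callbacks per error kind
    def __init__(self):
        self._handlers = {}

    def on(self, error_kind, callback):
        self._handlers.setdefault(error_kind, []).append(callback)

    def emit(self, error_kind, **details):
        for callback in self._handlers.get(error_kind, []):
            callback(**details)

events = ParserEvents()
events.on('illegal_move', lambda move, game_no: print(f'game {game_no}: illegal move {move}'))

# inside the parser, instead of raising:
events.emit('illegal_move', move='a2-b7', game_no=3)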
Without libraries, it is difficult to do this cleanly, but still possible.
There are different methods of handling this, depending on the situation.
Method 1:
Put the contents of your while loop inside a try/except like the following:
while 1:
    try:
        ...  # codecodecode
    except Exception as detail:
        print(detail)
Method 2:
Same as Method 1, but with multiple smaller try/except blocks, so it doesn't skip too much code and you know the exact location of the error.
Sorry, in a rush, hope this helps!

Jupyter magic to handle notebook exceptions

I have a few long-running experiments in my Jupyter Notebooks. Because I don't know when they will finish, I add an email function to the last cell of the notebook, so I automatically get an email when the notebook is done.
But when there is a random exception in one of the cells, the whole notebook stops executing and I never get any email. So I'm wondering if there is some magic function that could execute a function in case of an exception / kernel stop.
Like
def handle_exception(stacktrace):
    send_mail_to_myself(stacktrace)

%%in_case_of_notebook_exception handle_exception  # <--- this is what I'm looking for
The other option would be to encapsulate every cell in try-catch, right? But that's soooo tedious.
Thanks in advance for any suggestions.
Such a magic command does not exist, but you can write it yourself.
from IPython.core.magic import register_cell_magic

@register_cell_magic('handle')
def handle(line, cell):
    try:
        exec(cell)
    except Exception as e:
        send_mail_to_myself(e)
        raise  # if you want the full trace-back in the notebook
It is not possible to load the magic command for the entire notebook automatically; you have to add it in each cell where you need this feature.
%%handle
some_code()
raise ValueError('this exception will be caught by the magic command')
@show0k gave the correct answer to my question (in regard to magic methods). Thanks a lot! :)
That answer inspired me to dig a little deeper and I came across an IPython method that lets you define a custom exception handler for the whole notebook.
I got it to work like this:
from IPython.core.ultratb import AutoFormattedTB

# initialize the formatter for making the tracebacks into strings
itb = AutoFormattedTB(mode='Plain', tb_offset=1)

# this function will be called on exceptions in any cell
def custom_exc(shell, etype, evalue, tb, tb_offset=None):
    # still show the error within the notebook, don't just swallow it
    shell.showtraceback((etype, evalue, tb), tb_offset=tb_offset)

    # grab the traceback and make it into a list of strings
    stb = itb.structured_traceback(etype, evalue, tb)
    sstb = itb.stb2text(stb)

    print(sstb)  # <--- this is the variable with the traceback string
    print("sending mail")
    send_mail_to_myself(sstb)

# this registers a custom exception handler for the whole current notebook
get_ipython().set_custom_exc((Exception,), custom_exc)
So this can be put into a single cell at the top of any notebook and as a result it will do the mailing in case something goes wrong.
Note to self / TODO: make this snippet into a little python module that can be imported into a notebook and activated via line magic.
Be careful though. The documentation contains a warning for this set_custom_exc method: "WARNING: by putting in your own exception handler into IPython’s main execution loop, you run a very good chance of nasty crashes. This facility should only be used if you really know what you are doing."
Since notebook 5.1 you can use a new tag: raises-exception
This indicates that an exception in that specific cell is expected, and Jupyter will not stop the execution.
(In order to set a tag you have to choose from the main menu: View -> Cell Toolbar -> Tags)
Why exec is not always the solution
It's some years later and I had a similar issue trying to handle errors with Jupyter magics. However, I needed variables to persist in the actual Jupyter notebook.
%%try_except print
a = 12
raise ValueError('test')
In this example, I want the error to be printed (but it could be anything, such as an e-mail as in the opening post), while also having a == 12 be true in the next cell. For that reason, the exec-based method suggested above does not work when you define the magic in a different file. The solution I found is to use IPython's own functionality.
How you can solve it
from IPython.core.magic import line_magic, cell_magic, line_cell_magic, Magics, magics_class

@magics_class
class CustomMagics(Magics):
    @cell_magic
    def try_except(self, line, cell):
        """This magic wraps a cell in try/except functionality."""
        how = line.strip()  # name of the handler passed on the %%try_except line
        try:
            self.shell.ex(cell)  # this executes the cell in the current namespace
        except Exception as e:
            if self.shell.ev(f'callable({how})'):  # check we have a callable handler
                self.shell.user_ns['error'] = e    # add the error to the namespace
                self.shell.ev(f'{how}(error)')     # call the handler with the error
            else:
                raise e

# Register
from IPython import get_ipython
ip = get_ipython()
ip.register_magics(CustomMagics)
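Once registered, a cell can be wrapped like this (assuming a callable handler, e.g. the send_mail_to_myself function from the question, already exists in the namespace):
%%try_except send_mail_to_myself
a = 12
raise ValueError('this is passed to send_mail_to_myself, and a == 12 still persists afterwards')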
I don't think there is an out-of-the-box way to do that without using a try..except statement in your cells. AFAIK a 4-year-old issue mentions this, but it is still in open status.
However, the runtools extension may do the trick.

Enumerating child windows in python?

I have dabbled around for a year or so using C++ and decided I would try my hand at Python, as it has a much easier syntax and will increase productivity while I am still learning (I think!). I am trying to enumerate all the child windows of a parent window of a desktop application in Windows.
import win32ui

def WindowExists(windowname):
    try:
        win32ui.FindWindow(None, windowname)
    except win32ui.error:
        return False
    else:
        return True

appFind = "Test Application"
if WindowExists(appFind):
    print("Program is running")
    hwnd = win32ui.FindWindow(None, appFind)
else:
    print("Program is not running")
So far I am identifying the application with no problem, but I am wondering whether my assignment of hwnd works the way it would in a C++ environment, so that I could pass it to EnumChildWindows. I am not entirely sure how I get the children from here, though.
One other question I had: rather than using just the title of the application, how can I use the handle, if for example the handle was something like 00130903 for testapplication? I remember a few months ago I messed around with something like this in C++ and I think you can use x to replace the first set of zeros (or something similar) on the handle, but I honestly can't remember much of it, so hopefully you guys can help!
Edit -
TypeError: The object is not a PyHANDLE object.
I think my assumption is right here that I am not correctly assigning a proper handle named hwnd. This is the error I get when I try to use EnumChildWindows or win32con.WM_GETTEXT. Any example of correctly setting a handle by title and by handle would really be appreciated!
hwnd = win32ui.FindWindow(None, appFind) worked for verifying the window's existence.
hwnd = win32gui.FindWindow(None, appFind) worked to let me actually use the handle! Live and learn!
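For what it's worth, here is a minimal sketch (using win32gui and the same example title) of how that handle can be passed to EnumChildWindows; the callback just collects each child's handle, class name and text:
import win32gui

def collect_child(child_hwnd, children):
    # called once per child window; record handle, class name and window text
    children.append((child_hwnd,
                     win32gui.GetClassName(child_hwnd),
                     win32gui.GetWindowText(child_hwnd)))
    return True  # keep enumerating

hwnd = win32gui.FindWindow(None, "Test Application")
if hwnd:
    children = []
    win32gui.EnumChildWindows(hwnd, collect_child, children)
    for child_hwnd, cls, text in children:
        print(hex(child_hwnd), cls, text)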

Suppressing pjsua output in Python script

I'm writing a script that uses curses to produce a main window and a log window at the bottom of the screen.
It seems that when I import pjsua it insists on printing to the screen even though I have set log level to 0. Here's what it outputs:
15:49:09.716 os_core_unix.c !pjlib 2.0.1 for POSIX initialized
15:49:09.844 sip_endpoint.c .Creating endpoint instance...
15:49:09.844 pjlib .select() I/O Queue created (0x7f84690decd8)
15:49:09.844 sip_endpoint.c .Module "mod-msg-print" registered
15:49:09.844 sip_transport. .Transport manager created.
15:49:09.845 pjsua_core.c .PJSUA state changed: NULL --> CREATED
15:49:09.896 pjsua_media.c ..NAT type detection failed: Invalid STUN server or server not configured (PJNATH_ESTUNINSERVER)
Note it doesn't send this through the logging callback, meaning I have no way to put it in the log window with the rest of my logging information. Can anyone give me some advice on dealing with this output please?
Thanks
If you can detect which stream it writes to, e.g. sys.stderr, you could redirect it somewhere by simply assigning sys.stderr to another open file (or even /dev/null?).
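A minimal sketch of that idea (it assumes the messages really do go through Python's sys.stderr, which may not hold for a C extension like pjsua; in that case you would have to duplicate the OS-level file descriptors with os.dup2 instead):
import os
import sys

devnull = open(os.devnull, 'w')
old_stderr, sys.stderr = sys.stderr, devnull
try:
    import pjsua  # the noisy initialization happens while stderr is silenced
finally:
    sys.stderr = old_stderr
    devnull.close()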
