No exceptions from GObject properties in pygobject?

I'm trying to do some exception handling in Python 3 / PyGObject with a property inside one of my custom GObject classes. The code I had was something like this:
try:
    label = foo.label  # This is a GObject.Property
except Exception:
    label = "fallback"
I noticed that the interpreter never got to the except block. After trying to figure out the problem, I came up with this test case:
from gi.repository import Gtk, GObject
class foo(GObject.Object):
    @GObject.Property
    def bar(self):
        raise NotImplementedError
fish = foo()
print("Bar: ", fish.bar)
The output
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/gi/_gobject/propertyhelper.py", line 403, in obj_get_property
return prop.fget(self)
File "test.py", line 6, in bar
raise NotImplementedError
NotImplementedError
Bar: None
As you can see, even though the getter raises an exception, the property just returns None and the program continues.
Does anyone know a workaround or a solution for this?

GObject properties don't support raising exceptions: the property is read through GObject's C machinery, which has no way to propagate a Python exception to the caller, so the exception is printed and the property simply returns None. The workaround is to use plain getter/setter methods instead of a GObject.Property.
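A minimal sketch of that workaround (get_label() is a made-up name): with a plain method instead of a GObject.Property, the exception propagates to the caller and the original try/except works as expected:
from gi.repository import GObject

class Foo(GObject.Object):
    def get_label(self):
        # a plain method, so the exception propagates normally
        raise NotImplementedError

fish = Foo()
try:
    label = fish.get_label()
except NotImplementedError:
    label = "fallback"
print("Label:", label)  # prints: Label: fallback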

Related

PyGObject GLib.MainLoop() and exceptions

I'm using GLib.MainLoop() from PyGObject in my Python application and have a question.
Is it possible to handle a Python exception that is raised inside loop.run()?
For example, I'm calling a function using GLib.MainContext.invoke_full():
import traceback, gi
from gi.repository import GLib
try:
    loop = GLib.MainLoop()
    def handler(self):
        print('handler')
        raise Exception('from handler with love')
    loop.get_context().invoke_full(GLib.PRIORITY_DEFAULT, handler, None)
    loop.run()
except Exception:
    print('catched!')
I thought that handler() would be called somewhere inside loop.run(), so the raise Exception('from handler with love') should be caught by the except Exception: block. However, it is not:
$ python test.py
handler
Traceback (most recent call last):
File "test.py", line 9, in handler
raise Exception('from handler with love')
Exception: from handler with love
It seems that handler() is called from somewhere else entirely (from GLib's C code?) and is not caught by the except Exception: block.
Is it possible to catch all Python exceptions raised inside GLib.MainLoop.run()? I have a dozen handlers called like that, so otherwise I would have to add the same try: ... except OneException: ... except AnotherException: ... wrapper to each handler.
No, the exception is not propagated. It is caught and printed, and an exception raised in a Python callback never causes the loop to exit.
You can handle these kinds of errors through sys.excepthook.
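A minimal sketch of that suggestion, assuming (as stated above) that PyGObject reports unhandled callback exceptions through sys.excepthook; the hook below just logs the error and stops the loop:
import sys
import traceback
from gi.repository import GLib

loop = GLib.MainLoop()

def excepthook(etype, value, tb):
    # log the exception raised inside the callback, then stop the loop
    traceback.print_exception(etype, value, tb)
    loop.quit()

sys.excepthook = excepthook

def handler(user_data):
    raise Exception('from handler with love')

loop.get_context().invoke_full(GLib.PRIORITY_DEFAULT, handler, None)
loop.run()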

subprocess child traceback

I want to access the traceback of a Python program running in a subprocess.
The documentation says:
Exceptions raised in the child process, before the new program has started to execute, will be re-raised in the parent. Additionally, the exception object will have one extra attribute called child_traceback, which is a string containing traceback information from the child’s point of view.
Contents of my_sub_program.py:
raise Exception("I am raised!")
Contents of my_main_program.py:
import sys
import subprocess
try:
    subprocess.check_output([sys.executable, "my_sub_program.py"])
except Exception as e:
    print e.child_traceback
If I run my_main_program.py, I get the following error:
Traceback (most recent call last):
File "my_main_program.py", line 6, in <module>
print e.child_traceback
AttributeError: 'CalledProcessError' object has no attribute 'child_traceback'
How can I access the traceback of the subprocess without modifying the sub-program's code? That is, I want to avoid adding a large try/except clause around my whole sub-program and instead handle error logging from my main program.
Edit: sys.executable should be replaceable with an interpreter different from the one running the main program.
Since you're starting another Python process, you can also use the multiprocessing module; by subclassing the Process class it is quite easy to get exceptions from the target function:
from multiprocessing import Process, Pipe
import traceback
import functools

class MyProcess(Process):
    def __init__(self, *args, **kwargs):
        Process.__init__(self, *args, **kwargs)
        self._pconn, self._cconn = Pipe()
        self._exception = None

    def run(self):
        try:
            Process.run(self)
            self._cconn.send(None)
        except Exception as e:
            tb = traceback.format_exc()
            self._cconn.send((e, tb))
            # raise e  # You can still raise this exception if you need to

    @property
    def exception(self):
        if self._pconn.poll():
            self._exception = self._pconn.recv()
        return self._exception

p = MyProcess(target=functools.partial(execfile, "my_sub_program.py"))
p.start()
p.join()  # wait for the sub-process to end
if p.exception:
    error, traceback = p.exception
    print 'you got', traceback
The trick is to have the target function execute the Python sub-program; this is done using functools.partial.
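Note that execfile only exists on Python 2; on Python 3 the same pattern should work with runpy as the partial's target, something like this (untested sketch):
import functools
import runpy

# Python 3 stand-in for functools.partial(execfile, "my_sub_program.py")
p = MyProcess(target=functools.partial(runpy.run_path, "my_sub_program.py"))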

Custom exceptions are not raised properly when used in Multiprocessing Pool

Question
I am observing behavior in Python 3.3.4 that I would like help understanding: Why are my exceptions properly raised when a function is executed normally, but not when the function is executed in a pool of workers?
Code
import multiprocessing

class AllModuleExceptions(Exception):
    """Base class for library exceptions"""
    pass

class ModuleException_1(AllModuleExceptions):
    def __init__(self, message1):
        super(ModuleException_1, self).__init__()
        self.e_string = "Message: {}".format(message1)
        return

class ModuleException_2(AllModuleExceptions):
    def __init__(self, message2):
        super(ModuleException_2, self).__init__()
        self.e_string = "Message: {}".format(message2)
        return

def func_that_raises_exception(arg1, arg2):
    result = arg1 + arg2
    raise ModuleException_1("Something bad happened")

def func(arg1, arg2):
    try:
        result = func_that_raises_exception(arg1, arg2)
    except ModuleException_1:
        raise ModuleException_2("We need to halt main") from None
    return result

pool = multiprocessing.Pool(2)
results = pool.starmap(func, [(1,2), (3,4)])
pool.close()
pool.join()
print(results)
This code produces this error:
Exception in thread Thread-3:
Traceback (most recent call last):
File "/user/peteoss/encap/Python-3.4.2/lib/python3.4/threading.py", line 921, in _bootstrap_inner
self.run()
File "/user/peteoss/encap/Python-3.4.2/lib/python3.4/threading.py", line 869, in run
self._target(*self._args, **self._kwargs)
File "/user/peteoss/encap/Python-3.4.2/lib/python3.4/multiprocessing/pool.py", line 420, in _handle_results
task = get()
File "/user/peteoss/encap/Python-3.4.2/lib/python3.4/multiprocessing/connection.py", line 251, in recv
return ForkingPickler.loads(buf.getbuffer())
TypeError: __init__() missing 1 required positional argument: 'message2'
Conversely, if I simply call the function, it seems to handle the exception properly:
print(func(1, 2))
Produces:
Traceback (most recent call last):
File "exceptions.py", line 40, in
print(func(1, 2))
File "exceptions.py", line 30, in func
raise ModuleException_2("We need to halt main") from None
__main__.ModuleException_2
Why does ModuleException_2 behave differently when it is run in a process pool?
The issue is that your exception classes have non-optional arguments in their __init__ methods, but when you call the superclass __init__ method you don't pass those arguments along. This causes a new exception when your exception instances are unpickled by the multiprocessing code.
This has been a long-standing issue with Python exceptions, and you can read quite a bit of the history of the issue in this bug report (in which a part of the underlying issue with pickling exceptions was fixed, but not the part you're hitting).
To summarize the issue: Python's base Exception class puts all the arguments its __init__ method receives into an attribute named args. Those arguments are put into the pickle data, and when the stream is unpickled they're passed to the __init__ method of the newly created object. If the number of arguments received by Exception.__init__ is not the same as the child class expects, you'll get an error at unpickling time.
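To see this concretely, here is a minimal sketch (the class Bad and its msg argument are made-up names for illustration):
import pickle

class Bad(Exception):
    def __init__(self, msg):
        super(Bad, self).__init__()  # msg is not passed along, so self.args == ()
        self.msg = msg

e = Bad("boom")
print(e.args)  # () -- unpickling will therefore call Bad() with no arguments
try:
    pickle.loads(pickle.dumps(e))
except TypeError as err:
    print(err)  # __init__() missing 1 required positional argument: 'msg'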
A workaround for the issue is to pass all the arguments your custom exception classes require in their __init__ methods on to the superclass __init__:
class ModuleException_2(AllModuleExceptions):
    def __init__(self, message2):
        super(ModuleException_2, self).__init__(message2)  # the change is here!
        self.e_string = "Message: {}".format(message2)
Another possible fix would be to not call the superclass __init__ method at all (this is what the fix in the bug linked above allows), but since that's usually poor behavior for a subclass, I can't really recommend it.
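A quick way to check the first workaround is a pickle round-trip with the fixed class (hypothetical snippet):
import pickle

e = ModuleException_2("We need to halt main")
e2 = pickle.loads(pickle.dumps(e))  # no TypeError with the fixed __init__
print(e2.e_string)  # Message: We need to halt main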
Your ModuleException_2.__init__ fails while being unpickled.
I was able to fix the problem by changing the signature to:
class ModuleException_2(AllModuleExceptions):
    def __init__(self, message2=None):
        super(ModuleException_2, self).__init__()
        self.e_string = "Message: {}".format(message2)
        return
but you should also have a look at Pickling Class Instances to ensure a clean implementation.
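For reference, one way such a clean implementation can look (a sketch, not this answer's own code) is to define __reduce__ so that pickle rebuilds the exception from its original argument:
class ModuleException_2(AllModuleExceptions):  # AllModuleExceptions as defined in the question
    def __init__(self, message2):
        super(ModuleException_2, self).__init__()
        self.message2 = message2
        self.e_string = "Message: {}".format(message2)

    def __reduce__(self):
        # pickle rebuilds the exception by calling ModuleException_2(self.message2),
        # so unpickling works even though the superclass __init__ gets no arguments
        return (ModuleException_2, (self.message2,))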

Error exception must derive from BaseException even when it does (Python 2.7)

What's wrong with the following code (under Python 2.7.1):
class TestFailed(BaseException):
    def __new__(self, m):
        self.message = m
    def __str__(self):
        return self.message

try:
    raise TestFailed('Oops')
except TestFailed as x:
    print x
When I run it, I get:
Traceback (most recent call last):
File "x.py", line 9, in <module>
raise TestFailed('Oops')
TypeError: exceptions must be old-style classes or derived from BaseException, not NoneType
But it looks to me that TestFailed does derive from BaseException.
__new__ is a staticmethod that needs to return an instance.
Instead, use the __init__ method:
class TestFailed(Exception):
    def __init__(self, m):
        self.message = m
    def __str__(self):
        return self.message

try:
    raise TestFailed('Oops')
except TestFailed as x:
    print x
Others have shown you how to fix your implementation, but I feel it is important to point out that the behavior you are implementing is already the standard behavior of exceptions in Python, so most of your code is unnecessary. Just derive from Exception (the appropriate base class for runtime exceptions) and put pass as the body:
class TestFailed(Exception):
    pass
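With that, the original try/except already prints Oops, because Exception stores its argument and its default __str__ renders it:
try:
    raise TestFailed('Oops')
except TestFailed as x:
    print(x)  # prints: Oops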
Use __init__() instead of __new__() to initialize new objects. In most cases overriding __new__ is not necessary; it is called before __init__ during object creation.
See also Python's use of __new__ and __init__?
The __new__ implementation should return an instance of the class, but it's currently returning None (by default).
However, it looks like you should be using __init__ here, rather than __new__.
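If you really did need __new__, it would have to build and return the instance itself; a sketch, using a separate illustrative class name:
class TestFailedNew(Exception):
    def __new__(cls, m):
        # __new__ must create and return the instance; returning None
        # is what triggered the original TypeError
        inst = super(TestFailedNew, cls).__new__(cls, m)
        inst.message = m
        return inst

try:
    raise TestFailedNew('Oops')
except TestFailedNew as x:
    print(x)  # prints: Oops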

Can I use wxPython wx.ItemContainer in a derived class?

I'm trying to make a new wx.Choice-like control (actually a replacement for wx.Choice) which uses the wx.ItemContainer to manage the list of items. Here is a minimal example showing the error:
import wx
class c(wx.ItemContainer):
    def __init__(my): pass
x = c()
x.Clear()
This fails with:
Traceback (most recent call last):
File "", line 1, in
File "c:\python25\lib\site-packages\wx-2.8-msw-unicode\wx\_core.py", line 1178
7, in Clear
return _core_.ItemContainer_Clear(*args, **kwargs)
TypeError: in method 'ItemContainer_Clear', expected argument 1 of type 'wxItemContainer *'
The other controls using ItemContainer seem to be internal to wxWindows, so it may not be possible for me to use it this way. However, it would certainly be convenient.
Any ideas on what I'm doing wrong?
wx.ItemContainer can't be instantiated directly. For example:
x = wx.ItemContainer()
throws this error:
Traceback (most recent call last):
File "C:\<string>", line 1, in <module>
File "D:\Python25\Lib\site-packages\wx-2.8-msw-unicode\wx\_core.py", line 11812, in __init__
def __init__(self): raise AttributeError, "No constructor defined"
AttributeError: No constructor defined
The reason is that it is a kind of interface (if we can call it that in Python), and you cannot call __init__ on it. Instead, use it as a second base class and override the methods you need, e.g.:
class C(wx.PyControl, wx.ItemContainer):
    def __init__(self, *args, **kwargs):
        wx.PyControl.__init__(self, *args, **kwargs)
    def Clear(self):
        pass

app = wx.PySimpleApp()
frame = wx.Frame(None, title="ItemContainer Test")
x = C(frame)
x.Clear()
frame.Show()
app.SetTopWindow(frame)
app.MainLoop()
Your suspicions are on the right track. You can't subclass any of the wxWidgets types, because they're in the C++ domain and only nominally wrapped in Python. Instead, you need a Py* class, which you can subclass. The explanation is given in this Wiki entry on writing custom controls.
For ItemContainer, there doesn't appear to be such a wrapper, and the fact that ItemContainer is used as a parent in a multiple-inheritance pattern may complicate matters further.
I suspect that from within wxPython it may not be possible to replace ItemContainer; if you do need it, it will have to be integrated at the C++ level.
