I'm using a class provided by a client (I have no access to its code), and I'm trying to check whether an object has an attribute. The attribute itself is write-only, so hasattr fails:
>>> driver.console.con.input = 'm'
>>> hasattr(driver.console.con, 'input')
False
simics> #driver.console.con.input
Traceback (most recent call last):
File "<string>", line 1, in <module>
Attribute: Failed converting 'input' attribute in object
'driver.console.con' to Python: input attribute in driver.console.con
object: not readable.
Is there a different way to check if an attribute exists?
You appear to have some kind of native-code proxy that bridges Python to an extension, and it rather breaks normal Python conventions.
There are two possibilities:
The driver.console.con object has a namespace that implements attributes as descriptors, and the input descriptor only has a __set__ method (and possibly a __delete__ method). In that case, look for the descriptor:
if 'input' in vars(type(driver.console.con)):
    # there is an `input` name in the namespace
    attr = vars(type(driver.console.con))['input']
    if hasattr(attr, '__set__'):
        # can be set
        ...
Here the vars() function retrieves the namespace for the class used for driver.console.con.
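As a self-contained sketch of that first possibility (the WriteOnly descriptor and Console class here are hypothetical stand-ins, not the real proxy), a descriptor with a __set__ method but a __get__ that raises reproduces the behavior in the question: hasattr() returns False even though assignment works, while the class-namespace check still finds the descriptor:

```python
class WriteOnly:
    """Hypothetical write-only descriptor."""
    def __get__(self, obj, objtype=None):
        raise AttributeError('not readable')

    def __set__(self, obj, value):
        obj.__dict__['_value'] = value

class Console:
    input = WriteOnly()

con = Console()
con.input = 'm'                   # setting succeeds
print(hasattr(con, 'input'))      # False: __get__ raises AttributeError

# The class-namespace check still finds the descriptor:
attr = vars(type(con)).get('input')
print(attr is not None and hasattr(attr, '__set__'))  # True
```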
The proxy uses __getattr__ (or even __getattribute__) and __setattr__ hooks to handle arbitrary attributes. You are out of luck here: you can't detect which attributes either method will support, short of calling hasattr() or trying to set the attribute directly. Use try...except guarding:
try:
    driver.console.con.input = 'something'
except Attribute:  # exactly what exception object does this throw?
    # can't be set, not a writable attribute
    pass
You may have to use a debugger or print() statements to figure out exactly what exception is being thrown (use a try...except Exception as ex: block to capture all exceptions, then inspect ex); in the traceback in your question, the exception message at the end looks decidedly non-standard. That project really should raise an AttributeError at that point.
Given the rather custom exception being thrown, my money is on option 2 (but option 1 is still a possibility if the __get__ method on the descriptor throws the exception).
Related
I'm trying to generate the values that will go into a custom enum instead of using literals:
from enum import IntEnum
class Test(IntEnum):
    for i in range(3):
        locals()['ABC'[i]] = i
    del i
My desired output is three attributes, named A, B, C, with values 0, 1, 2, respectively. This is based on two expectations that I've come to take for granted about Python:
The class body will run in an isolated namespace before anything else
locals during that run will refer to said isolated namespace
Once the body is done executing, I would expect the result to be not much different than calling IntEnum('Test', [('A', 0), ('B', 1), ('C', 2)]) (which works just fine BTW).
Instead, I get an error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in Test
  File "/usr/lib/python3.8/enum.py", line 95, in __setitem__
    raise TypeError('Attempted to reuse key: %r' % key)
TypeError: Attempted to reuse key: 'i'
If I try doing the same with class Test: instead of class Test(IntEnum):, it works as expected. The traceback is showing the problem to be happening in enum.py. This contradicts my assumptions about how things work.
What is going on with this code, and how do I create attributes in the local namespace of the class body before IntEnum can get to them?
Background: The reason I'm trying to create the enum this way is that the "real" values are a more complex tuple, and a __new__ method is defined to parse the tuple and assign some attributes to the individual enum members. None of that seems relevant to figuring out what is happening with the error and fixing it.
First, an explanation of what is happening. Before executing the class body, the metaclass's __prepare__ method is used to create the namespace. Normally this is just a dict. However, enum's metaclass (enum.EnumMeta, also exposed as enum.EnumType in newer Python versions) uses an enum._EnumDict class, which specifically prevents duplicate names from being added to the namespace. While this does not alter how the code in the class body is run, it does alter the namespace into which that code places names.
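The duplicate rejection can be observed directly by asking the metaclass for a fresh class-body namespace. This sketch relies on the private _EnumDict class, whose exact class name and error message vary across Python versions, so the message is printed rather than assumed:

```python
import enum

# Ask Enum's metaclass for the namespace it would use for a class body.
ns = type(enum.IntEnum).__prepare__('Demo', (enum.IntEnum,))
print(type(ns).__name__)   # the private dict subclass, e.g. '_EnumDict'

ns['x'] = 1
try:
    ns['x'] = 2            # second binding of the same member name
except TypeError as e:
    print('TypeError:', e)
```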
There are a couple of exceptions to the duplicate prevention, which offer potential solutions. First, the proper solution is to use the _ignore_ sunder attribute. If it gets set first, the variable i can be used normally, and will not appear in the final class:
class Test(IntEnum):
    _ignore_ = ['i']
    for i in range(3):
        locals()['ABC'[i]] = i
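Run in one piece (on CPython, with Python 3.7+ for _ignore_), the _ignore_ version produces exactly the desired members:

```python
from enum import IntEnum

class Test(IntEnum):
    _ignore_ = ['i']          # 'i' is a scratch variable, not a member
    for i in range(3):
        locals()['ABC'[i]] = i

print([m.name for m in Test])    # ['A', 'B', 'C']
print([m.value for m in Test])   # [0, 1, 2]
```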
Another, much hackier method is to use a dunder name, which will be ignored by the metaclass:
class Test(IntEnum):
    for __i__ in range(3):
        locals()['ABC'[__i__]] = __i__
    del __i__
While this solution is functional, it relies on dunder names, which are nominally reserved by the language, and on an undocumented feature of the metaclass; both are bad practice.
I know what namespaces are. But when running
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('bar')
parser.parse_args(['XXX']) # outputs: Namespace(bar='XXX')
What kind of object is Namespace(bar='XXX')? I find this totally confusing.
Reading the argparse docs, it says "Most ArgumentParser actions add some value as an attribute of the object returned by parse_args()". Shouldn't this object then appear when running globals()? Or how can I introspect it?
Samwise's answer is very good, but let me answer the other part of the question.
Or how can I introspect it?
Being able to introspect objects is a valuable skill in any language, so let's approach this as though Namespace is a completely unknown type.
>>> obj = parser.parse_args(['XXX']) # outputs: Namespace(bar='XXX')
Your first instinct is good. See if there's a Namespace in the global scope, which there isn't.
>>> Namespace
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'Namespace' is not defined
So let's see the actual type of the thing. The Namespace(bar='XXX') display comes from a __str__ or __repr__ method somewhere, so let's find out what the type actually is.
>>> type(obj)
<class 'argparse.Namespace'>
and its module
>>> type(obj).__module__
'argparse'
Now it's a pretty safe bet that we can do from argparse import Namespace and get the type. Beyond that, we can do
>>> help(argparse.Namespace)
in the interactive interpreter to get detailed documentation on the Namespace class, all with no Internet connection necessary.
It's simply a container for the data that parse_args generates.
https://docs.python.org/3/library/argparse.html#argparse.Namespace
This class is deliberately simple, just an object subclass with a readable string representation.
Just do parser.parse_args(...).bar to get the value of your bar argument. That's all there is to that object. Per the doc, you can also convert it to a dict via vars().
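Putting the two access patterns together in one runnable sketch:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('bar')

args = parser.parse_args(['XXX'])
print(args.bar)       # XXX
print(vars(args))     # {'bar': 'XXX'}
```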
The symbol Namespace doesn't appear when running globals() because you didn't import it individually. (You can access it as argparse.Namespace if you want to.) It's not necessary to touch it at all, though, because you don't need to instantiate a Namespace yourself. I've used argparse many times and until seeing this question never paid attention to the name of the object type that it returns -- it's totally unimportant to the practical applications of argparse.
Namespace is basically just a bare-bones class, on whose instances you can define attributes, with a few niceties:
A nice __repr__
Only keyword arguments can be used to instantiate it, preventing "anonymous" attributes.
A convenient way to check whether an attribute exists ('foo' in Namespace(bar=3) evaluates to False)
Equality with other Namespace instances, based on having identical attributes and attribute values (e.g., Namespace(foo=3, bar=5) == Namespace(bar=5, foo=3))
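These niceties can be verified directly:

```python
from argparse import Namespace

ns = Namespace(foo=3, bar=5)
print('foo' in ns)                      # True
print('baz' in ns)                      # False
print(ns == Namespace(bar=5, foo=3))    # True: order-independent equality
print(vars(ns))                         # {'foo': 3, 'bar': 5}
```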
Instances of Namespace are returned by parse_args:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('bar')
args = parser.parse_args(['XXX'])
assert args.bar == 'XXX'
Consider the following example that uses __subclasscheck__ for a custom exception type:
class MyMeta(type):
    def __subclasscheck__(self, subclass):
        print(f'__subclasscheck__({self!r}, {subclass!r})')

class MyError(Exception, metaclass=MyMeta):
    pass
Now when raising an exception of this type, the __subclasscheck__ method gets invoked; i.e. raise MyError() results in:
__subclasscheck__(<class '__main__.MyError'>, <class '__main__.MyError'>)
Traceback (most recent call last):
  File "test.py", line 8, in <module>
    raise MyError()
__main__.MyError
Here the first line of the output shows that __subclasscheck__ got invoked to check whether MyError is a subclass of itself, i.e. issubclass(MyError, MyError). I'd like to understand why that's necessary and how it's useful in general.
I'm using CPython 3.8.1 to reproduce this behavior. I also tried PyPy3 (3.6.9) and here __subclasscheck__ is not invoked.
I guess this is a CPython implementation detail. As stated in the documentation of PyErr_NormalizeException:
Under certain circumstances, the values returned by PyErr_Fetch() below can be "unnormalized", meaning that *exc is a class object but *val is not an instance of the same class.
So sometime during the processing of the raised error, CPython will normalize the exception, because otherwise it cannot assume that the value of the error is of the right type.
In your case it happens as follows:
Eventually, while processing the exception, PyErr_Print is called, which calls _PyErr_NormalizeException.
_PyErr_NormalizeException calls PyObject_IsSubclass.
PyObject_IsSubclass uses __subclasscheck__ if it is provided.
I cannot say what those "certain circumstances" for "*exc is a class object but *val is not an instance of the same class" are (maybe needed for backward compatibility - I don't know).
My first assumption was that it happens when CPython ensures (i.e. here) that the exception is derived from BaseException.
The following code
class OldStyle():
    pass

raise OldStyle
would raise OldStyle in Python 2, but TypeError: exceptions must be old-style classes or derived from BaseException, not type for
class NewStyle(object):
    pass

raise NewStyle
or TypeError: exceptions must derive from BaseException in Python3 because in Python3 all classes are "new style".
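The Python 3 behavior is easy to reproduce; note that the resulting TypeError can itself be caught normally:

```python
class NotAnException:
    pass

try:
    raise NotAnException   # not derived from BaseException
except TypeError as e:
    print(e)               # exceptions must derive from BaseException
```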
However, for this check not PyObject_IsSubclass but PyType_FastSubclass is used:
#define PyExceptionClass_Check(x) \
    (PyType_Check((x)) && \
     PyType_FastSubclass((PyTypeObject*)(x), Py_TPFLAGS_BASE_EXC_SUBCLASS))
i.e. only the type's tp_flags are looked at.
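Those flag bits are visible from Python as __flags__, which makes the fast-path check easy to observe. This sketch relies on the CPython implementation detail that Py_TPFLAGS_BASE_EXC_SUBCLASS is bit 30 of tp_flags:

```python
# CPython implementation detail: bit 30 of tp_flags marks BaseException
# subclasses, so PyType_FastSubclass never needs to call __subclasscheck__.
Py_TPFLAGS_BASE_EXC_SUBCLASS = 1 << 30

print(bool(ValueError.__flags__ & Py_TPFLAGS_BASE_EXC_SUBCLASS))  # True
print(bool(int.__flags__ & Py_TPFLAGS_BASE_EXC_SUBCLASS))         # False
```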
I'm a minor contributor to a package where people are meant to do this (Foo.Bar.Bar is a class):
>>> from Foo.Bar import Bar
>>> s = Bar('a')
Sometimes people do this by mistake (Foo.Bar is a module):
>>> from Foo import Bar
>>> s = Bar('a')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'module' object is not callable
This might seem simple, but users still fail to debug it, and I would like to make it easier. I can't change the names of Foo or Bar, but I would like to add a more informative message to the traceback, like:
TypeError("'module' object is not callable, perhaps you meant to call 'Bar.Bar()'")
I read the Callable modules Q&A, and I know that I can't add a __call__ method to a module (and I don't want to wrap the whole module in a class just for this). Anyway, I don't want the module to be callable, I just want a custom traceback. Is there a clean solution for Python 3.x and 2.7+?
Add this to the top of Bar.py (based on this question):
import sys

this_module = sys.modules[__name__]

class MyModule(sys.modules[__name__].__class__):
    def __call__(self, *a, **k):  # makes the module callable
        raise TypeError("'module' object is not callable, perhaps you meant to call 'Bar.Bar()'")

    def __getattribute__(self, name):
        return this_module.__getattribute__(name)

sys.modules[__name__] = MyModule(__name__)

# the rest of the file
class Bar:
    pass
Note: tested with Python 3.6 and Python 2.7.
What you want is to change the error message when it is displayed to the user. One way to do that is to define your own excepthook.
Your own function could:
search for the calling frame in the traceback object (which contains information about the TypeError exception and the function that raised it),
search the Bar object in the local variables,
alter the error message if the object is a module instead of a class or function.
In Foo/__init__.py you can install your excepthook:
import inspect
import sys

def _install_foo_excepthook():
    _sys_excepthook = sys.excepthook

    def _foo_excepthook(exc_type, exc_value, exc_traceback):
        if exc_type is TypeError:
            # -- find the last frame (source of the exception)
            tb_frame = exc_traceback
            while tb_frame.tb_next is not None:
                tb_frame = tb_frame.tb_next
            # -- search 'Bar' in the local variables
            f_locals = tb_frame.tb_frame.f_locals
            if 'Bar' in f_locals:
                obj = f_locals['Bar']
                if inspect.ismodule(obj):
                    # -- change the error message
                    exc_value.args = ("'module' object is not callable, perhaps you meant to call 'Foo.Bar.Bar()'",)
        _sys_excepthook(exc_type, exc_value, exc_traceback)

    sys.excepthook = _foo_excepthook

_install_foo_excepthook()
Of course, you may need to refine this heuristic…
With the following demo:
# coding: utf-8
from Foo import Bar
s = Bar('a')
You get:
Traceback (most recent call last):
File "/path/to/demo_bad.py", line 5, in <module>
s = Bar('a')
TypeError: 'module' object is not callable, perhaps you meant to call 'Foo.Bar.Bar()'
There are a lot of ways you could get a different error message, but they all have weird caveats and side effects.
Replacing the module's __class__ with a types.ModuleType subclass is probably the cleanest option, but it only works on Python 3.5+.
Besides the 3.5+ limitation, the primary weird side effects I've thought of for this option are that the module will be reported callable by the callable function, and that reloading the module will replace its class again unless you're careful to avoid such double-replacement.
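A minimal single-file sketch of the __class__ replacement (Python 3.5+), using a throwaway module created with types.ModuleType rather than a real package (the names here are illustrative):

```python
import types

# Stand-in for the real Bar.py module
mod = types.ModuleType('Bar')
mod.Bar = type('Bar', (), {})   # the class users should actually call

class CallableModule(types.ModuleType):
    def __call__(self, *args, **kwargs):
        raise TypeError(
            "'module' object is not callable, perhaps you meant 'Bar.Bar()'")

mod.__class__ = CallableModule   # swapping a module's class is allowed on 3.5+

try:
    mod('a')                     # calling the module now gives the custom message
except TypeError as e:
    print(e)

print(callable(mod))             # True: the side effect mentioned above
```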
Replacing the module object with a different object works on pre-3.5 Python versions, but it's very tricky to get completely right.
Submodules, reloading, global variables, any module functionality besides the custom error message... all of those are likely to break if you miss some subtle aspect of the implementation. Also, the module will be reported callable by callable, just like with the __class__ replacement.
Trying to modify the exception message after the exception is raised, for example in sys.excepthook, is possible, but there isn't a good way to tell that any particular TypeError came from trying to call your module as a function.
Probably the best you could do would be to check for a TypeError with a 'module' object is not callable message in a namespace where it looks plausible that your module would have been called - for example, if the Bar name is bound to the Foo.Bar module in either the frame's locals or globals - but that's still going to have plenty of false negatives and false positives. Also, sys.excepthook replacement isn't compatible with IPython, and whatever mechanism you use would probably conflict with something.
Right now, the problems you have are easy to understand and easy to explain. The problems you would have with any attempt to change the error message are likely to be much harder to understand and harder to explain. It's probably not a worthwhile tradeoff.
This question already has an answer here: Python: subscript a module (1 answer). Closed 7 years ago.
So it's quite a simple question: how do I add __getitem__ to a Python module? I mostly just want it for ease of use, but it's confusing why Python won't let me 'just set it'. Below is a simple example of __getitem__ semi-working; however, I want other['test'] to work too.
Here's the full output:
hello
hello
Traceback (most recent call last):
  File "main.py", line 4, in <module>
    print other['test']
TypeError: 'module' object has no attribute '__getitem__'
main.py
import other
print other.get('test')
print other.__getitem__('test')
print other['test']
other.py
test = 'hello'
def __getitem__(name):
    return globals()[name]

get = __getitem__
I've tried to set __getitem__ using globals() as well, globals()['__getitem__'] = __getitem__; it didn't work. And I tried to set it in main.py. So I'm confused as to why it's so adamant in not allowing me to use other['test'].
If it's impossible, then a short reason would be good.
Special methods are looked up on the type, not on an instance. Python looks for type(other).__getitem__() and that isn't available. You'd have to add the __getitem__ method to the module type; you can't in Python.
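A quick illustration of type-level special-method lookup, using a plain class instead of a module:

```python
class A:
    pass

a = A()
a.__getitem__ = lambda key: 42        # set on the instance: ignored by a[...]
try:
    a['x']
except TypeError:
    print('instance attribute ignored')

A.__getitem__ = lambda self, key: 42  # set on the type: used by a[...]
print(a['x'])                          # 42
```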
You'd have to replace the whole module instance in sys.modules with an instance of your own class to achieve what you want:
class MyModule(object):
    def __init__(self, namespace):
        self.__dict__.update(namespace)

    def __getitem__(self, name):  # note: needs the self parameter
        return self.__dict__[name]

import other
import sys

sys.modules[other.__name__] = MyModule(other.__dict__)
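A self-contained version of this trick, using a synthetic stand-in for other built with types.ModuleType rather than a real file (the names are illustrative):

```python
import sys
import types

# Stand-in for other.py
other = types.ModuleType('other')
other.test = 'hello'
sys.modules['other'] = other

class MyModule(object):
    def __init__(self, namespace):
        self.__dict__.update(namespace)

    def __getitem__(self, name):
        return self.__dict__[name]

# Replace the module object in sys.modules with our subscriptable wrapper.
sys.modules['other'] = MyModule(other.__dict__)

import other            # re-import binds the replacement object
print(other['test'])    # hello
```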
This limitation doesn't just apply to modules; it applies to instances of any type, because special-method lookup always goes to the type, never the instance.
For example, you can also see this happening with type type:
In [32]: class Foo(type):
   ....:     pass
   ....:

In [33]: type(Foo)
Out[33]: type

In [34]: Foo.__getitem__ = lambda x, y: x.__dict__.get(y)

In [35]: Foo.foo = "hello"

In [36]: Foo['foo']
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-38-e354ca231ddc> in <module>()
----> 1 Foo['foo']

TypeError: 'type' object has no attribute '__getitem__'

In [37]: Foo.__dict__.get('foo')
Out[37]: 'hello'
The reason is that at the C-API level, both module and type are particular instances of PyTypeObject which don't implement the required protocol for inducing the same search mechanism that the PyTypeObject implementation of object and friends does implement.
To change this aspect of the language itself, rather than hacking a replacement into sys.modules, you would need to change the C source definitions for PyModule_Type and PyType_Type so that C functions implementing __getitem__ were created and wired into the appropriate slot of the big PyTypeObject struct of magic functions, instead of 0 (the sentinel for "does not exist"), and then recompile Python with these modified implementations of module and type.