It's not a real world program but I would like to know why it can't be done.
I was thinking about numpy.r_ object and tried to do something similar but just making a class and not instantiating it.
A simple version (with some flaws) for integers could be:

class r_:
    @classmethod
    def __getitem__(cls, sl):
        try:
            return range(sl)
        except TypeError:
            sl = sl.start, sl.stop, sl.step
            return range(*(i for i in sl if i is not None))
but as I try to do r_[1:10] I receive TypeError: 'type' object is not subscriptable.
Of course the code works with r_.__getitem__(slice(1,10)) but that's not what I want.
Is there something I can do in this case instead of using r_()[1:10]?
The protocol for resolving obj[index] is to look for a __getitem__ method in the type of obj, not to directly look up a method on obj (which would normally fall back to looking up a method on the type if obj didn't have an instance attribute with the name __getitem__).
This can be easily verified.
>>> class Foo(object):
...     pass
...
>>> def __getitem__(self, index):
...     return index
...
>>> f = Foo()
>>> f.__getitem__ = __getitem__
>>> f[3]
Traceback (most recent call last):
  File "<pyshell#8>", line 1, in <module>
    f[3]
TypeError: 'Foo' object does not support indexing
>>> Foo.__getitem__ = __getitem__
>>> f[3]
3
I don't know why exactly it works this way, but I would guess that at least part of the reason is exactly to prevent what you're trying to do; it would be surprising if every class that defined __getitem__ so that its instances were indexable accidentally gained the ability to be indexed itself. In the overwhelming majority of cases, code that tries to index a class will be a bug, so if the __getitem__ method happened to be able to return something, it would be bad if that didn't get caught.
Why don't you just call the class something else, and bind an instance of it to the name r_? Then you'd be able to do r_[1:10].
What you are trying to do is like list[1:5] or set[1:5] =) A __getitem__ method defined in a class body only works on that class's instances.
What one would normally do is just create a single ("singleton") instance of the class:
class r_class(object):
    ...

r_ = r_class()
Now you can do:
r_[1:5]
You can also use metaclasses, but that may be more than is necessary.
"No, my question was about getitem in the class, not in the instance"
Then you do need metaclasses.
class r_meta(type):
    def __getitem__(cls, key):
        return range(key)

class r_(object, metaclass=r_meta):
    pass
Demo:
>>> r_[5]
range(0, 5)
If you pass in r_[1:5] you will get a slice object. Do help(slice) for more info; you can access its values like key.stop if isinstance(key, slice) else key.
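Putting that together, a sketch of a slice-aware version (covering only the int and slice cases from the question):

```python
class r_meta(type):
    def __getitem__(cls, key):
        # r_[1:10] arrives here as a slice object; r_[5] as a plain int
        if isinstance(key, slice):
            parts = (i for i in (key.start, key.stop, key.step) if i is not None)
            return range(*parts)
        return range(key)

class r_(metaclass=r_meta):
    pass
```

With this, list(r_[1:5]) gives [1, 2, 3, 4] and list(r_[3]) gives [0, 1, 2].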
Define __getitem__() as a normal method in r_'s metaclass.
The reason for this behavior lies in the way special methods like __getitem__() are looked up.
Attributes are looked up first in the object's __dict__ and, if not found there, in the class's __dict__. That's why e.g. this works:
>>> class Test1(object):
... x = 'hello'
...
>>> t = Test1()
>>> t.__dict__
{}
>>> t.x
'hello'
Methods that are defined in the class body are stored in the class __dict__:
>>> class Test2(object):
... def foo(self):
... print 'hello'
...
>>> t = Test2()
>>> t.foo()
hello
>>> Test2.foo()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unbound method foo() must be called with Test2 instance as first argument (got nothing instead)
So far there's nothing surprising here. When it comes to special methods however, Python's behavior is a little (or very) different:
>>> class Test3(object):
... def __getitem__(self, key):
... return 1
...
>>> t = Test3()
>>> t.__getitem__('a key')
1
>>> Test3['a key']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'type' object is unsubscriptable
The error messages are very different. With Test2, Python complains about an unbound method call, whereas with Test3 it complains about the unsubscriptability.
If you try to invoke a special method - by way of using its associated operator - on an object, Python doesn't try to find it in the object's __dict__ but goes straight to the __dict__ of the object's class, which, if the object is itself a class, is a metaclass. So that's where you have to define it:
>>> class Test4(object):
... class __metaclass__(type):
... def __getitem__(cls, key):
... return 1
...
>>> Test4['a key']
1
There's no other way. To quote PEP 20: There should be one-- and preferably only one --obvious way to do it.
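Note that the __metaclass__ class attribute used above is Python 2 syntax; a sketch of the Python 3 equivalent:

```python
class Meta(type):
    def __getitem__(cls, key):
        return 1

# Python 3 spelling of the Test4 example: pass the metaclass as a keyword
class Test4(metaclass=Meta):
    pass
```

Now Test4['a key'] returns 1, just as in the Python 2 session.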
Related
Suppose I have a python object x and a string s, how do I set the attribute s on x? So:
>>> x = SomeObject()
>>> attr = 'myAttr'
>>> # magic goes here
>>> x.myAttr
'magic'
What's the magic? The goal of this, incidentally, is to cache calls to x.__getattr__().
setattr(x, attr, 'magic')
For help on it:
>>> help(setattr)
Help on built-in function setattr in module __builtin__:
setattr(...)
setattr(object, name, value)
Set a named attribute on an object; setattr(x, 'y', v) is equivalent to
``x.y = v''.
However, you should note that you can't do that to a "pure" instance of object. But it is likely you have a simple subclass of object where it will work fine. I would strongly urge the O.P. to never make instances of object like that.
Usually, we define classes for this.
class XClass(object):
    def __init__(self):
        self.myAttr = None

x = XClass()
x.myAttr = 'magic'
x.myAttr
You can, to an extent, do this with the setattr and getattr built-in functions. However, they don't work on instances of object directly.
>>> a = object()
>>> setattr(a, 'hi', 'mom')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'object' object has no attribute 'hi'
They do, however, work on all kinds of simple classes.
class YClass(object):
    pass

y = YClass()
setattr(y, 'myAttr', 'magic')
y.myAttr
Let x be an object; then you can do it in two ways:
x.attr_name = s
setattr(x, 'attr_name', s)
Also works fine within a class:
def update_property(self, property, value):
setattr(self, property, value)
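For instance, a minimal sketch (the Config class and attribute names are made up) that bulk-assigns attributes from a dict through such a method:

```python
class Config:
    def update_property(self, name, value):
        # setattr works the same with self as with any other object
        setattr(self, name, value)

c = Config()
for name, value in {'host': 'localhost', 'port': 8080}.items():
    c.update_property(name, value)
```

Afterwards c.host is 'localhost' and c.port is 8080.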
If you want a filename from an argument:

import sys

filename = sys.argv[1]
with open(filename, 'r') as f:
    contents = f.read()
If you want an argument to show on your terminal (using print()):

import sys

arg = sys.argv[1]
print(arg)
Let us, counterfactually, assume I had a good reason for wanting to make builtin print a static method of some class.
My, apparently wrong, gut feeling was that I need to declare it static doing something like
class sm:
    p = staticmethod(print)

as opposed to

class no_sm:
    p = print
But it seems both work just fine.
a = sm()
b = no_sm()
a.p("hello")
b.p("hello")
prints
hello
hello
Why does it just work and is there any difference between the two?
Related: Why use staticmethod instead of no decorator at all
Most normal (pure-Python) functions go through the descriptor protocol (__get__), so they need special decoration when attached to classes in the way you're doing.
take for example:
def f():
    print('hi')

class C:
    f = f

class D:
    f = staticmethod(f)
in this case C().f() errors:
>>> C().f()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: f() takes 0 positional arguments but 1 was given
that's because it goes through:
>>> C.f.__get__(C())()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: f() takes 0 positional arguments but 1 was given
but you'll notice that print has no such attribute __get__:

>>> hasattr(print, '__get__')
False
it even has a different type!
>>> type(print)
<class 'builtin_function_or_method'>
>>> type(f)
<class 'function'>
So the answer is: print (along with other C-implemented functions) does not participate in the descriptor protocol, so without any special treatment it acts as a normal callable object, with no self attached.
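A quick way to check both halves of that claim (a sketch; the class and names are made up):

```python
def f():
    print('hi')

# a plain Python function is a descriptor, so attaching it to a class
# turns attribute access into method binding...
assert hasattr(f, '__get__')

# ...while a C-implemented builtin like print is not a descriptor,
# so it is returned as-is, with no self bound
assert not hasattr(print, '__get__')

class C:
    plain = staticmethod(f)  # needed, or C().plain() would receive self
    builtin = print          # works as-is

C().plain()           # prints: hi
C().builtin('hello')  # prints: hello
```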
I want to create a static variable in python for a class and instantiate it with the same type i.e.
class TestVarClass:
    # this is my static variable here
    testVar = None

    def __init__(self, value):
        # instance variable here
        self.instanceVar = 0

# instantiating the static variable with its own type
TestVarClass.testVar = TestVarClass(1)
Since the class name is not bound until the class body has finished executing, I cannot instantiate the static object inside the class itself. Hence, I placed it outside the class. But when I debug this in PyCharm, the variable testVar shows up with infinite nesting like below:
What does this mean? Since the address at every level is the same, it doesn't look like it is being allocated multiple times, but then why does the debugger show the value like this?
I basically want to create a static, read-only variable in Python, and ended up here.
Why do you see what you see? You have created an instance of TestVarClass and assigned it to testVar class attribute, which is accessible from that class and each of its instances (but is still the same class attribute and refers to the same object). It would be the same as a simplified example of:
>>> class C:
... pass
...
>>> C.a = C()
>>> C.a
<__main__.C instance at 0x7f14d6b936c8>
>>> C.a.a
<__main__.C instance at 0x7f14d6b936c8>
class C now has attribute a, itself an instance of C. I can access C.a, and since that is an instance of C, I can access its a as well (C.a.a). And so on. It's still the very same object, though.
Python doesn't really have static variables. Well, it sort of does, but as a side effect of default argument values being assigned once, when a function is defined. Combine that with the behavior (and in-place modification) of mutable objects, and you essentially get the same behavior you'd expect from a static variable in other languages. Take the following example:
>>> def f(a=[]):
... a.append('x')
... return id(a), a
...
>>> f()
(139727478487952, ['x'])
>>> f()
(139727478487952, ['x', 'x'])
>>>
I am not entirely sure what exactly you are after. Once assigned, a class attribute lives with the class and hence could be considered static in that respect. So I presume assign-only-once behavior interests you? Or exposing the class attribute to instances without the instances themselves being able to assign to it? Such as:
>>> class C:
... _a = None
... @property
... def a(self):
... return self._a
...
>>> C._a = C()
>>> c = C()
>>> print(c.a)
<__main__.C object at 0x7f454bccda10>
>>> c.a = 'new'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: can't set attribute
Or if you wanted to still use a in both class and instance?
>>> class C:
... a = None
... def __setattr__(self, name, value):
... if name == 'a':
... raise TypeError("Instance cannot assign to 'a'")
... super().__setattr__(name, value)
...
>>> C.a = C()
>>> c = C()
>>> c.a
<__main__.C object at 0x7f454bccdc10>
>>> C.a
<__main__.C object at 0x7f454bccdc10>
>>> c.a = 'new_val'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 5, in __setattr__
TypeError: Instance cannot assign to 'a'
Essentially, a read-only variable (static or not) already sounds a bit like a contradictio in adiecto (it's not really much of a variable). Long story short, I guess the real question is what it is you're trying to do (the problem you're trying to solve) in context... and perhaps based on that we could come up with a reasonable way to express the idea in Python. As given, without further qualification, Python does not have anything it would call static read-only variables.
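If "read-only" should also hold at the class level (not just for instances), one more sketch, this time using a property on the metaclass (all names here are made up):

```python
class ReadOnlyMeta(type):
    @property
    def testVar(cls):
        # class-level read-only view over the privately stored value
        return cls._testVar

class TestVarClass(metaclass=ReadOnlyMeta):
    _testVar = None

# assigning the privately named attribute still works...
TestVarClass._testVar = TestVarClass()

TestVarClass.testVar        # the instance stored above
# ...but assigning through the property does not:
# TestVarClass.testVar = 1  # raises AttributeError
```

Because the property lives on the metaclass and has no setter, any attempt to rebind TestVarClass.testVar is rejected.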
I'm trying to reduce copy/paste in my code and have stumbled upon this problem. I've googled for the answer, but all answers use an instance of a class as the key; I can't find anything on using a class definition itself as the key (I don't know if it's possible).
My code is this:
# All chunkFuncs keys are class definitions, all values are functions
chunkFuncs = {Math_EXP : Math_EXPChunk, Assignment : AssignmentChunk, Function : FunctionChunk}
def Chunker(chunk, localScope):
    for chunkType in chunkFuncs:
        if isinstance(chunk, chunkType):
            # The next line is where the error is raised
            localScope = chunkFuncs[chunk](chunk, localScope)
    return localScope
and the error is this
TypeError: unhashable type: 'Assignment'
Here are the class definitions:
class Math_EXP(pyPeg.List):
    grammar = [Number, Symbol], pyPeg.maybe_some(Math_OP, [Number, Symbol])

class Assignment(pyPeg.List):
    grammar = Symbol, '=', [Math_EXP, Number]

class Function(pyPeg.List):
    grammar = Symbol, '(', pyPeg.optional(pyPeg.csl([Symbol, Number])), ')'
Are there any alternative methods I could use to get the same effect?
Thanks.
OK, the comments are getting out of hand ;-)
It seems certain now that the class object isn't the problem. If it were, the error would have triggered on the first line, when the dict was first constructed:
chunkFuncs = {Math_EXP : Math_EXPChunk, Assignment : AssignmentChunk, Function : FunctionChunk}
If you try to construct a dict with an unhashable key, the dict creation fails at once:
>>> {[]: 3}
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'
But you got beyond that line, and Assignment is a key in the dict you constructed. So the error is in this line:
localScope = chunkFuncs[chunk](chunk,localScope)
Best guess is that it's an instance of Assignment that's unhashable:
>>> class mylist(list):
... pass
...
>>> hash(mylist)
2582159
>>> hash(mylist())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'mylist'
See? mylist is hashable, but the instance mylist() is not.
Later: best guess is that you're not going to be able to worm around this. Why? Because of the name of the base class, pyPeg.List. If it's mutable like a Python list, then instances won't be hashable - and shouldn't be (mutable objects are always dangerous as dict keys). You could still index a dict by id(the_instance), but whether that's semantically correct is something I can't guess without knowing a lot more about your code.
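For illustration, a sketch of the id()-keyed workaround mentioned above (MyList stands in for a mutable pyPeg.List subclass):

```python
class MyList(list):
    pass

seen = {}
item = MyList([1, 2, 3])

# the instance itself is unhashable, but its id() is a plain int;
# note the key is only meaningful while `item` stays alive
seen[id(item)] = 'processed'
```

Afterwards seen[id(item)] gives back 'processed', even though hash(item) would raise TypeError.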
You should be able to, yes, but you might need an extra type call:
>>> class X:
... pass
...
>>> class_map = {X: 5}
>>> my_x = X()
>>> class_map[type(my_x)]
5
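Applied to the Chunker from the question, that fix looks roughly like this (the Math_EXP/Assignment classes and handler functions are stand-ins for the real pyPeg-based ones):

```python
class Math_EXP(list):        # stand-ins for the pyPeg.List subclasses
    pass

class Assignment(list):
    pass

def math_chunk(chunk, scope):
    return scope + ['math']

def assignment_chunk(chunk, scope):
    return scope + ['assignment']

chunkFuncs = {Math_EXP: math_chunk, Assignment: assignment_chunk}

def Chunker(chunk, localScope):
    # index with the class of the chunk, not the (unhashable) chunk itself
    func = chunkFuncs.get(type(chunk))
    if func is not None:
        localScope = func(chunk, localScope)
    return localScope
```

One caveat: unlike the isinstance() loop in the question, type(chunk) will not match subclasses of the registered classes.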