I know that with getattr() you can call a method, but I need to overwrite it, so that myInstance.mymethod is replaced.
I have the method's name as a string and a reference to the instance.
You can overwrite it with setattr:
>>> class Foo(object):
...     def method(self): pass
...
>>> a = Foo()
>>> a.method()
>>> setattr(a,'method',1)
>>> a.method()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'int' object is not callable
To replace with another method:
>>> import types
>>> setattr(a,'method',types.MethodType(lambda self: self.__class__.__name__,a))
>>> a.method()
'Foo'
Where the lambda stuff is just fancy shorthand for defining a function:
def func(self):
    return self.__class__.__name__

setattr(a, 'method', types.MethodType(func, a))
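Since the question has the method's name only as a string, the same call works with the name held in a variable (a minimal sketch; method_name and replacement are illustrative names):
>>> method_name = 'method'          # the name you already have as a string
>>> def replacement(self):
...     return 'replaced'
...
>>> setattr(a, method_name, types.MethodType(replacement, a))
>>> a.method()
'replaced'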
Related
I want to provide a method that can be used on a Python 2.7 class object, but does not pollute the attribute namespace of its instances. Is there any way to do this?
>>> class Foo(object):
...     @classmethod
...     def ugh(cls):
...         return 33
...
>>> Foo.ugh()
33
>>> foo = Foo()
>>> foo.ugh()
33
You could subclass the classmethod descriptor:
class classonly(classmethod):
    def __get__(self, obj, type):
        if obj: raise AttributeError
        return super(classonly, self).__get__(obj, type)
This is how it would behave:
class C(object):
    @classonly
    def foo(cls):
        return 42
>>> C.foo()
42
>>> c=C()
>>> c.foo()
AttributeError
This desugars to the descriptor call (rather, it is invoked by the default implementation of __getattribute__):
>>> C.__dict__['foo'].__get__(None, C)
<bound method C.foo of <class '__main__.C'>>
>>> C.__dict__['foo'].__get__(c, type(c))
AttributeError
Required reading: Data Model — Implementing Descriptors and Descriptor HowTo Guide.
ugh is not in the namespace:
>>> foo.__dict__
{}
but the rules for attribute lookup fall back to the type of the instance for missing names. You can override Foo.__getattribute__ to prevent this.
class Foo(object):
    @classmethod
    def ugh(cls):
        return 33

    def __getattribute__(self, name):
        if name == 'ugh':
            raise AttributeError("Access to class method 'ugh' blocked from instance")
        return super(Foo, self).__getattribute__(name)
This produces:
>>> foo = Foo()
>>> foo.ugh()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "tmp.py", line 8, in __getattribute__
    raise AttributeError("Access to class method 'ugh' blocked from instance")
AttributeError: Access to class method 'ugh' blocked from instance
>>> Foo.ugh()
33
You must use __getattribute__, which is called unconditionally on any attribute access, rather than __getattr__, which is only called after the normal lookup (which includes checking the type's namespace) fails.
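For contrast, a minimal sketch (Foo2 is a made-up name) of why __getattr__ would not help here: it is only consulted after the normal lookup fails, and the normal lookup already finds the classmethod on the type, so the hook never fires.
class Foo2(object):
    @classmethod
    def ugh(cls):
        return 33
    def __getattr__(self, name):
        # never reached for 'ugh': normal lookup finds it on the class first
        raise AttributeError(name)
>>> Foo2().ugh()
33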
Python has quasi-private variables that use name mangling to reduce accidental access. Methods and attributes of the form __name are converted to _ClassName__name. Python applies the mangling when compiling the class body, using that class's own name, so the same __name spelled in a subclass mangles to a different name than in the parent.
I can use the private method in a class
>>> class A(object):
...     def __private(self):
...         print('boo')
...     def hello(self):
...         self.__private()
...
>>>
>>> A().hello()
boo
But not outside the class
>>> A().__private()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'A' object has no attribute '__private'
>>>
Or in subclasses
>>> class B(A):
...     def hello2(self):
...         self.__private()
...
>>>
>>> B().hello()
boo
>>> B().hello2()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in hello2
AttributeError: 'B' object has no attribute '_B__private'
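The mangled name is still reachable from outside, which is why this is only protection against accidental access rather than real privacy:
>>> B()._A__private()
boo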
Yes, you can create the method in the metaclass.
class FooMeta(type):
    # no @classmethod here
    def ugh(cls):
        return 33

class Foo(object):
    __metaclass__ = FooMeta
Foo.ugh() # returns 33
Foo().ugh() # AttributeError
Note that metaclasses are a power feature, and their use is discouraged if unnecessary. In particular, multiple inheritance requires special care if the parent classes have different metaclasses.
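For example, mixing bases whose metaclasses are unrelated fails at class-creation time (a sketch with made-up names):
class MetaA(type): pass
class MetaB(type): pass

class Base1(object):
    __metaclass__ = MetaA

class Base2(object):
    __metaclass__ = MetaB

class Combined(Base1, Base2):   # raises TypeError: metaclass conflict
    pass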
What's the easiest way to determine which Python class defines an attribute when inheriting? For example, say I have:
class A(object):
    defined_in_A = 123

class B(A):
    pass
a = A()
b = B()
and I wanted this code to pass:
assert hasattr(a, 'defined_in_A')
assert hasattr(A, 'defined_in_A')
assert hasattr(b, 'defined_in_A')
assert hasattr(B, 'defined_in_A')
assert defines_attribute(A, 'defined_in_A')
assert not defines_attribute(B, 'defined_in_A')
How would I implement the fictional defines_attribute function? My first thought would be to walk through the entire inheritance chain, and use hasattr to check for the attribute's existence, with the deepest match assumed to be the definer. Is there a simpler way?
(Almost) every Python object stores its own instance variables (the instance variables of a class object are what we usually call class variables). To get these as a dictionary you can use the vars() function and check for membership in it:
>>> "defined_in_A" in vars(A)
True
>>> "defined_in_A" in vars(B)
False
>>> "defined_in_A" in vars(a) or "defined_in_A" in vars(b)
False
The issue with this is that it does not work when a class uses __slots__, or for built-in objects, since those change how the instance variables are stored:
class A(object):
    __slots__ = ("x", "y")
    defined_in_A = 123
>>> A.x
<member 'x' of 'A' objects>
>>> "x" in vars(a)
Traceback (most recent call last):
File "<pyshell#5>", line 1, in <module>
"x" in vars(a)
TypeError: vars() argument must have __dict__ attribute
>>> vars(1) #or floats or strings will raise the same error
Traceback (most recent call last):
...
TypeError: vars() argument must have __dict__ attribute
I'm not sure there is a simple workaround for this case.
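One possible workaround, though, is to skip vars() for the lookup and instead walk the MRO, checking each class's own __dict__; slot descriptors live in the defining class's __dict__, so this also covers the __slots__ case. A sketch of the defines_attribute function from the question (new-style classes only):
def defines_attribute(klass, name):
    # the first class in the MRO whose own namespace holds the name is the definer
    for base in klass.__mro__:
        if name in base.__dict__:
            return base is klass
    return False
With the original A and B from the question, defines_attribute(A, 'defined_in_A') is True and defines_attribute(B, 'defined_in_A') is False.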
Let's say a function looks at an object and checks if it has a function a_method:
def func(obj):
    if hasattr(obj, 'a_method'):
        ...
    else:
        ...
I have an object whose class defines a_method, but I want to hide it from hasattr. I don't want to change the implementation of func to achieve this hiding, so what hack can I do to solve this problem?
If the method is defined on the class, you appear to be able to remove it from the class's __dict__. This prevents lookups, so hasattr will return False. You can still use the function if you keep a reference to it when you remove it (as in the example below); just remember that you have to pass an instance of the class in for self, since it is no longer called with an implied self.
>>> class A:
...     def meth(self):
...         print "In method."
...
>>>
>>> a = A()
>>> a.meth
<bound method A.meth of <__main__.A instance at 0x0218AB48>>
>>> fn = A.__dict__.pop('meth')
>>> hasattr(a, 'meth')
False
>>> a.meth
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: A instance has no attribute 'meth'
>>> fn()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: meth() takes exactly 1 argument (0 given)
>>> fn(a)
In method.
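If you later need to undo the hiding, putting the saved function back on the class restores normal lookup:
>>> A.meth = fn
>>> hasattr(a, 'meth')
True
>>> a.meth()
In method.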
You could redefine the hasattr function. Below is an example.
saved_hasattr = hasattr
def hasattr(obj, method):
    if method == 'MY_METHOD':
        return False
    else:
        return saved_hasattr(obj, method)
Note that you probably want to implement more detailed checks than just checking the method name. For example, checking the object's type might be beneficial, as sketched below.
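A slightly more careful sketch along those lines (MyClass stands in for the class whose method you want to hide):
saved_hasattr = hasattr

def hasattr(obj, name):
    # hide 'a_method' only on instances of the class we care about
    if name == 'a_method' and isinstance(obj, MyClass):
        return False
    return saved_hasattr(obj, name)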
Try this:
class Test(object):
    def __hideme(self):
        print 'hidden'
t = Test()
print hasattr(t,"__hideme") #prints False....
I believe this works because of the double-underscore name mangling that hides members of a class from the outside world... Unless someone has a strong argument against this, I'd think this is way better than popping stuff off __dict__? Thoughts?
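One caveat: the method has not vanished, it is just stored under its mangled name, so anyone who checks for that name will still find it:
>>> hasattr(t, "_Test__hideme")
True
>>> t._Test__hideme()
hidden
It also means the plain name no longer exists at all, even for legitimate callers outside the class.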
Disclaimer: this is just an exercise in metaprogramming; it has no practical purpose.
I've assigned __getitem__ and __getattr__ methods on a function object, but there is no effect...
def foo():
print "foo!"
foo.__getitem__ = lambda name: name
foo.__getattr__ = lambda name: name
foo.baz = 'baz'
Sanity check that we can assign properties to a function:
>>> foo.baz
'baz'
Neat. How about the "magic getters"?
>>> foo.bar
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'function' object has no attribute 'bar'
>>> foo['foo']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'function' object is not subscriptable
>>> getattr(foo, 'bar')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'function' object has no attribute 'bar'
Is it possible to have a "magic getter" on a function object?
Nope! Assigning __getattr__ (or __getitem__) to an instance doesn't work on any type of object:
>>> class A(object):
...     pass
...
>>> a = A()
>>> a.__getattr__ = lambda name: name
>>> a.foo
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'A' object has no attribute 'foo'
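Putting the same hook on the class instead of the instance does work (a quick sketch), which matches the explanation below that magic methods are looked up on the class, never on the instance:
>>> A.__getattr__ = lambda self, name: name
>>> A().foo
'foo'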
And you can't define __getitem__ (or __getattr__) on the built-in function type:
>>> import types
>>> types.FunctionType.__getitem__ = lambda name: name
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can't set attributes of built-in/extension type 'function'
And you can't subclass types.FunctionType:
>>> import types
>>> class F(types.FunctionType):
...     pass
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Error when calling the metaclass bases
type 'function' is not an acceptable base type
At least on new-style classes (which are the only kind in Python 3 and the kind you should be using in Python 2), Python only looks for magic methods on the class (and its ancestors), never on the instance. Docs here.
And of course you can't modify the function type, or derive from it. As you've found, however, any class with a __call__() method makes callable instances, so that's the way to do it.
AHHA! Use __call__, and wrap the function in F()
class F(object):
    def __init__(self, fn):
        self.__dict__['fn'] = fn

    def __call__(self, *args, **kwargs):
        return self.fn(*args, **kwargs)

    def __getitem__(self, name):
        return name

    def __getattr__(self, name):
        return name
>>> foo = F(foo)
>>> foo.bar
'bar'
>>> foo['foo']
'foo'
>>> foo()
foo!
My question is pretty simple. I have:
class upperstr(str):
    def __new__(cls, arg):
        return str.__new__(cls, str(arg).upper())
Why, if my __new__() method directly uses an instance of an immutable type (str), are instances of my new type (upperstr) mutable?
>>> s = str("text")
>>> "__dict__" in dir(s)
False
>>> s = upperstr("text")
>>> "__dict__" in dir(s)
True
At what stage does the interpreter set the __dict__ attribute on upperstr instances if I'm only overriding the __new__() method?
Thanks!
Instances of all user-defined classes in Python have a __dict__ attribute by default, even if you don't override anything at all:
>>> x = object()
>>> x.__dict__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'object' object has no attribute '__dict__'
>>> class MyObject(object):
...     pass
...
>>> x = MyObject()
>>> x.__dict__
{}
If you don't want a new-style class to have a __dict__, use __slots__ (documentation, related SO thread):
>>> class MyObject(object):
...     __slots__ = []
...
>>> x = MyObject()
>>> x.__dict__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'MyObject' object has no attribute '__dict__'
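Applied to the class from the question, an empty __slots__ (str subclasses only accept empty ones) keeps upperstr instances as dict-less as str itself; a sketch:
class upperstr(str):
    __slots__ = ()
    def __new__(cls, arg):
        return str.__new__(cls, str(arg).upper())

>>> "__dict__" in dir(upperstr("text"))
False
The __dict__ slot is decided when the class itself is created, before any __new__ runs, which is why overriding __new__ alone makes no difference.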