I'm working with Python3, and I have a really heavy class with many functions as attributes:
class A(object):
    def __init__(self):
        ...
    def method1(self):
        ...
    def method2(self):
        ...
    ...
    def methodN(self):
        ...
I would like to create an instance of class A that only has method1, for example. How could I do this?
Using inheritance, though it might be the most technically correct way, is not an option in my case - I can't modify the codebase so much.
I thought about decorating the class and deleting its attributes before __init__ is called, but I'm not even sure where to start tackling this. Any ideas?
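For illustration, here is one rough sketch of that goal (not taken from the answers below, and a different technique than the decorator idea above): a thin wrapper class, with a made-up name, that holds a normal A instance and exposes only method1.
class OnlyMethod1(object):
    # Hypothetical proxy: forwards method1 to a wrapped A instance and nothing else.
    def __init__(self, *args, **kwargs):
        self._inner = A(*args, **kwargs)  # assumes A's normal constructor

    def method1(self, *args, **kwargs):
        return self._inner.method1(*args, **kwargs)

restricted = OnlyMethod1()
restricted.method1()    # works
# restricted.method2()  # AttributeError: 'OnlyMethod1' object has no attribute 'method2'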
You can modify the __getattribute__ method of the class to disallow access to those attributes (via normal instance.attribute access):
class A(object):
    def __init__(self, x):
        self.x = x
    def method1(self):
        ...
    def method2(self):
        ...
    def __getattribute__(self, name):
        if object.__getattribute__(self, 'x'):
            if name == 'method2':
                raise AttributeError("Cannot access method2 if self.x is True")
        return object.__getattribute__(self, name)
>>> a = A(False)
>>> a.method1
<bound method A.method1 of <__main__.A object at 0x000001E25992F248>>
>>> a.method2
<bound method A.method2 of <__main__.A object at 0x000001E25992F248>>
>>> b = A(True)
>>> b.method1
<bound method A.method1 of <__main__.A object at 0x000001E25992F2C8>>
>>> b.method2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 11, in __getattribute__
AttributeError: Cannot access method2 if self.x is True
Obviously, this gets pretty unwieldy and violates a lot of assumptions about what it means to be an instance of a class. I can't think of a good reason to do this in real code, as you can still access the methods through object.__getattribute__(b, 'method2').
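For instance, continuing the session above, the restriction can be bypassed explicitly:
>>> object.__getattribute__(b, 'method2')
<bound method A.method2 of <__main__.A object at ...>>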
Related
Let's say I have this class:
class A:
    def __init__(self, a):
        self.a = a

    @classmethod
    def foo(self):
        return 'hello world!'
I use @classmethod so that I can call the function directly, without instantiating the class:
>>> A.foo()
'hello world!'
>>>
But now I am wondering, since I can still access it by instantiating the class:
>>> A(1).foo()
'hello world!'
>>>
Would it be possible to make it raise an error if the function foo is called on an instance, and only allow it to be called directly on the class, like A.foo()?
So if I do:
A(1).foo()
It should give an error.
The functionality of how classmethod, staticmethod and in fact normal methods are looked up / bound is implemented via descriptors. Similarly, one can define a descriptor that forbids lookup/binding on an instance.
A naive implementation of such a descriptor checks whether it is looked up via an instance and raises an error in this case:
class NoInstanceMethod:
    """Descriptor to forbid that other descriptors can be looked up on an instance"""
    def __init__(self, descr, name=None):
        self.descr = descr
        self.name = name

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, instance, owner):
        # enforce that the instance cannot look up the attribute at all
        if instance is not None:
            raise AttributeError(f"{type(instance).__name__!r} has no attribute {self.name!r}")
        # invoke any descriptor we are wrapping
        return self.descr.__get__(instance, owner)
This can be applied on top of other descriptors to prevent them from being looked up on an instance. Prominently, it can be combined with classmethod or staticmethod to prevent using them on an instance:
class A:
    def __init__(self, a):
        self.a = a

    @NoInstanceMethod
    @classmethod
    def foo(cls):
        return 'hello world!'
A.foo() # Stdout: hello world!
A(1).foo() # AttributeError: 'A' object has no attribute 'foo'
The above NoInstanceMethod is "naive" in that it does not take care of propagating descriptor calls other than __get__ to its wrapped descriptor. For example, one could propagate __set_name__ calls to allow the wrapped descriptor to know its name.
Since descriptors are free to (not) implement any of the descriptor methods, this can be supported but needs appropriate error handling. Extend the NoInstanceMethod to support whatever descriptor methods are needed in practice.
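As a rough sketch of such an extension (an illustration, not part of the original answer), __set_name__ can be forwarded to the wrapped descriptor when it defines one:
class NoInstanceMethod:
    """Descriptor that forbids looking up the wrapped descriptor on an instance."""
    def __init__(self, descr, name=None):
        self.descr = descr
        self.name = name

    def __set_name__(self, owner, name):
        self.name = name
        # forward the call if the wrapped descriptor also wants to know its name
        set_name = getattr(type(self.descr), '__set_name__', None)
        if set_name is not None:
            set_name(self.descr, owner, name)

    def __get__(self, instance, owner):
        if instance is not None:
            raise AttributeError(f"{type(instance).__name__!r} object has no attribute {self.name!r}")
        return self.descr.__get__(instance, owner)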
A workaround is to override the attribute on the instance during initialization, to make sure it can't be called through self.
def raise_(exc):
    raise exc

class A:
    STRICTLY_CLASS_METHODS = [
        "foo",
    ]

    def __init__(self, a):
        self.a = a
        for method in self.STRICTLY_CLASS_METHODS:
            # Option 1: Using generator.throw() to raise the exception. See https://www.python.org/dev/peps/pep-0342/#new-generator-method-throw-type-value-none-traceback-none
            # setattr(self, method, lambda *args, **kwargs: (_ for _ in ()).throw(AttributeError(method)))
            # Option 2: Using a function to raise the exception
            setattr(self, method, lambda *args, **kwargs: raise_(AttributeError(method)))

    @classmethod
    def foo(cls):
        return 'hello world!'

    def bar(self):
        return 'hola mundo!', self.a
Output
>>> A.foo()
'hello world!'
>>> a = A(123)
>>> a.bar()
('hola mundo!', 123)
>>> a.foo()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 11, in <lambda>
File "<stdin>", line 2, in raise_
AttributeError: foo
>>> a.bar()
('hola mundo!', 123)
>>> A(45).bar()
('hola mundo!', 45)
>>> A(6789).foo()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 11, in <lambda>
File "<stdin>", line 2, in raise_
AttributeError: foo
>>> A.foo()
'hello world!'
I want to provide a method that can be used on a Python 2.7 class object, but does not pollute the attribute namespace of its instances. Is there any way to do this?
>>> class Foo(object):
...     @classmethod
...     def ugh(cls):
...         return 33
...
>>> Foo.ugh()
33
>>> foo = Foo()
>>> foo.ugh()
33
You could subclass the classmethod descriptor:
class classonly(classmethod):
    def __get__(self, obj, type):
        if obj: raise AttributeError
        return super(classonly, self).__get__(obj, type)
This is how it would behave:
class C(object):
    @classonly
    def foo(cls):
        return 42
>>> C.foo()
42
>>> c=C()
>>> c.foo()
AttributeError
This desugars to the descriptor call (rather, it is invoked by the default implementation of __getattribute__):
>>> C.__dict__['foo'].__get__(None, C)
<bound method C.foo of <class '__main__.C'>>
>>> C.__dict__['foo'].__get__(c, type(c))
AttributeError
Required reading: Data Model — Implementing Descriptors and Descriptor HowTo Guide.
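For reference, a sketch of the same subclass under Python 3 (assuming the zero-argument super form; the answer above targets Python 2.7):
class classonly(classmethod):
    def __get__(self, obj, objtype=None):
        # refuse to bind when accessed through an instance
        if obj is not None:
            raise AttributeError("class-only method accessed on an instance")
        return super().__get__(obj, objtype)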
ugh is not in the namespace:
>>> foo.__dict__
{}
but the rules for attribute lookup fall back to the type of the instance for missing names. You can override Foo.__getattribute__ to prevent this.
class Foo(object):
    @classmethod
    def ugh(cls):
        return 33

    def __getattribute__(self, name):
        if name == 'ugh':
            raise AttributeError("Access to class method 'ugh' blocked from instance")
        return super(Foo, self).__getattribute__(name)
This produces:
>>> foo = Foo()
>>> foo.ugh()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "tmp.py", line 8, in __getattribute__
raise AttributeError("Access to class method 'ugh' block from instance")
AttributeError: Access to class method 'ugh' block from instance
>>> Foo.ugh()
33
You must use __getattribute__, which is called unconditionally on any attribute access, rather than __getattr__, which is only called after the normal lookup (which includes checking the type's namespace) fails.
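To illustrate the difference (a throwaway sketch with made-up class names), a __getattr__ hook never even sees the access, because the classmethod is found on the type first, while __getattribute__ intercepts every access:
class WithGetattr(object):
    @classmethod
    def ugh(cls):
        return 33
    def __getattr__(self, name):          # only called when normal lookup fails
        raise AttributeError(name)

class WithGetattribute(object):
    @classmethod
    def ugh(cls):
        return 33
    def __getattribute__(self, name):     # called for every attribute access
        if name == 'ugh':
            raise AttributeError(name)
        return super(WithGetattribute, self).__getattribute__(name)

print(WithGetattr().ugh())      # 33 -- __getattr__ was never invoked
# WithGetattribute().ugh()      # raises AttributeError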
Python has quasi-private variables that use name mangling to reduce accidental access. Methods and object variables of the form __name are converted to _ClassName__name. Python automatically changes the name when compiling methods on the class but doesn't change the name for subclasses.
I can use the private method in a class
>>> class A(object):
...     def __private(self):
...         print('boo')
...     def hello(self):
...         self.__private()
...
>>>
>>> A().hello()
boo
But not outside the class
>>> A().__private()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'A' object has no attribute '__private'
>>>
Or in subclasses
>>> class B(A):
...     def hello2(self):
...         self.__private()
...
>>>
>>> B().hello()
boo
>>> B().hello2()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in hello2
AttributeError: 'B' object has no attribute '_B__private'
Yes, you can create the method in the metaclass.
class FooMeta(type):
    # No @classmethod here
    def ugh(cls):
        return 33

class Foo(object):
    __metaclass__ = FooMeta

Foo.ugh()   # returns 33
Foo().ugh() # AttributeError
Note that metaclasses are a power feature, and their use is discouraged if unnecessary. In particular, multiple inheritance requires special care if the parent classes have different metaclasses.
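The answer above uses Python 2's __metaclass__ attribute, which Python 3 ignores; under Python 3 the same idea would use the metaclass keyword instead (a quick sketch):
class FooMeta(type):
    def ugh(cls):
        return 33

class Foo(metaclass=FooMeta):
    pass

Foo.ugh()     # 33
# Foo().ugh() # AttributeError: 'Foo' object has no attribute 'ugh'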
When should the following code be used in Python?
(Assume that Baseclass inherits from the Parent class and the Parent class has some variables initialized in its __init__() method.)
class Baseclass(Parent):
    def __init__(self, some_arg):
        self.some_arg = some_arg
        super(Baseclass, self).__init__()
Does this code make all the instance variables defined in the __init__ method of the Parent class accessible in Baseclass? What difference does it make?
super keeps your code from being repetitive; a complex __init__ needn't be copy-pasted into your inheriting classes. It also makes the MRO work as it should, so that if you use multiple inheritance it will work correctly.
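For example, here is a minimal sketch (with invented class names) of the multiple-inheritance point: because every __init__ calls super, each class in the MRO runs exactly once.
class Base(object):
    def __init__(self):
        print("Base")

class Left(Base):
    def __init__(self):
        super(Left, self).__init__()
        print("Left")

class Right(Base):
    def __init__(self):
        super(Right, self).__init__()
        print("Right")

class Child(Left, Right):
    def __init__(self):
        super(Child, self).__init__()
        print("Child")

Child()  # prints Base, Right, Left, Child -- Base's __init__ runs only once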
One reason to do this would be to ensure that all of your inheriting objects have certain attributes which they don't have from the parent. If you simply write a new __init__, they won't have them unless you repeat your code. For example:
>>> class A(object):
...     def __init__(self, x):
...         self.x = x
...
>>> class B(A):
...     def __init__(self, y):
...         self.y = y
...
>>> Stick = B(15)
>>> Stick.x
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'B' object has no attribute 'x'
>>>
Without a call to super during __init__, the parent's method is simply overridden entirely. A call to super here ensures that both variables exist in the inheriting class.
>>> class C(A):
...     def __init__(self, x, y):
...         super(C, self).__init__(x)
...         self.y = y
...
>>> Dave = C(15, 22)
>>> Dave.x
15
>>> Dave.y
22
>>>
Note that in the super call, x is passed to the __init__() call, but self is taken care of in the super(C, self) part of the code.
EDIT: TyrantWave also rightly points out that super is quite useful outside of __init__ too. Take an object with a simple foo method, for example:
class Parent(object):
    def foo(self):
        return "I say foo!"
The inherited class may want to just alter the output of this function instead of totally rewriting it. So instead of repeating ourselves and writing the same code over again, we just call super to get the parent's return value, then work with the data and return the child class's modified results.
class Child(Parent):
    def foo(self):
        parent_result = super(Child, self).foo()
        return "I'm a child!! %s" % parent_result
In the above, the call to super returns the Parent's value for foo(), and the Child then works with that data before returning its own result.
>>> Alan = Parent()
>>> Stan = Child()
>>> Alan.foo()
'I say foo!'
>>> Stan.foo()
"I'm a child!! I say foo!"
>>>
Consider the following case:
class Meta(type):
    def shadowed(cls):
        print "Meta.shadowed()"
    def unshadowed(cls):
        print "Meta.unshadowed()"

class Foo(object):
    __metaclass__ = Meta
    def shadowed(self):
        print "Foo.shadowed()"
I can get the bound method unshadowed on Foo and it works fine:
>>> Foo.unshadowed
<bound method Meta.unshadowed of <class '__main__.Foo'>>
>>> Foo.unshadowed()
Meta.unshadowed()
However, I can't seem to get the bound method shadowed on Foo - instead I get the unbound method, which must be called with an instance of Foo:
>>> Foo.shadowed
<unbound method Foo.shadowed>
>>> Foo.shadowed()
Traceback (most recent call last):
File "<pyshell#45>", line 1, in <module>
Foo.shadowed()
TypeError: unbound method shadowed() must be called with Foo instance as first argument (got nothing instead)
Is there any way to get <bound method Meta.shadowed of <class '__main__.Foo'>>?
It seems one potential answer (maybe not the best) is found in this answer on how to bind unbound methods. So we can do this:
>>> Meta.shadowed.__get__(Foo, Meta)()
Meta.shadowed()
Better demonstration:
class Meta(type):
    def shadowed(cls):
        print "Meta.shadowed() on %s" % (cls.__name__,)
    def unshadowed(cls):
        print "Meta.unshadowed() on %s" % (cls.__name__,)

class Foo(object):
    __metaclass__ = Meta
    def shadowed(self):
        print "Foo.shadowed()"

class Bar(object):
    __metaclass__ = Meta

Bar.unshadowed()                    # Meta.unshadowed() on Bar
Bar.shadowed()                      # Meta.shadowed() on Bar
Foo.unshadowed()                    # Meta.unshadowed() on Foo
# Foo.shadowed()                    # TypeError
Meta.shadowed.__get__(Foo, Meta)()  # Meta.shadowed() on Foo
What would be the most convenient way to create a class whose instances' attributes can't be changed from outside the class (you could still get the value), so that it would be possible to call self.var = v inside the class's methods, but not ClassObject().var = v from outside the class?
I've tried messing with __setattr__(), but if I override it, the name attribute cannot be initialized in the __init__() method. The only way would be to override __setattr__() and use object.__setattr__(), which I am doing at the moment:
class MyClass(object):
    def __init__(self, name):
        object.__setattr__(self, "name", name)

    def my_method(self):
        object.__setattr__(self, "name", self.name + "+")

    def __setattr__(self, attr, value):
        raise Exception("Usage restricted")
Now this solution works, and it's enough, but I was wondering if there's an even better solution. The problem with this one is: I can still call object.__setattr__(MyClass("foo"), "name", "foo_name") from anywhere outside the class.
Is there any way to totally prevent setting the variable to anything from outside of the class?
EDIT: Stupid me for not mentioning that I'm not looking for property here; some of you have already answered with it, but it's not enough for me since it still leaves self._name changeable.
No, you cannot do this in pure Python.
You can, however, use properties to mark your attributes as read-only, backing them with underscore-prefixed 'private' attributes:
class Foo(object):
    def __init__(self, value):
        self._spam = value

    @property
    def spam(self):
        return self._spam
The above code only specifies a getter for the property; Python will not let you set a value for Foo().spam now:
>>> class Foo(object):
...     def __init__(self, value):
...         self._spam = value
...     @property
...     def spam(self):
...         return self._spam
...
>>> f = Foo('eggs')
>>> f.spam
'eggs'
>>> f.spam = 'ham'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: can't set attribute
Of course, you can still access the 'private' _spam attribute from outside:
>>> f._spam
'eggs'
>>> f._spam = 'ham'
>>> f.spam
'ham'
You could use the double underscore convention, where attribute names starting with __ (but not ending with it!) are renamed at compile time. This is not meant to make an attribute inaccessible from the outside; its intent is to protect an attribute from being overwritten by a subclass instead.
class Foo(object):
    def __init__(self, value):
        self.__spam = value

    @property
    def spam(self):
        return self.__spam
You can still access those attributes:
>>> f = Foo('eggs')
>>> f.spam
'eggs'
>>> f.__spam
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'Foo' object has no attribute '__spam'
>>> f._Foo__spam
'eggs'
>>> f._Foo__spam = 'ham'
>>> f.spam
'ham'
There is no strict way of doing encapsulation in Python. The best you can do is prepend two underscores (__) to the attributes intended to be private. This will cause them to be mangled with the class name (_ClassName__AttribName), so if you try to use them in an inherited class, the base class's member won't be referenced. The names are not mangled if you access them via getattr() or setattr(), though.
You can also override __dir__() in order to hide them.
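A small sketch of that __dir__ idea (the attribute name here is only illustrative):
class Foo(object):
    def __init__(self, value):
        self.__spam = value   # stored as _Foo__spam after name mangling

    def __dir__(self):
        # hide the mangled 'private' name from dir(); everything else stays visible
        names = set(dir(type(self))) | set(self.__dict__)
        names.discard('_Foo__spam')
        return sorted(names)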
You can use properties to simulate such behavior, but as Martijn said, it'll still be possible to access the variable directly.
Doing this is often a sign of not embracing Python's philosophy; check this out: Why Python is not full object-oriented?
The properties way:
http://docs.python.org/2/library/functions.html#property
class C(object):
    def __init__(self):
        self._x = None

    def getx(self):
        return self._x

    def setx(self, value):
        raise Exception("Usage restricted")

    x = property(getx, setx)