I have two classes, BaseClass and Subclass (where Subclass is a subclass of BaseClass). In each of these classes, I have a property 'foo' which I'd like to be able to get and set. The way in which 'foo' is set is independent of the class - it's done the same way in both BaseClass and Subclass. However, the way in which we 'get' the property 'foo' is dependent on the class. I would thus like a way to define my 'setter' on BaseClass and have it inherit to Subclass in such a way that it's compatible with the class-dependent implementations of the 'getter' in both Subclass and BaseClass.
My first instinct was to use Python's @property and @property.setter. I quickly ran into issues:
class BaseClass(object):
    def __init__(self):
        self._foo = None

    @property
    def foo(self):
        print 'BaseClass'

    @foo.setter
    def foo(self, val):
        self._foo = val

class Subclass(BaseClass):
    @property
    def foo(self):
        print 'Subclass'
My naive hope was that the foo.setter would be inherited into Subclass and that it would be compatible with the Subclass implementation of the foo property. However:
>>> b = BaseClass()
>>> s = Subclass()
>>> b.foo
BaseClass
>>> s.foo
Subclass
>>> b.foo = 'BaseClass!'
>>> s.foo = 'Subclass!'
Traceback (most recent call last):
  File "<input>", line 1, in <module>
AttributeError: can't set attribute
I believe what is happening here is that the '@foo.setter' is being bound to the BaseClass namespace at class-definition time, and thus is not available to Subclass (although I could be wrong on this point).
Can anyone tell me a neat way of achieving what I want here? It doesn't necessarily need to use the property builtin, but that would be nice.
There are some interesting things going on here. BaseClass.foo is a property object, and the lines

@foo.setter
def foo(self, val):
    self._foo = val

return a modified copy of that object.
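You can see this "modified copy" behaviour directly on a bare property object; a small Python 3 sketch (the names p and p2 are illustrative):

```python
# property.setter does not mutate the property in place:
# it returns a new property object sharing the same getter.
p = property(lambda self: 1)
p2 = p.setter(lambda self, v: None)

print(p is p2)            # False: p2 is a distinct property object
print(p.fget is p2.fget)  # True: the getter is shared
print(p.fset is None)     # True: the original still has no setter
```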
In Subclass, you are redefining this object by recreating the property from scratch, and so you will need to write a new setter or follow Aran-Fey's answer.
I found a similar question that might also help with understanding this idea.
You can use @BaseClass.foo.getter to create a copy of the property with a different getter:
class Subclass(BaseClass):
    @BaseClass.foo.getter
    def foo(self):
        print('Subclass')
See the property documentation for details.
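Putting this together, a runnable Python 3 sketch of the whole approach (the getters return labelled values rather than printing, purely so the behaviour is easy to inspect):

```python
class BaseClass:
    def __init__(self):
        self._foo = None

    @property
    def foo(self):
        # class-dependent getter
        return ('BaseClass', self._foo)

    @foo.setter
    def foo(self, val):
        # shared setter, defined once
        self._foo = val

class Subclass(BaseClass):
    @BaseClass.foo.getter
    def foo(self):
        # new getter, same inherited setter
        return ('Subclass', self._foo)

s = Subclass()
s.foo = 'bar'      # uses the setter defined on BaseClass
print(s.foo)       # ('Subclass', 'bar')
```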
Suppose I have the following code:
class Classy:
    def other(self):
        print("other")

    def method(self):
        print("method")
        self.other()

obj = Classy()
obj.method()
The output:
method
other
So I invoke another method of the class from inside it: the 'other' method is called within the 'method' method.
Now if I run the following code:
class Classy:
    def other(self):
        print("other")

    def method(self):
        print("method")
        Classy.other(self)

obj = Classy()
obj.method()
The output is the same. Now my question is: what is the difference between these two?
Is it just a different style of calling, so they are basically the same, or is there a difference in the logic? If there is, I would be interested in an example where the difference matters.
Let's set it up so we can run them side by side:
class Classy:
    def other(self):
        print("Classy.other")

    def method(self):
        print("Classy.method")
        self.other()

class NotClassy:
    def other(self):
        print("NotClassy.other")

    def method(self):
        print("NotClassy.method")
        NotClassy.other(self)
So far, so good:
>>> Classy().method()
Classy.method
Classy.other
>>> NotClassy().method()
NotClassy.method
NotClassy.other
But what if inheritance gets involved, as it so often does in OOP? Let's define two subclasses that inherit method but override other:
class ClassyToo(Classy):
    def other(self):
        print("ClassyToo.other")

class NotClassyToo(NotClassy):
    def other(self):
        print("NotClassyToo.other")
Then things get a bit problematic: although the subclasses have almost identical implementations, and the parent classes seemed to behave exactly the same, the outputs here are different:
>>> ClassyToo().method()
Classy.method
ClassyToo.other
>>> NotClassyToo().method()
NotClassy.method
NotClassy.other # what about NotClassyToo.other??
By calling NotClassy.other directly, rather than invoking the method on self, we've bypassed the overridden implementation in NotClassyToo. self is not always an instance of the class a method is defined in, which is also why you see super being used: your classes should cooperate in inheritance to ensure the right behaviour.
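The fix is therefore to dispatch through self rather than naming the class explicitly; a minimal sketch of a corrected NotClassy (using return values instead of prints so the difference is easy to check):

```python
class NotClassy:
    def other(self):
        return "NotClassy.other"

    def method(self):
        # dispatching through self honours overrides in subclasses
        return self.other()

class NotClassyToo(NotClassy):
    def other(self):
        return "NotClassyToo.other"

print(NotClassyToo().method())  # NotClassyToo.other
```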
Let me start off by saying that I understand how slots and metaclasses work in Python. Playing around with the two, I've run into an interesting problem. Here's a minimal example:
def decorator(cls):
    dct = dict(cls.__dict__)
    dct['__slots__'] = ('y',)
    return type('NewClass', cls.__bases__, dct)

@decorator
class A(object):
    __slots__ = ('x',)

    def __init__(self):
        self.x = 'xx'

A()
This produces the following exception:
Traceback (most recent call last):
  File "p.py", line 12, in <module>
    A()
  File "p.py", line 10, in __init__
    self.x = 'xx'
TypeError: descriptor 'x' for 'A' objects doesn't apply to 'NewClass' object
Now, I know why this happens: the descriptor created for the slot x must be able to reference the reserved space for the slot. Only instances of class A, or instances of subclasses of A, have this reserved space, and therefore only those instances can use the descriptor x. In the above example, the decorator creates a new type that is a subclass of A's base classes, but not of A itself, so we get the exception. Simple enough.
Of course, in this simple example, either of the following two definitions of decorator will work around the problem:
def decorator(cls):
    dct = dict(cls.__dict__)
    dct['__slots__'] = ('y',)
    return type('NewClass', (cls,) + cls.__bases__, dct)

def decorator(cls):
    class NewClass(cls):
        __slots__ = ('y',)
    return NewClass
But these workarounds aren't exactly the same as the original, as they both add A as a base class. They can fail in a more complicated setup. For example, if the inheritance tree is more complicated, you might run into the following exception: TypeError: multiple bases have instance lay-out conflict.
So my very specific question is:
Is there a way to create a new class, via a call to type, that modifies the __slots__ attribute of an existing class but does not add the existing class as a base class of the new class?
Edit:
I know that metaclasses are another workaround for my examples above. There are lots of ways to make the minimal examples work, but my question is about creating a class, via a call to type, that is based on an existing class, not about how to make the examples work. Sorry for the confusion.
Edit 2:
Discussion in the comments has led me to a more precise question than the one I originally asked:
Is it possible to create a class, via a call to type, that uses the slots and descriptors of an existing class without being a descendant of that class?
If the answer is "no", I'd appreciate a source as to why not.
No, unfortunately there is no way to do anything with __slots__ after the class is created (and by the time a class decorator runs, the class already exists). The only way is to use a metaclass and modify/add __slots__ before calling type.__new__.
An example of such a metaclass:
class MetaA(type):
    def __new__(mcls, name, bases, dct):
        slots = set(dct.get('__slots__', ()))
        slots.add('y')
        dct['__slots__'] = tuple(slots)
        return super().__new__(mcls, name, bases, dct)

class BaseA(metaclass=MetaA):
    pass

class A(BaseA):
    __slots__ = ('x',)

    def __init__(self):
        self.x = 1
        self.y = 2

print(A().x, A().y)
Without metaclasses, you can do some magic and copy everything from the defined class and create a new one on the fly, but that code smells ;)
def decorator(cls):
    slots = set(cls.__slots__)
    slots.add('y')
    dct = cls.__dict__.copy()
    for name in cls.__slots__:
        dct.pop(name)
    dct['__slots__'] = tuple(slots)
    return type(cls)(cls.__name__, cls.__bases__, dct)

@decorator
class A:
    __slots__ = ('x',)

    def __init__(self):
        self.x = self.y = 42

print(A().x, A().y)
The main disadvantage of such code is that if someone applies another decorator before yours and, say, stores a reference to the decorated class somewhere, they will end up holding a reference to a different class. The same goes for metaclasses: they will execute twice. So the metaclass approach is better, since it has no such side effects.
The definitive answer to why you can't really change __slots__ after the class is created depends on implementation details of the Python interpreter you're working with. For instance, in CPython, for each slot you define, the class gets a descriptor (see PyMemberDescr_Type and the PyMemberDef struct in the CPython source code) that holds an offset parameter giving where the slot value is aligned in the internal object storage. And you simply have no instruments for manipulating such things in the public Python API. You trade flexibility for lower memory usage (again, in CPython; in PyPy you get the same memory benefit automatically for all your classes).
If modification of __slots__ is absolutely required, you can probably write a C extension (or use ctypes) to do it, but that's hardly a reliable solution.
You can do that with metaclasses:
class MetaSlot(type):
    def __new__(mcs, name, bases, dic):
        dic['__slots__'] += ('y',)
        return type.__new__(mcs, name, bases, dic)

class C(metaclass=MetaSlot):  # Python 3 syntax
    __slots__ = ('x',)
Now both x and y can be used:
>>> c = C()
>>> c.y = 10
>>> c.x = 10
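The same example as a self-contained script, with an extra check confirming that undeclared attributes are still rejected (i.e. the metaclass extended __slots__ without creating a __dict__):

```python
class MetaSlot(type):
    def __new__(mcs, name, bases, dic):
        # extend the slots tuple before the class object is built
        dic['__slots__'] += ('y',)
        return type.__new__(mcs, name, bases, dic)

class C(metaclass=MetaSlot):
    __slots__ = ('x',)

c = C()
c.x = 1
c.y = 2          # both slots are usable

try:
    c.z = 3      # not declared: __slots__ is still enforced
except AttributeError:
    print("no slot for z")
```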
Is there a reasonable way in Python to implement mixin behavior similar to that found in Ruby -- that is, without using inheritance?
class Mixin(object):
    def b(self): print "b()"
    def c(self): print "c()"

class Foo(object):
    # Somehow mix in the behavior of the Mixin class,
    # so that all of the methods below will run and
    # the issubclass() test will be False.
    def a(self): print "a()"

f = Foo()
f.a()
f.b()
f.c()
print issubclass(Foo, Mixin)
I had a vague idea to do this with a class decorator, but my attempts led to confusion. Most of my searches on the topic have led in the direction of using inheritance (or in more complex scenarios, multiple inheritance) to achieve mixin behavior.
def mixer(*args):
    """Decorator for mixing mixins"""
    def inner(cls):
        for a, k in ((a, k) for a in args for k, v in vars(a).items() if callable(v)):
            setattr(cls, k, getattr(a, k).im_func)
        return cls
    return inner

class Mixin(object):
    def b(self): print "b()"
    def c(self): print "c()"

class Mixin2(object):
    def d(self): print "d()"
    def e(self): print "e()"

@mixer(Mixin, Mixin2)
class Foo(object):
    # Somehow mix in the behavior of the Mixin class,
    # so that all of the methods below will run and
    # the issubclass() test will be False.
    def a(self): print "a()"

f = Foo()
f.a()
f.b()
f.c()
f.d()
f.e()
print issubclass(Foo, Mixin)
output:
a()
b()
c()
d()
e()
False
You can add the methods as functions:
Foo.b = Mixin.b.im_func
Foo.c = Mixin.c.im_func
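Note that im_func is Python 2 only. In Python 3, a function accessed through the class is already a plain function, so the assignment works without it; a sketch (using return values instead of prints so it is easy to check):

```python
class Mixin:
    def b(self):
        return "b()"
    def c(self):
        return "c()"

class Foo:
    def a(self):
        return "a()"

# In Python 3, Mixin.b is already a plain function: no im_func needed
Foo.b = Mixin.b
Foo.c = Mixin.c

f = Foo()
print(f.b(), f.c())              # b() c()
print(issubclass(Foo, Mixin))    # False
```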
I am not that familiar with Python, but from what I know about Python metaprogramming, you could actually do it pretty much the same way it is done in Ruby.
In Ruby, a module basically consists of two things: a pointer to a method dictionary and a pointer to a constant dictionary. A class consists of three things: a pointer to a method dictionary, a pointer to a constant dictionary and a pointer to the superclass.
When you mix in a module M into a class C, the following happens:
an anonymous class α is created (this is called an include class)
α's method dictionary and constant dictionary pointers are set equal to M's
α's superclass pointer is set equal to C's
C's superclass pointer is set to α
In other words: a fake class which shares its behavior with the mixin is injected into the inheritance hierarchy. So, Ruby actually does use inheritance for mixin composition.
I left out a couple of subtleties above: first off, the module doesn't actually get inserted as C's superclass; it gets inserted as the superclass of C's singleton class. And secondly, if the mixin itself has mixed in other mixins, then those also get wrapped into fake classes which get inserted directly above α, and this process is applied recursively, in case the mixed-in mixins in turn have mixins.
Basically, the whole mixin hierarchy gets flattened into a straight line and spliced into the inheritance chain.
AFAIK, Python actually allows you to change a class's superclass(es) after the fact (something which Ruby does not allow you to do), and it also gives you access to a class's dict (again, something that is impossible in Ruby), so you should be able to implement this yourself.
EDIT: Fixed what could (and probably should) be construed as a bug. Now it builds a new dict and then updates that from the class's dict. This prevents mixins from overwriting methods that are defined directly on the class. The code is still untested but should work. I'm busy ATM so I'll test it later. It worked fine except for a syntax error. In retrospect, I decided that I don't like it (even after my further improvements) and much prefer my other solution, even if it is more complicated. The test code for that one applies here as well, but I won't duplicate it.
You could use a metaclass factory:
import inspect

def add_mixins(*mixins):
    Dummy = type('Dummy', mixins, {})
    d = {}
    for mixin in reversed(inspect.getmro(Dummy)):
        d.update(mixin.__dict__)

    class WithMixins(type):
        def __new__(meta, classname, bases, classdict):
            d.update(classdict)
            return super(WithMixins, meta).__new__(meta, classname, bases, d)

    return WithMixins
then use it like:
class Foo(object):
    __metaclass__ = add_mixins(Mixin1, Mixin2)
    # rest of the stuff
This one is based on the way it's done in Ruby, as explained by Jörg W Mittag. Everything after if __name__ == '__main__' is test/demo code; there are actually only 13 lines of real code.
import inspect

def add_mixins(*mixins):
    Dummy = type('Dummy', mixins, {})
    d = {}

    # Now get all the class attributes. Use reversed so that conflicts
    # are resolved with the proper priority. This rules out the possibility
    # of the mixins calling methods from their base classes that get overridden
    # using super but is necessary for the subclass check to fail. If that wasn't a
    # requirement, we would just use Dummy above (or use MI directly and
    # forget all the metaclass stuff).
    for base in reversed(inspect.getmro(Dummy)):
        d.update(base.__dict__)

    # Create the mixin class. This should be equivalent to creating the
    # anonymous class in Ruby.
    Mixin = type('Mixin', (object,), d)

    class WithMixins(type):
        def __new__(meta, classname, bases, classdict):
            # The check below prevents an inheritance cycle from forming which
            # leads to a TypeError when trying to inherit from the resulting
            # class.
            if not any(issubclass(base, Mixin) for base in bases):
                # This should be the equivalent of setting the superclass
                # pointers in Ruby.
                bases = (Mixin,) + bases
            return super(WithMixins, meta).__new__(meta, classname, bases,
                                                   classdict)

    return WithMixins

if __name__ == '__main__':

    class Mixin1(object):
        def b(self): print "b()"
        def c(self): print "c()"

    class Mixin2(object):
        def d(self): print "d()"
        def e(self): print "e()"

    class Mixin3Base(object):
        def f(self): print "f()"

    class Mixin3(Mixin3Base): pass

    class Foo(object):
        __metaclass__ = add_mixins(Mixin1, Mixin2, Mixin3)
        def a(self): print "a()"

    class Bar(Foo):
        def f(self): print "Bar.f()"

    def test_class(cls):
        print "Testing {0}".format(cls.__name__)
        f = cls()
        f.a()
        f.b()
        f.c()
        f.d()
        f.e()
        f.f()
        print (issubclass(cls, Mixin1) or
               issubclass(cls, Mixin2) or
               issubclass(cls, Mixin3))

    test_class(Foo)
    test_class(Bar)
You could decorate the class's __getattr__ to check in the mixin. The problem is that all methods of the mixin would always require an object of the mixin's type as their first parameter, so you would have to decorate __init__ as well to create a mixin object. I believe you could achieve this using a class decorator.
from functools import partial

class Mixin(object):
    @staticmethod
    def b(self): print "b()"

    @staticmethod
    def c(self): print "c()"

class Foo(object):
    def __init__(self, mixin_cls):
        self.delegate_cls = mixin_cls

    def __getattr__(self, attr):
        if hasattr(self.delegate_cls, attr):
            return partial(getattr(self.delegate_cls, attr), self)

    def a(self): print "a()"

f = Foo(Mixin)
f.a()
f.b()
f.c()
print issubclass(Foo, Mixin)
This basically uses the Mixin class as a container for ad-hoc functions (not methods) that behave like methods by taking an object instance (self) as their first argument. __getattr__ redirects lookups of missing attributes to these method-like functions.
This passes your simple tests as shown below. But I cannot guarantee it will do all the things you want; run more thorough tests to make sure.
$ python mixin.py
a()
b()
c()
False
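In Python 3 the staticmethod wrappers become unnecessary, because functions looked up on a class are already plain functions. A rewrite of the same idea under that assumption (returning strings instead of printing so the behaviour is easy to check):

```python
from functools import partial

class Mixin:
    def b(self):
        return "b()"
    def c(self):
        return "c()"

class Foo:
    def __init__(self, mixin_cls):
        self.delegate_cls = mixin_cls

    def __getattr__(self, attr):
        # only called for attributes not found through normal lookup
        func = getattr(self.delegate_cls, attr, None)
        if func is None:
            raise AttributeError(attr)
        return partial(func, self)  # bind self as the first argument

    def a(self):
        return "a()"

f = Foo(Mixin)
print(f.a(), f.b(), f.c())       # a() b() c()
print(issubclass(Foo, Mixin))    # False
```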
Composition? It seems like that would be the simplest way to handle this: either wrap your object in a decorator or just import the methods as an object into your class definition itself. This is what I usually do: I put the methods I want to share between classes in a file and then import it. If I want to override some behavior, I import a modified file with the same method names under the same object name. It's a little sloppy, but it works.
For example, if I want the init_covers behavior from this file (bedg.py)
import cove as cov

def init_covers(n):
    n.covers.append(cov.Cover((set([n.id]))))
    id_list = []
    for a in n.neighbors:
        id_list.append(a.id)
    n.covers.append(cov.Cover((set(id_list))))

def update_degree(n):
    for a in n.covers:
        a.degree = 0
        for b in n.covers:
            if a != b:
                a.degree += len(a.node_list.intersection(b.node_list))
In my bar class file I would do import bedg as foo,
and then if I want to change the foo behaviors in another class that inherits from bar, I write
import bild as foo
Like I say, it is sloppy.
I already know the difference between an old-style class (class Foo() ...) and a new-style class (class Foo(object) ...). But what is the difference between this:
class Foo(object):
    def __repr__(self):
        return 'foo'
and
class Foo(object):
    def __repr__(object):
        return 'foo'
Thanks.
The difference is that in one case you called the variable that holds the instance self, and in the other case you called it object. That's the only difference.
The self variable is explicit in Python, and you can call it whatever you want. self is just the convention everyone uses for readability.
For example, this works just fine:
>>> class Foo(object):
...     def __init__(bippity, colour):
...         bippity.colour = colour
...     def get_colour(_):
...         return _.colour
...
>>> f = Foo('Blue')
>>> f.get_colour()
'Blue'
But it's pretty damn confusing. :)
This is like saying:
class Foo(object):
    def __init__(self):
        self.a = "foo"

    def __repr__(bar):
        return bar.a
The variable name bar has no meaning whatsoever. It is just a reference to self.
As others have pointed out, the name of the first parameter of a method is merely a convention; you can name it anything you want. BUT DON'T. Always name it self, or you will confuse everyone. In particular, your example names it object, which shadows a built-in name, and so is doubly bad.
Assuming you know about Python builtin property: http://docs.python.org/library/functions.html#property
I want to re-set an object property in this way, but I need to do it inside a method to be able to pass some arguments to it. Currently all the web examples of property() define the property outside of any method, and trying the obvious...
def alpha(self, beta):
    self.x = property(beta)
...seems not to work. I'd be glad if you could show me my conceptual error, or alternative solutions that avoid subclassing (my code is actually already over-subclassed) or decorators (that's the solution I'll use if there is no other).
Thanks.
Properties work using the descriptor protocol, which only works on attributes of a class object. The property object has to be stored in a class attribute. You can't "override" it on a per-instance basis.
You can, of course, provide a property on the class that gets an instance attribute or falls back to some default:
class C(object):
    _default_x = 5
    _x = None

    @property
    def x(self):
        return self._x or self._default_x

    def alpha(self, beta):
        self._x = beta
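A quick check of the fallback behaviour (note that, because of the or, a falsy value such as 0 or an empty string would also fall back to the default):

```python
class C:
    _default_x = 5
    _x = None

    @property
    def x(self):
        # fall back to the class-level default while _x is unset
        return self._x or self._default_x

    def alpha(self, beta):
        self._x = beta

c = C()
print(c.x)        # 5: the default
c.alpha(7)
print(c.x)        # 7: the per-instance value now shadows the default
```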
In this case all you need to do in your alpha() is self.x = beta. Use property when you want to implement getters and setters for an attribute, for example:
class Foo(object):
    @property
    def foo(self):
        return self._dblookup('foo')

    @foo.setter
    def foo(self, value):
        self._dbwrite('foo', value)
And then be able to do
f = Foo()
f.foo
f.foo = bar