My confusion is with the interplay between dataclasses & __init_subclass__.
I am trying to implement a base class that will exclusively be inherited from. In this example, A is the base class. It is my understanding from reading the python docs on dataclasses that simply adding a decorator should automatically create some special dunder methods for me. Quoting their docs:
For example, this code:
from dataclasses import dataclass

@dataclass
class InventoryItem:
    """Class for keeping track of an item in inventory."""
    name: str
    unit_price: float
    quantity_on_hand: int = 0

    def total_cost(self) -> float:
        return self.unit_price * self.quantity_on_hand
will add, among other things, a __init__() that looks like:
def __init__(self, name: str, unit_price: float, quantity_on_hand: int = 0):
    self.name = name
    self.unit_price = unit_price
    self.quantity_on_hand = quantity_on_hand
These are instance variables, no? The classes docs show a toy example, which reads super clear.
class Dog:
    kind = 'canine'          # class variable shared by all instances

    def __init__(self, name):
        self.name = name     # instance variable unique to each instance
A main gap in my understanding is - is it an instance variable or a class variable? From my testing below, it is a class variable, but from the docs, it shows an instance variable as its proximal implementation. It may be that most of my problem is there. I've also read the python docs on classes, which do not go into dataclasses.
The problem continues with the seemingly limited docs on __init_subclass__, which yields another gap in my understanding. I am also making use of __init_subclass__, in order to enforce that my subclasses have indeed instantiated the variable x.
Below, we have A, which has an instance variable x set to None. B, C, and D all subclass A, in different ways (hoping) to determine implementation specifics.
B inherits from A, setting a class variable of x.
D is a dataclass, which inherits from A, setting what would appear to be a class variable of x. However, given their docs from above, it seems that the class variable x of D should be created as an instance variable. Thus, when D is created, it should first call __init_subclass__; in that function, it will check to see if x exists in D. By my understanding, it should not; however, the code passes scot-free. I believe D() will create x as an instance variable because the dataclass docs show that this will create an __init__ for the user.
"will add, among other things..." <insert __init__ code>
I must be wrong here but I'm struggling to put it together.
import dataclasses

class A:
    def __init__(self):
        self.x = None

    def __init_subclass__(cls):
        if not getattr(cls, 'x') or not cls.x:
            raise TypeError(
                f'Cannot instantiate {cls.__name__}, as all subclasses of {cls.__base__.__name__} must set x.'
            )

class B(A):
    x = 'instantiated-in-b'

@dataclasses.dataclass
class D(A):
    x: str = 'instantiated-in-d'

class C(A):
    def __init__(self):
        self.x = 'instantiated-in-c'

print('B', B())
print('D', D())
print('C', C())
The code, per my expectation, properly fails with C(). Executing the above code will succeed with D, which does not compute for me. In my understanding (which is wrong), I am defining a field, which means that dataclass should expand my class variables as instance variables. (The previous statement is most probably where I am wrong, but I cannot find anything that documents this behavior. Are data classes not actually expanding class variables as instance variables? It certainly appears that way from the visual explanation in their docs.) From the dataclass docs:
The dataclass() decorator examines the class to find fields. A field is defined as a class variable that has a type annotation.
Thus - why - when creating an instance D() - does it slide past the __init_subclass__ of its parent A?
Apologies for the lengthy post, I must be missing something simple, so if one can point me in the right direction, that would be excellent. TIA!
I have just found the implementation of dataclasses in the CPython GitHub repository.
__init_subclass__ is called when initializing a subclass. Not when initializing an instance of a subclass - it's called when initializing the subclass itself. Your exception occurs while trying to create the C class, not while trying to evaluate C().
Decorators, such as @dataclass, are a post-processing mechanism, not a pre-processing mechanism. A class decorator takes an existing class that has already gone through all the standard initialization, including __init_subclass__, and modifies the class. Since this happens after __init_subclass__, __init_subclass__ doesn't see any of the modifications that @dataclass performs.
Even if the decorator were to be applied first, D still would have passed the check in A.__init_subclass__, because the dataclass decorator will set D.x to the default value of the x field anyway, so __init_subclass__ will find a value of x. In this case, that happens to be the same thing you set D.x to in the original class definition, but it can be a different object in cases where you construct field objects explicitly.
(Also, you probably wanted to write hasattr instead of getattr in not getattr(cls, 'x').)
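To make the timing concrete, here is a minimal sketch (class names hypothetical) showing both points: __init_subclass__ fires when the subclass statement executes, before the decorator is applied, and the field default is visible as a class attribute either way.

import dataclasses

class Base:
    def __init_subclass__(cls):
        print(f'__init_subclass__ running for {cls.__name__}')
        if not hasattr(cls, 'x'):  # hasattr, as suggested above
            raise TypeError(f'{cls.__name__} must set x')

@dataclasses.dataclass
class Good(Base):  # the print fires here, before @dataclass runs
    x: str = 'default'

# class Bad(Base):  # would raise TypeError here, at class definition time
#     pass

print('x' in Good.__dict__)  # True: the default stays as a class attribute
print(Good().x)              # 'default': __init__ also sets an instance attribute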
I'm using Python dataclasses with inheritance and I would like to make an inherited abstract property into a required constructor argument. Using an inherited abstract property as an optional constructor argument works as expected, but I've been having real trouble making the argument required.
Below is a minimal working example, test_1() fails with TypeError: Can't instantiate abstract class Child1 with abstract methods inherited_attribute, test_2() fails with AttributeError: can't set attribute, and test_3() works as promised.
Does anyone know a way I can achieve this behavior while still using dataclasses?
import abc
import dataclasses

@dataclasses.dataclass
class Parent(abc.ABC):
    @property
    @abc.abstractmethod
    def inherited_attribute(self) -> int:
        pass

@dataclasses.dataclass
class Child1(Parent):
    inherited_attribute: int

@dataclasses.dataclass
class Child2(Parent):
    inherited_attribute: int = dataclasses.field()

@dataclasses.dataclass
class Child3(Parent):
    inherited_attribute: int = None

def test_1():
    Child1(42)

def test_2():
    Child2(42)

def test_3():
    Child3(42)
So, the thing is, you declared an abstract property. Not an abstract constructor argument, or an abstract instance dict entry - abc has no way to specify such things.
Abstract properties are really supposed to be overridden by concrete properties, but the abc machinery will consider it overridden if there is a non-abstract entry in the subclass's class dict.
Your Child1 doesn't create a class dict entry for inherited_attribute - the annotation only creates an entry in the annotation dict.
Child2 does create an entry in the class dict, but then the dataclass machinery removes it, because it's a field with no default value. This changes the abstractness status of Child2, which is undefined behavior below Python 3.10; Python 3.10 added abc.update_abstractmethods to support it, and dataclasses uses that function on Python 3.10+.
Child3 creates an entry in the class dict, and since the dataclass machinery sees this entry as a default value, it leaves the entry there, so the abstract property is considered overridden.
So you've got a few courses of action here. The first is to remove the abstract property. You don't want to force your subclasses to have a property - you want your subclasses to have an accessible inherited_attribute instance attribute, and it's totally fine if this attribute is implemented as an instance dict entry. abc doesn't support that, and using an abstract property is wrong, so just document the requirement instead of trying to use abc to enforce it.
With the abstract property removed, Parent isn't actually abstract any more, and in fact doesn't really do anything, so at that point, you can just take Parent out entirely.
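For illustration, option 1 might look like the following minimal sketch (names hypothetical; no abc at all, the requirement is documented rather than enforced):

import dataclasses

@dataclasses.dataclass
class Child:
    inherited_attribute: int  # required constructor argument

Child(42)   # works
# Child()   # TypeError: missing required argument 'inherited_attribute'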
Option 2, if you really want to stick with the abstract property, would be to give your subclasses a concrete property, properly overriding the abstract property:
@dataclasses.dataclass
class Child(Parent):
    _hidden_field: int

    @property
    def inherited_attribute(self):
        return self._hidden_field
This would require you to give the field a different name from the attribute name you wanted, with consequences for the constructor argument names, the repr output, and anything else that cares about field names.
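Usage would look roughly like this (assuming the Parent from the question):

c = Child(42)                  # the constructor argument is _hidden_field
print(c.inherited_attribute)   # 42, served by the concrete property
print(c)                       # Child(_hidden_field=42): repr shows the field name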
The third option is to get something else into the class dict to shadow the inherited_attribute name, in a way that doesn't get treated as a default value. Python 3.10 added slots support in dataclasses, so you could do
@dataclasses.dataclass(slots=True)
class Child(Parent):
    inherited_attribute: int
and the generated slot descriptor would shadow the abstract property, without being treated as a default value. However, this would not give the usual memory savings of slots, because your classes inherit from Parent, which doesn't use slots.
Overall, I would recommend option 1. Abstract properties don't mean what you want, so just don't use them.
Answering my own question since I just found another option besides those listed in @user2357112's excellent answer.
What seems to work is setting the default value of the field to dataclasses.MISSING like in the following example:
@dataclasses.dataclass
class Child4(Parent):
    inherited_attribute: int = dataclasses.MISSING
This might be better than @user2357112's Option 3 since it actually raises a TypeError: Child4.__init__() missing 1 required positional argument: 'inherited_attribute' if the value of inherited_attribute is missing, instead of silently setting it to the property Parent.inherited_attribute.
This is probably more of a hack than a real solution since the documentation of dataclasses.field() says that "No code should directly use the MISSING value."
I am writing a CustomEnum class in which I want to add some helper methods, that would then be available by the classes subclassing my CustomEnum. One of the methods is to return a random enum value, and this is where I am stuck. The function works as expected, but on the type-hinting side, I cannot figure out a way of saying "the return type is the same type of cls".
I am fairly sure there's some TypeVar or similar magic involved, but since I never had to use them I never took the time to figure them out.
class CustomEnum(Enum):
    @classmethod
    def random(cls) -> ???:
        return random.choice(list(cls))

class SubclassingEnum(CustomEnum):
    A = "a"
    B = "b"

random_subclassing_enum: SubclassingEnum
random_subclassing_enum = SubclassingEnum.random()  # Incompatible types in assignment (expression has type "CustomEnum", variable has type "SubclassingEnum")
Can somebody help me or give me a hint on how to proceed?
Thanks!
The syntax here is kind of horrible, but I don't think there's a cleaner way to do this. The following passes MyPy:
from typing import TypeVar
from enum import Enum
import random

T = TypeVar("T", bound="CustomEnum")

class CustomEnum(Enum):
    @classmethod
    def random(cls: type[T]) -> T:
        return random.choice(list(cls))
(In python versions <= 3.8, you have to use typing.Type rather than the builtin type if you want to parameterise it.)
What's going on here?
T is defined at the top as being a type variable that is "bound" to the CustomEnum class. This means that a variable annotated with T can only be an instance of CustomEnum or an instance of a class inheriting from CustomEnum.
In the classmethod above, we're actually using this type-variable to define the type of the cls parameter with respect to the return type. Usually we do the opposite — we usually define a function's return types with respect to the types of that function's input parameters. So it's understandable if this feels a little mind-bending!
We're saying: this method leads to instances of a class — we don't know what the class will be, but we know it will either be CustomEnum or a class inheriting from CustomEnum. We also know that whatever class is returned, we can guarantee that the type of the cls parameter in the function will be "one level up" in the type hierarchy from the type of the return value.
In a lot of situations, we might know that type[cls] will always be a fixed value. In those situations, it would be possible to hardcode that into the type annotations. However, it's best not to do so, and instead to use this method, which clearly shows the relationship between the type of the input and the return type (even if it uses horrible syntax to do so!).
Further reading: the MyPy documentation on the type of class objects.
Further explanation and examples
For the vast majority of classes (not with Enums, they use metaclasses, but let's leave that aside for the moment), the following will hold true:
Example 1
class A:
    pass

instance_of_a = A()

type(instance_of_a) == A  # True
type(A) == type           # True
Example 2
class B:
pass
instance_of_b = B()
type(instance_of_b) == B # True
type(B) == type # True
For the cls parameter of your CustomEnum.random() method, we're annotating the equivalent of A rather than instance_of_a in my Example 1 above.
The type of instance_of_a is A.
But the type of A is not A — A is a class, not an instance of a class.
Classes are not instances of classes; they are either instances of type or instances of custom metaclasses that inherit from type.
No metaclasses are being used here; ergo, the type of A is type.
The rule is as follows:
The type of all python class instances will be the class they're an instance of.
The type of all python classes will be either type or (if you're being too clever for your own good) a custom metaclass that inherits from type.
With your CustomEnum class, we could annotate the cls parameter with the metaclass that the enum module uses (enum.EnumType, if you want to know). But, as I say — best not to. The solution I've suggested illustrates the relationship between the input type and the return type more clearly.
Starting in Python 3.11, the correct return annotation for this code is Self:
from typing import Self

class CustomEnum(Enum):
    @classmethod
    def random(cls) -> Self:
        return random.choice(list(cls))
Quoting from the PEP:
This PEP introduces a simple and intuitive way to annotate methods that return an instance of their class. This behaves the same as the TypeVar-based approach specified in PEP 484 but is more concise and easier to follow.
The current workaround for this is unintuitive and error-prone:
Self = TypeVar("Self", bound="Shape")

class Shape:
    @classmethod
    def from_config(cls: type[Self], config: dict[str, float]) -> Self:
        return cls(config["scale"])
We propose using Self directly:
from typing import Self

class Shape:
    @classmethod
    def from_config(cls, config: dict[str, float]) -> Self:
        return cls(config["scale"])
This avoids the complicated cls: type[Self] annotation and the TypeVar declaration with a bound. Once again, the latter code behaves equivalently to the former code.
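One practical note: on interpreters older than 3.11, the same annotation is available from the typing_extensions backport package, e.g.:

# Backport for Python < 3.11 (requires the typing_extensions package):
from typing_extensions import Self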
I wrote a class that can handle integers with arbitrary precision (just for learning purposes). The class takes a string representation of an integer and converts it into an instance of BigInt for further calculations.
Oftentimes you need the numbers Zero and One, so I thought it would be helpful if the class could return these. I tried the following:
class BigInt():
    zero = BigInt("0")

    def __init__(self, value):
        ####yada-yada####
This doesn't work. Error: "name 'BigInt' is not defined"
Then I tried the following:
class BigInt():
    __zero = None

    @staticmethod
    def zero():
        if BigInt.__zero is None:
            BigInt.__zero = BigInt('0')
        return BigInt.__zero

    def __init__(self, value):
        ####yada-yada####
This actually works very well. What I don't like is that zero is a method (and thus has to be called with BigInt.zero()) which is counterintuitive since it should just refer to a fixed value.
So I tried changing zero to become a property, but then writing BigInt.zero returns an instance of the class property instead of BigInt because of the decorator used. That instance cannot be used for calculations because of the wrong type.
Is there a way around this issue?
A static property...? We call a static property an "attribute". This is not Java; Python is a dynamically typed language, and such a construct would really overcomplicate matters.
Just do this, setting a class attribute:
class BigInt:
    def __init__(self, value):
        ...

BigInt.zero = BigInt("0")
If you want it to be entirely defined within the class, do it using a decorator (but be aware it's just a more fancy way of writing the same thing).
def add_zero(cls):
    cls.zero = cls("0")
    return cls

@add_zero
class BigInt:
    ...
The question is contradictory: static and property don't go together in this way. Static attributes in Python are simply ones that are only assigned once, and the language itself includes a very large number of these. (Most strings are interned, all integers below a certain value are pre-constructed, etc.; e.g. the string module.) The easiest approach is to statically assign the attributes after construction, as wim illustrates:
class Foo:
    ...

Foo.first = Foo()
...
Or, as he further suggested, using a class decorator to perform the assignments, which is functionally the same as the above. A decorator is, effectively, a function that is given the "decorated" function as an argument and must return a function to replace the original one. This may be the original function, say, modified with some annotations, or may be an entirely different function. The original (decorated) function may or may not be called, as appropriate for the decorator.
def preload(**values):
    def inner(cls):
        for k, v in values.items():
            setattr(cls, k, cls(v))
        return cls
    return inner
This can then be used dynamically:
@preload(zero=0, one=1)
class Foo:
    ...
If the purpose is to save some time on common integer values, a defaultdict mapping integers to constructed BigInts could be useful as a form of caching and streamlined construction / singleton storage. (E.g. BigInt.numbers[27])
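As a sketch of that caching idea (names hypothetical; note that a plain defaultdict won't quite work here, because its factory never sees the key, so a dict subclass with __missing__ is used instead):

class _BigIntCache(dict):
    def __missing__(self, key):
        # construct on first access, then reuse the same instance
        self[key] = BigInt(str(key))
        return self[key]

BigInt.numbers = _BigIntCache()

x = BigInt.numbers[27]           # constructed once
assert BigInt.numbers[27] is x   # cached thereafter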
However, the problem of utilizing @property at the class level intrigued me, so I did some digging. It is entirely possible to make use of "descriptor protocol objects" (which the @property decorator returns) at the class level if you punt the attribute up the object model hierarchy, to the metaclass.
class Foo(type):
    @property
    def bar(cls):
        print("I'm a", cls)
        return 27

class Bar(metaclass=Foo):
    ...
>>> Bar.bar
I'm a <class '__main__.Bar'>
27
Notably, this attribute is not accessible from instances:
>>> Bar().bar
AttributeError: 'Bar' object has no attribute 'bar'
Hope this helps!
I have class Base. I'd like to extend its functionality in a class Derived. I was planning to write:
class Derived(Base):
    def __init__(self, base_arg1, base_arg2, derived_arg1, derived_arg2):
        super().__init__(base_arg1, base_arg2)
        # ...

    def derived_method1(self):
        # ...
Sometimes I already have a Base instance, and I want to create a Derived instance based on it, i.e., a Derived instance that shares the Base object (doesn't re-create it from scratch). I thought I could write a static method to do that:
b = Base(arg1, arg2) # very large object, expensive to create or copy
d = Derived.from_base(b, derived_arg1, derived_arg2) # reuses existing b object
but it seems impossible. Either I'm missing a way to make this work, or (more likely) I'm missing a very big reason why it can't be allowed to work. Can someone explain which one it is?
[Of course, if I used composition rather than inheritance, this would all be easy to do. But I was hoping to avoid the delegation of all the Base methods to Derived through __getattr__.]
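For reference, the delegation I'm trying to avoid would look something like this sketch:

class Derived:
    def __init__(self, base, derived_arg1, derived_arg2):
        self._base = base
        self.derived_arg1 = derived_arg1
        self.derived_arg2 = derived_arg2

    def __getattr__(self, name):
        # only called when normal attribute lookup fails,
        # so Derived's own attributes take precedence
        return getattr(self._base, name)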
Rely on what your Base class is doing with base_arg1, base_arg2.
class Base(object):
    def __init__(self, base_arg1, base_arg2):
        self.base_arg1 = base_arg1
        self.base_arg2 = base_arg2
    ...

class Derived(Base):
    def __init__(self, base_arg1, base_arg2, derived_arg1, derived_arg2):
        super().__init__(base_arg1, base_arg2)
        ...

    @classmethod
    def from_base(cls, b, da1, da2):
        return cls(b.base_arg1, b.base_arg2, da1, da2)
The alternative approach to Alexey's answer (my +1) is to pass the base object in the base_arg1 argument and to check whether it was misused for passing the base object (i.e., whether it is an instance of the base class). The other argument can be made technically optional (say None) and checked explicitly when decided inside the code.
The difference is that only the argument type decides which of the two possible ways of creation is to be used. This is necessary if the creation of the object cannot be explicitly captured in the source code (e.g. some structure contains a mix of argument tuples, some of them with the initial values, some of them with references to the existing objects). Then you would probably need to pass the arguments as keyword arguments:
d = Derived(b, derived_arg1=derived_arg1, derived_arg2=derived_arg2)
Updated: Sharing the internal structures with the initial object is possible with both approaches. However, you must be aware that if one of the objects tries to modify the shared data, the usual funny things can happen.
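A small illustration of that caveat, reusing the from_base classmethod from the answer above (assuming base_arg1 happens to be a mutable list):

b = Base([1, 2], 'meta')
d = Derived.from_base(b, 'd1', 'd2')

d.base_arg1.append(3)   # mutates the list shared with b
print(b.base_arg1)      # [1, 2, 3]: the change shows through the original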
To be clear here, I'll make an answer with code. pepr talks about this solution, but code is always clearer than English. In this case Base should not be subclassed, but it should be a member of Derived:
class Base(object):
    def __init__(self, base_arg1, base_arg2):
        self.base_arg1 = base_arg1
        self.base_arg2 = base_arg2

class Derived(object):
    def __init__(self, base, derived_arg1, derived_arg2):
        self.base = base
        self.derived_arg1 = derived_arg1
        self.derived_arg2 = derived_arg2

    def derived_method1(self):
        return self.base.base_arg1 * self.derived_arg1
This article has a snippet showing usage of __bases__ to dynamically change the inheritance hierarchy of some Python code, by adding a class to an existing class's collection of base classes. Ok, that's hard to read; code is probably clearer:
class Friendly:
    def hello(self):
        print 'Hello'

class Person: pass

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
That is, Person doesn't inherit from Friendly at the source level; rather, this inheritance relation is added dynamically at runtime by modification of the __bases__ attribute of the Person class. However, if you change Friendly and Person to be new-style classes (by inheriting from object), you get the following error:
TypeError: __bases__ assignment: 'Friendly' deallocator differs from 'object'
A bit of Googling on this seems to indicate some incompatibilities between new-style and old style classes in regards to changing the inheritance hierarchy at runtime. Specifically: "New-style class objects don't support assignment to their bases attribute".
My question, is it possible to make the above Friendly/Person example work using new-style classes in Python 2.7+, possibly by use of the __mro__ attribute?
Disclaimer: I fully realise that this is obscure code. I fully realize that in real production code tricks like this tend to border on unreadable, this is purely a thought experiment, and for funzies to learn something about how Python deals with issues related to multiple inheritance.
Ok, again, this is not something you should normally do, this is for informational purposes only.
Where Python looks for a method on an instance object is determined by the __mro__ attribute of the class which defines that object (the Method Resolution Order attribute). Thus, if we could modify the __mro__ of Person, we'd get the desired behaviour. Something like:
setattr(Person, '__mro__', (Person, Friendly, object))
The problem is that __mro__ is a readonly attribute, and thus setattr won't work. Maybe if you're a Python guru there's a way around that, but clearly I fall short of guru status as I cannot think of one.
A possible workaround is to simply redefine the class:
def modify_Person_to_be_friendly():
    # so that we're modifying the global identifier 'Person'
    global Person

    # now just redefine the class using type(), specifying that the new
    # class should inherit from Friendly and have all attributes from
    # our old Person class
    Person = type('Person', (Friendly,), dict(Person.__dict__))
def main():
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()  # works!
What this doesn't do is modify any previously created Person instances to have the hello() method. For example (just modifying main()):
def main():
    oldperson = Person()
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()
    # works! But:
    oldperson.hello()
    # does not
If the details of the type call aren't clear, then read e-satis' excellent answer on 'What is a metaclass in Python?'.
I've been struggling with this too, and was intrigued by your solution, but Python 3 takes it away from us:
AttributeError: attribute '__dict__' of 'type' objects is not writable
I actually have a legitimate need for a decorator that replaces the (single) superclass of the decorated class. It would require too lengthy a description to include here (I tried, but couldn't get it to a reasonable length and limited complexity -- it came up in the context of the use by many Python applications of a Python-based enterprise server where different applications needed slightly different variations of some of the code.)
The discussion on this page and others like it provided hints that the problem of assigning to __bases__ only occurs for classes with no superclass defined (i.e., whose only superclass is object). I was able to solve this problem (for both Python 2.7 and 3.2) by defining the classes whose superclass I needed to replace as being subclasses of a trivial class:
## T is used so that the other classes are not direct subclasses of object,
## since classes whose base is object don't allow assignment to their __bases__ attribute.
class T: pass

class A(T):
    def __init__(self):
        print('Creating instance of {}'.format(self.__class__.__name__))

## ordinary inheritance
class B(A): pass

## dynamically specified inheritance
class C(T): pass

A()  # -> Creating instance of A
B()  # -> Creating instance of B

C.__bases__ = (A,)
C()  # -> Creating instance of C

## attempt at dynamically specified inheritance starting with a direct subclass
## of object doesn't work
class D: pass
D.__bases__ = (A,)
D()

## Result is:
## TypeError: __bases__ assignment: 'A' deallocator differs from 'object'
I cannot vouch for the consequences, but this code does what you want on py2.7.2.
class Friendly(object):
    def hello(self):
        print 'Hello'

class Person(object): pass

# we can't change the original classes, so we replace them
class newFriendly: pass
newFriendly.__dict__ = dict(Friendly.__dict__)
Friendly = newFriendly

class newPerson: pass
newPerson.__dict__ = dict(Person.__dict__)
Person = newPerson

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
We know that this is possible. Cool. But we'll never use it!
Right off the bat, all the caveats of messing with the class hierarchy dynamically are in effect.
But if it has to be done then, apparently, there is a hack that gets around the "deallocator differs from 'object'" issue when modifying the __bases__ attribute for new-style classes.
You can define a class object
class Object(object): pass
which simply derives a new class from the built-in object.
That's it; now your new-style classes can modify their __bases__ without any problem.
In my tests this actually worked very well as all existing (before changing the inheritance) instances of it and its derived classes felt the effect of the change including their mro getting updated.
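A minimal sketch of the trick as described (names hypothetical): both classes derive from the intermediate Object, so neither has object as its direct base, and the assignment goes through:

class Object(object): pass

class Friendly(Object):
    def hello(self):
        print('Hello')

class Person(Object): pass

p = Person()
Person.__bases__ = (Friendly,)  # no "deallocator differs" error now
p.hello()                       # the pre-existing instance sees the new method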
I needed a solution for this which:
Works with both Python 2 (>= 2.7) and Python 3 (>= 3.2).
Lets the class bases be changed after dynamically importing a dependency.
Lets the class bases be changed from unit test code.
Works with types that have a custom metaclass.
Still allows unittest.mock.patch to function as expected.
Here's what I came up with:
def ensure_class_bases_begin_with(namespace, class_name, base_class):
    """ Ensure the named class's bases start with the base class.

        :param namespace: The namespace containing the class name.
        :param class_name: The name of the class to alter.
        :param base_class: The type to be the first base class for the
            newly created type.
        :return: ``None``.

        Call this function after ensuring `base_class` is
        available, before using the class named by `class_name`.

        """
    existing_class = namespace[class_name]
    assert isinstance(existing_class, type)

    bases = list(existing_class.__bases__)
    if base_class is bases[0]:
        # Already bound to a type with the right bases.
        return
    bases.insert(0, base_class)

    new_class_namespace = existing_class.__dict__.copy()
    # Type creation will assign the correct ‘__dict__’ attribute.
    del new_class_namespace['__dict__']

    metaclass = existing_class.__metaclass__
    new_class = metaclass(class_name, tuple(bases), new_class_namespace)

    namespace[class_name] = new_class
Used like this within the application:
# foo.py

# Type `Bar` is not available at first, so can't inherit from it yet.
class Foo(object):
    __metaclass__ = type

    def __init__(self):
        self.frob = "spam"

    def __unicode__(self): return "Foo"

# … later …

import bar

ensure_class_bases_begin_with(
    namespace=globals(),
    class_name=str('Foo'),  # `str` type differs on Python 2 vs. 3.
    base_class=bar.Bar)
Use like this from within unit test code:
# test_foo.py

""" Unit test for `foo` module. """

import unittest
import mock

import foo
import bar

ensure_class_bases_begin_with(
    namespace=foo.__dict__,
    class_name=str('Foo'),  # `str` type differs on Python 2 vs. 3.
    base_class=bar.Bar)

class Foo_TestCase(unittest.TestCase):
    """ Test cases for `Foo` class. """

    def setUp(self):
        patcher_unicode = mock.patch.object(
            foo.Foo, '__unicode__')
        patcher_unicode.start()
        self.addCleanup(patcher_unicode.stop)

        self.test_instance = foo.Foo()

        patcher_frob = mock.patch.object(
            self.test_instance, 'frob')
        patcher_frob.start()
        self.addCleanup(patcher_frob.stop)

    def test_instantiate(self):
        """ Should create an instance of `Foo`. """
        instance = foo.Foo()
The above answers are good if you need to change an existing class at runtime. However, if you are just looking to create a new class that inherits from some other class, there is a much cleaner solution. I got this idea from https://stackoverflow.com/a/21060094/3533440, but I think the example below better illustrates a legitimate use case.
def make_default(Map, default_default=None):
    """Returns a class which behaves identically to the given
    Map class, except it gives a default value for unknown keys."""
    class DefaultMap(Map):
        def __init__(self, default=default_default, **kwargs):
            self._default = default
            super().__init__(**kwargs)

        def __missing__(self, key):
            return self._default

    return DefaultMap

DefaultDict = make_default(dict, default_default='wug')

d = DefaultDict(a=1, b=2)
assert d['a'] == 1
assert d['b'] == 2
assert d['c'] == 'wug'
Correct me if I'm wrong, but this strategy seems very readable to me, and I would use it in production code. This is very similar to functors in OCaml.
This method isn't technically inheriting during runtime, since __mro__ can't be changed. But what I'm doing here is using __getattr__ to be able to access any attributes or methods from a certain class. (Read the comments in the order of the numbers placed before them; it makes more sense.)
class Sub:
    def __init__(self, f, cls):
        self.f = f
        self.cls = cls

    # 6) this method will pass the self parameter
    # (which is the original class object we passed)
    # and then it will fill in the rest of the arguments
    # using *args and **kwargs
    def __call__(self, *args, **kwargs):
        # 7) the multiple try / except statements
        # are for making sure if an attribute was
        # accessed instead of a function, the __call__
        # method will just return the attribute
        try:
            return self.f(self.cls, *args, **kwargs)
        except TypeError:
            try:
                return self.f(*args, **kwargs)
            except TypeError:
                return self.f

# 1) our base class
class S:
    def __init__(self, func):
        self.cls = func

    def __getattr__(self, item):
        # 5) we are wrapping the attribute we get in the Sub class
        # so we can implement the __call__ method there
        # to be able to pass the parameters in the correct order
        return Sub(getattr(self.cls, item), self.cls)

# 2) class we want to inherit from
class L:
    def run(self, s):
        print("run" + s)

# 3) we create an instance of our base class
# and then pass an instance (or just the class object)
# as a parameter to this instance
s = S(L)  # 4) in this case, I'm using the class object

s.run("1")
So this sort of substitution and redirection will simulate the inheritance of the class we wanted to inherit from. And it even works with attributes or methods that don't take any parameters.
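One consequence worth noting: because no real inheritance is involved, isinstance checks won't see any relationship between the classes:

print(isinstance(s, L))  # False: S only forwards attribute lookups to L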