Is it safe to make two class objects with the same name? - python

It's possible to use type in Python to create a new class object, as you probably know:
A = type('A', (object,), {})
a = A() # create an instance of A
What I'm curious about is whether there's any problem with creating different class objects with the same name, e.g., following on from the above:
B = type('A', (object,), {})
In other words, is there an issue with this second class object, B, having the same name as our first class object, A?
The motivation for this is that I'd like to get a clean copy of a class to apply different decorators to without using the inheritance approach described in this question.
So I'd like to define a class normally, e.g.:
class Fruit(object):
    pass
and then make a fresh copy of it to play with:
def copy_class(cls):
    return type(cls.__name__, cls.__bases__, dict(cls.__dict__))

FreshFruit = copy_class(Fruit)
In my testing, things I do with FreshFruit are properly decoupled from things I do to Fruit.
However, I'm unsure whether I should also be mangling the name in copy_class in order to avoid unexpected problems.
In particular, one concern I have is that this could cause the class to be replaced in the module's dictionary, such that future imports (e.g., from module import Fruit) return the copied class.

There is no reason why you can't have two classes with the same __name__ in the same module, if you want to and have a good reason to do so.
E.g., in your example, from module import Fruit -- Python doesn't care at all about the __name__ of the class. It looks in the module's globals for Fruit and imports what it finds there.
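A quick sketch to convince yourself (the __name__ here is deliberately different from the binding name):

# The name a class is bound to and its __name__ are independent:
Fruit = type('TotallyDifferentName', (object,), {})
print(Fruit.__name__)  # TotallyDifferentName
# `from module import Fruit` would still find this class, because the import
# machinery only looks up the name 'Fruit' in the module's globals.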
Note that, in general, this approach isn't great if you're using super (although the same can be said for class decorators ...):
class A(Base):
    def foo(self):
        super(A, self).foo()
B = copy_class(A)
In this case, when B.foo is called, it will end up calling super(A, self) which could lead to funky behaviour in a number of circumstances...
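A minimal sketch of that failure mode, using copy_class from the question (Base and foo are stand-ins):

class Base(object):
    def foo(self):
        print("Base.foo")

class A(Base):
    def foo(self):
        super(A, self).foo()  # explicitly names A

B = copy_class(A)  # B is a new class; it is NOT a subclass of A
B().foo()
# TypeError: super(type, obj): obj must be an instance or subtype of type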

Related

Subclass avoiding parent's metaclass

Say I have a third-party library where a metaclass requires me to implement something. But I want to have an intermediate "abstract" subclass that doesn't. How can I do this?
Consider this to be a very minimal example of what the third-party library has:
class ServingMeta(type):
    def __new__(cls, name, bases, classdict):
        if any(isinstance(b, ServingMeta) for b in bases):
            if "name" not in classdict:
                # Actual code fails for a different reason,
                # but the logic is the same.
                raise TypeError(f"Class '{name}' has no 'name' attribute")
        return super().__new__(cls, name, bases, classdict)

class Serving(object, metaclass=ServingMeta):
    def shout_name(self):
        return self.name.upper()
I cannot modify the code above. It's an external dependency (and I don't want to fork it).
The code is meant to be used this way:
class Spam(Serving):
    name = "SPAM"

spam = Spam()
print(spam.shout_name())
However, I happen to have a lot of spam, and I want to introduce a base class with the common helper methods. Something like this:
class SpamBase(Serving):
    def thrice(self):
        return " ".join([self.shout_name()] * 3)

class LovelySpam(SpamBase):
    name = "lovely spam"

class WonderfulSpam(SpamBase):
    name = "wonderful spam"
Obviously, this doesn't work and fails with the well-expected TypeError: Class 'SpamBase' has no 'name' attribute. Had the third-party library provided a SpamBase class without a metaclass, I could've subclassed that - but no such luck this time (I've mentioned this inconvenience to the library authors).
I can make it a mixin:
class SpamMixin(object):
    def thrice(self):
        return " ".join([self.shout_name()] * 3)

class LovelySpam(SpamMixin, Serving):
    name = "lovely spam"

class WonderfulSpam(SpamMixin, Serving):
    name = "wonderful spam"
However, this makes me and my IDE cringe a little, as it quickly becomes cumbersome to repeat SpamMixin everywhere and also because object has no shout_name attribute (and I don't want to silence analysis tools). In short, I just don't like this approach.
What else can I do?
Is there a way to get a metaclass-less version of Serving? I think of something like this:
ServingBase = remove_metaclass(Serving)

class Spam(ServingBase, metaclass=ServingMeta):
    ...
But I don't know how to actually implement remove_metaclass, or whether it's even reasonably possible (of course, it must be doable with some introspection, but it could require more arcane magic than I can cast).
Any other suggestions are also welcomed. Basically, I want to have my code DRY (one base class to rule them all), and have my linter/code analysis icons all green.
The mixin approach is the correct way to go. If your IDE "cringes", that is a defect in that tool - just disable the few "features" that are obviously tuned incorrectly for a dynamic language like Python.
And this is not even about creating things dynamically, it is merely multiple-inheritance, which is supported by the language since forever. And one of the main uses of multiple-inheritance is exactly being able to create mixins just as this one you need.
Another inheritance-based workaround is to make your hierarchy one level deeper, and just introduce the metaclass after you come up with your mixin methods:
class Mixin(object):
    def mimixin(self): ...

class SpamBase(Mixin, metaclass=ServingMeta):
    name = "stub"
Or just add the mixin in an intermediate subclass:
class Base(metaclass=ServingMeta):
    name = "stub"

class MixedBase(Mixin, Base):
    name = "stub"

class MyLovingSpam(MixedBase):
    name = "MyLovingSpam"
If you don't want to be repeating the mixin base name in every class, that is the way to go.
"Removing" a metaclass just for the sake of having a late mixin is way over the top. Really. Broken. The way to do it wol e re-create the class dynamically, just as #vaultah mentions in the other answer, but doing that in an intermediate class is a thing you should not do. Doing that to please the IDE is something you should not do twice: messing with metaclasses is hard enough already. Removing things on inheritance/class creation that the language puts there naturally is something nasty (cf. this answer: How to make a class attribute exclusive to the super class ) . On the other hand, mixins and multiple inheritance are just natural.
Are you still there? I told you not to do so:
Now, onto your question - instead of "suppressing the metaclass" in an intermediate class, it would be more feasible to inherit the metaclass you have there and change its behaviour, so that it does not check for the constraints in specially marked classes - create an attribute for your use, like _skip_checking:
class MyMeta(ServingMeta):
    def __new__(metacls, name, bases, namespace):
        if namespace.get("_skip_checking", False):
            # hardcode call to "type" metaclass:
            del namespace["_skip_checking"]
            cls = type.__new__(metacls, name, bases, namespace)
        else:
            cls = super().__new__(metacls, name, bases, namespace)
        return cls
    # repeat for __init__ if needed.

class Base(metaclass=MyMeta):
    _skip_checking = True
    # define mixin methods

class LoveSpam(Base):
    name = "LoveSpam"
There's really no direct way to remove the metaclass from a Python class, because the metaclass created that class. What you can try is re-create the class using a different metaclass, which doesn't have unwanted behaviour. For example, you could use type (the default metaclass).
In [6]: class Serving(metaclass=ServingMeta):
   ...:     def shout_name(self):
   ...:         return self.name.upper()
   ...:

In [7]: ServingBase = type('ServingBase', Serving.__bases__, dict(vars(Serving)))
Basically this takes the __bases__ tuple and the namespace of the Serving class, and uses them to create a new class ServingBase. N.B. this means that ServingBase will receive all bases and methods/attributes from Serving, some of which may have been added by ServingMeta.
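For the question's example, the re-created class can then serve as the metaclass-free base; a sketch reusing the definitions above (adding Serving back as a base restores ServingMeta for the leaf class, whose check now passes):

class SpamBase(ServingBase):  # plain `type` metaclass: no 'name' check here
    def thrice(self):
        return " ".join([self.shout_name()] * 3)

class LovelySpam(SpamBase, Serving):  # ServingMeta applies again, and passes
    name = "lovely spam"

print(LovelySpam().thrice())  # LOVELY SPAM LOVELY SPAM LOVELY SPAM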

Python weird class variables usage

Suppose we have the following code:
class A:
    var = 0

a = A()
I do understand that a.var and A.var are different variables, and I think I understand why this thing happens. I thought it was just a side effect of python's data model, since why would someone want to modify a class variable in an instance?
However, today I came across a strange example of such a usage: it is in google app engine db.Model reference. Google app engine datastore assumes we inherit db.Model class and introduce keys as class variables:
class Story(db.Model):
    title = db.StringProperty()
    body = db.TextProperty()
    created = db.DateTimeProperty(auto_now_add=True)

s = Story(title="The Three Little Pigs")
I don't understand why they expect me to do it like that. Why not introduce a constructor and use only instance variables?
The db.Model class is a 'Model' style class in classic Model View Controller design pattern.
Each of the assignments in there is actually setting up a column in the database, while also giving you an easy-to-use interface to program with. This is why
title="The Three Little Pigs"
will update the object as well as the column in the database.
There is a constructor (no doubt in db.Model) that handles this pass-off logic, and it will take a keyword args list and digest it to create this relational model.
This is why the variables are setup the way they are, so that relation is maintained.
Edit: Let me describe that better. A normal class just sets up the blueprint for an object. It has instance variables and class variables. Because of the inheritance from db.Model, this is actually doing a third thing: setting up column definitions in a database. In order to do this third task it is making EXTENSIVE behind-the-scenes changes to things like attribute setting and getting. Pretty much once you inherit from db.Model you aren't really a class anymore, but a DB template. Long story short, this is a VERY specific edge case of the use of a class.
If all variables are declared as instance variables, then classes using the Story class as a superclass will inherit nothing from it.
From the Model and Property docs, it looks like Model has overridden __getattr__ and __setattr__ methods so that, in effect, "Story.title = ..." does not actually set the instance attribute; instead it sets the value stored with the instance's Property.
If you ask for story.__dict__['title'], what does it give you?
I do understand that a.var and A.var are different variables
First off: as of now, no, they aren't.
In Python, everything you declare inside the class block belongs to the class. You can look up attributes of the class via the instance, if the instance doesn't already have something with that name. When you assign to an attribute of an instance, the instance now has that attribute, regardless of whether it had one before. (__init__, in this regard, is just another function; it's called automatically by Python's machinery, but it simply adds attributes to an object, it doesn't magically specify some kind of template for the contents of all instances of the class - there's the magic __slots__ class attribute for that, but it still doesn't do quite what you might expect.)
But right now, a has no .var of its own, so a.var refers to A.var. And you can modify a class attribute via an instance - but note modify, not replace. This requires, of course, that the original value of the attribute is something modifiable - a list qualifies, a str doesn't.
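A minimal sketch of the modify-versus-replace distinction:

class A:
    var = []  # a modifiable class attribute

a = A()
a.var.append(1)  # modifies the shared class attribute in place
print(A.var)     # [1]
a.var = [2]      # assignment instead creates a new instance attribute
print(A.var)     # still [1] - the class attribute is now shadowed by a.var
print(a.var)     # [2]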
Your GAE example, though, is something totally different. The class Story has attributes which specifically are "properties", which can do assorted magic when you "assign to" them. This works by using the class' __getattr__, __setattr__ etc. methods to change the behaviour of the assignment syntax.
The other answers have it mostly right, but miss one critical thing.
If you define a class like this:
class Foo(object):
    a = 5
and an instance:
myinstance = Foo()
Then Foo.a and myinstance.a are the very same variable. Changing one will change the other, and if you create multiple instances of Foo, the .a property on each will be the same variable. This is because of the way Python resolves attribute access: First it looks in the object's dict, and if it doesn't find it there, it looks in the class's dict, and so forth.
That also helps explain why assignments don't work the way you'd expect given the shared nature of the variable:
>>> bar = Foo()
>>> baz = Foo()
>>> Foo.a = 6
>>> bar.a = 7
>>> bar.a
7
>>> baz.a
6
What happened here is that when we assigned to Foo.a, it modified the variable that all instances of Foo normally resolve to when you ask for instance.a. But when we assigned to bar.a, Python created a new variable on that instance called a, which now masks the class variable - from now on, that particular instance will always see its own local value.
If you wanted each instance of your class to have a separate variable initialized to 5, the normal way to do it would be like this:
class Foo(object):
    def __init__(self):
        self.a = 5
That is, you define a class with a constructor that sets the a variable on the new instance to 5.
Finally, what App Engine is doing is an entirely different kind of black magic called descriptors. In short, Python allows objects to define special __get__ and __set__ methods. When an instance of a class that defines these special methods is attached to a class, and you create an instance of that class, attempts to access the attribute will, instead of setting or returning the instance or class variable, call the special __get__ and __set__ methods. A much more comprehensive introduction to descriptors can be found here, but here's a simple demo:
class MultiplyDescriptor(object):
    def __init__(self, multiplicand, initial=0):
        self.multiplicand = multiplicand
        self.value = initial

    def __get__(self, obj, objtype):
        if obj is None:
            return self
        return self.multiplicand * self.value

    def __set__(self, obj, value):
        self.value = value
Now you can do something like this:
class Foo(object):
    a = MultiplyDescriptor(2)

bar = Foo()
bar.a = 10
print bar.a # Prints 20!
Descriptors are the secret sauce behind a surprising amount of the Python language. For instance, property is implemented using descriptors, as are methods, static and class methods, and a bunch of other stuff.
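For instance, a stripped-down property can be approximated in pure Python as a descriptor. This is only an illustrative sketch (the real property is implemented in C and also supports setters and deleters):

class my_property(object):
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return self.fget(obj)  # compute the value on access

class Circle(object):
    def __init__(self, r):
        self.r = r

    @my_property
    def area(self):
        return 3.14159 * self.r ** 2

print Circle(2).area  # 12.56636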
These class variables are metadata that Google App Engine uses to generate its models.
FYI, in your example, a.var == A.var.
>>> class A:
...     var = 0
...
>>> a = A()
>>> A.var = 3
>>> a.var == A.var
True

How to dynamically change base class of instances at runtime?

This article has a snippet showing usage of __bases__ to dynamically change the inheritance hierarchy of some Python code, by adding a class to an existing class's collection of classes from which it inherits. Ok, that's hard to read, code is probably clearer:
class Friendly:
    def hello(self):
        print 'Hello'

class Person: pass

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
That is, Person doesn't inherit from Friendly at the source level, but rather this inheritance relation is added dynamically at runtime by modification of the __bases__ attribute of the Person class. However, if you change Friendly and Person to be new-style classes (by inheriting from object), you get the following error:
TypeError: __bases__ assignment: 'Friendly' deallocator differs from 'object'
A bit of Googling on this seems to indicate some incompatibilities between new-style and old style classes in regards to changing the inheritance hierarchy at runtime. Specifically: "New-style class objects don't support assignment to their bases attribute".
My question, is it possible to make the above Friendly/Person example work using new-style classes in Python 2.7+, possibly by use of the __mro__ attribute?
Disclaimer: I fully realise that this is obscure code. I fully realize that in real production code tricks like this tend to border on unreadable, this is purely a thought experiment, and for funzies to learn something about how Python deals with issues related to multiple inheritance.
Ok, again, this is not something you should normally do, this is for informational purposes only.
Where Python looks for a method on an instance object is determined by the __mro__ attribute of the class which defines that object (the Method Resolution Order attribute). Thus, if we could modify the __mro__ of Person, we'd get the desired behaviour. Something like:
setattr(Person, '__mro__', (Person, Friendly, object))
The problem is that __mro__ is a read-only attribute, and thus setattr won't work. Maybe if you're a Python guru there's a way around that, but clearly I fall short of guru status as I cannot think of one.
A possible workaround is to simply redefine the class:
def modify_Person_to_be_friendly():
    # so that we're modifying the global identifier 'Person'
    global Person
    # now just redefine the class using type(), specifying that the new
    # class should inherit from Friendly and have all attributes from
    # our old Person class
    Person = type('Person', (Friendly,), dict(Person.__dict__))

def main():
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()  # works!
What this doesn't do is modify any previously created Person instances to have the hello() method. For example (just modifying main()):
def main():
    oldperson = Person()
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()
    # works! But:
    oldperson.hello()
    # does not
If the details of the type call aren't clear, then read e-satis' excellent answer on 'What is a metaclass in Python?'.
I've been struggling with this too, and was intrigued by your solution, but Python 3 takes it away from us:
AttributeError: attribute '__dict__' of 'type' objects is not writable
I actually have a legitimate need for a decorator that replaces the (single) superclass of the decorated class. It would require too lengthy a description to include here (I tried, but couldn't get it to a reasonable length and limited complexity -- it came up in the context of the use by many Python applications of a Python-based enterprise server where different applications needed slightly different variations of some of the code.)
The discussion on this page and others like it provided hints that the problem of assigning to __bases__ only occurs for classes with no superclass defined (i.e., whose only superclass is object). I was able to solve this problem (for both Python 2.7 and 3.2) by defining the classes whose superclass I needed to replace as being subclasses of a trivial class:
## T is used so that the other classes are not direct subclasses of object,
## since classes whose base is object don't allow assignment to their __bases__ attribute.
class T: pass

class A(T):
    def __init__(self):
        print('Creating instance of {}'.format(self.__class__.__name__))

## ordinary inheritance
class B(A): pass

## dynamically specified inheritance
class C(T): pass

A()  # -> Creating instance of A
B()  # -> Creating instance of B
C.__bases__ = (A,)
C()  # -> Creating instance of C

## attempt at dynamically specified inheritance starting with a direct subclass
## of object doesn't work
class D: pass
D.__bases__ = (A,)
D()
## Result is:
## TypeError: __bases__ assignment: 'A' deallocator differs from 'object'
I cannot vouch for the consequences, but this code does what you want on py2.7.2.
class Friendly(object):
    def hello(self):
        print 'Hello'

class Person(object): pass

# we can't change the original classes, so we replace them
class newFriendly: pass
newFriendly.__dict__ = dict(Friendly.__dict__)
Friendly = newFriendly

class newPerson: pass
newPerson.__dict__ = dict(Person.__dict__)
Person = newPerson

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
We know that this is possible. Cool. But we'll never use it!
Right off the bat, all the caveats of messing with class hierarchy dynamically are in effect.
But if it has to be done then, apparently, there is a hack that gets around the "deallocator differs from 'object'" issue when modifying the __bases__ attribute for new-style classes.
You can define a class object
class Object(object): pass
Which derives a class from the built-in metaclass type.
That's it, now your new style classes can modify the __bases__ without any problem.
In my tests this actually worked very well as all existing (before changing the inheritance) instances of it and its derived classes felt the effect of the change including their mro getting updated.
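A quick sketch of that hack applied to the original example, under the behaviour this answer describes (names kept from the question):

class Object(object): pass

class Friendly(Object):
    def hello(self):
        print('Hello')

class Person(Object): pass

p = Person()
Person.__bases__ = (Friendly,)  # no "deallocator differs" TypeError now
p.hello()  # prints "Hello"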
I needed a solution for this which:
Works with both Python 2 (>= 2.7) and Python 3 (>= 3.2).
Lets the class bases be changed after dynamically importing a dependency.
Lets the class bases be changed from unit test code.
Works with types that have a custom metaclass.
Still allows unittest.mock.patch to function as expected.
Here's what I came up with:
def ensure_class_bases_begin_with(namespace, class_name, base_class):
    """ Ensure the named class's bases start with the base class.

        :param namespace: The namespace containing the class name.
        :param class_name: The name of the class to alter.
        :param base_class: The type to be the first base class for the
            newly created type.
        :return: ``None``.

        Call this function after ensuring `base_class` is
        available, before using the class named by `class_name`.

        """
    existing_class = namespace[class_name]
    assert isinstance(existing_class, type)

    bases = list(existing_class.__bases__)
    if base_class is bases[0]:
        # Already bound to a type with the right bases.
        return
    bases.insert(0, base_class)

    new_class_namespace = existing_class.__dict__.copy()
    # Type creation will assign the correct ‘__dict__’ attribute.
    del new_class_namespace['__dict__']

    metaclass = existing_class.__metaclass__
    new_class = metaclass(class_name, tuple(bases), new_class_namespace)

    namespace[class_name] = new_class
Used like this within the application:
# foo.py

# Type `Bar` is not available at first, so can't inherit from it yet.
class Foo(object):
    __metaclass__ = type

    def __init__(self):
        self.frob = "spam"

    def __unicode__(self): return "Foo"

# … later …

import bar
ensure_class_bases_begin_with(
        namespace=globals(),
        class_name=str('Foo'),  # `str` type differs on Python 2 vs. 3.
        base_class=bar.Bar)
Use like this from within unit test code:
# test_foo.py

""" Unit test for `foo` module. """

import unittest
import mock

import foo
import bar

ensure_class_bases_begin_with(
        namespace=foo.__dict__,
        class_name=str('Foo'),  # `str` type differs on Python 2 vs. 3.
        base_class=bar.Bar)

class Foo_TestCase(unittest.TestCase):
    """ Test cases for `Foo` class. """

    def setUp(self):
        patcher_unicode = mock.patch.object(
                foo.Foo, '__unicode__')
        patcher_unicode.start()
        self.addCleanup(patcher_unicode.stop)

        self.test_instance = foo.Foo()

        patcher_frob = mock.patch.object(
                self.test_instance, 'frob')
        patcher_frob.start()
        self.addCleanup(patcher_frob.stop)

    def test_instantiate(self):
        """ Should create an instance of `Foo`. """
        instance = foo.Foo()
The above answers are good if you need to change an existing class at runtime. However, if you are just looking to create a new class that inherits from some other class, there is a much cleaner solution. I got this idea from https://stackoverflow.com/a/21060094/3533440, but I think the example below better illustrates a legitimate use case.
def make_default(Map, default_default=None):
    """Returns a class which behaves identically to the given
    Map class, except it gives a default value for unknown keys."""
    class DefaultMap(Map):
        def __init__(self, default=default_default, **kwargs):
            self._default = default
            super().__init__(**kwargs)

        def __missing__(self, key):
            return self._default

    return DefaultMap

DefaultDict = make_default(dict, default_default='wug')

d = DefaultDict(a=1, b=2)
assert d['a'] == 1
assert d['b'] == 2
assert d['c'] == 'wug'
Correct me if I'm wrong, but this strategy seems very readable to me, and I would use it in production code. This is very similar to functors in OCaml.
This method isn't technically inheriting during runtime, since __mro__ can't be changed. But what I'm doing here is using __getattr__ to be able to access any attributes or methods from a certain class. (Read the comments in the order of the numbers placed before them; it makes more sense.)
class Sub:
    def __init__(self, f, cls):
        self.f = f
        self.cls = cls

    # 6) this method will pass the self parameter
    # (which is the original class object we passed)
    # and then it will fill in the rest of the arguments
    # using *args and **kwargs
    def __call__(self, *args, **kwargs):
        # 7) the multiple try / except statements
        # are for making sure if an attribute was
        # accessed instead of a function, the __call__
        # method will just return the attribute
        try:
            return self.f(self.cls, *args, **kwargs)
        except TypeError:
            try:
                return self.f(*args, **kwargs)
            except TypeError:
                return self.f

# 1) our base class
class S:
    def __init__(self, func):
        self.cls = func

    def __getattr__(self, item):
        # 5) we are wrapping the attribute we get in the Sub class
        # so we can implement the __call__ method there
        # to be able to pass the parameters in the correct order
        return Sub(getattr(self.cls, item), self.cls)

# 2) class we want to inherit from
class L:
    def run(self, s):
        print("run" + s)

# 3) we create an instance of our base class
# and then pass an instance (or just the class object)
# as a parameter to this instance
s = S(L)  # 4) in this case, I'm using the class object

s.run("1")
So this sort of substitution and redirection will simulate the inheritance of the class we wanted to inherit from. And it even works with attributes or methods that don't take any parameters.

What is the purpose of python's inner classes?

Python's inner/nested classes confuse me. Is there something that can't be accomplished without them? If so, what is that thing?
Quoted from http://www.geekinterview.com/question_details/64739:
Advantages of inner class:
Logical grouping of classes: If a class is useful to only one other class then it is logical to embed it in that class and keep the two together. Nesting such "helper classes" makes their package more streamlined.
Increased encapsulation: Consider two top-level classes, A and B, where B needs access to members of A that would otherwise be declared private. By hiding class B within class A, A's members can be declared private and B can access them. In addition, B itself can be hidden from the outside world.
More readable, maintainable code: Nesting small classes within top-level classes places the code closer to where it is used.
The main advantage is organization. Anything that can be accomplished with inner classes can be accomplished without them.
Is there something that can't be accomplished without them?
No. They are absolutely equivalent to defining the class normally at top level, and then copying a reference to it into the outer class.
I don't think there's any special reason nested classes are ‘allowed’, other than it makes no particular sense to explicitly ‘disallow’ them either.
If you're looking for a class that exists within the lifecycle of the outer/owner object, and always has a reference to an instance of the outer class — inner classes as Java does it – then Python's nested classes are not that thing. But you can hack up something like that thing:
import weakref, new

class innerclass(object):
    """Descriptor for making inner classes.

    Adds a property 'owner' to the inner class, pointing to the outer
    owner instance.
    """
    # Use a weakref dict to memoise previous results so that
    # instance.Inner() always returns the same inner classobj.
    #
    def __init__(self, inner):
        self.inner = inner
        self.instances = weakref.WeakKeyDictionary()

    # Not thread-safe - consider adding a lock.
    #
    def __get__(self, instance, _):
        if instance is None:
            return self.inner
        if instance not in self.instances:
            self.instances[instance] = new.classobj(
                self.inner.__name__, (self.inner,), {'owner': instance}
            )
        return self.instances[instance]
# Using an inner class
#
class Outer(object):
    @innerclass
    class Inner(object):
        def __repr__(self):
            return '<%s.%s inner object of %r>' % (
                self.owner.__class__.__name__,
                self.__class__.__name__,
                self.owner
            )
>>> o1= Outer()
>>> o2= Outer()
>>> i1= o1.Inner()
>>> i1
<Outer.Inner inner object of <__main__.Outer object at 0x7fb2cd62de90>>
>>> isinstance(i1, Outer.Inner)
True
>>> isinstance(i1, o1.Inner)
True
>>> isinstance(i1, o2.Inner)
False
(This uses class decorators, which are new in Python 2.6 and 3.0. Otherwise you'd have to say “Inner= innerclass(Inner)” after the class definition.)
There's something you need to wrap your head around to be able to understand this. In most languages, class definitions are directives to the compiler. That is, the class is created before the program is ever run. In python, all statements are executable. That means that this statement:
class foo(object):
    pass
is a statement that is executed at runtime just like this one:
x = y + z
This means that not only can you create classes within other classes, you can create classes anywhere you want to. Consider this code:
def foo():
    class bar(object):
        ...
    z = bar()
Thus, the idea of an "inner class" isn't really a language construct; it's a programmer construct. Guido has a very good summary of how this came about here. But essentially, the basic idea is this simplifies the language's grammar.
Nesting classes within classes:
Nested classes bloat the class definition, making it harder to see what's going on.
Nested classes can create coupling that would make testing more difficult.
In Python you can put more than one class in a file/module, unlike Java, so the class still remains close to the top-level class and could even have the class name prefixed with an "_" to help signify that others shouldn't be using it.
The place where nested classes can prove useful is within functions
def some_func(a, b, c):
    class SomeClass(a):
        def some_method(self):
            return b
    SomeClass.__doc__ = c
    return SomeClass
The class captures the values from the function, allowing you to dynamically create a class, like template metaprogramming in C++.
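For instance, the factory above might be used like this (illustrative values; note that assigning to a class's __doc__ this way requires Python 3):

Cls = some_func(object, "captured value", "Docstring set at runtime")
print(Cls().some_method())  # captured value
print(Cls.__doc__)          # Docstring set at runtime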
I understand the arguments against nested classes, but there is a case for using them on some occasions. Imagine I'm creating a doubly-linked list class, and I need to create a node class for maintaining the nodes. I have two choices: create the Node class inside the DoublyLinkedList class, or create the Node class outside the DoublyLinkedList class. I prefer the first choice in this case, because the Node class is only meaningful inside the DoublyLinkedList class. While there's no hiding/encapsulation benefit, there is a grouping benefit of being able to say the Node class is part of the DoublyLinkedList class.
Is there something that can't be accomplished without them? If so, what is that thing?
There is something that cannot be easily done without: inheritance of related classes.
Here is a minimalist example with the related classes A and B:
class A(object):
    class B(object):
        def __init__(self, parent):
            self.parent = parent

    def make_B(self):
        return self.B(self)

class AA(A):  # Inheritance
    class B(A.B):  # Inheritance, same class name
        pass
This code leads to a quite reasonable and predictable behaviour:
>>> type(A().make_B())
<class '__main__.A.B'>
>>> type(A().make_B().parent)
<class '__main__.A'>
>>> type(AA().make_B())
<class '__main__.AA.B'>
>>> type(AA().make_B().parent)
<class '__main__.AA'>
If B were a top-level class, you could not write self.B() in the method make_B but would simply write B(), and thus lose the dynamic binding to the adequate classes.
Note that in this construction, you should never refer to class A in the body of class B. This is the motivation for introducing the parent attribute in class B.
Of course, this dynamic binding can be recreated without inner class at the cost of a tedious and error-prone instrumentation of the classes.
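To make that cost concrete, here is a sketch of the equivalent top-level construction, with an illustrative B_cls attribute standing in for the nesting; every pair of related subclasses must now be wired together by hand:

class B(object):
    def __init__(self, parent):
        self.parent = parent

class A(object):
    B_cls = B  # explicit wiring replaces the nesting

    def make_B(self):
        return self.B_cls(self)

class BB(B):
    pass

class AA(A):
    B_cls = BB  # must be repeated in every subclass; the nested form does this naturally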
1. Two functionally equivalent ways
The two ways shown below are functionally identical. However, there are some subtle differences, and there are situations when you would like to choose one over another.
Way 1: Nested class definition (="Nested class")
class MyOuter1:
    class Inner:
        def show(self, msg):
            print(msg)
Way 2: With a module-level Inner class attached to the outer class (="Referenced inner class")
class _InnerClass:
    def show(self, msg):
        print(msg)

class MyOuter2:
    Inner = _InnerClass
Underscore is used to follow PEP8 "internal interfaces (packages, modules, classes, functions, attributes or other names) should -- be prefixed with a single leading underscore."
2. Similarities
The code snippet below demonstrates the functional similarities of the "Nested class" vs the "Referenced inner class"; they would behave the same way in code checking for the type of an inner class instance. Needless to say, m.Inner.anymethod() would behave similarly with m1 and m2.
m1 = MyOuter1()
m2 = MyOuter2()
innercls1 = getattr(m1, 'Inner', None)
innercls2 = getattr(m2, 'Inner', None)
isinstance(innercls1(), MyOuter1.Inner)
# True
isinstance(innercls2(), MyOuter2.Inner)
# True
type(innercls1()) == mypackage.outer1.MyOuter1.Inner
# True (when part of mypackage)
type(innercls2()) == mypackage.outer2.MyOuter2.Inner
# True (when part of mypackage)
3. Differences
The differences of "Nested class" and "Referenced inner class" are listed below. They are not big, but sometimes you would like to choose one or the other based on these.
3.1 Code Encapsulation
With "Nested classes" it is possible to encapsulate code better than with "Referenced inner class". A class in the module namespace is a global variable. The purpose of nested classes is to reduce clutter in the module and put the inner class inside the outer class.
While no-one* is using from packagename import *, a low number of module-level variables can be nice, for example when using an IDE with code completion / intellisense.
*Right?
3.2 Readability of code
Django documentation instructs to use inner class Meta for model metadata. It is a bit clearer* to instruct the framework users to write a class Foo(models.Model) with inner class Meta;
class Ox(models.Model):
    horn_length = models.IntegerField()

    class Meta:
        ordering = ["horn_length"]
        verbose_name_plural = "oxen"
instead of "write a class _Meta, then write a class Foo(models.Model) with Meta = _Meta";
class _Meta:
    ordering = ["horn_length"]
    verbose_name_plural = "oxen"

class Ox(models.Model):
    Meta = _Meta
    horn_length = models.IntegerField()
With the "Nested class" approach the code can be read a nested bullet point list, but with the "Referenced inner class" method one has to scroll back up to see the definition of _Meta to see its "child items" (attributes).
The "Referenced inner class" method can be more readable if your code nesting level grows or the rows are long for some other reason.
* Of course, a matter of taste
3.3 Slightly different error messages
This is not a big deal, but just for completeness: when accessing a non-existent attribute of the inner class, we see slightly different exceptions. Continuing the example given in Section 2:
innercls1.foo()
# AttributeError: type object 'Inner' has no attribute 'foo'
innercls2.foo()
# AttributeError: type object '_InnerClass' has no attribute 'foo'
This is because the types of the inner classes are
type(innercls1())
#mypackage.outer1.MyOuter1.Inner
type(innercls2())
#mypackage.outer2._InnerClass
The main use case I have for this is to prevent the proliferation of small modules and to prevent namespace pollution when separate modules are not needed. It is useful when I am extending an existing class, and that existing class must reference another subclass that should always be coupled to it. For example, I may have a utils.py module that has many helper classes in it that aren't necessarily coupled together, but I want to reinforce coupling for some of those helper classes. For example, when I implement https://stackoverflow.com/a/8274307/2718295
utils.py:

import json, decimal

class Helper1(object):
    pass

class Helper2(object):
    pass

# Here is the notorious JSONEncoder extension to serialize Decimals to JSON floats
class DecimalJSONEncoder(json.JSONEncoder):

    class _repr_decimal(float):  # Because float.__repr__ cannot be monkey patched
        def __init__(self, obj):
            self._obj = obj
        def __repr__(self):
            return '{:f}'.format(self._obj)

    def default(self, obj):  # override JSONEncoder.default
        if isinstance(obj, decimal.Decimal):
            return self._repr_decimal(obj)
        # else: defer to the base class (raises TypeError for unsupported types)
        return super(DecimalJSONEncoder, self).default(obj)
        # could also have inherited from object and used return json.JSONEncoder.default(self, obj)
Then we can:
>>> from utils import DecimalJSONEncoder
>>> import json, decimal
>>> json.dumps({'key1': decimal.Decimal('1.12345678901234'),
... 'key2': 'strKey2Value'}, cls=DecimalJSONEncoder)
'{"key2": "strKey2Value", "key1": 1.12345678901234}'
Of course, we could have eschewed inheriting from json.JSONEncoder altogether and just overridden default():
import decimal, json

class Helper1(object):
    pass

def json_encoder_decimal(obj):
    class _repr_decimal(float):
        ...

    if isinstance(obj, decimal.Decimal):
        return _repr_decimal(obj)
    return json.JSONEncoder().default(obj)  # raises TypeError for other types
>>> json.dumps({'key1': decimal.Decimal('1.12345678901234')}, default=json_encoder_decimal)
'{"key1": 1.12345678901234}'
But sometimes just for convention, you want utils to be composed of classes for extensibility.
Here's another use-case: I want a factory for mutables in my OuterClass without having to invoke copy:
class OuterClass(object):

    class DTemplate(dict):
        def __init__(self):
            self.update({'key1': [1, 2, 3],
                'key2': {'subkey': [4, 5, 6]}})

    def __init__(self):
        self.outerclass_dict = {
            'outerkey1': self.DTemplate(),
            'outerkey2': self.DTemplate()}

obj = OuterClass()
obj.outerclass_dict['outerkey1']['key2']['subkey'].append(4)
assert obj.outerclass_dict['outerkey2']['key2']['subkey'] == [4, 5, 6]
I prefer this pattern over the @staticmethod decorator you would otherwise use for a factory function.
I have used Python's inner classes to create deliberately buggy subclasses within unittest functions (i.e. inside def test_something():) in order to get closer to 100% test coverage (e.g. testing very rarely triggered logging statements by overriding some methods).
In retrospect it's similar to Ed's answer https://stackoverflow.com/a/722036/1101109
Such inner classes should go out of scope and be ready for garbage collection once all references to them have been removed. For instance, take the following inner.py file:
class A(object):
    pass

def scope():
    class Buggy(A):
        """Do tests or something"""
    assert isinstance(Buggy(), A)
I get the following curious results under OSX Python 2.7.6:
>>> from inner import A, scope
>>> A.__subclasses__()
[]
>>> scope()
>>> A.__subclasses__()
[<class 'inner.Buggy'>]
>>> del A, scope
>>> from inner import A
>>> A.__subclasses__()
[<class 'inner.Buggy'>]
>>> del A
>>> import gc
>>> gc.collect()
0
>>> gc.collect() # Yes I needed to call the gc twice, seems reproducible
3
>>> from inner import A
>>> A.__subclasses__()
[]
Hint - Don't go on and try doing this with Django models, which seemed to keep other (cached?) references to my buggy classes.
So in general, I wouldn't recommend using inner classes for this kind of purpose unless you really do value that 100% test coverage and can't use other methods. Though I think it's nice to be aware that if you use __subclasses__(), it can sometimes get polluted by inner classes. Either way, if you followed this far, I think we're pretty deep into Python at this point, private dunderscores and all.

Is there a benefit to defining a class inside another class in Python?

What I'm talking about here are nested classes. Essentially, I have two classes that I'm modeling. A DownloadManager class and a DownloadThread class. The obvious OOP concept here is composition. However, composition doesn't necessarily mean nesting, right?
I have code that looks something like this:
class DownloadThread:
    def foo(self):
        pass

class DownloadManager():
    def __init__(self):
        self.dwld_threads = []
    def create_new_thread(self):
        self.dwld_threads.append(DownloadThread())
But now I'm wondering if there's a situation where nesting would be better. Something like:
class DownloadManager():
    class DownloadThread:
        def foo(self):
            pass
    def __init__(self):
        self.dwld_threads = []
    def create_new_thread(self):
        self.dwld_threads.append(DownloadManager.DownloadThread())
You might want to do this when the "inner" class is a one-off, which will never be used outside the definition of the outer class. For example to use a metaclass, it's sometimes handy to do
class Foo(object):
    class __metaclass__(type):
        ....
instead of defining a metaclass separately, if you're only using it once.
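A small sketch of such a one-off inline metaclass in action (Python 2 only; Python 3 replaced the __metaclass__ attribute with the metaclass= keyword):

class Foo(object):
    class __metaclass__(type):
        def __new__(mcs, name, bases, attrs):
            attrs['stamped'] = True  # illustrative: tag the class at creation
            return type.__new__(mcs, name, bases, attrs)

print Foo.stamped  # True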
The only other time I've used nested classes like that, I used the outer class only as a namespace to group a bunch of closely related classes together:
class Group(object):
    class cls1(object):
        ...

    class cls2(object):
        ...
Then from another module, you can import Group and refer to these as Group.cls1, Group.cls2 etc. However one might argue that you can accomplish exactly the same (perhaps in a less confusing way) by using a module.
I don't know Python, but your question seems very general. Ignore me if it's specific to Python.
Class nesting is all about scope. If you think that one class will only make sense in the context of another one, then the former is probably a good candidate to become a nested class.
It is a common pattern to make helper classes private, nested classes.
There is another usage for nested class, when one wants to construct inherited classes whose enhanced functionalities are encapsulated in a specific nested class.
See this example:
class foo:
    class bar:
        ...  # functionalities of a specific sub-feature of foo

    def __init__(self):
        self.a = self.bar()
        ...

    ...  # other features of foo

class foo2(foo):
    class bar(foo.bar):
        ...  # enhanced functionalities for this specific feature

    def __init__(self):
        foo.__init__(self)
Note that in the constructor of foo, the line self.a = self.bar() will construct a foo.bar when the object being constructed is actually a foo object, and a foo2.bar object when the object being constructed is actually a foo2 object.
If the class bar was defined outside of class foo, as well as its inherited version (which would be called bar2, for example), then defining the new class foo2 would be much more painful, because the constructor of foo2 would need to have its first line replaced by self.a = bar2(), which implies re-writing the whole constructor.
You could be using a class as a class generator. Like (in some off-the-cuff code):
class gen(object):
    class base_1(object): pass
    ...
    class base_n(object): pass

    def __init__(self, ...):
        ...

    def mk_cls(self, ..., type):
        '''makes a class based on the type passed in, the current state of
        the class, and the other inputs to the method'''
I feel like when you need this functionality it will be very clear to you. If you don't need to be doing something similar, then it probably isn't a good use case.
There is really no benefit to doing this, except if you are dealing with metaclasses.
the class: suite really isn't what you think it is. It is a weird scope, and it does strange things. It really doesn't even make a class! It is just a way of collecting some variables - the name of the class, the bases, a little dictionary of attributes, and a metaclass.
The name, the dictionary and the bases are all passed to the function that is the metaclass, and then it is assigned to the variable 'name' in the scope where the class: suite was.
What you can gain by messing with metaclasses, and indeed by nesting classes within your stock standard classes, is harder to read code, harder to understand code, and odd errors that are terribly difficult to understand without being intimately familiar with why the 'class' scope is entirely different to any other python scope.
A good use case for this feature is Error/Exception handling, e.g.:
class DownloadManager(object):

    class DownloadException(Exception):
        pass

    def download(self):
        ...
Now the one who is reading the code knows all the possible exceptions related to this class.
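Callers can then catch it through the outer class; a minimal sketch, assuming download() raises DownloadException on failure:

manager = DownloadManager()
try:
    manager.download()
except DownloadManager.DownloadException as err:
    print('download failed: %s' % err)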
Either way, defined inside or outside of a class, would work. Here is an employee pay schedule program where the helper class EmpInit is embedded inside the class Employee:
class Employee:
    def level(self, j):
        return j * 5E3

    def __init__(self, name, deg, yrs):
        self.name = name
        self.deg = deg
        self.yrs = yrs
        self.empInit = Employee.EmpInit(self.deg, self.level)
        self.base = Employee.EmpInit(self.deg, self.level).pay

    def pay(self):
        if self.deg in self.base:
            return self.base[self.deg]() + self.level(self.yrs)
        print(f"Degree {self.deg} is not in the database {self.base.keys()}")
        return 0

    class EmpInit:
        def __init__(self, deg, level):
            self.level = level
            self.j = deg
            self.pay = {1: self.t1, 2: self.t2, 3: self.t3}

        def t1(self): return self.level(1*self.j)
        def t2(self): return self.level(2*self.j)
        def t3(self): return self.level(3*self.j)
if __name__ == '__main__':
    for loop in range(10):
        lst = [item for item in input(f"Enter name, degree and years : ").split(' ')]
        e1 = Employee(lst[0], int(lst[1]), int(lst[2]))
        print(f'Employee {e1.name} with degree {e1.deg} and years {e1.yrs} is making {e1.pay()} dollars')
        print("EmpInit deg {0}\nlevel {1}\npay[deg]: {2}".format(e1.empInit.j, e1.empInit.level, e1.base[e1.empInit.j]))
To define it outside, just un-indent EmpInit and change Employee.EmpInit() to simply EmpInit() as a regular "has-a" composition. However, since Employee is the controller of EmpInit and users don't instantiate or interface with it directly, it makes sense to define it inside as it is not a standalone class. Also note that the instance method level() is designed to be called in both classes here. Hence it can also be conveniently defined as a static method in Employee so that we don't need to pass it into EmpInit, instead just invoke it with Employee.level().
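A sketch of that staticmethod variant (illustrative, trimmed to the relevant parts):

class Employee:
    @staticmethod
    def level(j):
        return j * 5E3

    class EmpInit:
        def __init__(self, deg):
            self.j = deg
            self.pay = {1: self.t1, 2: self.t2, 3: self.t3}

        # no level argument needed; invoke it on the outer class directly
        def t1(self): return Employee.level(1*self.j)
        def t2(self): return Employee.level(2*self.j)
        def t3(self): return Employee.level(3*self.j)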
