Declaring an inner class outside the class - Python

I want to use an inner class in my code, but I'm having trouble planning how it should look; maybe it's even a bad use of the feature, in which case you could help me find another way to write it.
I'd like this inner class to be created inside some if structure, depending on a string provided as input. It would also be nice if I could write the code of this inner class outside its parent class, because it might grow large in the future and should be easy to extend.
I was wondering if I should just use inheritance, like Case1(Case1), but I'm not sure about this.
class Example:
    def declare_specific_case(self):
        if "case1" in foo:
            class Case1:
                pass
        elif "case2" in foo:
            class Case2:
                pass

class Case1:
    <some code which can be used in inner class>
So I expect to be able to declare the code outside the class, maybe even in another module.

You can do something like this:
class Class1: pass
class Class2: pass

class Wrapper:
    def __init__(self, x):
        if x:
            self._inner = Class1
        else:
            self._inner = Class2
You can even add a convenience method:
class Wrapper:
    # ...
    def get_inner_class_instance(self, *args, **kwargs):
        return self._inner(*args, **kwargs)
On the other hand, perhaps you want to look at how to implement the Factory pattern in Python. Take a look at this article: https://hub.packtpub.com/python-design-patterns-depth-factory-pattern/
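A minimal sketch of that factory idea, matched to the question's string-driven selection. The class bodies and the make_case name are hypothetical, invented for illustration; the point is the registry dict mapping input strings to classes:

```python
# Hypothetical case classes; these could just as well live in another
# module and be imported here.
class Case1:
    def run(self):
        return "handling case1"

class Case2:
    def run(self):
        return "handling case2"

# Registry mapping input strings to classes.
CASES = {"case1": Case1, "case2": Case2}

def make_case(name):
    """Instantiate the class registered under `name`."""
    try:
        return CASES[name]()
    except KeyError:
        raise ValueError("unknown case: %r" % name)
```

Extending the system is then just a matter of adding a class and one registry entry; make_case itself never changes.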


How to create a class decorator that can add multiple methods to a class, while preserving the IDE's ability to type-hint the methods

The issue
I would like to be able to re-use methods by implementing them with a decorator, while preserving my IDE's ability to type-hint the methods added, such that:
@methods(methods=[implement_foo, implement_bar])
class K:
    pass

# OR

@methods(methods=[Foo, Bar])
class K:
    pass

k = K()

# THE ISSUE
k.  # <- IDE should recognize the methods foo() or bar(), but does not.
My issue is much like How to create a class decorator that can add multiple methods to a class?, but, as mentioned, I want to preserve the type hints and use only one decorator.
What I have tried
I can make it work with one decorator, but not with multiple.
Example with one decorator called implement_method:
def implement_method(cls):
    class Inner(cls):
        def __init__(self, *args, **kwargs):
            super(Inner, self).__init__(*args, **kwargs)
        def method(self):
            pass
    return Inner

@implement_method
class K:
    pass
And type hinting works for a new instance of K.
I imagine that one of the issues is using a loop, but I am unable to come up with a different solution. The following is my best attempt:
def methods(methods):
    def wrapper(cls):
        for method in methods:
            cls = method(cls)
        return cls
    return wrapper

class Bar:
    def bar(self):
        pass

@methods(methods=[Bar])
class K:
    pass

k = K()
k.  # <- not finding bar()
Since your question is a two-part one:
I have an answer for the first part and am quite stuck on the second. You can modify the signatures of functions using the inspect module, but I have not found anything similar for classes, and I am not sure it is possible. So my answer will focus on your first part:
One decorator for multiple functions:
Let's look at the decorator first:
def add_methods(*methods):
    def wrapper(cls):
        for method in methods:
            setattr(cls, method.__name__, staticmethod(method))
        return cls
    return wrapper
We use *methods as a parameter so that we can add as many methods as we want as arguments.
Then we define a wrapper for the class; in it we iterate over all the methods we want to add, using setattr to attach each one to the class. Notice the staticmethod wrapping the original method; you can leave this out if you want the methods to receive the argument self.
The wrapper returns the class, and the decorator returns the wrapper.
Let's write some simple methods next:
def method_a():
    print("I am a banana!")

def method_b():
    print("I am an apple!")
Now we create a simple class using our decorator:
@add_methods(method_a, method_b)
class MyClass:
    def i_was_here_before(self):
        print("Hah!")
And finally test it:
my_instance = MyClass()
my_instance.i_was_here_before()
my_instance.method_a()
my_instance.method_b()
Our output:
Hah!
I am a banana!
I am an apple!
A word of caution
Usually it is not advisable to change the signature of functions or classes without a good reason (and sometimes even with one).
Alternate Solution
Given that you will need to apply the decorator to each class anyway, you could also just use a superclass like this:
class Parent:
    @staticmethod
    def method_a():
        print("I am a banana!")

    @staticmethod
    def method_b():
        print("I am an apple!")

class MyClass(Parent):
    def i_was_here_before(self):
        print("Hah!")
my_instance = MyClass()
my_instance.i_was_here_before()
my_instance.method_a()
my_instance.method_b()
Since Python supports multiple inheritance, this should work better, and it also gives you the correct hints.
Complete working example:
def add_methods(*methods):
    def wrapper(cls):
        for method in methods:
            setattr(cls, method.__name__, staticmethod(method))
        return cls
    return wrapper

def method_a():
    print("I am a banana!")

def method_b():
    print("I am an apple!")

@add_methods(method_a, method_b)
class MyClass:
    def i_was_here_before(self):
        print("Hah!")
my_instance = MyClass()
my_instance.i_was_here_before()
my_instance.method_a()
my_instance.method_b()
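For the asker's multi-decorator case, the inheritance-based route can be sketched with reusable mixin classes instead of stacked decorators. FooMixin and BarMixin are hypothetical names standing in for the question's Foo and Bar; since the methods are ordinary inherited methods, IDE type hints work:

```python
class FooMixin:
    def foo(self):
        return "foo"

class BarMixin:
    def bar(self):
        return "bar"

# Plain multiple inheritance instead of a decorator: the IDE can
# type-hint foo() and bar() because they are normal base-class methods.
class K(FooMixin, BarMixin):
    pass

k = K()
```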

Calling base class method after child class __init__ from base class __init__?

This is a feature I miss in several languages and wonder if anyone has any idea how it can be done in Python.
The idea is that I have a base class:
class Base(object):
    def __init__(self):
        self.my_data = 0
    def my_rebind_function(self):
        pass
and a derived class:
class Child(Base):
    def __init__(self):
        super().__init__()
        # Do some stuff here
        self.my_rebind_function()  # <==== This is the line I want to get rid of
    def my_rebind_function(self):
        # Do stuff with self.my_data
        pass
As shown above, I have a rebind function that I want called after Child.__init__ has done its job. And I want this for all inherited classes, so it would be great if it were performed by the base class; then I would not have to retype that line in every child class.
It would be nice if the language had a function like __finally__, operating similarly to how it works with exceptions: it would run after all the __init__ functions (of all derived classes) have finished. The call order would be something like:
Base1.__init__()
...
BaseN.__init__()
LeafChild.__init__()
LeafChild.__finally__()
BaseN.__finally__()
...
Base1.__finally__()
And then object construction is finished. This is also kind of similar to unit testing with setup, run and teardown functions.
You can do this with a metaclass, like this:
class Meta(type):
    def __call__(cls, *args, **kwargs):
        print("start Meta.__call__")
        instance = super().__call__(*args, **kwargs)
        instance.my_rebind_function()
        print("end Meta.__call__\n")
        return instance

class Base(metaclass=Meta):
    def __init__(self):
        print("Base.__init__()")
        self.my_data = 0
    def my_rebind_function(self):
        pass

class Child(Base):
    def __init__(self):
        super().__init__()
        print("Child.__init__()")
    def my_rebind_function(self):
        print("Child.my_rebind_function")
        # Do stuff with self.my_data
        self.my_data = 999

if __name__ == '__main__':
    c = Child()
    print(c.my_data)
By overriding Meta.__call__ you can hook in after all __init__ (and __new__) methods of the class tree have run and before the instance is returned. This is the place to call your rebind function. To clarify the call order I added some print statements. The output will look like this:
start Meta.__call__
Base.__init__()
Child.__init__()
Child.my_rebind_function
end Meta.__call__
999
If you want to read on and get deeper into the details, I can recommend the following great article: https://blog.ionelmc.ro/2015/02/09/understanding-python-metaclasses/
I may still not fully understand, but this seems to do what (I think) you want:
class Base(object):
    def __init__(self):
        print("Base.__init__() called")
        self.my_data = 0
        self.other_stuff()
        self.my_rebind_function()
    def other_stuff(self):
        """ empty """
    def my_rebind_function(self):
        """ empty """

class Child(Base):
    def __init__(self):
        super(Child, self).__init__()
    def other_stuff(self):
        print("In Child.other_stuff() doing other stuff I want done in Child class")
    def my_rebind_function(self):
        print("In Child.my_rebind_function() doing stuff with self.my_data")

child = Child()
Output:
Base.__init__() called
In Child.other_stuff() doing other stuff I want done in Child class
In Child.my_rebind_function() doing stuff with self.my_data
If you want a "rebind" function to be invoked after each instance of a type inheriting from Base is instantiated, then I would say this "rebind" function can live outside the Base class (or any class inheriting from it).
You can have a factory function that gives you the object you need when you invoke it (for example, give_me_a_processed_child_object()). This factory function basically instantiates an object and does something to it before returning it to you.
Putting logic in __init__ is not a good idea because it obscures logic and intention. When you write kid = Child(), you don't expect many things to happen in the background, especially things that act on the instance of Child you just created. What you expect is a fresh instance of Child.
A factory function, however, transparently does something to an object and returns it to you. This way you know you're getting an already-processed instance.
Finally, you wanted to avoid adding "rebind" methods to your Child classes, which you now can, since all that logic can be placed in your factory function.
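A short sketch of that factory idea against the question's Base/Child hierarchy. The names rebind and make_child, and the value 999, are hypothetical, chosen to mirror the metaclass answer above:

```python
class Base:
    def __init__(self):
        self.my_data = 0

class Child(Base):
    pass

def rebind(obj):
    # The post-construction step, kept outside the class hierarchy.
    obj.my_data = 999
    return obj

def make_child():
    """Factory: build a Child, run the post-construction step, return it."""
    return rebind(Child())
```

Callers use make_child() instead of Child(), so a plain Child() still gives an untouched instance.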

Extending a class hierarchy in Python

I have a class hierarchy in a module that I want to extend.
The module to be extended looks something like this.
Module foo:
class bar(object): pass
class spam(bar): pass
class eggs(bar): pass
Now I want to extend these classes:
class my_bar(foo.bar):
    def new_func(self): pass

class my_spam(foo.spam): pass
class my_eggs(foo.eggs): pass
Doing so, the new function new_func() in my_bar would not be available on a my_spam instance via
my_spam_instance.new_func()
What is the best ("most pythonic") way to achieve this? I thought of multiple inheritance, like this:
class my_bar(foo.bar): pass
class my_spam(foo.spam, my_bar): pass
class my_eggs(foo.eggs, my_bar): pass
Though I never really used it before and I am not sure this is the best way.
You don't even need to inherit my_bar from bar.
The Pythonic way is to add a mixin: a class that serves as a base class but is not part of the original hierarchy.
class NewFuncMixin:
    def new_func(self): pass
And add it to new classes
class my_bar(foo.bar, NewFuncMixin): pass
class my_spam(foo.spam, NewFuncMixin): pass
class my_eggs(foo.eggs, NewFuncMixin): pass
What about a mixin class? The pattern is
class My_mixin(object):
    def new_func(self): pass

class My_spam(My_mixin, foo.spam): pass
class My_eggs(My_mixin, foo.eggs): pass
Mixins should inherit from object and go on the left of the inheritance list so that the mixin's methods get name priority. Within the mixin you can then wrap any method of the superclass:
class My_mixin(object):
    def bar_method(self):
        # stuff
        bar_result = super(My_mixin, self).bar_method()
        # more stuff
        return bar_result  # or my_result based on bar_result
You can of course completely override the method instead of wrapping it.
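A complete, runnable instance of that wrapping pattern. Bar stands in for foo.bar from the question, and the return values (10, +1) are made up for illustration:

```python
class Bar:  # stands in for foo.bar from the question
    def bar_method(self):
        return 10

class MyMixin:
    def bar_method(self):
        # stuff before
        bar_result = super().bar_method()
        # more stuff: post-process the superclass result
        return bar_result + 1

class MyBar(MyMixin, Bar):
    pass
```

Because MyMixin is leftmost in the bases, MyBar().bar_method() hits the mixin first, which then delegates to Bar via super().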

Python: find classes in module before class is defined

I have a python class in a module, and I have a few methods within it that need to have a list of certain other classes within the same module. Here is how I'm doing it right now:
module.py
class Main:
    @staticmethod
    def meth1():
        for c in classes:
            ...  # do something
    @staticmethod
    def meth2():
        for c in classes:
            ...  # do something

class Class1:
    pass

class Class2:
    pass

class Class3:
    pass

classes = [Class1, Class3]
A few things I would like to improve:
I'd like to put the classes list somewhere more prominent: ideally either outside all classes at the top of the module file, or as a class attribute of Main outside meth1 and meth2. The purpose is to make it easier to find if someone needs to add another class definition.
If possible, I'd like to build the list programmatically so I don't need to define it explicitly to begin with. This eliminates the need for #1 (though I'd still like it to be prominent). To do this, I need a way to list all classes defined within the same module. The closest I've been able to come is dir() or locals(), but they also list imported classes, methods, and modules. I would also need some way to identify the classes I want; I can do that with an attribute on the classes, but if there's a more elegant way, that would be nice.
Is what I'm trying to do even possible?
Personally, I would use a decorator to mark the important classes. You can place the list that holds them at the top of the file, where it will be noticeable.
Here's a simple example:
# Classes are added here if they are important, because...
important_classes = []

def important(cls):
    important_classes.append(cls)
    return cls

@important
class ClassA(object):
    pass

class ClassB(object):
    pass

@important
class ClassC(object):
    pass

# Now you can use the important_classes list however you like.
print(important_classes)
# => [<class '__main__.ClassA'>, <class '__main__.ClassC'>]
There may be better ways to achieve this, but I would make all of these subclasses of a holder, then use __subclasses__() to pull them all:
class Main:
    def meth1(self):
        for c in Holder.__subclasses__():
            ...  # do something
    def meth2(self):
        for c in Holder.__subclasses__():
            ...  # do something

class Holder(object):
    pass

class Class1(Holder):
    pass

class Class2(Holder):
    pass

class Class3(Holder):
    pass
You could even make them subclasses of Main if you wanted to, and then pull them with a classmethod:
class Main(object):
    @classmethod
    def meth1(cls):
        for c in cls.__subclasses__():
            ...  # do something

class Class1(Main): pass
You do need to inherit from object in Python 2 for this to work.
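Put together, a minimal runnable version of the holder approach might look like this. The class_names helper is a made-up stand-in for meth1/meth2, just to show what iterating over __subclasses__() yields:

```python
class Holder:
    pass

class Class1(Holder):
    pass

class Class2(Holder):
    pass

class Main:
    @staticmethod
    def class_names():
        # __subclasses__() lists the direct subclasses of Holder.
        return sorted(c.__name__ for c in Holder.__subclasses__())
```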
Your list seems to be targeting a subset of the available classes in the module, so at some point you will have to specify the classes you are targeting.
import sys

target_classes = ["Class1", "Class3"]

class Main:
    def __init__(self, classes):
        self.target_classes = classes
    def meth1(self):
        for s in self.target_classes:
            C = getattr(sys.modules[__name__], s)
            C().speak()
    def meth2(self):
        for c in self.target_classes:
            print(c)
            # do something

class Class1:
    def speak(self):
        print("woof")

class Class2:
    def speak(self):
        print("squeak")

class Class3:
    def speak(self):
        print("meow")

Main(target_classes).meth1()
--output:--
woof
meow
You can use inspect.
First, get the list of local variables:
local_vars = locals().values()
Then we need to inspect each one:
import inspect
local_vars = [i for i in local_vars if inspect.isclass(i)]
To get only classes locally defined, check if cls.__module__ == __name__ as follows:
def get_classes():
    global_vars = list(globals().values())
    classes = [i for i in global_vars if inspect.isclass(i)]
    return [i for i in classes if i.__module__ == __name__]
The overall idea is this: inspect allows you to examine live objects, and iterating over all global variables lets you inspect everything in the current namespace. The final part, finding which classes are defined locally, is done by checking whether the class's module matches the current namespace: cls.__module__ == __name__.
Finally, for Python 3 compatibility, I've added list(globals().values()), since otherwise the dictionary size would change during iteration. For Python 2, since dict.values() returns a list, this can be omitted.
EDIT:
For further filtering, you can also use specific class attributes or other, as was mentioned in the comments. This is great if you are worried about restructuring your module into a package later.
def get_classes(name='target'):
    global_vars = list(globals().values())
    classes = [i for i in global_vars if inspect.isclass(i)]
    return [i for i in classes if hasattr(i, name)]
I am not sure if this is the best practice, but this will do what you need:
class Main:
    def __init__(self, locals_dict):
        self.classes = []
        for name, val in locals_dict.items():
            if name[:5] == 'Class':
                self.classes.append(val)
    def meth1(self):
        for c in self.classes:
            pass
    def meth2(self):
        for c in self.classes:
            pass

class Class1:
    pass

class Class2:
    pass

class Class3:
    pass

main = Main(locals())
print(main.classes)
In current versions of Python, you can also use from __future__ import annotations, which defers evaluation of annotations so that classes can be referenced in type hints before they are defined.

Is there a benefit to defining a class inside another class in Python?

What I'm talking about here are nested classes. Essentially, I have two classes that I'm modeling. A DownloadManager class and a DownloadThread class. The obvious OOP concept here is composition. However, composition doesn't necessarily mean nesting, right?
I have code that looks something like this:
class DownloadThread:
    def foo(self):
        pass

class DownloadManager:
    def __init__(self):
        self.dwld_threads = []
    def create_new_thread(self):
        self.dwld_threads.append(DownloadThread())
But now I'm wondering if there's a situation where nesting would be better. Something like:
class DownloadManager:
    class DownloadThread:
        def foo(self):
            pass
    def __init__(self):
        self.dwld_threads = []
    def create_new_thread(self):
        self.dwld_threads.append(DownloadManager.DownloadThread())
You might want to do this when the "inner" class is a one-off that will never be used outside the definition of the outer class. For example, to use a metaclass it's sometimes handy to do

class Foo(object):
    class __metaclass__(type):
        ...

instead of defining the metaclass separately, if you're only using it once. (Note that this __metaclass__ attribute is Python 2 syntax.)
The only other time I've used nested classes like that, I used the outer class only as a namespace to group a bunch of closely related classes together:
class Group(object):
    class cls1(object):
        ...
    class cls2(object):
        ...
Then from another module, you can import Group and refer to these as Group.cls1, Group.cls2 etc. However one might argue that you can accomplish exactly the same (perhaps in a less confusing way) by using a module.
I don't know Python, but your question seems very general. Ignore me if it's specific to Python.
Class nesting is all about scope. If you think that one class will only make sense in the context of another one, then the former is probably a good candidate to become a nested class.
It is a common pattern to make helper classes private, nested classes.
There is another use for nested classes: constructing inherited classes whose enhanced functionality is encapsulated in a specific nested class.
See this example:
class foo:
    class bar:
        ...  # functionality of a specific sub-feature of foo
    def __init__(self):
        self.a = self.bar()
        ...
    ...  # other features of foo

class foo2(foo):
    class bar(foo.bar):
        ...  # enhanced functionality for this specific feature
    def __init__(self):
        foo.__init__(self)
Note that in the constructor of foo, the line self.a = self.bar() will construct a foo.bar when the object being constructed is actually a foo object, and a foo2.bar when it is actually a foo2 object.
If the class bar were instead defined outside class foo, as well as its inherited version (call it bar2), then defining the new class foo2 would be much more painful: the constructor of foo2 would need its first line replaced by self.a = bar2(), which implies rewriting the whole constructor.
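A concrete, runnable instance of that pattern (the describe method and its return strings are invented for illustration):

```python
class Foo:
    class Bar:
        def describe(self):
            return "base bar"

    def __init__(self):
        # self.Bar is looked up on the actual type of the instance, so a
        # subclass that shadows Bar gets its own version constructed here.
        self.a = self.Bar()

class Foo2(Foo):
    class Bar(Foo.Bar):
        def describe(self):
            return "enhanced bar"
```

Foo2 reuses Foo's constructor unchanged, yet its instances carry the enhanced nested class.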
You could use a class as a class generator, like this (in some off-the-cuff code):
class gen(object):
    class base_1(object): pass
    ...
    class base_n(object): pass
    def __init__(self, ...):
        ...
    def mk_cls(self, ..., type):
        '''makes a class based on the type passed in, the current state of
        the class, and the other inputs to the method'''
I feel that when you need this functionality it will be very clear to you. If you don't need to do something similar, then it probably isn't a good use case.
There is really no benefit to doing this, except if you are dealing with metaclasses.
The class: suite really isn't what you think it is. It is a weird scope, and it does strange things. It doesn't even really make a class! It is just a way of collecting some variables: the name of the class, the bases, a little dictionary of attributes, and a metaclass.
The name, the dictionary, and the bases are all passed to the function that is the metaclass, and the result is then assigned to the class's name in the scope where the class: suite was.
What you can gain by messing with metaclasses, and indeed by nesting classes within your stock standard classes, is harder to read code, harder to understand code, and odd errors that are terribly difficult to understand without being intimately familiar with why the 'class' scope is entirely different to any other python scope.
A good use case for this feature is Error/Exception handling, e.g.:
class DownloadManager(object):
    class DownloadException(Exception):
        pass
    def download(self):
        ...
Now whoever reads the code knows all the possible exceptions related to this class.
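On the caller's side, the nested exception is referenced through the class, which keeps the relationship explicit. The download body here is a made-up failure path, and fetch_or_report is a hypothetical helper, both just to exercise the pattern:

```python
class DownloadManager:
    class DownloadException(Exception):
        pass

    def download(self, url):
        # Hypothetical failure, just to exercise the exception.
        raise self.DownloadException("could not fetch %s" % url)

def fetch_or_report(url):
    try:
        DownloadManager().download(url)
        return "ok"
    except DownloadManager.DownloadException:
        return "failed"
```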
Either way, defined inside or outside of a class, would work. Here is an employee pay schedule program where the helper class EmpInit is embedded inside the class Employee:
class Employee:
    def level(self, j):
        return j * 5E3

    def __init__(self, name, deg, yrs):
        self.name = name
        self.deg = deg
        self.yrs = yrs
        self.empInit = Employee.EmpInit(self.deg, self.level)
        self.base = Employee.EmpInit(self.deg, self.level).pay

    def pay(self):
        if self.deg in self.base:
            return self.base[self.deg]() + self.level(self.yrs)
        print(f"Degree {self.deg} is not in the database {self.base.keys()}")
        return 0

    class EmpInit:
        def __init__(self, deg, level):
            self.level = level
            self.j = deg
            self.pay = {1: self.t1, 2: self.t2, 3: self.t3}

        def t1(self): return self.level(1*self.j)
        def t2(self): return self.level(2*self.j)
        def t3(self): return self.level(3*self.j)

if __name__ == '__main__':
    for loop in range(10):
        lst = input("Enter name, degree and years : ").split(' ')
        e1 = Employee(lst[0], int(lst[1]), int(lst[2]))
        print(f'Employee {e1.name} with degree {e1.deg} and years {e1.yrs} is making {e1.pay()} dollars')
        print("EmpInit deg {0}\nlevel {1}\npay[deg]: {2}".format(e1.empInit.j, e1.empInit.level, e1.base[e1.empInit.j]))
To define it outside, just un-indent EmpInit and change Employee.EmpInit() to simply EmpInit(), as a regular "has-a" composition. However, since Employee is the controller of EmpInit and users don't instantiate or interface with it directly, it makes sense to define it inside, as it is not a standalone class. Also note that the instance method level() is designed to be called in both classes here; it could therefore conveniently be defined as a static method of Employee, so that we don't need to pass it into EmpInit and can instead invoke it with Employee.level().
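That last suggestion, as a small sketch trimmed to the level/t1 plumbing only (EmpInit is un-nested here purely for brevity):

```python
class Employee:
    @staticmethod
    def level(j):
        # Pay step: 5000 per unit, as in the example above.
        return j * 5E3

class EmpInit:
    def __init__(self, deg):
        self.j = deg

    def t1(self):
        # No level callback has to be passed in any more; the static
        # method is invoked directly on the Employee class.
        return Employee.level(1 * self.j)
```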
