Ordering class methods without instantiation - python

Related: inspect.getmembers in order?
Background: I'm trying to build a tool that creates a Python file according to a certain specification. One option is to give it, as input, a Python module containing an abstract class declaration, and have it create a base class that inherits from that abstract class but also adds a default implementation for all the abstract methods.
For example: say I have a file called Abstract.py that contains:
from abc import ABCMeta, abstractmethod

class Abstract(object):
    __metaclass__ = ABCMeta

    @abstractmethod
    def first(self):
        pass

    @abstractmethod
    def second(self):
        pass
So then the output of my tool would be a file called BaseClass.py that contains:
class BaseClass(Abstract):
    def first(self):
        pass

    def second(self):
        pass
I want the methods in the BaseClass to be in the same order as in Abstract.
My question is: Is there a way to sort the methods according to their appearance without relying on built-in method comparison (which is based on memory address comparison)? I'm also hoping to avoid any kind of file-parsing, if possible.
Please note that I cannot create an instance of Abstract so the solution mentioned in the above related question will not work for me.

At the time of class creation in Python 2 (that is, when the interpreter gets past the class body while running a file, which happens in sequence), the class itself is created as an object. At this point all variables and methods defined in the class body are passed, as a dictionary, to a call to "type" (which is the default metaclass).
As you know, dictionaries in Python have no ordering, so ordinarily this is impossible in Python 2. It is possible in Python 3 because metaclasses can implement a __prepare__ method which returns the mapping object that will be used to build the class body - so instead of an ordinary dict, __prepare__ can return an OrderedDict.
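For reference, here is a minimal Python 3 sketch of that mechanism (the OrderedMeta and _definition_order names are mine, not any standard API):

from collections import OrderedDict

class OrderedMeta(type):
    @classmethod
    def __prepare__(mcs, name, bases):
        # This mapping becomes the namespace the class body executes in,
        # so it records attributes in definition order.
        return OrderedDict()

    def __new__(mcs, name, bases, namespace):
        cls = type.__new__(mcs, name, bases, dict(namespace))
        cls._definition_order = [k for k in namespace if not k.startswith('__')]
        return cls

class Example(metaclass=OrderedMeta):
    def first(self): pass
    def second(self): pass

print(Example._definition_order)  # ['first', 'second']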
However, in your case, all relevant methods are decorated with @abstractmethod - we can take advantage of that not only to annotate the methods as abstract, but also to mark down the order in which they appear.
You can either wrap the abstractmethod decorator, or create another decorator and use both. I'd favor a new decorator that does both things, in order to keep the line count down.
Also, you have to choose how you will record the order of the methods and make use of it. Ordinarily, iterating over the class's attributes will just iterate over a dictionary (rather, a dictionary proxy), which is unordered - so you need a way to record the given order and a data structure that keeps the ordered methods available. There are some options there - but maybe the most direct thing is to annotate the method order on the methods themselves, and retrieve them with a call to the built-in sorted with an appropriate key parameter. Other means would require either a class decorator or a custom metaclass to work.
So here is an example of what I wrote about:
from abc import abstractmethod, ABCMeta

class OrderedAbstractMethod(object):
    def __init__(self):
        self.counter = 0
    def __call__(self, func):
        func._method_order = self.counter
        self.counter += 1
        return abstractmethod(func)

ordered_abstract_method = OrderedAbstractMethod()

class Abstract(object):
    __metaclass__ = ABCMeta

    @ordered_abstract_method
    def first(self):
        pass

    @ordered_abstract_method
    def second(self):
        pass

    @ordered_abstract_method
    def third(self):
        pass

    @ordered_abstract_method
    def fourth(self):
        pass

print "Unordered methods: ", [method[0] for method in Abstract.__dict__.items() if not method[0].startswith("_")]
# here it printed out - ['second', 'third', 'fourth', 'first']
print "Ordered methods: ", sorted([method for method in Abstract.__dict__.items() if not method[0].startswith("_")], key=lambda m: m[1]._method_order)

Related

Call specific method from parent class in multiple inheritance - Python

I have one class with multiple inheritance. I would like to concatenate the output from some parents' methods that share the same name. Ideally, I would be able to do this without going through all the parent classes, but by selecting explicitly the ones I want.
class my_class1:
    def common_method(self): return ['dependency_1']

class my_class2:
    def common_method(self): return ['dependency_2']

class my_class3:
    def whatever(self): return 'ANYTHING'

class composite(my_class1, my_class2, my_class3):
    def do_something_important(self):
        return <my_class1.common_method()> + <my_class2.common_method()>
Since you don't want to use the language mechanisms to call super-methods (which are designed to go through all the methods in the superclasses, even ones that are not known at the time the code is written), just call the methods explicitly on the classes you want - by using the class name.
The only different thing that has to be done is that you have to call the method from the class, not from the instance, and then insert the instance manually as the first parameter. Python's automatic self reference is only good when calling the method on the most derived subclass (from which point, in a more common design, it would use super to run its counterparts in the superclasses).
For your example to work, you simply have to write it like this:
class my_class1:
    def common_method(self): return ['dependency_1']

class my_class2:
    def common_method(self): return ['dependency_2']

class my_class3:
    def whatever(self): return 'ANYTHING'

class composite(my_class1, my_class2, my_class3):
    def do_something_important(self):
        return my_class1.common_method(self) + my_class2.common_method(self)
Note, however, that if any of the common_methods were to call super().common_method in a common ancestor base, that super-method would be run once for each explicit invocation of a subclass's .common_method.
If you wanted to specialize that, it would be tough to do.
In other words, if you want a "super" counterpart that would allow you to specify which superclasses to visit when calling the method, and ensure any super-method called by those runs only once - that is feasible, but complicated and error prone. If you can use explicit classes like in this example, it is 100 times simpler.
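To make that caveat concrete, here is a small sketch of the double-call behavior (class names are mine):

class Base:
    def common_method(self):
        print("Base.common_method runs")
        return ['base']

class A(Base):
    def common_method(self):
        return ['a'] + super().common_method()

class B(Base):
    def common_method(self):
        return ['b'] + super().common_method()

class Composite(A, B):
    def do_something_important(self):
        # Each explicit call follows the MRO from that class onward,
        # so Base.common_method executes twice (and the A call even
        # passes through B on its way to Base).
        return A.common_method(self) + B.common_method(self)

print(Composite().do_something_important())
# Base.common_method runs
# Base.common_method runs
# ['a', 'b', 'base', 'b', 'base']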

Pythonic multiple inheritance to generate collections of custom classes

I've got two class trees in my Python code:
       BaseComponent             BaseSeries
      /      |      \                |
Resistor  Capacitor  Inductor     ESeries
The BaseSeries class tree implements preferred numbers such as the E-series, and generates sets of values between a pair of powers (e.g. [1.0, 2.2, 4.7, 10, 22, 47, 100, 220, 470] for the E3 series with exponents between 1 and 3).
By default, ESeries and any other instance of BaseSeries creates sequences of float objects. I'd like to use these classes to instead create sequences of Resistor, Capacitor and Inductor objects. Ideally, the individual Resistor, Capacitor, Inductor and ESeries classes would remain usable on their own (i.e. not rely on methods being implemented by other classes).
This sounds like a job for multiple inheritance, but I'm a bit confused about how best to implement this in Python (3). Ideally I'd like to just define something like:
class ResistorESeries(Resistor, ESeries):
    pass

class CapacitorESeries(Capacitor, ESeries):
    pass

class InductorESeries(Inductor, ESeries):
    pass
in order to create classes that yield sequences of resistors, capacitors and inductors, but I don't know how best to tell BaseSeries instances to create objects of type Resistor, Capacitor and Inductor. I can think of two ways, but I can't decide which one is best, and I have a feeling there is a simpler, more Pythonic way that I'm missing:
have BaseSeries contain a property or variable pointing to the element type (e.g. Resistor) set either by the constructor, a class variable in the child class (e.g. Resistor.ELEMENT_TYPE = Resistor) or with an abstract property provided by the child class:
class BaseSeries(object):
    ...
    def elements(self):
        # loop over numbers in this series
        for v in self.values():
            yield self.element_type(v)

    @property
    @abc.abstractmethod
    def element_type(self):
        return NotImplemented

class ESeries(BaseSeries):
    ...

class BaseComponent(object):
    ...
    @property
    def element_type(self):
        return self

class Resistor(BaseComponent):
    ...

class ResistorESeries(Resistor, ESeries):
    # now BaseSeries' `element_type` property is provided by `BaseComponent`
    pass
This would mean ESeries cannot be used on its own as a concrete object, as it does not implement this property/variable, which is not ideal.
use self when creating elements in BaseSeries, where self will, as long as Resistor is earlier in the method resolution order, refer to the desired element:
class BaseSeries(object):
    ...
    def elements(self):
        # loop over numbers in this series
        for v in self.values():
            # self here would refer to `Resistor` in
            # `ResistorESeries` instances
            yield self(v)

class ESeries(BaseSeries):
    ...

class BaseComponent(object):
    ...

class Resistor(BaseComponent):
    ...

class ResistorESeries(Resistor, ESeries):
    pass
This has the downside that, in instances of ESeries not used as a mix-in, self(v) will call ESeries itself, which does not support the correct __init__ signature.
So, does anyone have an idea of how best to do this in a Pythonic way, with maximum ability to reuse classes on their own?
You are likely mixing some concepts there - notably "instances" and "classes" - your example calls that do self(v) are perplexing.
I can't see from your design why the classes in the BaseComponent tree would need to be inherited along the BaseSeries tree: can't the component type simply be an attribute on the BaseSeries class?
It is simply a matter of using a class attribute and, in the code suggested in your first attempt, using a prosaic if statement.
class BaseSeries:
    component = None

    def elements(self):
        # loop over numbers in this series
        for v in self.values():
            yield self.component(v) if self.component else v

class Capacitor(BaseComponent):
    ...

class CapacitorSeries(BaseSeries):
    component = Capacitor
If you think you need multiple inheritance, you can just go for your idea of using a property, and use the same "if" statement there. But if both hierarchies are that orthogonal, I don't see why to force the use of multiple inheritance just because the language permits it.
Maybe you prefer to have it the other way around: a factory method on the component tree that will take an ESeries class as input, and extract the values from that ...
Anyway, you are not making clear the distinction between classes and instances there. Do you need a way to produce several subclasses of "CapacitorESeries", each class for a different value? Or would you need just instances of "Capacitor", each for a different value produced in the series?
class BaseComponent:
    ...
    @classmethod
    def series_factory(cls, series):
        for value in series.values():
            yield cls(value)
Of course, there could be use cases for really needing classes for everything you claim, including the factories for series of classes, but your use of self as a callable in your snippets suggests that your stance on that is not that solid.
In that case, first, you need all methods to make proper use of super, even if they are not supposed to exist across both hierarchies; using super will just call the proper method on the superclasses. For methods like __init__, this is simply required.
If you design a proper __init__ method using super, always with named parameters, your second strategy will work out of the box, just fixing the instantiating call (to something other than self(v)). Using named parameters and passing the remaining parameters to super will ensure each class in the tree consumes what it needs of those parameters - and when Python gets to the root of both your hierarchies and calls object's __init__, no parameters remain.
class BaseSeries:
    def __init__(self, value_min, value_max, **kwargs):
        self.value_min = value_min
        self.value_max = value_max
        super().__init__(**kwargs)

    def elements(self):
        # loop over numbers in this series
        for v in self.values():
            yield self.__class__(value_min=self.value_min, value_max=self.value_max, value=v)

class BaseComponent:
    def __init__(self, value, **kwargs):
        self.value = value
        super().__init__(**kwargs)
    ...

class CapacitorESeries(Capacitor, ESeries):
    pass

Access the python class from method while defining it

I want to access the class on which a method is being defined, at definition time. This can be used, for example, to create aliases for methods with a decorator. This particular case could be implemented without a decorator (alias = original_name), but I would like to use a decorator, primarily because the aliasing will then be visible alongside the method definition at the top, which is useful when the method definition is long.
def method_alias(*aliases):
    def aliased(m):
        class_of_m = ???  # GET class of this method
        for alias in aliases:
            setattr(class_of_m, alias, m)
        return m
    return aliased

class Test():
    @method_alias('check', 'examine')
    def test(self):
        print('I am implemented with name "test"')
Later, I found here that the above could be implemented by using two decorators (first store the aliases as method attributes; later, when the class is already created, add the attributes to the class). Can it be done without decorating the class, i.e. only decorating the method? This requires getting access to the class name in the decorator.
The short answer is no. The contents of the class body are evaluated before the class object is created, i.e. the function test is created and passed to the decorator without class Test already existing. The decorator is therefore unable to obtain a reference to it.
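A quick way to see that ordering (the show decorator is purely illustrative):

def show(f):
    # Runs while the class body is still being executed.
    print('decorating', f.__name__, '- is Test defined yet?', 'Test' in globals())
    return f

class Test:
    @show
    def test(self):
        pass

# Output: decorating test - is Test defined yet? False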
To solve the problem of method aliasing, I reckon three approaches:
Using a class decorator as described by your link.
Using a metaclass, which lets you modify the class's __dict__ before the class object is created. (Implementing a metaclass is actually overriding the default constructor for class objects, see here. Also, the metaclass usage syntax has changed in Python 3.) A sketch of this approach follows the class-decorator example below.
Creating the aliases in the __init__ method for each instance of Test.
The first approach is probably the most straightforward. I wrote another example. It basically does the same as your link, but is more stripped down to make it a bit clearer.
def alias(*aliases):
    def decorator(f):
        f.aliases = set(aliases)
        return f
    return decorator

def apply_aliases(cls):
    for name, elem in list(cls.__dict__.items()):
        if not hasattr(elem, 'aliases'):
            continue
        for alias in elem.aliases:
            setattr(cls, alias, elem)
    return cls

@apply_aliases
class Test(object):
    @alias('check', 'examine')
    def test(self):
        print('I am implemented with name "test"')

Test().test()
Test().check()
Test().examine()
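For completeness, here is a sketch of the second (metaclass) approach in Python 3 syntax, reusing the same marking decorator; the AliasMeta name is mine:

def alias(*aliases):
    def decorator(f):
        f.aliases = set(aliases)
        return f
    return decorator

class AliasMeta(type):
    def __new__(mcs, name, bases, namespace):
        # Copy marked methods under their alias names before the
        # class object is created.
        for attr, value in list(namespace.items()):
            for a in getattr(value, 'aliases', ()):
                namespace[a] = value
        return super().__new__(mcs, name, bases, namespace)

class Test(metaclass=AliasMeta):
    @alias('check', 'examine')
    def test(self):
        print('I am implemented with name "test"')

Test().check()
Test().examine()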

Why do we use @staticmethod?

I just can't see why we need to use @staticmethod. Let's start with an example.
class test1:
    def __init__(self, value):
        self.value = value

    @staticmethod
    def static_add_one(value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

a = test1(3)
print(a.new_val)  ## >>> 4

class test2:
    def __init__(self, value):
        self.value = value

    def static_add_one(self, value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

b = test2(3)
print(b.new_val)  ## >>> 4
In the example above, the method static_add_one in the two classes does not require the instance of the class (self) in its calculation.
The method static_add_one in the class test1 is decorated with @staticmethod and works properly.
But at the same time, the method static_add_one in the class test2, which has no @staticmethod decoration, also works properly, by using a trick that accepts a self argument but doesn't use it at all.
So what is the benefit of using @staticmethod? Does it improve performance? Or is it just due to the Zen of Python, which states that "explicit is better than implicit"?
The reason to use staticmethod is if you have something that could be written as a standalone function (not part of any class), but you want to keep it within the class because it's somehow semantically related to the class. (For instance, it could be a function that doesn't require any information from the class, but whose behavior is specific to the class, so that subclasses might want to override it.) In many cases, it could make just as much sense to write something as a standalone function instead of a staticmethod.
Your example isn't really the same. A key difference is that, even though you don't use self, you still need an instance to call static_add_one --- you can't call it directly on the class with test2.static_add_one(1). So there is a genuine difference in behavior there. The most serious "rival" to a staticmethod isn't a regular method that ignores self, but a standalone function.
Today I suddenly found a benefit of using @staticmethod.
If you create a staticmethod within a class, you don't need to create an instance of the class before using the staticmethod.
For example,
class File1:
    def __init__(self, path):
        out = self.parse(path)

    def parse(self, path):
        ..parsing works..
        return x

class File2:
    def __init__(self, path):
        out = self.parse(path)

    @staticmethod
    def parse(path):
        ..parsing works..
        return x

if __name__ == '__main__':
    path = 'abc.txt'
    File1.parse(path)  # TypeError: unbound method parse() ....
    File2.parse(path)  # Goal!!!!!!!!!!!!!!!!!!!!
Since the method parse is strongly related to the classes File1 and File2, it is more natural to put it inside the class. However, sometimes this parse method may also be used in other classes under some circumstances. If you want to do so using File1, you must create an instance of File1 before calling the method parse. While using a staticmethod in the class File2, you may directly call the method using the syntax File2.parse.
This makes your work more convenient and natural.
I will add something the other answers didn't mention. It's not only a matter of modularity, of putting something next to other logically related parts. It's also that the method could be non-static at another point of the hierarchy (i.e. in a subclass or superclass) and thus participate in polymorphism (type-based dispatching). So if you put that function outside the class you will be precluding subclasses from effectively overriding it. Now, say you realize you don't need self in function C.f of class C; you have three options:
Put it outside the class. But we just decided against this.
Do nothing new: while unused, still keep the self parameter.
Declare that you are not using the self parameter, while still letting other C methods call f as self.f, which is required if you wish to keep open the possibility of further overrides of f that do depend on some instance state.
Option 2 demands less conceptual baggage (you already have to know about self and methods-as-bound-functions, because it's the more general case). But you still may prefer to be explicit about self not being used (and the interpreter could even reward you with some optimization, not having to partially apply a function to self). In that case, you pick option 3 and add @staticmethod on top of your function.
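A minimal sketch of option 3, and of why calling through self.f keeps later overrides working (class names are mine):

class C:
    @staticmethod
    def f(x):
        return x + 1

    def g(self, x):
        # Dispatches through self, so a subclass override still wins.
        return self.f(x) * 2

class D(C):
    def __init__(self, offset):
        self.offset = offset

    def f(self, x):  # no longer static: now depends on instance state
        return x + self.offset

print(C().g(1))    # 4  -> (1 + 1) * 2
print(D(10).g(1))  # 22 -> (1 + 10) * 2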
Use @staticmethod for methods that don't need to operate on a specific object, but that you still want located in the scope of the class (as opposed to module scope).
Your example in test2.static_add_one wastes its time passing an unused self parameter, but otherwise works the same as test1.static_add_one. Note that this extraneous parameter can't be optimized away.
One example I can think of is in a Django project I have, where a model class represents a database table, and an object of that class represents a record. There are some functions used by the class that are stand-alone and do not need an object to operate on, for example a function that converts a title into a "slug", which is a representation of the title that follows the character set limits imposed by URL syntax. The function that converts a title to a slug is declared as a staticmethod precisely to strongly associate it with the class that uses it.
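As a rough illustration of that pattern (the Article class and the slug rule are invented for this example, not the actual project code):

import re

class Article:
    def __init__(self, title):
        self.title = title
        self.slug = self.make_slug(title)

    @staticmethod
    def make_slug(title):
        # Lowercase, then collapse runs of non-alphanumerics into hyphens.
        return re.sub(r'[^a-z0-9]+', '-', title.lower()).strip('-')

print(Article.make_slug("Hello, World!"))  # hello-world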

Python, executing extra code at method definition

I am writing a Python API/server to allow an external device (a microcontroller) to remotely call methods of an object by sending a string with the name of the method. These methods would be stored in a dictionary, e.g.:
class Server:
    ...
    functions = {}

    def register(self, func):
        self.functions[func.__name__] = func

    def call(self, func_name, args):
        self.functions[func_name](*args)
    ...
I know that I could define the functions externally to the class definition and register them manually, but I would really like the registering step to be done automatically. Consider the following class:
class MyServer(Server):
    ...
    def add(self, a, b):
        print a + b

    def sub(self, a, b):
        print a - b
    ...
It would work by subclassing a server class and defining methods to be called. How could I get the methods to be automatically registered in the functions dictionary?
One way I thought it could be done is with a metaclass that looks for a pattern in the method names and, if a match is found, adds those methods to the functions dictionary. It seems overkill...
Would it be possible to decorate the methods to be registered? Can someone give me a hint to the simplest solution to this problem?
There is no need to construct a dictionary, just use the getattr() built-in function:
def call(self, func_name, args):
    getattr(self, func_name)(*args)
Python actually uses a dictionary to access attributes on objects anyway (it's called __dict__ - but using getattr() is better than accessing it directly).
If you really want to construct that dict for some reason, then look at the inspect module:
def __init__(self, ...):
    self.functions = dict(inspect.getmembers(self, inspect.ismethod))
If you want to pick specific methods, you could use a decorator to do that, but as BrenBarn points out, the instance doesn't exist at the time the methods are decorated, so you need to use the mark and recapture technique to do what you want.
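Here is a minimal sketch of that mark-and-recapture idea applied to the question's Server design (the expose decorator name is mine):

def expose(func):
    # Mark: tag the function at definition time.
    func._exposed = True
    return func

class Server:
    def __init__(self):
        # Recapture: scan the finished class for marked methods.
        self.functions = {
            name: getattr(self, name)
            for name in dir(self)
            if getattr(getattr(self, name), '_exposed', False)
        }

    def call(self, func_name, args):
        self.functions[func_name](*args)

class MyServer(Server):
    @expose
    def add(self, a, b):
        print(a + b)

MyServer().call('add', (1, 2))  # prints 3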
