Python, executing extra code at method definition - python

I am writing a python API/server to allow an external device (microcontroller) to remotely call methods of an object by sending a string with the name of the method. These methods would be stored in a dictionary, e.g.:
class Server:
    ...
    functions = {}

    def register(self, func):
        self.functions[func.__name__] = func

    def call(self, func_name, args):
        self.functions[func_name](*args)
    ...
I know that I could define functions externally to the class definition and register them manually, but I would really like the registering step to be done automatically. Consider the following class:
class MyServer(Server):
    ...
    def add(self, a, b):
        print a+b

    def sub(self, a, b):
        print a-b
    ...
It would work by subclassing a server class and by defining methods to be called. How could I get the methods to be automatically registered in the functions dictionary?
One way I thought it could be done is with a metaclass that looks for a pattern in the method names and, if a match is found, adds that method to the functions dictionary. It seems overkill...
Would it be possible to decorate the methods to be registered? Can someone give me a hint to the simplest solution to this problem?

There is no need to construct a dictionary; just use the getattr() built-in function:
def call(self, func_name, args):
    getattr(self, func_name)(*args)
Python actually uses a dictionary to access attributes on objects anyway (it's called __dict__), but using getattr() is better than accessing it directly.
If you really want to construct that dict for some reason, then look at the inspect module:
def __init__(self, ...):
    self.functions = dict(inspect.getmembers(self, inspect.ismethod))
If you want to pick specific methods, you could use a decorator to do that, but as BrenBarn points out, the instance doesn't exist at the time the methods are decorated, so you need to use the mark and recapture technique to do what you want.
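For completeness, here is a minimal sketch of that mark-and-recapture idea (the remote decorator name and the __init__ scan are illustrative, not from the original answer): the decorator only marks the function at definition time, and the instance collects the marked methods when it is created.
import inspect

def remote(func):
    func._is_remote = True            # mark: set a flag on the plain function
    return func

class Server(object):
    def __init__(self):
        self.functions = {}
        # recapture: scan the bound methods and register only the marked ones
        for name, method in inspect.getmembers(self, inspect.ismethod):
            if getattr(method, '_is_remote', False):
                self.functions[name] = method

    def call(self, func_name, args):
        return self.functions[func_name](*args)

class MyServer(Server):
    @remote
    def add(self, a, b):
        return a + b

server = MyServer()
server.call('add', (1, 2))            # -> 3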

Related

Call specific method from parent class in multiple inheritance - Python

I have one class with multiple inheritance. I would like to concatenate the output from some parents' methods that share the same name. Ideally, I would be able to do this without going through all the parent classes, but by explicitly selecting the ones I want.
class my_class1:
    def common_method(self): return ['dependency_1']

class my_class2:
    def common_method(self): return ['dependency_2']

class my_class3:
    def whatever(self): return 'ANYTHING'

class composite(my_class1, my_class2, my_class3):
    def do_something_important(self):
        return <my_class1.common_method()> + <my_class2.common_method()>
Since you don't want to use the language mechanisms for calling super-methods (which are designed to go through all the methods in the superclasses, even ones that are not known at the time the code is written), just call the methods explicitly on the classes you want, using the class name.
The only thing that has to be done differently is that you have to call the method on the class, not on the instance, and pass the instance in manually as the first parameter. Python's automatic self reference is only good when calling the method on the most derived subclass (from which point, in a more common design, it would use super to run its counterparts in the superclasses).
For your example to work, you simply have to write it like this:
class my_class1:
    def common_method(self): return ['dependency_1']

class my_class2:
    def common_method(self): return ['dependency_2']

class my_class3:
    def whatever(self): return 'ANYTHING'

class composite(my_class1, my_class2, my_class3):
    def do_something_important(self):
        return my_class1.common_method(self) + my_class2.common_method(self)
Note, however, that if any of the common_methods were to call super().common_method in a common ancestor class, that super-method would be run once for each explicit invocation of a subclass's .common_method.
If you wanted to specialize that, it would be tough to do.
In other words, if you want a "super" counterpart that lets you specify which superclasses to visit when calling the method, and that ensures any super-method called by those runs only once - that is feasible, but complicated and error prone. If you can use explicit classes like in this example, it is 100 times simpler.
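To make that caveat concrete, here is a small illustration of my own (the base class is invented for the example): the shared ancestor's common_method runs once per explicit call, and super() inside each parent still follows the MRO of the composite instance.
class base:
    def common_method(self):
        return ['base']

class my_class1(base):
    def common_method(self):
        return super().common_method() + ['dependency_1']

class my_class2(base):
    def common_method(self):
        return super().common_method() + ['dependency_2']

class composite(my_class1, my_class2):
    def do_something_important(self):
        return my_class1.common_method(self) + my_class2.common_method(self)

print(composite().do_something_important())
# ['base', 'dependency_2', 'dependency_1', 'base', 'dependency_2']
# base.common_method ran twice, and the first explicit call also went through
# my_class2, because super() follows the MRO of the composite instance.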

How to avoid parameter type in function's name?

I have a function foo that takes a parameter stuff.
Stuff can be something in a database, and I'd like to create a function that takes a stuff_id, gets the stuff from the db, and executes foo.
Here's my attempt to solve it:
1/ Create a second function with suffix from_stuff_id
def foo(stuff):
    ...  # do something

def foo_from_stuff_id(stuff_id):
    stuff = get_stuff(stuff_id)
    foo(stuff)
2/ Modify the first function
def foo(stuff=None, stuff_id=None):
    if stuff_id:
        stuff = get_stuff(stuff_id)
    ...  # do something
I don't like either way.
What's the most pythonic way to do it?
Assuming foo is the main component of your application, go with your first way. Each function should have a different purpose. The moment you combine multiple purposes into a single function, you can easily get lost in long streams of code.
If, however, some other function can also provide stuff, then go with the second.
The only thing I would add is to make sure you add docstrings (PEP-257) to each function to explain in words the role of the function. If necessary, you can also add comments to your code.
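A rough sketch of that advice applied to the first approach (the docstring wording here is just an example):
def foo(stuff):
    """Do something with an already-loaded stuff object."""
    ...

def foo_from_stuff_id(stuff_id):
    """Look the stuff up by its id, then delegate to foo()."""
    foo(get_stuff(stuff_id))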
I'm not a big fan of type overloading in Python, but this is one of the cases where I might go for it if there's really a need:
def foo(stuff):
    if isinstance(stuff, int):
        stuff = get_stuff(stuff)
    ...
With type annotations it would look like this:
from typing import Union

def foo(stuff: Union[int, Stuff]):
    if isinstance(stuff, int):
        stuff = get_stuff(stuff)
    ...
It basically depends on how you've defined all these functions. If you're importing get_stuff from another module, the second approach is more Pythonic, because from an OOP perspective you create functions for one particular purpose, and in this case, since you've already defined get_stuff, you don't need to call it within another function.
If get_stuff is not defined in another module, then it depends on whether you are using classes or not. If you're using a class and you want to use all of these together, you can use a method for either accessing or connecting to the database and use that method within other methods like foo.
Example:
from some_module import get_stuff

class MyClass:
    def __init__(self, *args, **kwargs):
        # ...
        self.stuff_id = kwargs['stuff_id']

    def foo(self):
        stuff = get_stuff(self.stuff_id)
        # do stuff
Or, if the functionality of foo depends on the existence of stuff, you can keep stuff on the instance and simply check its validity:
class MyClass:
    def __init__(self, *args, **kwargs):
        # ...
        _stuff_id = kwargs['stuff_id']
        self.stuff = get_stuff(_stuff_id)  # can return None

    def foo(self):
        if self.stuff:
            ...  # do stuff
        else:
            ...  # do other stuff
Another neat design pattern for such situations might be to use a dispatcher function (or a method in a class) that delegates the execution to different functions based on the state of stuff:
def delegator(stuff, stuff_id):
    if stuff:  # or other condition
        foo(stuff)
    else:
        foo(get_stuff(stuff_id))

Benefit of using custom initialize function instead of `__init__` in python

I was looking into the following code.
On many occasions the __init__ method is not really used but there is a custom initialize function like in the following example:
def __init__(self):
    pass

def initialize(self, opt):
    # ...
This is then called as:
data_loader = CustomDatasetDataLoader()
# other instance method is called
data_loader.initialize(opt)
I see the problem that variables, that are used in other instance methods, could still be undefined, if one forgets to call this custom initialize function. But what are the benefits of this approach?
Some APIs out in the wild (such as inside setuptools) have a similar kind of thing, and they use it to their advantage. The __init__ call could be used for the low-level internal API, while public constructors are defined as classmethods for the different ways that one might construct objects. For instance, in pkg_resources.EntryPoint, the way to create instances of this class is to make use of the parse classmethod. A similar approach can be followed if a custom initialization is desired:
class CustomDatasetDataLoader(object):
    @classmethod
    def create(cls):
        """standard creation"""
        return cls()

    @classmethod
    def create_with_initialization(cls, opt):
        """create with special options."""
        inst = cls()
        # assign things from opt to inst, like
        # inst.some_update_method(opt.something)
        # inst.attr = opt.some_attr
        return inst
This way users of the class will not need two lines of code to do what a single line could do, they can just simply call CustomDatasetDataLoader.create_with_initialization(some_obj) if that is what they want, or call the other classmethod to construct an instance of this class.
Edit: I see, you had an example linked (wish underlining links didn't go out of fashion) - that particular usage and implementation feels like a poor approach, when a classmethod (or just relying on the standard __init__) would be sufficient.
However, if that initialize function were to be an interface with some other system that receives an object of a particular type to invoke some method with it (e.g. something akin to the visitor pattern) it might make sense, but as it is it really doesn't.

Why do we use @staticmethod?

I just can't see why we need to use @staticmethod. Let's start with an example.
class test1:
    def __init__(self, value):
        self.value = value

    @staticmethod
    def static_add_one(value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

a = test1(3)
print(a.new_val)  ## >>> 4

class test2:
    def __init__(self, value):
        self.value = value

    def static_add_one(self, value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

b = test2(3)
print(b.new_val)  ## >>> 4
In the example above, the method static_add_one in the two classes does not require the instance of the class (self) in its calculation.
The method static_add_one in the class test1 is decorated by @staticmethod and works properly.
But at the same time, the method static_add_one in the class test2, which has no @staticmethod decoration, also works properly, by using a trick that provides a self argument but doesn't use it at all.
So what is the benefit of using @staticmethod? Does it improve the performance? Or is it just due to the zen of python, which states that "Explicit is better than implicit"?
The reason to use staticmethod is if you have something that could be written as a standalone function (not part of any class), but you want to keep it within the class because it's somehow semantically related to the class. (For instance, it could be a function that doesn't require any information from the class, but whose behavior is specific to the class, so that subclasses might want to override it.) In many cases, it could make just as much sense to write something as a standalone function instead of a staticmethod.
Your example isn't really the same. A key difference is that, even though you don't use self, you still need an instance to call static_add_one --- you can't call it directly on the class with test2.static_add_one(1). So there is a genuine difference in behavior there. The most serious "rival" to a staticmethod isn't a regular method that ignores self, but a standalone function.
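A quick sketch of that comparison (the names here are made up for illustration): the staticmethod is the same callable as the standalone function, just kept in the class namespace.
# standalone function: usable anywhere, but not attached to any class
def add_one(value):
    return value + 1

class Counter:
    # staticmethod: same logic, kept in the class namespace so it can be
    # found via the class and overridden in subclasses
    @staticmethod
    def add_one(value):
        return value + 1

print(add_one(1))            # 2
print(Counter.add_one(1))    # 2 - no instance needed
print(Counter().add_one(1))  # 2 - also callable on an instance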
Today I suddenly found a benefit of using @staticmethod.
If you create a staticmethod within a class, you don't need to create an instance of the class before using it.
For example,
class File1:
    def __init__(self, path):
        out = self.parse(path)

    def parse(self, path):
        # ..parsing works..
        return x

class File2:
    def __init__(self, path):
        out = self.parse(path)

    @staticmethod
    def parse(path):
        # ..parsing works..
        return x

if __name__ == '__main__':
    path = 'abc.txt'
    File1.parse(path)  # TypeError: unbound method parse() ....
    File2.parse(path)  # Goal!!!!!!!!!!!!!!!!!!!!
Since the method parse is strongly related to the classes File1 and File2, it is more natural to put it inside the class. However, sometimes this parse method may also be used in other classes under some circumstances. If you want to do so using File1, you must create an instance of File1 before calling the method parse. While using staticmethod in the class File2, you may directly call the method by using the syntax File2.parse.
This makes your work more convenient and natural.
I will add something other answers didn't mention. It's not only a matter of modularity, of putting something next to other logically related parts. It's also that the method could be non-static at another point of the hierarchy (i.e. in a subclass or superclass) and thus participate in polymorphism (type-based dispatching). So if you put that function outside the class, you will be precluding subclasses from effectively overriding it. Now, say you realize you don't need self in function C.f of class C; you have three options:
Put it outside the class. But we just decided against this.
Do nothing new: while unused, still keep the self parameter.
Declare that you are not using the self parameter, while still letting other C methods call f as self.f, which is required if you wish to keep open the possibility of further overrides of f that do depend on some instance state.
Option 2 demands less conceptual baggage (you already have to know about self and methods-as-bound-functions, because it's the more general case). But you may still prefer to be explicit about self not being used (and the interpreter could even reward you with some optimization, not having to partially apply a function to self). In that case, you pick option 3 and add @staticmethod on top of your function.
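A small sketch of option 3, my own illustration: the base class marks f static, but a subclass can still override it with a regular method, because callers go through self.f.
class C:
    @staticmethod
    def f(x):
        return x * 2       # no instance state needed here

    def run(self, x):
        return self.f(x)   # dispatching through self keeps overrides possible

class D(C):
    def __init__(self, factor):
        self.factor = factor

    def f(self, x):        # override that does use instance state
        return x * self.factor

print(C().run(3))    # 6
print(D(10).run(3))  # 30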
Use #staticmethod for methods that don't need to operate on a specific object, but that you still want located in the scope of the class (as opposed to module scope).
Your example in test2.static_add_one wastes its time passing an unused self parameter, but otherwise works the same as test1.static_add_one. Note that this extraneous parameter can't be optimized away.
One example I can think of is in a Django project I have, where a model class represents a database table, and an object of that class represents a record. There are some functions used by the class that are stand-alone and do not need an object to operate on, for example a function that converts a title into a "slug", which is a representation of the title that follows the character set limits imposed by URL syntax. The function that converts a title to a slug is declared as a staticmethod precisely to strongly associate it with the class that uses it.
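Something along these lines (a rough, self-contained sketch, not the actual project code; Django's own django.utils.text.slugify would normally be used):
import re

class Article:                     # stand-in for a Django model class
    def __init__(self, title):
        self.title = title
        self.slug = self.slugify(title)

    @staticmethod
    def slugify(title):
        # lowercase, keep only letters/digits, join words with hyphens
        return '-'.join(re.findall(r'[a-z0-9]+', title.lower()))

print(Article.slugify("Hello, World!"))  # hello-world
print(Article("Hello, World!").slug)     # hello-world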

Does Python have something like anonymous inner classes of Java?

In Java you can define a new class inline using anonymous inner classes. This is useful when you need to rewrite only a single method of the class.
Suppose that you want to create a subclass of OptionParser that overrides only a single method (for example exit()). In Java you can write something like this:
new OptionParser() {
    public void exit() {
        // body of the method
    }
};
This piece of code creates an anonymous class that extends OptionParser and overrides only the exit() method.
Is there a similar idiom in Python? Which idiom is used in these circumstances?
You can use the type(name, bases, dict) builtin function to create classes on the fly. For example:
op = type("MyOptionParser", (OptionParser,object), {"foo": lambda self: "foo" })
op().foo()
Since OptionParser isn't a new-style class, you have to explicitly include object in the list of base classes.
Java uses anonymous classes mostly to imitate closures or simply code blocks. Since in Python you can easily pass around methods there's no need for a construct as clunky as anonymous inner classes:
def printStuff():
    print "hello"

def doit(what):
    what()

doit(printStuff)
Edit: I'm aware that this is not what is needed in this special case. I just described the most common Python solution to the problem most commonly solved by anonymous inner classes in Java.
You can accomplish this in three ways:
Proper subclass (of course)
a custom method that you invoke with the object as an argument
(what you probably want) -- adding a new method to an object (or replacing an existing one).
Example of option 3 (edited to remove use of the "new" module -- it's deprecated, I did not know):
import types

class someclass(object):
    val = "Value"

    def some_method(self):
        print self.val

def some_method_upper(self):
    print self.val.upper()

obj = someclass()
obj.some_method()

obj.some_method = types.MethodType(some_method_upper, obj)
obj.some_method()
Well, classes are first class objects, so you can create them in methods if you want. e.g.
from optparse import OptionParser

def make_custom_op(i):
    class MyOP(OptionParser):
        def exit(self):
            print 'custom exit called', i
    return MyOP

custom_op_class = make_custom_op(3)
custom_op = custom_op_class()
custom_op.exit()   # prints 'custom exit called 3'
dir(custom_op)     # shows all the regular attributes of an OptionParser
But, really, why not just define the class at the normal level? If you need to customise it, put the customisation in as arguments to __init__.
(edit: fixed typing errors in code)
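As a rough sketch of that last suggestion (the exit_message argument is invented for this example; it is not part of OptionParser):
from optparse import OptionParser

class MyOptionParser(OptionParser):
    def __init__(self, exit_message, *args, **kwargs):
        OptionParser.__init__(self, *args, **kwargs)  # explicit base call works even if the base is old-style
        self.exit_message = exit_message

    def exit(self, status=0, msg=None):
        print(self.exit_message)

p = MyOptionParser("custom exit")
p.exit()   # prints 'custom exit'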
Python doesn't support this directly (anonymous classes) but because of its terse syntax it isn't really necessary:
class MyOptionParser(OptionParser):
    def exit(self, status=0, msg=None):
        ...  # body of method

p = MyOptionParser()
The only downside is you add MyOptionParser to your namespace, but as John Fouhy pointed out, you can hide that inside a function if you are going to do it multiple times.
Python probably has better ways to solve your problem. If you could provide more specific details of what you want to do it would help.
For example, if you need to change the method being called in a specific point in code, you can do this by passing the function as a parameter (functions are first class objects in python, you can pass them to functions, etc). You can also create anonymous lambda functions (but they're restricted to a single expression).
Also, since Python is very dynamic, you can change methods of an object after it's been created: object.method1 = alternative_impl1. It's actually a bit more complicated, though; see gnud's answer.
In Python you have anonymous functions, declared using the lambda keyword. I do not like them very much - they are not so readable, and they have limited functionality.
However, what you are talking about may be implemented in python with a completely different approach:
class a(object):
    def meth_a(self):
        print "a"

def meth_b(obj):
    print "b"

b = a()
b.__class__.meth_a = meth_b
You can always hide a class behind a variable:
class var(...):
    pass

var = var()
instead of
var = new ...() {};
This is what you would do in Python 3.7
#!/usr/bin/env python3

class ExmapleClass:
    def exit(self):
        print('this should NOT print since we are going to override')

ExmapleClass = type('', (ExmapleClass,), {'exit': lambda self: print('you should see this printed only')})()
ExmapleClass.exit()
I do this in python3 usually with inner classes
class SomeSerializer():
    class __Paginator(Paginator):
        page_size = 10

    # defining it for e.g. Rest:
    pagination_class = __Paginator

    # you could also access it to e.g. create an instance via a method:
    def get_paginator(self):
        return self.__Paginator()
As I used a double underscore, this mixes the idea of name "mangling" with inner classes: from outside you can still access the inner class with SomeSerializer._SomeSerializer__Paginator (and so can subclasses), but SomeSerializer.__Paginator will not work, which might or might not be your wish if you want it a bit more "anonymous".
However, I suggest using the "private" notation with a single underscore if you do not need the mangling.
In my case, all I need is a quick subclass to set some class attributes, followed by assigning it to the class attribute of my RestSerializer class, so the double underscore denotes "do not use it further at all" and might change to no underscores if I start reusing it elsewhere.
Being perverse, you could use the throwaway name _ for the derived class name:
class _(OptionParser):
    def exit(self):
        pass  # your override impl
Here is a fancier way of doing Maciej's method.
I defined the following decorator:
def newinstance(*args, **kwargs):
    def decorator(cls):
        return cls(*args, **kwargs)
    return decorator
The following pieces of code are roughly equivalent (and this also works with args!):
// Java
MyClass obj = new MyClass(arg) {
    public void method() {
        // body of the method
    }
};

# Python
@newinstance(arg)
class obj(MyClass):
    def method(self):
        pass  # body of the method
You can use this code from within a class/method/function if you want to define an "inner" class instance.
