Python factory method with external function

I've read this SO discussion about factory methods, and have an alternate constructor use case.
My class looks like this:
class Foo(object):
    def __init__(self, bar):
        self.bar = bar

    @classmethod
    def from_data(cls, datafile):
        bar = datafile.read_bar()
        # Now I want to process bar in some way
        bar = _process_bar(bar)
        return cls(bar)

    def _process_bar(self, bar):
        return bar + 1
My question is: if a @classmethod factory method wants to use a function in its code, should that function (_process_bar) be:
1. A @classmethod, which seems a bit weird because you won't ever call it like Foo._process_bar()?
2. A method outside of the class Foo but in the same .py file? I'd go with this, but it seems kind of weird. Will those methods always be available to an instance of Foo, regardless of how it was instantiated? (e.g. what if it's saved to a pickle and then reloaded? Presumably methods outside the class will then not be available!)
3. A @staticmethod? (See 1; this seems weird.)
4. Something else? (but not this!)

The "right solution" depends on your needs...
If the function (_process_bar) needs access to the class Foo (or to the current subclass of it), then you want a classmethod, which should then be called as cls._process_bar(), not Foo._process_bar().
If the function doesn't need access to the class itself but you still want to be able to override it in subclasses (IOW: you want class-based polymorphism), you want a staticmethod.
Else you just want a plain function. Where this function's code lives is irrelevant, and your import problems are orthogonal.
Also, you may (or may not, depending on your concrete use case) want to allow for more flexibility by using a callback function (possibly with a default), i.e.:
def process_bar(bar):
    return bar + 1

class Foo(object):
    @classmethod
    def from_data(cls, datafile, processor=process_bar):
        bar = datafile.read_bar()
        bar = processor(bar)
        return cls(bar)
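A hedged usage sketch (it assumes Foo also keeps the __init__(self, bar) from the question; FakeDataFile is a made-up stand-in for the question's datafile object):

class FakeDataFile(object):
    # Made-up stand-in for whatever object provides read_bar().
    def read_bar(self):
        return 41

foo = Foo.from_data(FakeDataFile())                              # default process_bar: foo.bar == 42
foo2 = Foo.from_data(FakeDataFile(), processor=lambda b: b * 2)  # custom processor:    foo2.bar == 82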

Related

How to override a method in python of an object and call super?

I have an object of the following class, which inherits from the Algorithm class.
class AP(Algorithm):
    def evaluate(self, u):
        return self.stuff * 2 + u
The Algorithm class has a method called StoppingCriteria.
At some point in the project the object objAP = AP() gets created, and later on I can actually access it.
At that point I want to override the method StoppingCriteria with a function that calls the old StoppingCriteria.
I tried simply
def new_stopping(self):
    return super().StoppingCriteria() and custom(self.u)

objAP.StoppingCriteria = new_stopping
But that did not work. What did work were two rather inconvenient solutions:
1. A new AP class (not desirable since I possibly need to do that for lots of classes):
class AP_custom(AP):
    def StoppingCriteria(self):
        return super().StoppingCriteria() and custom(self)
2. Override the method, but without super: copy-paste the original code into the new function and add my code to that. Not desirable, since I want changes in the original method to be applied to my new function as well.
See Override a method at instance level for many possible solutions. None of them will really work with super, though, since you're simply not defining the replacement function in a class. You can, however, define it slightly differently so that it works:
class Foo:
    def bar(self):
        print('bar')

f = Foo()

def _bar(self):
    type(self).bar(self)  # or Foo.bar(self)
    print('baz')

from types import MethodType
f.bar = MethodType(_bar, f)
f.bar()  # outputs bar baz
Since you're replacing the method at the instance level, you don't really need to access the method of the super class; you just want to access the method of the class, which still exists in its original form.
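Applied to the question's objAP, that looks roughly like this (a sketch; it assumes custom and self.u exist as described in the question):

from types import MethodType

def new_stopping(self):
    # The original StoppingCriteria still lives on the class, so call it there.
    return type(self).StoppingCriteria(self) and custom(self.u)

objAP.StoppingCriteria = MethodType(new_stopping, objAP)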

Dynamically select which subclass to inherit methods from?

Suppose I have different classes providing access to different subsystems but with a common interface. They all provide the same set of methods, but each class implements them in a different way (think using foo.write() to write to a file or to send data via a socket, etc.).
Since the interface is the same, I wanted to make a single class that is able to pick the correct implementation based only on the constructor/initializer parameters.
In code, it would look like
class Foo(object):
    def who_am_i(self):
        print "Foo"

class Bar(object):
    def who_am_i(self):
        print "Bar"

# Class that decides which one to use and adds some methods that are common to both
class SomeClass(Foo, Bar):
    def __init__(self, use_foo):
        # How to inherit methods from Foo -OR- Bar?
How can SomeClass inherit methods from Foo or Bar depending on the __init__ and/or __new__ arguments?
The goal should be something like
>>> some_object = SomeClass(use_foo=True)
>>> some_object.who_am_i()
Foo
>>> another_object = SomeClass(use_foo=False)
>>> another_object.who_am_i()
Bar
Is there some clean "pythonic" way to achieve this? I didn't want to use a function to dynamically define SomeClass, but I'm not finding another way to do this.
Thanks!
As mentioned in the comments, this can be done with a factory function (a function that pretends to be a class):
def SomeClass(use_foo):
    if use_foo:
        return Foo()
    else:
        return Bar()
As far as I can see, you have your inheritance completely backwards; instead of the multiple inheritance you're proposing:
   Foo              Bar
 - foo code       - bar code
       \            /
      SomeClass(Foo, Bar)
       - common code
you could use a much simpler single inheritance model:
          SomeClass
        - common code
         /         \
 Foo(SomeClass)   Bar(SomeClass)
  - foo code       - bar code
This then makes your problem one of choosing which subclass to instantiate (a decision that only needs to be made once) rather than which superclass method to call (which potentially needs to be made on every method call). This could be solved with as little as:
thing = Foo() if use_foo else Bar()
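A sketch of that single-inheritance layout, reusing the question's classes (common_helper is just a made-up placeholder for the shared code):

class SomeClass(object):
    # Common code shared by both implementations.
    def common_helper(self):
        return "common"

class Foo(SomeClass):
    def who_am_i(self):
        print("Foo")

class Bar(SomeClass):
    def who_am_i(self):
        print("Bar")

use_foo = True
thing = Foo() if use_foo else Bar()
thing.who_am_i()   # Foo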
A class factory can be used here. Note the use of a dictionary to make sure that the same generated subclass is reused for each base class.
def makeclass(baseclass, classes={}):
    if baseclass not in classes:
        class Class(baseclass):
            pass  # define your methods here
        classes[baseclass] = Class
    return classes[baseclass]
obj1 = makeclass(Foo)(...)
obj2 = makeclass(Bar)(...)
isinstance(obj1, makeclass(Foo)) # True
isinstance(obj1, Foo) # True
issubclass(makeclass(Foo), Foo) # True
issubclass(type(obj1), Foo) # True
You could also make a dict subclass with a __missing__ method to do essentially the same thing; it makes it more explicit that you've got a container that stores classes, but creates them on demand:
class ClassDict(dict):
    def __missing__(self, baseclass):
        class Class(baseclass):
            pass  # define your methods here
        self[baseclass] = Class
        return Class
subclasses = ClassDict()
obj1 = subclasses[Foo]
obj2 = subclasses[Bar]
Judging by the lack of agreement upon the answer, maybe the problem is the question. jonrsharpe's comment gave an interesting insight on the problem: this should not be solved via inheritance.
Consider SomeClass defined as follows:
# Class that uses Foo or Bar depending on the environment.
# Notice it doesn't subclass either Foo or Bar.
class SomeClass(object):
    def __init__(self, use_foo):
        if use_foo:
            self.handler = Foo()
        else:
            self.handler = Bar()

    # Makes more sense asking 'Who implements?' instead of 'Who am I?'
    def who_implements(self):
        return self.handler

    # Explicitly expose methods from the handler
    def some_handler_method(self, *args, **kwargs):
        return self.handler.some_handler_method(*args, **kwargs)

    def another_handler_method(self, *args, **kwargs):
        return self.handler.another_handler_method(*args, **kwargs)
Should we need to get details on the handler implementation, just get the handler attribute. Other classes that subclass SomeClass won't even see the handler directly, which actually makes sense.
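A brief usage check of this composition-based version (using the Foo from the question):

obj = SomeClass(use_foo=True)
print(isinstance(obj.who_implements(), Foo))   # True: Foo is the active handler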
One could use a __new__ method for this purpose:
_foos = {}
_bars = {}

class SomeClass(object):
    def __new__(cls, use_foo, *args, **kwargs):
        if use_foo:
            if cls not in _foos:
                class specialized(cls, Foo):
                    pass
                _foos[cls] = specialized
            else:
                specialized = _foos[cls]
        else:
            if cls not in _bars:
                class specialized(cls, Bar):
                    pass
                _bars[cls] = specialized
            else:
                specialized = _bars[cls]
        specialized.__name__ = cls.__name__
        return object.__new__(specialized, *args, **kwargs)

    # common methods to both go here
    pass
The advantage of this over a factory function is that isinstance(SomeClass(True),SomeClass) works, and that SomeClass can be subclassed.
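A quick, hedged check of those claims (it assumes the Foo, Bar, and SomeClass definitions above):

obj = SomeClass(True)
print(isinstance(obj, SomeClass))     # True
print(isinstance(obj, Foo))           # True

class Child(SomeClass):
    pass                              # subclassing still works

print(isinstance(Child(False), Bar))  # True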

Only let function be called as a method in Python

Say I'm writing a module, MyPyLib, that uses another built-in module. From the built-in module, I import a class, Foo. I then define this function, bar:
def bar(self):
    return self
This function is written to be a method of the Foo class and I can make it behave properly with setattr(Foo,'bar', bar). Then Foo.bar() will work as intended. However, anyone who imports MyPyLib can also call bar as its own function. Is there any way to limit this function so that Foo.bar() works, but bar(arg) doesn't?
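For concreteness, a minimal self-contained sketch of that setup (Foo here stands in for the class imported from the built-in module):

class Foo(object):
    # Stand-in for the imported class.
    pass

def bar(self):
    return self

setattr(Foo, 'bar', bar)

Foo().bar()   # works as a bound method on instances
bar(42)       # ...but bar is still a plain function anyone can call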
Your code should not care about incorrect use, that's a problem for the caller. Python is a language for consenting adults; if someone wants to bend the rules and use bar with a different argument, that's their problem, not yours.
If you insist, your only option here is to explicitly test for the type of self:
def bar(self):
    assert isinstance(self, Foo)
    return self
as there is no other way for bar to detect whether it is being called as a bound method or as a plain function.
An alternative (and more usual) approach is to derive a new class from class Foo and to define bar explicitly as a method of the new class:
class FooBar(Foo):
    def bar(self):
        return self
This way the intended usage of bar is much more clear.

How to change class implementation efficiently

I'd like to change the implementation depending on a constructor argument. Below is an example showing what I mean:
class Device(object):
    def __init__(self, simulate):
        self.simulate = simulate

    def foo(self):
        if self.simulate:
            self._simulate_foo()
        else:
            self._do_foo()

    def _do_foo(self):
        pass  # do foo

    def _simulate_foo(self):
        pass  # simulate foo
Now every call to foo() invokes an if clause. To avoid that I could bind the correct method dynamically to foo.
class Device(object):
    def __init__(self, simulate):
        if simulate:
            self.foo = self._simulate_foo
        else:
            self.foo = self._do_foo

    def _do_foo(self):
        pass  # do foo

    def _simulate_foo(self):
        pass  # simulate foo
Are there any drawbacks to doing this, or other pitfalls I'm not aware of? Is this really faster? (I'm aware that inheritance is another option.)
I'd like to suggest the Replace Conditional with Polymorphism refactoring instead, as it solves your problem in a more elegant way than both the current code and the suggested alternative:
class Device(object):
    def foo(self):
        pass  # do foo

class SimulatedDevice(Device):
    def foo(self):
        pass  # simulate foo
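The conditional then moves to a single construction-time decision, for example (make_device is a made-up helper name, not part of the original answer):

def make_device(simulate):
    # Decide once which implementation to instantiate.
    return SimulatedDevice() if simulate else Device()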
What you are doing is perfectly fine, and you'll find the technique used in plenty of Python frameworks. However, you may want to use timeit to check if this is really faster.
When you access instance.foo, Python first looks it up on the class to make sure it isn't a data descriptor (such as a property), then looks it up in the instance namespace. This is a very fast lookup, since foo is not defined on the class (setting self.foo stores it in the instance __dict__ namespace).
The if statement is almost certainly slower than that double lookup, since the if statement itself needs to look up self.simulate in the same manner, but the difference will be negligible.
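A hedged timeit sketch along those lines (the class names are made up for the comparison; absolute numbers will vary by interpreter and machine):

import timeit

setup = """
class IfDevice(object):
    def __init__(self, simulate):
        self.simulate = simulate
    def foo(self):
        return self._simulate_foo() if self.simulate else self._do_foo()
    def _do_foo(self):
        return 1
    def _simulate_foo(self):
        return 2

class BoundDevice(object):
    def __init__(self, simulate):
        self.foo = self._simulate_foo if simulate else self._do_foo
    def _do_foo(self):
        return 1
    def _simulate_foo(self):
        return 2

a = IfDevice(True)
b = BoundDevice(True)
"""

print(timeit.timeit("a.foo()", setup=setup))  # if-based dispatch
print(timeit.timeit("b.foo()", setup=setup))  # instance-bound attribute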

Is there a way to apply a decorator to a Python method that needs information about the class?

When you decorate a method, it is not yet bound to the class, and therefore doesn't have the im_class attribute yet. I'm looking for a way to get information about the class inside the decorator. I tried this:
import types

def decorator(method):
    def set_signal(self, name, value):
        print name
        if name == 'im_class':
            print "I got the class"
    method.__setattr__ = types.MethodType(set_signal, method)
    return method

class Test(object):
    @decorator
    def bar(self, foo):
        print foo
But it doesn't print anything.
I can imagine doing this:
class Test(object):
    @decorator(klass=Test)
    def bar(self, foo):
        print foo
But if I can avoid it, it would make my day.
__setattr__ is only called on explicit object.attribute = assignments; building a class does not use attribute assignment but builds a dictionary (Test.__dict__) instead.
To access the class you have a few different options though:
Use a class decorator instead; it'll be passed the completed class after it has been built, and you can decorate individual methods on that class by replacing them (decorated) in the class. You could use a combination of a function decorator and a class decorator to mark which methods are to be decorated (a usage sketch follows after these options):
def methoddecoratormarker(func):
    func._decorate_me = True
    return func

def realmethoddecorator(func):
    # do something with func.
    # Note: it is still an unbound function here, not a method!
    return func

def classdecorator(klass):
    # Iterate over a copy of the class namespace; a class __dict__ is a read-only
    # proxy, so use setattr() to replace the marked methods.
    for name, item in list(klass.__dict__.items()):
        if getattr(item, '_decorate_me', False):
            setattr(klass, name, realmethoddecorator(item))
    return klass
You could use a metaclass instead of a class decorator to achieve the same, of course.
Cheat, and use sys._getframe() to retrieve the class from the calling frame:
import sys

def methoddecorator(func):
    callingframe = sys._getframe(1)
    classname = callingframe.f_code.co_name
Note that all you can retrieve is the name of the class; the class itself is still being built at this time. You can add items to callingframe.f_locals (a mapping) and they'll be made part of the new class object.
Access self whenever the method is called. self is a reference to the instance after all, and self.__class__ is going to be, at the very least, a sub-class of the original class the function was defined in.
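For the first option, a hedged usage sketch reusing the names above (realmethoddecorator is still a pass-through here, so the class decorator only selects which methods it would touch):

@classdecorator
class Marked(object):
    @methoddecoratormarker
    def bar(self, foo):
        print(foo)

    def baz(self):
        print("not decorated")

# classdecorator runs realmethoddecorator over Marked.bar only,
# because only bar carries the _decorate_me marker.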
My strict answer would be: It's not possible, because the class does not yet exist when the decorator is executed.
The longer answer would depend on your exact requirements. As I wrote, you cannot access the class if it does not yet exist. One solution would be to mark the decorated method to be "transformed" later, then use a metaclass or class decorator to apply your modifications after the class has been created.
Another option involves some magic: look at the implementation of implements in zope.interface. It has some access to information about the class that is currently being parsed. I don't know if it will be enough for your use case.
You might want to take a look at descriptors. They let you implement a __get__ that is used when an attribute is accessed, and can return different things depending on the object and its type.
Use method decorators to add some marker attributes to the interesting methods, and use a metaclass which iterates over the methods, finds the marker attributes, and does the logic. The metaclass code is run when the class is created, so it has a reference to the newly created class.
class MyMeta(type):
    def __new__(...):
        ...
        cls = ...
        ... iterate over dir(cls), find methods having .is_decorated, act on them
        return cls

def decorator(f):
    f.is_decorated = True
    return f

class MyBase(object):
    __metaclass__ = MyMeta

class MyClass(MyBase):
    @decorator
    def bar(self, foo):
        print foo
If you worry that the programmer of MyClass may forget to use MyBase, you can forcibly set the metaclass in decorator by examining the globals dictionary of the caller's stack frame (sys._getframe()).
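Here is a minimal runnable version of the sketch above, written with Python 3 metaclass syntax (the sketch itself uses Python 2's __metaclass__ hook); MyMeta here only reports the marked methods, and the real transformation would go in its place:

def decorator(f):
    f.is_decorated = True
    return f

class MyMeta(type):
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        # Walk the freshly created class and act on every marked method.
        for attr in dir(cls):
            if getattr(getattr(cls, attr, None), 'is_decorated', False):
                print('would decorate %s.%s here' % (name, attr))
        return cls

class MyBase(metaclass=MyMeta):
    pass

class MyClass(MyBase):
    @decorator
    def bar(self, foo):
        print(foo)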
