How to change a class implementation efficiently in Python

I'd like to change the implementation depending on a constructor argument. Below is an example showing what I mean:
class Device(object):
    def __init__(self, simulate):
        self.simulate = simulate

    def foo(self):
        if self.simulate:
            self._simulate_foo()
        else:
            self._do_foo()

    def _do_foo(self):
        # do foo
        pass

    def _simulate_foo(self):
        # simulate foo
        pass
Now every call to foo() evaluates an if statement. To avoid that, I could bind the correct method to foo dynamically.
class Device(object):
    def __init__(self, simulate):
        if simulate:
            self.foo = self._simulate_foo
        else:
            self.foo = self._do_foo

    def _do_foo(self):
        # do foo
        pass

    def _simulate_foo(self):
        # simulate foo
        pass
Are there any drawbacks to this approach, or other issues I'm not aware of? Is it really faster? (I'm aware that inheritance is another option.)

I'd like to suggest the Replace Conditional with Polymorphism refactoring instead, as it solves your problem in a more elegant way than both the current code and the suggested alternative:
class Device(object):
    def foo(self):
        # do foo
        pass

class SimulatedDevice(Device):
    def foo(self):
        # simulate foo
        pass
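If you want to keep the constructor argument, you can hide the choice of class behind a small factory function. A minimal sketch (the make_device name is mine, not part of the original answer):
def make_device(simulate):
    """Return the appropriate implementation for the given flag."""
    return SimulatedDevice() if simulate else Device()

device = make_device(simulate=True)
device.foo()  # dispatches to SimulatedDevice.foo with no per-call if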

What you are doing is perfectly fine, and you'll find the technique used in plenty of Python frameworks. However, you may want to use timeit to check if this is really faster.
When you access instance.foo, Python first looks for foo on the class to make sure it isn't a data descriptor (such as a property), and then looks it up in the instance namespace. Here that is a very fast lookup, since foo is not defined on the class at all: setting self.foo stores it in the instance __dict__.
The if statement is almost certainly slower than that double lookup, since the if statement itself needs to look up self.simulate in the same manner, but the difference will be negligible.
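For example, a minimal timeit sketch (the IfDevice/BoundDevice names exist only for this comparison; absolute numbers will vary by machine):
import timeit

class IfDevice(object):
    def __init__(self, simulate):
        self.simulate = simulate
    def foo(self):
        if self.simulate:
            self._simulate_foo()
        else:
            self._do_foo()
    def _do_foo(self): pass
    def _simulate_foo(self): pass

class BoundDevice(object):
    def __init__(self, simulate):
        self.foo = self._simulate_foo if simulate else self._do_foo
    def _do_foo(self): pass
    def _simulate_foo(self): pass

for cls in (IfDevice, BoundDevice):
    d = cls(simulate=True)
    # time one million calls of the bound/dispatched foo
    print(cls.__name__, timeit.timeit(d.foo, number=1_000_000))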

Related

Idiomatic way of processing instances of derived classes?

Although I have some years of experience programming in Python, every time I encounter a problem like this I use the built-in isinstance function. However, I'm not sure whether this is the idiomatic way of doing this kind of thing in Python.
So, I have a base class that most of my objects will be instances of:
class Base():
    def a(self):
        return 1
I also have a slightly different class that looks like this:
class Extended(Base):
    def b(self):
        return 2
Now, there is a third class that might have additional functionality depending on the received argument, which will be an instance of one of the previous classes.
class User():
    def __init__(self, arg):
        ...  # do some common work
        if isinstance(arg, Extended):
            ...
            # define more functionality which will call method 'b'
            # at some point during runtime (as event handler or smth)
Is this really the way to go in Python for this trivial example, or should I consider changing the interface of Base to something like:
class Base2():
    supports_more_func = False

    def a(self):
        return 1

    def b(self):
        pass

class Extended2(Base2):
    supports_more_func = True

    def b(self):
        return 2

class User():
    def __init__(self, arg):
        ...  # do some common work
        if arg.supports_more_func:
            ...
            # define more functionality which will call method 'b'
            # at some point during runtime (as event handler or smth)
Which one is the better approach in your opinion, and why?
Generally speaking, when doing object-oriented programming, using isinstance is rarely the way to go, especially when you're in charge of designing the classes you use, because that would be breaking S.O.L.I.D. principles.
Instead, you should design your classes to have a common, well-defined interface and just use it. Testing for a type or for a member is rarely the way to go.
The way I'd go would be:
class Base2():
    def a(self):
        return 1

    def b(self):
        pass

class Extended2(Base2):
    def b(self):
        # all that extra functionality that was in User.__init__()
        return 2

class User():
    def __init__(self, arg):
        ...  # do some common work
        arg.b()
Now I guess that the part with:
# define more functionality which will call method 'b'
# at some point during runtime (as event handler or smth)
has some data and processing tightly coupled with User and not with Extended2, but I'm pretty sure there's an elegant way to give that data to arg.b() as an argument.
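For example, a minimal sketch (the context parameter and what gets passed through it are my own illustration, not part of the original code):
class Extended2(Base2):
    def b(self, context=None):
        # whatever User-specific data b() needs arrives as a plain argument
        ...
        return 2

class User():
    def __init__(self, arg):
        ...  # do some common work
        arg.b(context=self)  # hand the User's own state to the hook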
Basically, I'd say that 99% of the time, when you need to use isinstance() to do something, it means you have a design issue and there's a better way to do the same thing.
Here's some web literature on the topic:
http://canonical.org/~kragen/isinstance/
https://www.quora.com/When-is-it-acceptable-to-use-isinstance-in-Python
https://www.lynda.com/Programming-Languages-tutorials/Avoiding-isinstance/471978/502199-4.html

Python factory method with external function

I've read this SO discussion about factory methods, and have an alternate constructor use case.
My class looks like this:
class Foo(object):
    def __init__(self, bar):
        self.bar = bar

    @classmethod
    def from_data(cls, datafile):
        bar = datafile.read_bar()
        # Now I want to process bar in some way
        bar = _process_bar(bar)
        return cls(bar)

    def _process_bar(self, bar):
        return bar + 1
My question is: if a @classmethod factory method wants to use a function in its code, should that function (_process_bar) be:
1. A @classmethod, which seems a bit weird because you won't ever call it as Foo._process_bar()?
2. A function outside the class Foo but in the same .py file? I'd go with this, but it seems kind of weird. Will those functions always be available to an instance of Foo, regardless of how it was instantiated? (e.g. what if it's saved to a pickle and then reloaded? Presumably functions outside the class will then not be available!)
3. A @staticmethod? (see 1; this seems weird)
4. Something else? (but not this!)
The "right solution" depends on your needs...
If the function (_process_bar) needs access to the class Foo (or to the current subclass of it), then you want a classmethod, which should then be called as cls._process_bar(), not Foo._process_bar().
If the function doesn't need access to the class itself but you still want to be able to override it in subclasses (IOW, you want class-based polymorphism), you want a staticmethod.
Else, you just want a plain function. Where this function's code lives is irrelevant, and your import problems are orthogonal.
Also, you may (or may not, depending on your concrete use case) want to allow for more flexibility by using a callback function (possibly with a default), i.e.:
def process_bar(bar):
    return bar + 1

class Foo(object):
    def __init__(self, bar):
        self.bar = bar

    @classmethod
    def from_data(cls, datafile, processor=process_bar):
        bar = datafile.read_bar()
        bar = processor(bar)
        return cls(bar)
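Usage might then look like this (a sketch; FakeDataFile is just a stand-in I've made up for whatever object provides read_bar()):
class FakeDataFile(object):
    """Stand-in data source for the sketch above."""
    def read_bar(self):
        return 41

# default processing
foo = Foo.from_data(FakeDataFile())
print(foo.bar)  # 42

# swap in a different processor without touching Foo
foo = Foo.from_data(FakeDataFile(), processor=lambda bar: bar * 2)
print(foo.bar)  # 82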

Is it safe to make two class objects with the same name?

It's possible to use type in Python to create a new class object, as you probably know:
A = type('A', (object,), {})
a = A() # create an instance of A
What I'm curious about is whether there's any problem with creating different class objects with the same name, eg, following on from the above:
B = type('A', (object,), {})
In other words, is there an issue with this second class object, B, having the same name as our first class object, A?
The motivation for this is that I'd like to get a clean copy of a class to apply different decorators to without using the inheritance approach described in this question.
So I'd like to define a class normally, eg:
class Fruit(object):
    pass
and then make a fresh copy of it to play with:
def copy_class(cls):
    return type(cls.__name__, cls.__bases__, dict(cls.__dict__))

FreshFruit = copy_class(Fruit)
In my testing, things I do with FreshFruit are properly decoupled from things I do to Fruit.
However, I'm unsure whether I should also be mangling the name in copy_class in order to avoid unexpected problems.
In particular, one concern I have is that this could cause the class to be replaced in the module's dictionary, such that future imports (e.g., from module import Fruit) return the copied class.
There is no reason why you can't have two classes with the same __name__ in the same module, if you want to and have a good reason to do so.
In your example, from module import Fruit doesn't care at all about the __name__ of the class: Python looks in the module's globals for the name Fruit and imports whatever it finds there.
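A quick illustration of that point (a sketch; the names are arbitrary):
A = type('A', (object,), {})
B = type('A', (object,), {})  # same __name__, but a distinct class object

print(A is B)                  # False
print(A.__name__, B.__name__)  # A A
print(isinstance(A(), B))      # False: instances are tied to the class object, not its name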
Note that, in general, this approach isn't great if you're using super (although the same can be said for class decorators ...):
class A(Base):
    def foo(self):
        super(A, self).foo()

B = copy_class(A)
In this case, when B.foo is called, it will end up calling super(A, self), which can lead to funky behaviour in a number of circumstances...
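For example, one concrete way this can bite (a sketch, assuming a trivial Base and the copy_class from the question):
class Base(object):
    def foo(self):
        print("Base.foo")

class A(Base):
    def foo(self):
        super(A, self).foo()

B = copy_class(A)
A().foo()  # prints "Base.foo"

try:
    B().foo()
except TypeError as exc:
    # a B instance is not an instance of A, so super(A, self) rejects it
    print(exc)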

do's and don'ts of __init__ method

I was just wondering if it's considered wildly inappropriate, just messy, or unconventional to use the __init__ method to set variables by calling, one after another, the rest of the functions within a class. I have done things like self.age = ch_age(), where ch_age is a function within the same class, and set more variables the same way, like self.name = ch_name(), etc. Also, what about prompting for user input within __init__ specifically to get the arguments with which to call ch_age? The latter feels a little wrong, I must say. Any advice, suggestions, or admonishments welcome!
I always favor being lazy: if you NEED to initialize everything in the constructor, you should. In a lot of cases I put a general "reset" method in my class; you can then call that method in __init__ and re-initialize the class instance easily.
But if you don't need those variables initially, I feel it's better to wait and initialize them when you actually need them.
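A minimal sketch of that lazy style, using functools.cached_property (Python 3.8+; my choice of mechanism, not something the original answer names):
from functools import cached_property

class Report(object):
    def __init__(self, data):
        self.data = data  # cheap, always needed

    @cached_property
    def summary(self):
        # computed only on first access, then cached on the instance
        print("computing summary...")
        return sum(self.data) / len(self.data)

r = Report([1, 2, 3, 4])
print(r.summary)  # triggers the computation
print(r.summary)  # cached; no recomputation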
For your specific case
class Blah1(object):
    def __init__(self):
        self.name = self.ch_name()

    def ch_name(self):
        return 'Ozzy'
you might as well use the property decorator. The following will have the same effect:
class Blah2(object):
    def __init__(self):
        pass

    @property
    def name(self):
        return 'Ozzy'
In both of the implementations above, the following code should not issue any exceptions:
>>> b1 = Blah1()
>>> b2 = Blah2()
>>> assert b1.name == 'Ozzy'
>>> assert b2.name == 'Ozzy'
If you wanted to provide a reset method, it might look something like this:
class Blah3(object):
    def __init__(self, name):
        self.reset(name)

    def reset(self, name):
        self.name = name

Is there a benefit to defining a class inside another class in Python?

What I'm talking about here are nested classes. Essentially, I have two classes that I'm modeling: a DownloadManager class and a DownloadThread class. The obvious OOP concept here is composition. However, composition doesn't necessarily mean nesting, right?
I have code that looks something like this:
class DownloadThread:
    def foo(self):
        pass

class DownloadManager():
    def __init__(self):
        self.dwld_threads = []

    def create_new_thread(self):
        self.dwld_threads.append(DownloadThread())
But now I'm wondering if there's a situation where nesting would be better. Something like:
class DownloadManager():
    class DownloadThread:
        def foo(self):
            pass

    def __init__(self):
        self.dwld_threads = []

    def create_new_thread(self):
        self.dwld_threads.append(DownloadManager.DownloadThread())
You might want to do this when the "inner" class is a one-off, which will never be used outside the definition of the outer class. For example to use a metaclass, it's sometimes handy to do
class Foo(object):
    class __metaclass__(type):
        ...
instead of defining a metaclass separately, if you're only using it once. (Note that this nested __metaclass__ spelling works only in Python 2; Python 3 uses the metaclass= keyword in the class header instead.)
The only other time I've used nested classes like that, I used the outer class only as a namespace to group a bunch of closely related classes together:
class Group(object):
    class cls1(object):
        ...

    class cls2(object):
        ...
Then from another module, you can import Group and refer to these as Group.cls1, Group.cls2, etc. However, one might argue that you can accomplish exactly the same thing (perhaps in a less confusing way) by using a module.
I don't know Python, but your question seems very general. Ignore me if it's specific to Python.
Class nesting is all about scope. If you think that one class will only make sense in the context of another one, then the former is probably a good candidate to become a nested class.
It is a common pattern to make helper classes private, nested classes.
There is another use for nested classes: when you want to construct inherited classes whose enhanced functionality is encapsulated in a specific nested class.
See this example:
class foo:
    class bar:
        ...  # functionalities of a specific sub-feature of foo

    def __init__(self):
        self.a = self.bar()
        ...

    ...  # other features of foo

class foo2(foo):
    class bar(foo.bar):
        ...  # enhanced functionalities for this specific feature

    def __init__(self):
        foo.__init__(self)
Note that in the constructor of foo, the line self.a = self.bar() will construct a foo.bar when the object being constructed is actually a foo object, and a foo2.bar object when the object being constructed is actually a foo2 object.
If the class bar was defined outside of class foo instead, as well as its inherited version (which would be called bar2, for example), then defining the new class foo2 would be much more painful, because the constructor of foo2 would need to have its first line replaced by self.a = bar2(), which implies re-writing the whole constructor.
You could use a class as a class generator. For example (in some off-the-cuff code):
class gen(object):
    class base_1(object): pass
    ...
    class base_n(object): pass

    def __init__(self, ...):
        ...

    def mk_cls(self, ..., type):
        '''makes a class based on the type passed in, the current state of
        the class, and the other inputs to the method'''
I feel like when you need this functionality it will be very clear to you. If you don't need to be doing something similar, then it probably isn't a good use case.
There is really no benefit to doing this, except if you are dealing with metaclasses.
the class: suite really isn't what you think it is. It is a weird scope, and it does strange things. It really doesn't even make a class! It is just a way of collecting some variables - the name of the class, the bases, a little dictionary of attributes, and a metaclass.
The name, the dictionary and the bases are all passed to the function that is the metaclass, and the result is then assigned to the class's name in the scope where the class: suite was.
What you can gain by messing with metaclasses, and indeed by nesting classes within your stock standard classes, is harder-to-read code, harder-to-understand code, and odd errors that are terribly difficult to understand without being intimately familiar with why the 'class' scope is entirely different to any other Python scope.
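To make that concrete, a class statement is roughly a call to the metaclass (type by default); a minimal sketch of the equivalence described above:
class Base(object):
    pass

# This class statement...
class C(Base):
    x = 1

# ...collects a name, bases, and an attribute dict, and hands them
# to the metaclass, roughly like this:
C2 = type('C', (Base,), {'x': 1})

print(C2.__name__, C2.__bases__, C2.x)  # C (<class '__main__.Base'>,) 1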
A good use case for this feature is Error/Exception handling, e.g.:
class DownloadManager(object):
    class DownloadException(Exception):
        pass

    def download(self):
        ...
Now whoever is reading the code knows all the possible exceptions related to this class.
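Calling code can then refer to the exception through the class, for example (a sketch; the download body above doesn't actually raise anything):
manager = DownloadManager()
try:
    manager.download()
except DownloadManager.DownloadException:
    # the nesting makes it obvious which component this error belongs to
    print("download failed")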
Either way, defined inside or outside of a class, would work. Here is an employee pay schedule program where the helper class EmpInit is embedded inside the class Employee:
class Employee:
    def level(self, j):
        return j * 5E3

    def __init__(self, name, deg, yrs):
        self.name = name
        self.deg = deg
        self.yrs = yrs
        self.empInit = Employee.EmpInit(self.deg, self.level)
        self.base = Employee.EmpInit(self.deg, self.level).pay

    def pay(self):
        if self.deg in self.base:
            return self.base[self.deg]() + self.level(self.yrs)
        print(f"Degree {self.deg} is not in the database {self.base.keys()}")
        return 0

    class EmpInit:
        def __init__(self, deg, level):
            self.level = level
            self.j = deg
            self.pay = {1: self.t1, 2: self.t2, 3: self.t3}

        def t1(self): return self.level(1*self.j)
        def t2(self): return self.level(2*self.j)
        def t3(self): return self.level(3*self.j)

if __name__ == '__main__':
    for loop in range(10):
        lst = [item for item in input("Enter name, degree and years : ").split(' ')]
        e1 = Employee(lst[0], int(lst[1]), int(lst[2]))
        print(f'Employee {e1.name} with degree {e1.deg} and years {e1.yrs} is making {e1.pay()} dollars')
        print("EmpInit deg {0}\nlevel {1}\npay[deg]: {2}".format(e1.empInit.j, e1.empInit.level, e1.base[e1.empInit.j]))
To define it outside, just un-indent EmpInit and change Employee.EmpInit() to simply EmpInit(), as a regular "has-a" composition. However, since Employee is the controller of EmpInit and users don't instantiate it or interface with it directly, it makes sense to define it inside, as it is not a standalone class. Also note that the instance method level() is designed to be called in both classes here. Hence it can also be conveniently defined as a static method in Employee, so that we don't need to pass it into EmpInit and can instead just invoke it with Employee.level(), as sketched below.
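A minimal sketch of that static-method variant (only the parts that change are shown; Employee.__init__ would then construct EmpInit with the degree alone, without passing self.level):
class Employee:
    @staticmethod
    def level(j):
        return j * 5E3

    class EmpInit:
        def __init__(self, deg):
            self.j = deg
            self.pay = {1: self.t1, 2: self.t2, 3: self.t3}

        # no level argument needed; EmpInit calls Employee.level() directly
        def t1(self): return Employee.level(1 * self.j)
        def t2(self): return Employee.level(2 * self.j)
        def t3(self): return Employee.level(3 * self.j)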
