Best way to test instance methods without running __init__ - python

I've got a simple class that gets most of its arguments via __init__, which also runs a variety of private methods that do most of the work. Output is available either through access to object variables or through public methods.
Here's the problem: I'd like my unittest framework to directly call the private methods that __init__ calls, with different data, without going through __init__.
What's the best way to do this?
So far, I've been refactoring these classes so that __init__ does less and data is passed in separately. This makes testing easy, but I think the usability of the class suffers a little.
EDIT: Example solution based on Ignacio's answer:
class C(object):
    def __init__(self, number):
        new_number = self._foo(number)
        self._bar(new_number)

    def _foo(self, number):
        return number * 2

    def _bar(self, number):
        print(number * 10)

#--- normal execution - should print 160: -------
MyC = C(8)

#--- testing execution - should print 80 --------
MyC = object.__new__(C)  # creates the instance without running __init__
MyC._bar(8)

For new-style classes, call object.__new__(), passing the class as a parameter. For old-style classes (Python 2 only), call types.InstanceType(), passing the class as a parameter. Neither of these runs __init__.
import types

class C(object):
    def __init__(self):
        print('init')

class OldC:  # old-style class (Python 2 only)
    def __init__(self):
        print('initOld')

c = object.__new__(C)   # no 'init' printed
print(c)

oc = types.InstanceType(OldC)  # Python 2 only; no 'initOld' printed
print(oc)
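Wired into a unittest, that technique might look like this sketch (the test class and assertion are hypothetical, reusing class C from the question's edit):

import unittest

class TestC(unittest.TestCase):
    def test_foo_without_init(self):
        obj = object.__new__(C)  # instance created, __init__ skipped
        self.assertEqual(obj._foo(8), 16)

if __name__ == '__main__':
    unittest.main()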

Why does the usability of the class have to suffer? If all the __init__ is doing is precomputing things so you can expose values as simple variables, change those variables into properties and do the computation (potentially cached/memoized) in the getter. That way your __init__ method is back to doing initialization only and testability is improved.
The downside to this approach is that it might be less performant, but probably not to a significant degree.
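For instance, a minimal sketch of that property-based approach (the class and attribute names here are hypothetical):

class Lazy(object):
    def __init__(self, number):
        # __init__ only stores state; no work happens here.
        self._number = number
        self._result = None  # cache for the computed value

    @property
    def result(self):
        # Compute on first access, then reuse the cached value.
        if self._result is None:
            self._result = self._number * 2
        return self._result

A test can now exercise the computation simply by constructing the object and reading the property, while __init__ stays trivial.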

Related

Python inheritance - add argument to parent method

I have a base class with function run. For example:
from abc import ABC, abstractmethod

class A(ABC):
    @abstractmethod
    def run(self, steps):
        ...
It is possible to define class B with more arguments to the run method.
class B(A):
    def run(self, steps, save):
        ...
Working with typing, I can specify whether a function takes an A or a B as an argument. Specifying A says that I only need the basic interface of run, while specifying B says I need the extended one.
The purpose of this design is to declare a base interface that all the children share but each one can have an extended API.
This is impossible in many other languages. Hence I wonder: is it an anti-pattern? Is it something legitimate to do?
In Python you can do something like the following.
class A:
    def run(self, steps):
        print("Using class A's run.")
        print(f"steps are {steps}")

class B(A):
    def run(self, steps, other_arg=None):
        if other_arg:
            print("Using class B's override.")
            print(f"steps are {steps}")
        else:
            # Use parent's run logic instead.
            super().run(steps)

x = B()
x.run(100)
x.run(30, other_arg="something")
# Using class A's run.
# steps are 100
# Using class B's override.
# steps are 30
Now, should you do this? There is a time and a place. You can get into trouble as well: imagine you break the interface of the core object you're inheriting from, so that the core object loses its abstraction value. You'd be better off having two objects, or rewriting your abstraction to be more robust to the differences between the objects you wish to represent.
Edit: Note that the original question changed to make the base run method abstract. The solution posted here is mostly invalidated by that.
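As an aside, the typing usage the question describes might look like this sketch (the function names are hypothetical, reusing A and B from the example above):

def run_basic(obj: A) -> None:
    # Only the base interface of run is required here.
    obj.run(10)

def run_extended(obj: B) -> None:
    # The extended interface is required here.
    obj.run(10, other_arg="something")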

Overriding an inner function of a method in python

That is a kind of best practices question.
I have a class structure with some methods defined. In some cases I want to override a particular part of a method. My first thought is to split my method into more atomic pieces and override the related parts, like below.
class myTest(object):
    def __init__(self):
        pass

    def myfunc(self):
        self._do_atomic_job()
        ...
        ...

    def _do_atomic_job(self):
        print("Hello")
That is a practical-looking way to solve the problem. But since there are too many parameters that need to be passed to and received back from _do_atomic_job(), I do not want to pass and retrieve tons of parameters. The other option is setting these parameters as instance variables with self.param_var etc., but those parameters are used in only a small part of the code, and using self is not my preferred way of solving this.
The last option I thought of is using inner functions. (I know I will have problems with variable scopes, but as I said, this is a best-practice question; just ignore them and assume scoping and everything else about the inner functions works as expected.)
class MyTest2(object):
    mytext = ""

    def myfunc(self):
        def _do_atomic_job():
            mytext = "Hello"
        _do_atomic_job()
        print(mytext)
Let's assume that works as expected. What I want to do is override the inner function _do_atomic_job():
class MyTest3(MyTest2):
    def __init__(self):
        super(MyTest3, self).__init__()
        self.myfunc._do_atomic_job = self._alt_do_atomic_job  # Of course this does not work!

    def _alt_do_atomic_job(self):
        mytext = "Hollla!"
So what I want to achieve is overriding the inherited class's method's inner function _do_atomic_job.
Is it possible?
Either factoring _do_atomic_job() into a proper method, or maybe factoring it into its own class, seems like the best approach to take. Overriding an inner function can't work, because you won't have access to the local variables of the containing method.
You say that _do_atomic_job() takes a lot of parameters and returns lots of values. Maybe you can group some of these parameters into reasonable objects:
_do_atomic_job(start_x, start_y, end_x, end_y) # Separate coordinates
_do_atomic_job(start, end) # Better: start/end points
_do_atomic_job(rect) # Even better: rectangle
If you can't do that, and _do_atomic_job() is reasonably self-contained,
you could create helper classes AtomicJobParams and AtomicJobResult.
An example using namedtuples instead of classes:
from collections import namedtuple

AtomicJobParams = namedtuple('AtomicJobParams', ['a', 'b', 'c', 'd'])
jobparams = AtomicJobParams(a, b, c, d)
_do_atomic_job(jobparams)  # Returns AtomicJobResult
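A slightly fuller, self-contained sketch of this idea, with hypothetical field names for the result as well:

from collections import namedtuple

AtomicJobParams = namedtuple('AtomicJobParams', ['a', 'b', 'c', 'd'])
AtomicJobResult = namedtuple('AtomicJobResult', ['result_1', 'result_2'])

def _do_atomic_job(params):
    # Takes grouped inputs, returns grouped outputs.
    return AtomicJobResult(result_1=params.a + params.b,
                           result_2=params.c * params.d)

result = _do_atomic_job(AtomicJobParams(1, 2, 3, 4))
print(result.result_1, result.result_2)  # 3 12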
Finally, if the atomic job is self-contained, you can even factor it into its
own class AtomicJob.
class AtomicJob:
    def __init__(self, a, b, c, d):
        self.a = a
        self.b = b
        self.c = c
        self.d = d
        self._do_atomic_job()

    def _do_atomic_job(self):
        ...
        self.result_1 = 42
        self.result_2 = 23
        self.result_3 = 443
Overall, this seems more like a code factorization problem. Aim for rather lean
classes that delegate work to helpers where appropriate. Follow the single responsibility principle. If values belong together, bundle them up in a value class.
As David Miller (a prominent Linux kernel developer) recently said:
If you write interfaces with more than 4 or 5 function arguments, it's
possible that you and I cannot be friends.
Inner functions see the variables of the scope where they are defined, not where they are executed. This prints "hello":
class MyTest2(object):
    def __init__(self):
        localvariable = "hello"
        def do_atomic_job():
            print(localvariable)
        self.do_atomic_job = do_atomic_job

    def myfunc(self):
        localvariable = "hollla!"
        self.do_atomic_job()

MyTest2().myfunc()
So I can't see any way you could use the local variables without passing them, which is probably the best way to do it anyway.
Note: passing locals() will get you a dict of the variables, though this is considered quite bad style.
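For completeness, a minimal sketch of that (discouraged) locals() approach:

def _do_atomic_job(context):
    # context is a plain dict snapshot of the caller's locals.
    print(context["mytext"])

def myfunc():
    mytext = "Hello"
    _do_atomic_job(locals())  # works, but hides the real dependencies

myfunc()  # prints: Hello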

Best practice to init members

I have a question on best practices for initialising a class. Consider a class that is complex and has a lot of members. It's bad practice to initialise them outside of __init__(), but how can I handle the problem without having an enormous __init__() method?
An example:
class A:
    def __init__(self):
        self.member0 = "a"
        # to prevent the __init__ method from getting too big,
        # put some initialisation outside
        self.init_other_stuff()

    def init_other_stuff(self):
        self.member1 = "b"
        self.member2 = "c"
        ...
Thanks in advance.
[update] To clarify: the goal is of course not to put the stuff into another long method. Instead, you can split the initialisation into different parts, like:
def init_network_stuff(self):
    """ init network setup """
    self.my_new_socket = socket.socket(..)

def init_local_stuff(self):
    """ init local setup """
    self.my_new_logpath = "/foo/bar/log"
...
I would also agree that having too many attributes is probably a sign of insufficient abstraction, which is usually harder to develop & debug, and so changing your design is probably a good idea.
But, you do have a gold badge, so you've obviously been around and (probably) know what you are doing, and might have a reason to do this. In that case, I think it's a good idea to split up the initialization by category like you suggested. The only suggestion I would have is to use a leading underscore (or two, as below) on the sub-init functions to signal to users that they aren't intended for normal use, i.e.
class A:
    def __init__(self):
        self.member0 = "a"
        self.__init_other_stuff()

    def __init_other_stuff(self):
        self.member1 = "b"
        self.member2 = "c"
etc. This also hides them from tab-completion in most consoles & editors.
Your other choice is to make this class a subclass of multiple classes that implement part of the interface if you really need A to contain those items directly, i.e. do something like this:
class Logger(object):
    def __init__(self):
        self.logparam = 1

class NetworkSource(object):
    def __init__(self):
        self.netparam = 2

class A(Logger, NetworkSource):
    def __init__(self):
        Logger.__init__(self)
        NetworkSource.__init__(self)

In [2]: a = A()
In [3]: a.<tab>
a.logparam  a.netparam
Then it gets the functionality of both classes while keeping a relatively short __init__. Multiple inheritance is conceptually a little more complicated, though, IMHO.
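If the base classes are written to chain with super(), a cooperative sketch (not the code above) avoids the explicit per-base calls:

class Logger(object):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.logparam = 1

class NetworkSource(object):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.netparam = 2

class A(Logger, NetworkSource):
    def __init__(self):
        # One super() call walks the whole MRO:
        # A -> Logger -> NetworkSource -> object
        super().__init__()

a = A()
print(a.logparam, a.netparam)  # 1 2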

Sharing base object with inheritance

I have class Base. I'd like to extend its functionality in a class Derived. I was planning to write:
class Derived(Base):
    def __init__(self, base_arg1, base_arg2, derived_arg1, derived_arg2):
        super().__init__(base_arg1, base_arg2)
        # ...

    def derived_method1(self):
        # ...
Sometimes I already have a Base instance, and I want to create a Derived instance based on it, i.e., a Derived instance that shares the Base object (doesn't re-create it from scratch). I thought I could write a static method to do that:
b = Base(arg1, arg2) # very large object, expensive to create or copy
d = Derived.from_base(b, derived_arg1, derived_arg2) # reuses existing b object
but it seems impossible. Either I'm missing a way to make this work, or (more likely) I'm missing a very big reason why it can't be allowed to work. Can someone explain which one it is?
[Of course, if I used composition rather than inheritance, this would all be easy to do. But I was hoping to avoid the delegation of all the Base methods to Derived through __getattr__.]
Rely on what your Base class does with base_arg1 and base_arg2:
class Base(object):
    def __init__(self, base_arg1, base_arg2):
        self.base_arg1 = base_arg1
        self.base_arg2 = base_arg2
    ...

class Derived(Base):
    def __init__(self, base_arg1, base_arg2, derived_arg1, derived_arg2):
        super().__init__(base_arg1, base_arg2)
        ...

    @classmethod
    def from_base(cls, b, da1, da2):
        return cls(b.base_arg1, b.base_arg2, da1, da2)
An alternative approach to Alexey's answer (my +1) is to pass the base object in the base_arg1 argument and to check whether it was used to pass a base object, i.e. whether it is an instance of the base class. The other arguments can be made technically optional (say, defaulting to None) and checked explicitly inside the code.
The difference is that the argument type alone decides which of the two possible ways of creation is used. This is necessary if the creation of the object cannot be explicitly captured in the source code (e.g. some structure contains a mix of argument tuples, some with initial values, some with references to existing objects). Then you would probably need to pass the arguments as keyword arguments:
d = Derived(b, derived_arg1=derived_arg1, derived_arg2=derived_arg2)
Update: Sharing the internal structures with the initial object is possible with both approaches. However, you must be aware that if one of the objects modifies the shared data, the usual funny things can happen.
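A minimal sketch of the approach pepr describes, assuming the Base/Derived classes from Alexey's answer (this code is illustrative, not from the answer):

class Derived(Base):
    def __init__(self, base_arg1, base_arg2=None, derived_arg1=None, derived_arg2=None):
        if isinstance(base_arg1, Base):
            # The first argument carries an existing Base instance.
            super().__init__(base_arg1.base_arg1, base_arg1.base_arg2)
        else:
            super().__init__(base_arg1, base_arg2)
        self.derived_arg1 = derived_arg1
        self.derived_arg2 = derived_arg2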
To be clear here, I'll make an answer with code. pepr talks about this solution, but code is always clearer than English. In this case Base should not be subclassed, but it should be a member of Derived:
class Base(object):
    def __init__(self, base_arg1, base_arg2):
        self.base_arg1 = base_arg1
        self.base_arg2 = base_arg2

class Derived(object):
    def __init__(self, base, derived_arg1, derived_arg2):
        self.base = base
        self.derived_arg1 = derived_arg1
        self.derived_arg2 = derived_arg2

    def derived_method1(self):
        return self.base.base_arg1 * self.derived_arg1
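A hypothetical usage, reusing one expensive Base across several Derived objects:

b = Base("big", "payload")   # built once
d1 = Derived(b, 2, None)
d2 = Derived(b, 5, None)     # shares the same Base instance
print(d1.derived_method1())  # 'bigbig'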

Is there a benefit to defining a class inside another class in Python?

What I'm talking about here are nested classes. Essentially, I have two classes that I'm modeling. A DownloadManager class and a DownloadThread class. The obvious OOP concept here is composition. However, composition doesn't necessarily mean nesting, right?
I have code that looks something like this:
class DownloadThread:
    def foo(self):
        pass

class DownloadManager():
    def __init__(self):
        self.dwld_threads = []
    def create_new_thread(self):
        self.dwld_threads.append(DownloadThread())
But now I'm wondering if there's a situation where nesting would be better. Something like:
class DownloadManager():
    class DownloadThread:
        def foo(self):
            pass
    def __init__(self):
        self.dwld_threads = []
    def create_new_thread(self):
        self.dwld_threads.append(DownloadManager.DownloadThread())
You might want to do this when the "inner" class is a one-off that will never be used outside the definition of the outer class. For example, to use a metaclass, it's sometimes handy to do
class Foo(object):
    class __metaclass__(type):
        ...
instead of defining a metaclass separately, if you're only using it once.
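Note that the __metaclass__ attribute is Python 2 syntax; in Python 3 a one-off metaclass would look something like this sketch:

class Meta(type):
    # A one-off metaclass that reports every class it creates.
    def __new__(mcls, name, bases, namespace):
        print(f"creating class {name}")
        return super().__new__(mcls, name, bases, namespace)

class Foo(metaclass=Meta):
    pass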
The only other time I've used nested classes like that, I used the outer class only as a namespace to group a bunch of closely related classes together:
class Group(object):
    class cls1(object):
        ...
    class cls2(object):
        ...
Then from another module, you can import Group and refer to these as Group.cls1, Group.cls2 etc. However one might argue that you can accomplish exactly the same (perhaps in a less confusing way) by using a module.
I don't know Python, but your question seems very general. Ignore me if it's specific to Python.
Class nesting is all about scope. If you think that one class will only make sense in the context of another one, then the former is probably a good candidate to become a nested class.
It is a common pattern to make helper classes private, nested classes.
There is another use for nested classes: when one wants to construct inherited classes whose enhanced functionality is encapsulated in a specific nested class.
See this example:
class foo:
    class bar:
        ...  # functionalities of a specific sub-feature of foo

    def __init__(self):
        self.a = self.bar()
        ...

    ...  # other features of foo

class foo2(foo):
    class bar(foo.bar):
        ...  # enhanced functionalities for this specific feature

    def __init__(self):
        foo.__init__(self)
Note that in the constructor of foo, the line self.a = self.bar() will construct a foo.bar when the object being constructed is actually a foo object, and a foo2.bar object when the object being constructed is actually a foo2 object.
If the class bar was defined outside of class foo instead, as well as its inherited version (which would be called bar2, for example), then defining the new class foo2 would be much more painful, because the constructor of foo2 would need to have its first line replaced by self.a = bar2(), which implies re-writing the whole constructor.
You could use a class as a class generator. Like (in some off-the-cuff code):
class gen(object):
    class base_1(object): pass
    ...
    class base_n(object): pass

    def __init__(self, ...):
        ...

    def mk_cls(self, ..., type):
        '''makes a class based on the type passed in, the current state of
        the class, and the other inputs to the method'''
I feel like when you need this functionality it will be very clear to you. If you don't need to be doing something similar, then it probably isn't a good use case.
There is really no benefit to doing this, except if you are dealing with metaclasses.
the class: suite really isn't what you think it is. It is a weird scope, and it does strange things. It really doesn't even make a class! It is just a way of collecting some variables - the name of the class, the bases, a little dictionary of attributes, and a metaclass.
The name, the dictionary, and the bases are all passed to the function that is the metaclass, and the result is assigned to a variable of that name in the scope where the class: suite was.
What you can gain by messing with metaclasses, and indeed by nesting classes within your stock standard classes, is harder to read code, harder to understand code, and odd errors that are terribly difficult to understand without being intimately familiar with why the 'class' scope is entirely different to any other python scope.
A good use case for this feature is Error/Exception handling, e.g.:
class DownloadManager(object):
    class DownloadException(Exception):
        pass

    def download(self):
        ...
Now anyone reading the code knows all the possible exceptions related to this class.
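A hypothetical usage of such a nested exception:

manager = DownloadManager()
try:
    manager.download()
except DownloadManager.DownloadException:
    # The qualified name documents where the error belongs.
    print("download failed")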
Either way, defined inside or outside of a class, would work. Here is an employee pay schedule program where the helper class EmpInit is embedded inside the class Employee:
class Employee:
    def level(self, j):
        return j * 5E3

    def __init__(self, name, deg, yrs):
        self.name = name
        self.deg = deg
        self.yrs = yrs
        self.empInit = Employee.EmpInit(self.deg, self.level)
        self.base = self.empInit.pay  # reuse the helper instead of building a second one

    def pay(self):
        if self.deg in self.base:
            return self.base[self.deg]() + self.level(self.yrs)
        print(f"Degree {self.deg} is not in the database {self.base.keys()}")
        return 0

    class EmpInit:
        def __init__(self, deg, level):
            self.level = level
            self.j = deg
            self.pay = {1: self.t1, 2: self.t2, 3: self.t3}

        def t1(self): return self.level(1*self.j)
        def t2(self): return self.level(2*self.j)
        def t3(self): return self.level(3*self.j)

if __name__ == '__main__':
    for loop in range(10):
        lst = input("Enter name, degree and years : ").split(' ')
        e1 = Employee(lst[0], int(lst[1]), int(lst[2]))
        print(f'Employee {e1.name} with degree {e1.deg} and years {e1.yrs} is making {e1.pay()} dollars')
        print("EmpInit deg {0}\nlevel {1}\npay[deg]: {2}".format(e1.empInit.j, e1.empInit.level, e1.base[e1.empInit.j]))
To define it outside, just un-indent EmpInit and change Employee.EmpInit() to simply EmpInit(), as a regular "has-a" composition. However, since Employee is the controller of EmpInit and users don't instantiate or interact with it directly, it makes sense to define it inside, as it is not a standalone class. Also note that the instance method level() is designed to be called in both classes here. Hence it can also be conveniently defined as a static method in Employee, so that we don't need to pass it into EmpInit and can instead just invoke it with Employee.level().
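A minimal sketch of that staticmethod variant, trimmed to the relevant parts:

class Employee:
    @staticmethod
    def level(j):
        return j * 5E3

    class EmpInit:
        def __init__(self, deg):
            self.j = deg
            self.pay = {1: self.t1, 2: self.t2, 3: self.t3}

        # No level callback needed: call it through the outer class.
        def t1(self): return Employee.level(1*self.j)
        def t2(self): return Employee.level(2*self.j)
        def t3(self): return Employee.level(3*self.j)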
