Calling a function from a class in Python - different ways

EDIT2: Thank you all for your help!
EDIT: on adding @staticmethod, it works. However, I am still wondering why I am getting a TypeError here.
I have just started OOP and am completely new to it. I have a very basic question regarding the different ways I can call a function from a class.
I have a testClass.py file with the code:
class MathsOperations:
    def __init__(self, x, y):
        self.a = x
        self.b = y

    def testAddition(self):
        return (self.a + self.b)

    def testMultiplication(self):
        return (self.a * self.b)
I am calling this class from another file called main.py with the following code:
from testClass import MathsOperations
xyz = MathsOperations(2, 3)
print xyz.testAddition()
This works without any issues. However, I wanted to use the class in a much simpler way.
I have now put the following code in the testClass.py file. I have dropped the init function this time.
class MathsOperations:
    def testAddition(x, y):
        return x + y

    def testMultiplication(a, b):
        return a * b
calling this using;
from testClass import MathsOperations
xyz = MathsOperations()
print xyz.testAddition(2, 3)
This doesn't work. Can someone explain what is going wrong in case 2? How do I use this class?
The error I get is "TypeError: testAddition() takes exactly 2 arguments (3 given)"

You have to use self as the first parameter of a method.
In the second case you should use:
class MathsOperations:
    def testAddition(self, x, y):
        return x + y

    def testMultiplication(self, a, b):
        return a * b
and in your code you could do the following:
tmp = MathsOperations()
print tmp.testAddition(2, 3)
If you use the class without instantiating it first,
print MathsOperations.testAddition(2, 3)
it gives you the error "TypeError: unbound method".
If you want to do that you will need the @staticmethod decorator.
For example:
class MathsOperations:
    @staticmethod
    def testAddition(x, y):
        return x + y

    @staticmethod
    def testMultiplication(a, b):
        return a * b
then in your code you could use
print MathsOperations.testAddition(2,3)

Disclaimer: this is not a straight-to-the-point answer; it's more a piece of advice, even if the answer can be found in the references.
IMHO: object oriented programming in Python sucks quite a lot.
The method dispatching is not very straightforward: you need to know about bound/unbound instance/class (and static!) methods; you can have multiple inheritance and need to deal with legacy and new-style classes (yours was old style), know how the MRO works, properties...
In brief: too complex, with lots of things happening under the hood. Let me even say, it is unpythonic, as there are many different ways to achieve the same things.
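To make the bound/unbound point concrete, here is a minimal sketch (Python 2 semantics, matching the question's code):
class MathsOperations(object):
    def testAddition(self, x, y):
        return x + y

xyz = MathsOperations()
print xyz.testAddition(2, 3)                   # bound method: self is filled in for you
print MathsOperations.testAddition(xyz, 2, 3)  # unbound method: pass the instance yourself
# MathsOperations.testAddition(2, 3) raises:
# TypeError: unbound method testAddition() must be called with
# MathsOperations instance as first argument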
My advice: use OOP only when it's really useful. Usually this means writing classes that implement well known protocols and integrate seamlessly with the rest of the system. Do not create lots of classes just for the sake of writing object oriented code.
Give these pages a good read:
http://docs.python.org/reference/datamodel.html
http://docs.python.org/tutorial/classes.html
you'll find them quite useful.
If you really want to learn OOP, I'd suggest starting with a more conventional language, like Java. It's not half as fun as Python, but it's more predictable.

class MathsOperations:
    def __init__(self, x, y):
        self.a = x
        self.b = y

    def testAddition(self):
        return (self.a + self.b)

    def testMultiplication(self):
        return (self.a * self.b)
then
temp = MathsOperations(2, 3)
print(temp.testAddition())

Your methods don't refer to an object (that is, self), so you should use the @staticmethod decorator:
class MathsOperations:
    @staticmethod
    def testAddition(x, y):
        return x + y

    @staticmethod
    def testMultiplication(a, b):
        return a * b

You need to have an instance of a class to use its methods. Or, if you don't need to access any of the class's variables (no instance state), you can define the method as static and it can be used even if the class isn't instantiated. Just add the @staticmethod decorator to your methods.
class MathsOperations:
    @staticmethod
    def testAddition(x, y):
        return x + y

    @staticmethod
    def testMultiplication(a, b):
        return a * b
docs: http://docs.python.org/library/functions.html#staticmethod
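A quick usage sketch for the static version above; with @staticmethod both call styles work:
print(MathsOperations.testAddition(2, 3))    # called on the class, no instance needed
print(MathsOperations().testAddition(2, 3))  # calling through an instance works too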

Related

pass arguments inside split methods

I need to split class methods across several files. The behaviour I need is that I can pass into a method all the variables defined on self, and receive back new self attributes defined inside that method.
My attempt:
The code below works, but I don't know if this is the best/proper solution.
Base:
from calculate_function import function

class Data():
    def __init__(self):
        self.y = -2
        self.x = 1
        self.z, self.result = function(self)
calculate_function.py:
def function(self):
    z = 2
    result = z + self.x
    return z, result
Above, I pass self into the new function to collect all the __init__ variables, then define new self attributes from the results.
There will be many more functions in different files that do some calculations and create new attributes on the class instance.
Question
What I need is to pass each created self attribute to each function.
Is the solution above properly designed, or is there a better option?
If you want to externalize some part of your class code to external functions, it's better to write those as pure functions and keep the attribute access (and even more so attribute updates) within the class code itself; this makes the code much easier to test, read and maintain. In your case this would look like:
from calculate_function import function

class Data():
    def __init__(self):
        self.y = -2
        self.x = 1
        self.z, self.result = function(self.x)
calculate_function.py:
def function(x):
    z = 2
    result = z + x
    return z, result
The points here are that 1/ you can immediately spot the creation of attributes z and result and 2/ you can test function() without a Data instance, as shown below.
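A minimal check of the pure function needs no Data instance at all:
from calculate_function import function

assert function(1) == (2, 3)  # z == 2, result == 2 + 1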
I need to split class methods in several files.
This often means your class has too many responsibilities. Some parts of it can be delegated to pure functions as shown above. Other parts, which need access to a common subset of your class attributes, can be delegated to other, smaller, specialized classes, preferably using composition/delegation instead of inheritance (depending on concrete use cases, of course), as sketched below.
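A minimal sketch of that composition/delegation idea (the Calculator name is illustrative, not from the question):
class Calculator:
    # Pure calculation logic; testable without a Data instance.
    def compute(self, x):
        z = 2
        return z, z + x

class Data:
    def __init__(self):
        self.y = -2
        self.x = 1
        self._calc = Calculator()  # composition: Data owns a Calculator
        self.z, self.result = self._calc.compute(self.x)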
You don't need to pass self into the function.
Why not do it like this:
class Data():
    def __init__(self):
        self.y = -2
        self.x = 1
        self.function()

    def function(self):
        self.z = 2
        self.result = self.z + self.x
Do you wish to use another class's function, or just a stand-alone function?
Here is a solution using class inheritance:
-- function1.py --
class FunctionClass1():
    def function1(self):
        self.result = self.x + self.y

-- function2.py --
class FunctionClass2():
    def function2(self):
        self.result = self.result + self.z

-- data.py --
from function1 import FunctionClass1
from function2 import FunctionClass2

class Data(FunctionClass1, FunctionClass2):
    def __init__(self):
        self.x = 1
        self.y = 2
        self.z = 3
        self.function1()
        self.function2()
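A short usage sketch for the layout above:
d = Data()
print(d.result)  # 6: function1 sets result = 1 + 2, then function2 adds z = 3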

Yaml serialization through camel: using base class load/dump and accessing type(self) in decorator

TL;DR: how to use type(self) in the decorator of a member function?
I would like to do serialization of derived classes and share some serialization logic in the base class in Python.
Since pickle and plain yaml did not seem to be able to deal with this reliably, I then stumbled over camel, which I consider a quite neat solution to the problem; see this link.
Consider two extremely simplified classes B and A, where B inherits from A. I want to be able to serialize B in my main function like this:
from camel import Camel, CamelRegistry

serializable_types = CamelRegistry()

# ... define A and B with dump and load functions ...

if __name__ == "__main__":
    serialization_interface = Camel([serializable_types])
    b = B(x=3, y=4)
    s = serialization_interface.dump(b)
    print(s)
I came up with two solutions that work:
Version 1: the dumping and loading are done in stand-alone functions outside of the class. Problems: not very elegant; dumpA is not automatically available to the inheriting class in dumpB; more cumbersome function naming; function scope bigger than necessary.
# VERSION 1 - dump and load in external functions
class A:
    def __init__(self, x):
        self._x = x

@serializable_types.dumper(A, 'object_A', version=None)
def dumpA(a):
    return {'x': a._x}

@serializable_types.loader('object_A', version=None)
def loadA(data, version):
    return A(data.x)

class B(A):
    def __init__(self, x, y):
        super().__init__(x)
        self._y = y

@serializable_types.dumper(B, 'object_B', version=None)
def dumpB(b):
    b_data = dumpA(b)
    b_data.update({'y': b._y})
    return b_data

@serializable_types.loader('object_B', version=None)
def loadB(data, version):
    return B(data.x)
Version 2: functions for loading and dumping are defined directly in the constructor. Problem: the functions are still not available in the subclass :/
# VERSION 2 - dump and load functions defined in constructor
class A:
    def __init__(self, x):
        self._x = x

        @serializable_types.dumper(A, 'object_A', version=None)
        def dump(a):
            return a.to_dict()

        @serializable_types.loader('object_A', version=None)
        def load(data, version):
            return A(data.x)

    def to_dict(self):
        return {'x': self._x}

class B(A):
    def __init__(self, x, y):
        super().__init__(x)
        self._y = y

        @serializable_types.dumper(B, 'object_B', version=None)
        def dump(b):
            b_data = b.to_dict()
            return b_data

        @serializable_types.loader('object_B', version=None)
        def load(data, version):
            return B(data.x)

    def to_dict(self):
        b_data = super().to_dict()
        b_data.update({'y': self._y})
        return b_data
I would like to achieve an implementation that looks like this:
# VERSION 3 - dump and load functions are member functions
# ERROR: name 'A' is not defined
class A:
    def __init__(self, x):
        self._x = x

    @serializable_types.dumper(A, 'object_A', version=None)
    def dump(a):
        return {'x': a._x}

    @serializable_types.loader('object_A', version=None)
    def load(data, version):
        return A(data.x)

class B(A):
    def __init__(self, x, y):
        super().__init__(x)
        self._y = y

    @serializable_types.dumper(B, 'object_B', version=None)
    def dump(b):
        b_data = super().dump(b)
        b_data.update({'y': b._y})
        return b_data

    @serializable_types.loader('object_B', version=None)
    def load(data, version):
        return B(data.x)
This will not work because, in the definition of the dump functions, A and B are not yet defined. From a software design perspective, however, I consider this the cleanest solution with the fewest lines of code.
Is there a way to get the type definitions of A and B to work in the decorator? Or has anyone solved the problem in a different way?
I came across this but couldn't see a straightforward way of applying it to my use case.
Your version 3 is not going to work because, as you probably noticed, at the
time the decorator is called, A is not defined yet.
If you would write your decorator the way things were done before the @ syntactic sugar was added to Python:
def some_decorator(fun):
    return fun

@some_decorator
def xyz():
    pass
, that is:
def some_decorator(fun):
    return fun

def xyz():
    pass

xyz = some_decorator(xyz)
then that should be immediately clear.
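Applying the same desugaring to the asker's version 3 makes the failure obvious: while the class body is still executing, the name A is not bound yet, so the decorator argument cannot be evaluated (a sketch of the failure, not working code):
class A:
    def dump(a):
        return {'x': a._x}
    # equivalent to the @ form -- raises NameError: name 'A' is not defined
    dump = serializable_types.dumper(A, 'object_A', version=None)(dump)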
Your version 2 defers the registration of your loader and dumper routines until instances of both A and B have been created in some way other than loading. That could work if you created instances of both classes and then did dump, followed by load, from within one program. But if you only create B and want to dump it, then the functions for A have not been registered and A.dump() is not available. And anyway, if a program both dumps and loads data, it is much more common to load from some persistent storage first and then dump, and during loading the registration would not yet have taken place. So you would need some extra registration mechanism for all your classes, plus the creation of at least one instance of each of these classes. Probably not what you want.
In version 1, you cannot easily find dumpA while in dumpB. Although it should be possible to look into the internals of serializable_types and find the parent class of B, that is non-trivial and ugly. There is a better way: minimize dumpB (and dumpA) into functions that return the value of an appropriately named method of B (resp. A), here called dump:
from camel import CamelRegistry, Camel

serializable_types = CamelRegistry()

# VERSION 1 - dump and load in external functions
class A:
    def __init__(self, x):
        self._x = x

    def dump(self):
        return {'x': self._x}

@serializable_types.dumper(A, 'object_A', version=None)
def dumpA(a):
    return a.dump()

@serializable_types.loader('object_A', version=None)
def loadA(data, version):
    return A(data.x)

class B(A):
    def __init__(self, x, y):
        super().__init__(x)
        self._y = y

    def dump(self):
        b_data = A.dump(self)
        b_data.update({'y': self._y})
        return b_data

@serializable_types.dumper(B, 'object_B', version=None)
def dumpB(b):
    return b.dump()

@serializable_types.loader('object_B', version=None)
def loadB(data, version):
    return B(data.x)

if __name__ == "__main__":
    serialization_interface = Camel([serializable_types])
    b = B(x=3, y=4)
    s = serialization_interface.dump(b)
    print(s)
which gives:
!object_B
x: 3
y: 4
That works because by the time dumpB is called, you have an instance of type B
(otherwise you could not get at its attributes), and the methods of
class B know about class A.
Please note that doing return B(data.x) is not going to work in any of your versions
as B's __init__ expects two parameters.
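For completeness, a sketch of loaders that would actually round-trip, assuming the dump functions produce the {'x': ..., 'y': ...} mappings shown above (camel hands the loader the loaded mapping, so the values are looked up as dict keys):
@serializable_types.loader('object_A', version=None)
def loadA(data, version):
    return A(data['x'])

@serializable_types.loader('object_B', version=None)
def loadB(data, version):
    return B(data['x'], data['y'])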
I find the above rather unreadable.
You indicate that "simple yaml did not seem to be able to deal with
this reliably". I am not aware of why this would be true, but there is
a lot of misunderstanding about YAML¹
I recommend you take a look at ruamel.yaml (disclaimer: I am the author of that package).
It requires registration of classes for dumping and loading, uses pre-defined method names
for loading and dumping (from_yaml resp. to_yaml), and the "registration office" calls
these methods including class information. So there is no need to defer the definition
of these methods until you construct an object as in your version 2.
You can either explicitly register a class or decorate the class as soon as the decorator is available (i.e. once you have your YAML instance). Since B is inheriting from A, you only have to provide to_yaml and from_yaml in A and can re-use the dump methods from the previous example:
import sys

class A:
    yaml_tag = u'!object_A'

    def __init__(self, x):
        self._x = x

    @classmethod
    def to_yaml(cls, representer, node):
        return representer.represent_mapping(cls.yaml_tag, cls.dump(node))

    @classmethod
    def from_yaml(cls, constructor, node):
        instance = cls.__new__(cls)
        yield instance
        state = ruamel.yaml.constructor.SafeConstructor.construct_mapping(
            constructor, node, deep=True)
        instance.__dict__.update(state)

    def dump(self):
        return {'x': self._x}

import ruamel.yaml  # delayed import so A cannot be decorated
yaml = ruamel.yaml.YAML()

@yaml.register_class
class B(A):
    yaml_tag = u'!object_B'

    def __init__(self, x, y):
        super().__init__(x)
        self._y = y

    def dump(self):
        b_data = A.dump(self)
        b_data.update({'y': self._y})
        return b_data

yaml.register_class(A)
# B not registered, because it is already decorated

b = B(x=3, y=4)
yaml.dump(b, sys.stdout)
print('=' * 20)
b = yaml.load("""\
!object_B
x: 42
y: 196
""")
print('b.x: {.x}, b.y: {.y}'.format(b, b))
which gives:
!object_B
x: 3
y: 4
====================
b.x: 42, b.y: 196
The yield in the above code is necessary to deal with instances that
have (indirect) circular references to themselves and for which,
obviously, not all arguments can be available at the time of object
creation.
¹ E.g. one YAML 1.2 reference states that a YAML document begins with ---, where that is actually called a directives-end-marker, and not a document-start-marker, for good reasons. And that ..., the document-end-marker, can only be followed by directives or ---, whereas the spec clearly indicates that it can be followed by comments and also by bare documents.

Python - Choose which class to inherit

I want to make two classes A and B, in which B is a slight - but significant - variation of A, and then make a third class C that can inherit either A or B and add functionality to them. The problem is, how do I tell C to inherit A or B based on my preference?
To make things more clear, suppose I have this code:
class A:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def first(self):
        return do_something(1)

    def second(self):
        return do_something(2)

    def third(self):
        return do_something(3)

    def start(self):
        self.first()
        self.second()
        self.third()
class B(A):
    def __init__(self, x, y, z):
        super().__init__(x, y)
        self.z = z

    def second(self):
        super().second()
        do_stuff()

    def third(self):
        do_other_stuff()
That is a very simplified version of the code I used. In particular, A represents a simulator of a manufacturing system, while B represents a simulator of the same manufacturing system with a modification of the behaviour of the main machine-tool.
Now, what I want is to add code to compute some statistics. What it does is something like this:
class C(A):
    def __init__(self, *args):
        super().__init__(*args)
        self.stat = 0

    def second(self):
        super().second()
        self.stat += 1

    def third(self):
        super().third()
        self.stat *= 3
The problem is that class C works exactly the same way whether it inherits from class A (as in the previous listing) or from class B (exact same code, but with class C(B): as the first line).
How can I do that? Or is my approach not feasible? I think an ideal solution would be to be able to choose which class to inherit from, A or B, when I initialize C. Or, maybe, to be able to pass the class to inherit to class C.
I did some research and also found the possibility of aggregation (which I didn't know about before), but I don't see how it's really useful here. As a last note, be aware that class A might have up to 20-30 methods, and when I use class C I want class A (or B, depending on which it inherits from) to work exactly as before, with the added chunks of C in between.
P.S. I'm looking for a possibly elegant, no code-heavy, "pythonic" way of doing this. I'm also really looking forward on advices on everything you think could be done better. Finally, I can totally modify class C, but class A and B must remain (apart from small changes) the same.
You can use new-style classes and their method resolution order.
Considering these definitions:
class A(object):
    def __init__(self, x):
        pass

    def foo(self):
        print "A"

class B(object):
    def __init__(self, x, y):
        pass

    def foo(self):
        print "B"
you can build a mixin intended to add functionality to A or B:
class Cmix(object):
    def foo(self):
        super(Cmix, self).foo()
        print "mix"
and inherit from both Cmix and A (or B, respectively):
class CA(Cmix, A):
    pass

class CB(Cmix, B):
    pass
Finally, you can write a convenience function to choose between CA and CB based on the number of parameters:
def C(*args):
    if len(args) == 1:
        return CA(*args)
    else:
        return CB(*args)
Now we have
C(1).foo()
# A
# mix
C(1, 2).foo()
# B
# mix
Note that C is not a real class and you cannot use it as a second argument in isinstance, for example.
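If you want real classes (so isinstance works) while still picking the base at creation time, a small class factory is an alternative sketch (make_C is an illustrative name, not from the answer; it uses the Python 3 style of the question):
def make_C(base):
    class C(base):
        def __init__(self, *args):
            super().__init__(*args)
            self.stat = 0

        def second(self):
            super().second()
            self.stat += 1

        def third(self):
            super().third()
            self.stat *= 3
    return C

CA = make_C(A)  # C layered on top of A
CB = make_C(B)  # C layered on top of B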

Is it considered good practice to use **kwargs prolifically to aid readability?

When designing classes, I found it awkward to place default argument values in the __init__ method, as in:
class Class1(object):
    def __init__(self, y=2, z=3):
        self.y = self.manip_y(y)
        self.z = self.manip_z(z)

    def manip_y(self, y):
        return y * 10

    def manip_z(self, z):
        return z - 30
Is it considered better practice to add **kwargs to the function signatures so the default values can live in the helper method signatures instead?
class Class2(object):
    def __init__(self, **kwargs):
        self.y = self.manip_y(**kwargs)
        self.z = self.manip_z(**kwargs)

    def manip_y(self, y=2, **kwargs):
        return y * 10

    def manip_z(self, z=3, **kwargs):
        return z - 30
It's better to add default values in the __init__ signature -- that way someone only needs to look at the signature to figure out the options. And, in example 2, the default values are now hidden in other functions. Additionally, your documentation will be simpler.
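A quick way to see the difference is to inspect the signatures (illustration only, Python 3):
import inspect

print(inspect.signature(Class1.__init__))  # (self, y=2, z=3) -- self-documenting
print(inspect.signature(Class2.__init__))  # (self, **kwargs) -- defaults hidden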
Do not do this. Why? Because it forces you to read not only the __init__ code to understand how to create the object, but also all of the functions called therein.

Python decorator that makes OOP code FP; good or bad idea?

Recently I've been trying to figure out a solution to the 'expression problem' of choosing between implementing my code in OOP or FP (functional programming). The example I used to illustrate my problem was a Vector2D class. I could make a class that contains all the necessary functions for a 2D vector (dot product, magnitude, etc.), or I could make a set of functions that take a 2-tuple representing a vector. Which option do I choose?
To cope with this problem, I thought it might be nice to make a decorator that takes a class's methods and turns them into global functions. This is how I did it:
import types

def function(method):
    method._function = True
    return method

def make_functions(cls):
    for key in cls.__dict__:
        method = getattr(cls, key)
        if not isinstance(method, types.FunctionType):
            continue
        if hasattr(method, '_function') and method._function:
            globals()[method.__name__] = method
    return cls

@make_functions
class Vector2D:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        return 'Vector(%g, %g)' % (self.x, self.y)

    def __iter__(self):
        for component in self.x, self.y:
            yield component

    def __getitem__(self, key):
        return (self.x, self.y)[key]

    def __setitem__(self, key, val):
        if key == 0:
            self.x = val
        elif key == 1:
            self.y = val
        else:
            print('not cool man')

    def __add__(self, other):
        x = self[0] + other[0]
        y = self[1] + other[1]
        return self.__class__(x, y)

    __radd__ = __add__

    def __sub__(self, other):
        x = self[0] - other[0]
        y = self[1] - other[1]
        return self.__class__(x, y)

    def __rsub__(self, other):
        x = other[0] - self[0]
        y = other[1] - self[1]
        return self.__class__(x, y)

    def __mul__(self, other):
        x = other * self[0]
        y = other * self[1]
        return self.__class__(x, y)

    __rmul__ = __mul__

    @function
    def dot_product(self, other):
        return self[0]*other[1] + self[1]*other[0]
Now, dot_product is not only a method of the Vector2D class, but also a global function that takes two vectors (or vector-like objects). This satisfies both the functional and object-oriented approaches to implementing an object like this. The only problem I can foresee with this approach is that any class that can be represented as another object, like a tuple or a list, must be defined to work in the same way as the objects that act like it. This is not so bad for a Vector that can also be a tuple, since all we have to do is define the __getitem__ and __iter__ methods; however, I can see this getting wildly out of control for classes that have multiple contrasting implementations.
Is this a fair solution to the problem? Is it bad practice or technique? Should I solely provide one or the other?
Python has a @staticmethod decorator for using a class's methods without an instantiation of that class. Simply annotate a method with the decorator (note the method no longer takes a self reference), and you can call it on the class itself.
In your case, for the dot product, simply do:
class Vector2D():
    # Magic methods here...

    @staticmethod
    def dot_product(a, b):
        return a[0]*b[1] + a[1]*b[0]
Then, simply call Vector2D.dot_product(my_vector1, my_vector2) to use the function from the Vector2D class itself.
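If you also want the plain-function spelling the question asks for, a module-level wrapper that delegates to the static method is a simple, explicit alternative to rewriting globals() (sketch):
def dot_product(a, b):
    # Plain function for the FP style; delegates to the class implementation.
    return Vector2D.dot_product(a, b)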
Assigning class methods to global functions sounds like a very dangerous, buggy, complex, and verbose solution. I would avoid it at all costs.
