Defining and Calling a Function within a Python Class - python

I'm creating a class and I'm hoping to call a user-defined function within a method for that class. I'd also like to define the function within the class definition. However, when I call the class, I get the error message name *whatever function* is not defined.
For instance, something like this:
class ExampleClass():
    def __init__(self, number):
        self.number = number

    def plus_2_times_4(x):
        return(4*(x + 2))

    def arithmetic(self):
        return(plus_2_times_4(self.number))
But when I call:
instance = ExampleClass(number = 4)
instance.arithmetic()
I get the error message.
So basically I want to define the function in one step (def plus_2_times_4) and use the function when defining a method in another step (def arithmetic...). Is this possible?
Thanks so much in advance!

Define and call plus_2_times_4 with self, namely:
class ExampleClass():
    def __init__(self, number):
        self.number = number

    def plus_2_times_4(self, x):
        return(4*(x + 2))

    def arithmetic(self):
        return(self.plus_2_times_4(self.number))
This will work.
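A quick usage check of the version above (assuming the class exactly as written):

instance = ExampleClass(number=4)
print(instance.arithmetic())   # 24, i.e. 4 * (4 + 2)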

Call the method using ExampleClass.plus_2_times_4:
class ExampleClass():
    def __init__(self, number):
        self.number = number

    def plus_2_times_4(x):
        return(4*(x + 2))

    def arithmetic(self):
        return(ExampleClass.plus_2_times_4(self.number))
Alternatively, use the @staticmethod decorator and call the method using the normal method calling syntax:
class ExampleClass():
    def __init__(self, number):
        self.number = number

    @staticmethod
    def plus_2_times_4(x):
        return(4*(x + 2))

    def arithmetic(self):
        return(self.plus_2_times_4(self.number))
The @staticmethod decorator ensures that self will never be implicitly passed in, like it normally is for methods.
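As a small usage sketch of the @staticmethod version above, both call styles work and no self is ever passed to the helper:

instance = ExampleClass(number=4)
print(instance.arithmetic())            # 24
print(instance.plus_2_times_4(4))       # 24: called on the instance, self not passed in
print(ExampleClass.plus_2_times_4(4))   # 24: also callable straight off the class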

Look at your plus_2_times_4 and arithmetic definitions. There’s no way for Python to tell that you wanted one of them to be a local function and the other one to be a method. They’re both defined exactly the same way.
And really, they’re both. In Python, anything you put in a class statement body is local while that class definition is happening, and it becomes a class attribute later.
If you want to be able to call the function as plus_2_times_4 later, you don’t want this. You just want to declare a global function, outside the class definition. And that really does seem like what you want here. The function doesn’t have any inherent connection to the class; it just takes a number and does stuff to that number without any thought of anything about your class.
Or, if you don’t want to “pollute the global namespace”, you can just define it as a local function within arithmetic. Then arithmetic can just call it—and nobody else can.
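To make those two alternatives concrete, here is a minimal sketch of both; the class names are just for illustration:

# Alternative 1: a module-level function; the class simply uses it.
def plus_2_times_4(x):
    return 4 * (x + 2)

class ExampleClass:
    def __init__(self, number):
        self.number = number

    def arithmetic(self):
        return plus_2_times_4(self.number)

# Alternative 2: a function local to arithmetic; nothing else can see it.
class ExampleClass2:
    def __init__(self, number):
        self.number = number

    def arithmetic(self):
        def plus_2_times_4(x):
            return 4 * (x + 2)
        return plus_2_times_4(self.number)

print(ExampleClass(4).arithmetic())    # 24
print(ExampleClass2(4).arithmetic())   # 24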
If, on the other hand, you want it to be a method, you have to make it usable as a method. A normal instance method has to take self as an extra first parameter, even if it’s not going to do anything with self. (Although not doing anything with self is usually a sign that you wanted a global function, not a method, it’s not illegal or anything.) And it has to be called on an instance, like self.plus_2_times_4(…).
You could declare it as a static method by adding the @staticmethod decorator. Then you don't need to add the useless self parameter. But you still need to call it on an instance or on the class, because it's still an attribute of the class, not a global name. (You could also use @classmethod if you have some idea of wanting subclasses to override it, but that doesn't seem likely here.)
What if you really want to just capture the function value so you can call it without going through the class? Well, you could make it the default value of a parameter, like this:
def arithmetic(self, *, _func=plus_2_times_4):
    return _func(self.number)
Default values are captured at function definition time—that is, while the class is still being defined—so the function is still local there and can be captured there. But if this seems weird and ugly, there’s a good reason for that—this is not something you usually want to do. To a reader, the function still looks like an incorrect method rather than a disposable function needed by arithmetic. It even ends up as a member of the class, but it can’t be called normally. This is all pretty misleading. In the rare cases you need this, you probably want to give it a _private name, and del it once you’ve used it.
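For concreteness, a sketch of that capture-and-delete pattern applied to the original class; the del step and the underscore name are additions for illustration, not the asker's code:

class ExampleClass:
    def __init__(self, number):
        self.number = number

    def _plus_2_times_4(x):              # a plain local function, not a usable method
        return 4 * (x + 2)

    def arithmetic(self, *, _func=_plus_2_times_4):
        return _func(self.number)        # the default captured the function at class-definition time

    del _plus_2_times_4                  # drop the misleading class attribute once captured

print(ExampleClass(4).arithmetic())      # 24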

Related

Call specific method from parent class in multiple inheritance - Python

I have one class with multiple inheritance. I would like to concat the output from some parents' methods that share the same name. Ideally, I would be able to do this without going through all parent classes, but by selecting explicitly the cases I want.
class my_class1:
    def common_method(self): return ['dependency_1']

class my_class2:
    def common_method(self): return ['dependency_2']

class my_class3:
    def whatever(self): return 'ANYTHING'

class composite(my_class1, my_class2, my_class3):
    def do_something_important(self):
        return <my_class1.common_method()> + <my_class2.common_method()>
Since you don't want to use the language mechanisms to call super-methods (which are designed to go through all the methods in the superclasses, even ones that are not known at the time the code is written), just call the methods explicitly on the classes you want - by using the class name.
The only thing that has to be done differently is that you have to call the method from the class, not from the instance, and then insert the instance manually as the first parameter. Python's automatic self reference is only good when calling the method on the most derived sub-class (from which point, in a more common design, it would use super to run its counterparts in the superclasses).
For your example to work, you simply have to write it like this:
class my_class1:
    def common_method(self): return ['dependency_1']

class my_class2:
    def common_method(self): return ['dependency_2']

class my_class3:
    def whatever(self): return 'ANYTHING'

class composite(my_class1, my_class2, my_class3):
    def do_something_important(self):
        return my_class1.common_method(self) + my_class2.common_method(self)
Note, however, that if any of the common_methods were to call super().common_method in a common ancestor base, that super-method would run once for each explicit invocation of a subclass's .common_method.
If you wanted to specialize that, it would be tough to do.
In other words: a "super" counterpart that lets you specify which super-classes to visit when calling the method, while ensuring any super-method they call runs only once, is feasible, but complicated and error prone. If you can use explicit classes like in this example, it is 100 times simpler.
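For reference, a quick usage check of the explicit-call version above, assuming the classes exactly as defined:

c = composite()
print(c.do_something_important())   # ['dependency_1', 'dependency_2']
print(c.whatever())                 # ANYTHING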

Setting instance method syntax

The following code is of course totally pointless; it's not supposed to
do anything but illustrate what I'm confused about:
class func():
    def __call__(self, x):
        raise Exception("func.__call__ error")

def double(x):
    return 2*x

doubler = func()
doubler.__call__ = double
print doubler(2)
Can someone explain why this works? I would have expected that if I
wanted to set doubler.__call__ to something it would be a function
that takes two variables; I'd expect the code above to raise some sort
of too-many-parameters error. What gets passed to what, when?
(And then: How could I set doubler.__call__ to a function that
will actually have access to both "self" and "x"?)
(Context: An admittedly silly of-academic-interest example of why I might want to set an instance method this way: Each computable instance needs its own Approx method; creating a separate subclass for each instance seems "wrong"...)
Edit. Probably a better example, making it clear it has nothing
to do with magic-method magic:
class func():
    def call(self, x):
        raise Exception("func.call error")

def double(x):
    return 2*x

doubler = func()
doubler.call = double
print doubler.call(2)
On third thought, probably the following is the right way to do it.
(i) Seems cleaner somehow, using the Python object model instead of
tinkering with it (ii) even 24 hours ago with my then much cruder
understanding I would have expected it to work; somehow in this
version it simply seems to make sense to me that the function passed
to the constructor should take only one variable (iii) it seems to
work regardless of whether I inherit from object, which I think means it would also work in 3.0.
class func3(object):
    def __init__(self, f):
        self.f = f

    def __call__(self, x):
        return self.f(x)

def double(x):
    return 2.0*x

f3 = func3(double)
print f3(2)
When you assign to doubler.__call__, you're binding a function to an instance attribute. This hides the class attribute of the same name that was created in the class statement.
Python's method binding only kicks in when you are looking up a class attribute via an instance. If the attribute's value is a descriptor (which functions are), then the descriptor's __get__ method gets called with appropriate parameters. For a function object, that binds the method to the instance (so self gets passed in automatically as the first argument).
Your first example wouldn't actually work in Python 3, only in Python 2. That's because in Python 2 you're creating an "old-style" class, which does all its method lookups on the instance. In new-style classes (which you can get in Python 2 by inheriting from object, or by default in Python 3), __special__ methods, when they're invoked by the interpreter (e.g. when you do doubler(2) to run doubler.__call__) are looked up only in the class, not in the instance's attributes. So your first example won't work with a new-style class, but the version that uses a normal method (call instead of __call__) would be fine.
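If the goal really is a per-instance function that also receives the instance, a Python 3 sketch using types.MethodType does the binding explicitly; the names here are chosen to mirror the second example, not taken from it:

import types

class Func:
    def call(self, x):
        raise Exception("Func.call error")

def double(self, x):      # written to accept self explicitly
    return 2 * x

doubler = Func()
doubler.call = types.MethodType(double, doubler)   # bind double to this one instance
print(doubler.call(2))    # 4, and self inside double is doubler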
This is something between an answer to the question and a continuation of the question. I was kindly referred to another thread where more or less the same question was answered. I didn't follow the answers in that thread very well, being ignorant of the things the people there are talking about, hence the Question: Is what I say below correct? (If yes then this is an answer to the question above; if no I'd appreciate someone explaining why not...)
(i) Since I assign a function to an instance of func instead of to the class, it is now an "instance method", as opposed to a "class method".
(ii) And that's why it's not passed the instance as the first parameter; that happens with class methods but not with instance methods...
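A quick Python 3 check of the behavior being asked about here (a function stored on the instance is called as-is, with nothing passed in implicitly):

class C:
    def method(self, x):
        return ('method', x)

c = C()
c.f = lambda x: ('plain attribute', x)

print(c.method(1))   # ('method', 1): self was passed automatically
print(c.f(1))        # ('plain attribute', 1): no self, it is just a stored function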

@staticmethod or function outside class?

Assuming I have a class which requires a function (or should I say method) which is:
independent from my class instance - doesn't need self argument;
is called only inside my class object;
I won't need access to it at any point (to override it, for example);
should I (A) place it inside the class and mark it as a @staticmethod, or should I (B) define it outside my class object (but in the same namespace)? Why?
Example:
class A:
    def __init__(self, my_int):
        self.my_int = my_int

    def my_int_and_4(self):
        print(self.adder(self.my_int, 4))

    @staticmethod
    def adder(a, b):
        return a+b
or
def adder(a, b):
    return a+b

class B:
    def __init__(self, my_int):
        self.my_int = my_int

    def my_int_and_4(self):
        print(adder(self.my_int, 4))
EDIT: maybe the example is a bit oversimplified. I should have added that my version of "adder" is specifically used with my class and in no other case.
This is a textbook use case for a private static method.
The key point here is that you should make it a private method of that class. That way you're certain nothing else will use it and depend on its implementation. You'll be free to change it in the future, or even delete it, without breaking anything outside that class.
And yeah, make it static, because you can.
In Python, there is no way to make a method truly private, but by convention, prefixing the method name by a _ means it should be treated as private.
@staticmethod
def _adder(a, b): ## <-- note the _
    return a+b
If at some point you suddenly need to use it outside the class, then exposing it will be no trouble at all, e.g. using a public wrapper method.
The reverse, however, isn't true; once exposed, it's difficult to retract that exposure.
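A minimal sketch of that exposure path, assuming the _adder helper above; the public adder wrapper is hypothetical:

class A:
    @staticmethod
    def _adder(a, b):      # private helper; free to change or disappear
        return a + b

    @staticmethod
    def adder(a, b):       # public wrapper added only once it's needed elsewhere
        return A._adder(a, b)

print(A.adder(2, 3))   # 5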
I would definitely use a private static method in this case, for the reasons described by Jean-Francois Corbett. There are two types of methods in Python that belong to the class itself, rather than an instance: class methods and static methods.
The first parameter of a class method (created with @classmethod) references the class in exactly the same manner that the first parameter of an instance method (self) references an instance. It is the equivalent of static methods in most other languages. If your method requires access to other class members, use a class method.
A static method (created with @staticmethod) does not contain a reference to the class, and therefore cannot reference other class members. It's generally used for private helper methods and the like.
For your adder method, I would definitely use a static method. However, in this modified (and rather useless) version, a class method is necessary:
class A:
    x = 1

    def __init__(self, my_int):
        self.my_int = my_int

    def my_int_and_4(self):
        print(self._adder(self.my_int, 4))

    @staticmethod
    def _adder(a, b):
        return a+b

    @classmethod
    def _increment(cls, n):
        return n + cls.x
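A short usage check of the class above, assuming it as written:

a = A(3)
a.my_int_and_4()         # prints 7 via the static helper
print(A._increment(5))   # 6: the classmethod reads cls.x, no instance required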
Both approaches will work, so it's the matter of readability and following conventions.
Does the method need to look at the instance's private attributes? If yes, it's a good reason to keep it in the class.
Is the method only used as a helper for just one of the other methods? If yes, it's a good reason to put it right after the calling method so that the code can be read top-down.
Does the method seem to make sense outside of the context of your class? If yes, it's a good reason to make it a free function or even move it to a different file, like utils.

Why do we use @staticmethod?

I just can't see why we need to use @staticmethod. Let's start with an example.
class test1:
    def __init__(self, value):
        self.value = value

    @staticmethod
    def static_add_one(value):
        return value+1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

a = test1(3)
print(a.new_val) ## >>> 4

class test2:
    def __init__(self, value):
        self.value = value

    def static_add_one(self, value):
        return value+1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

b = test2(3)
print(b.new_val) ## >>> 4
In the example above, the method static_add_one in the two classes does not require the instance of the class (self) in its calculation.
The method static_add_one in the class test1 is decorated with @staticmethod and works properly.
But at the same time, the method static_add_one in the class test2, which has no @staticmethod decoration, also works properly, by using the trick of accepting a self argument it doesn't use at all.
So what is the benefit of using @staticmethod? Does it improve the performance? Or is it just due to the Zen of Python, which states that "Explicit is better than implicit"?
The reason to use staticmethod is if you have something that could be written as a standalone function (not part of any class), but you want to keep it within the class because it's somehow semantically related to the class. (For instance, it could be a function that doesn't require any information from the class, but whose behavior is specific to the class, so that subclasses might want to override it.) In many cases, it could make just as much sense to write something as a standalone function instead of a staticmethod.
Your example isn't really the same. A key difference is that, even though you don't use self, you still need an instance to call static_add_one --- you can't call it directly on the class with test2.static_add_one(1). So there is a genuine difference in behavior there. The most serious "rival" to a staticmethod isn't a regular method that ignores self, but a standalone function.
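To make that difference concrete, a small Python 3 sketch, assuming the test1/test2 classes from the question:

print(test1.static_add_one(3))    # 4: usable straight off the class

try:
    test2.static_add_one(3)       # the 3 is consumed as self, so value is missing
except TypeError as e:
    print(e)                      # ... missing 1 required positional argument: 'value'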
Today I suddenly found a benefit of using @staticmethod.
If you created a staticmethod within a class, you don't need to create an instance of the class before using the staticmethod.
For example,
class File1:
    def __init__(self, path):
        out = self.parse(path)

    def parse(self, path):
        ..parsing works..
        return x

class File2:
    def __init__(self, path):
        out = self.parse(path)

    @staticmethod
    def parse(path):
        ..parsing works..
        return x

if __name__ == '__main__':
    path = 'abc.txt'
    File1.parse(path) #TypeError: unbound method parse() ....
    File2.parse(path) #Goal!!!!!!!!!!!!!!!!!!!!
Since the method parse is strongly related to the classes File1 and File2, it is more natural to put it inside the class. However, sometimes this parse method may also be used in other classes under some circumstances. If you want to do so using File1, you must create an instance of File1 before calling the method parse. While using staticmethod in the class File2, you may directly call the method by using the syntax File2.parse.
This makes your work more convenient and natural.
I will add something other answers didn't mention. It's not only a matter of modularity, of putting something next to other logically related parts. It's also that the method could be non-static at another point of the hierarchy (i.e. in a subclass or superclass) and thus participate in polymorphism (type-based dispatching). So if you put that function outside the class, you will be precluding subclasses from effectively overriding it. Now, say you realize you don't need self in function C.f of class C; you have three options:
Put it outside the class. But we just decided against this.
Do nothing new: while unused, still keep the self parameter.
Declare that you are not using the self parameter, while still letting other C methods call f as self.f, which is required if you wish to keep open the possibility of further overrides of f that do depend on some instance state.
Option 2 demands less conceptual baggage (you already have to know about self and methods-as-bound-functions, because it's the more general case). But you may still prefer to be explicit about self not being used (and the interpreter could even reward you with some optimization, not having to partially apply a function to self). In that case, you pick option 3 and add @staticmethod on top of your function.
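Here is a minimal sketch of option 3 and the override scenario it keeps open; the class names are illustrative only:

class C:
    @staticmethod
    def f(x):
        return x + 1          # needs no instance state

    def g(self, x):
        return self.f(x)      # dispatch through self, so subclasses can override f

class D(C):
    def __init__(self, offset):
        self.offset = offset

    def f(self, x):           # override that does use instance state
        return x + self.offset

print(C().g(1))     # 2
print(D(10).g(1))   # 11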
Use @staticmethod for methods that don't need to operate on a specific object, but that you still want located in the scope of the class (as opposed to module scope).
Your example in test2.static_add_one wastes its time passing an unused self parameter, but otherwise works the same as test1.static_add_one. Note that this extraneous parameter can't be optimized away.
One example I can think of is in a Django project I have, where a model class represents a database table, and an object of that class represents a record. There are some functions used by the class that are stand-alone and do not need an object to operate on, for example a function that converts a title into a "slug", which is a representation of the title that follows the character set limits imposed by URL syntax. The function that converts a title to a slug is declared as a staticmethod precisely to strongly associate it with the class that uses it.
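Not the actual Django code, but a minimal sketch of that kind of slug helper as a staticmethod; the regex rule is a simplification:

import re

class Article:
    def __init__(self, title):
        self.title = title
        self.slug = self.make_slug(title)

    @staticmethod
    def make_slug(title):
        # lowercase, replace runs of non-alphanumerics with hyphens
        return re.sub(r'[^a-z0-9]+', '-', title.lower()).strip('-')

print(Article('Hello, World!').slug)   # hello-world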

Is it bad practice to put stuff into new function properties?

Say I have a class and a function:
class AddressValidator:
    def __init__(self):
        pass

    def validate(self, address):
        # ...

def validate_address(addr):
    validator = AddressValidator()
    return validator.validate(addr)
The function is a shortcut for using the class, if you will. Now, what if this function has to be run thousands of times? If the validator class actually has to do something on instantiation, like connecting to a database, creating it over and over thousands of times is pretty wasteful. I was wondering if I could perhaps do something like this:
def validate_address(addr):
    if not hasattr(validate_address, 'validator'):
        validate_address.validator = AddressValidator()
    validator = validate_address.validator
    return validator.validate(addr)
Now the validator class is only instantiated once and saved "in the function", to put it that way. I've never seen this done though, so I'm guessing it's bad practice. If so, why?
Note: I know I can just cache the validator object in a module global. I'm just curious if this is a viable solution when I want to avoid littering my module.
Despite "everything is an object", not everything works as nicely as an instance of a well-controlled class.
This problem looks like a typical case for a "functor", or "callable object" as it is called in Python.
The code will look something like this:
class AddressValidator:
    def __init__(self):
        pass

    def __call__(self, address):
        # ...

validate_address = AddressValidator()
Or you could just define your function as a shortcut to the bound method:
class AddressValidator:
    def __init__(self):
        pass

    def validate(self, address):
        # ...

validate_address = AddressValidator().validate
I'd go with a default argument (evaluated once at function definition time and bound to the function):
def validate_address(addr, validator=AddressValidator()):
    return validator.validate(addr)
This is perfectly acceptable if instances of AddressValidator are considered immutable (i.e. they don't contain methods that modify their internal state), and it also allows you to later override the choice of validator should you find the need to (e.g. to provide a validator specialized for a particular country).
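A self-contained sketch of that pattern, with placeholder validation rules and a hypothetical StrictValidator to show the per-call override:

class AddressValidator:
    def validate(self, address):
        return bool(address.strip())      # placeholder rule

class StrictValidator(AddressValidator):
    def validate(self, address):
        return ',' in address             # hypothetical stricter rule

def validate_address(addr, validator=AddressValidator()):
    return validator.validate(addr)

print(validate_address('10 Downing St'))                               # True, default validator
print(validate_address('10 Downing St', validator=StrictValidator()))  # False, overridden per call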
