Using setattr to freeze some parameters of a method [duplicate]

This question already has answers here:
Creating functions (or lambdas) in a loop (or comprehension)
(6 answers)
Closed 6 months ago.
In order to automatically generate parameterized tests, I am trying to add methods to a class by freezing some parameters of an existing method. Here is the piece of Python 3 code:
class A:
    def f(self, n):
        print(n)

params = range(10)
for i in params:
    name = 'f{0}'.format(i)
    method = lambda self: A.f(self, i)
    setattr(A, name, method)
However, the following lines then produce rather disappointing output
a = A()
a.f0()
prints "9" (instead of "0"). I must be doing something wrong, but I can't see what. Can you help ?
Thanks a lot
Edit: this question is indeed a duplicate. I would like to acknowledge the quality of all comments, which go much deeper than the raw answer.

Try
method = lambda self, i=i: A.f(self, i)
because otherwise, by the time you call the method, the value of i may have changed: i is looked up when the lambda is called, not when it is defined.
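With that one change, the generated methods behave as intended:
a = A()
a.f0()  # prints 0
a.f9()  # prints 9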

The best way to "freeze" parameters in Python is to use functools.partial. It's roughly equivalent to warwaruk's lambda version, but if you have a function with lots of arguments and only want to freeze one or two of them (or if you only know certain arguments and don't care about the rest), partial is more elegant, as you only specify the arguments you want to freeze rather than repeating the whole function signature in the lambda. One caveat for this particular case: a plain partial object stored on a class does not implement the descriptor protocol and so does not bind to instances, which is why its sibling functools.partialmethod is the right tool here.
An example for your program:
class A:
    def f(self, n):
        print(n)

from functools import partialmethod

for i in range(10):  # params
    # partialmethod (unlike partial) supports the descriptor protocol,
    # so the generated attribute binds as a method on instances
    setattr(A, 'f{0}'.format(i), partialmethod(A.f, n=i))
Depending on which version of Python 3 you're using, you may not need to include the 0 in the format placeholder: starting with Python 3.1, empty '{}' fields are numbered automatically.
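A quick illustrative check, assuming the partialmethod version above has run:
a = A()
a.f0()  # prints 0
a.f7()  # prints 7
print('f{}'.format(3))  # auto-numbered field (Python 3.1+): prints f3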

Related

Can lambda work with *args as its parameter? [duplicate]

This question already has answers here:
Function chaining in Python
(6 answers)
Closed 6 years ago.
I am calculating a sum using lambda like this:
from functools import reduce  # needed in Python 3, where reduce is no longer a builtin

def my_func(*args):
    return reduce((lambda x, y: x + y), args)

my_func(1, 2, 3, 4)
and its output is 10.
But I want a lambda function that takes an arbitrary number of arguments and sums all of them. Suppose this is a lambda function:
add = lambda *args: ...  # code for adding all of args
someone should be able to call the add function as:
add(5)(10) # it should output 15
add(1)(15)(20)(4) # it should output 40
That is, one should be able to supply an arbitrary number of parenthesized calls.
Is this possible in Python?
This is not possible with lambda, but it is definitely possible to do this in Python.
To achieve this behaviour you can subclass int and override its __call__ method to return a new instance of the same class with the updated value each time:
class Add(int):
    def __call__(self, val):
        return type(self)(self + val)
Demo:
>>> Add(5)(10)
15
>>> Add(5)(10)(15)
30
>>> Add(5)
5
# Can be used to perform other arithmetic operations as well
>>> Add(5)(10)(15) * 100
3000
If you want to support floats as well then subclass from float instead of int.
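For instance, a minimal sketch of that float variant (AddF is an illustrative name):
class AddF(float):
    def __call__(self, val):
        # fold the new value in and return a fresh AddF so the chain continues
        return type(self)(self + val)

print(AddF(5)(10.5)(4))  # 19.5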
The sort of "currying" you're looking for is not possible.
Imagine that add(5)(10) is 15. In that case, add(5)(10)(20) needs to be equivalent to 15(20). But 15 is not callable, and in particular is not the same thing as the "add 15" operation.
You can certainly say lambda *args: sum(args), but you would need to pass its arguments in the usual way: add(5, 10, 20, 93)
[EDITED to add:] There are languages in which functions with multiple arguments are handled in this sort of way; Haskell, for instance. But those are functions with a fixed number of arguments, and the whole advantage of doing it that way is that if e.g. add 3 4 is 7 then add 3 is a function that adds 3 to things -- which is exactly the behaviour you're wanting not to get, if you want something like this to take a variable number of arguments.
For a function of fixed arity you can get Haskell-ish behaviour, though the syntax doesn't work so nicely in Python, just by nesting lambdas: after add = lambda x: lambda y: x+y you can say add(3)(4) and get 7, or you can say add(3) and get a function that adds 3 to things.
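For concreteness, a quick demonstration of that fixed-arity currying:
add = lambda x: lambda y: x + y

add3 = add(3)     # a function that adds 3 to things
print(add3(4))    # 7
print(add(3)(4))  # 7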
[EDITED again to add:] As Ashwini Chaudhary's ingenious answer shows, you actually can kinda do what you want by arranging for add(5)(10) to be not the actual integer 15 but another object that very closely resembles 15 (and will just get displayed as 15 in most contexts). For me, this is firmly in the category of "neat tricks you should know about but never ever actually do", but if you have an application that really needs this sort of behaviour, that's one way to do it.
(Why shouldn't you do this sort of thing? Mostly because it's brittle and liable to produce unexpected results in edge cases. For instance, what happens if you ask for add(5)(10.5)? That will fail with A.C.'s approach; PM 2Ring's approach will cope OK with that but has different problems; e.g., add(2)(3)==5 will be False. The other reason to avoid this sort of thing is because it's ingenious and rather obscure, and therefore liable to confuse other people reading your code. How much this matters depends on who else will be reading your code. I should add for the avoidance of doubt that I'm quite sure A.C. and PM2R are well aware of this, and that I think their answers are very clever and elegant; I am not criticizing them but offering a warning about what to do with what they've told you.)
You can kind of do this with a class, but I really wouldn't advise using this "party trick" in real code.
class add(object):
    def __init__(self, arg):
        self.arg = arg

    def __call__(self, arg):
        self.arg += arg
        return self

    def __repr__(self):
        return repr(self.arg)

# Test
print(add(1)(15)(20)(4))

output
40
Initially, add(1) creates an add instance, setting its .arg attribute to 1. add(1)(15) invokes the __call__ method, adding 15 to the current value of .arg and returning the instance so we can call it again. The same process is repeated for the subsequent calls. Finally, when the instance is passed to print, its __repr__ method is invoked, which passes the string representation of .arg back to print.
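As noted in the previous answer, the result only resembles an integer; since this class defines no __eq__, comparisons fall back to identity:
result = add(1)(15)(20)(4)
print(result)            # 40 (printed via __repr__)
print(result == 40)      # False: no __eq__ defined, so this is an identity comparison
print(result.arg == 40)  # True: the underlying int is stored in .arg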

Do Python functions know how many outputs are requested? [duplicate]

This question already has answers here:
nargout in Python
(6 answers)
Closed 7 years ago.
In Python, do functions know how many outputs are requested? For instance, could I have a function that normally returns one output, but if two outputs are requested, it does an additional calculation and returns that too?
Or is this not the standard way to do it? In this case, it would be nice to avoid an extra function argument that says to provide a second input. But I'm interested in learning the standard way to do this.
The real and easy answer is: No.
Python functions and methods do not know how many outputs are requested, since unpacking of the returned tuple happens after the function call has returned.
A common best practice, though, is to use an underscore (_) as a placeholder for returned values you don't need. For example:
def f():
    return 1, 2, 3

a, b, c = f()  # if you want to use all
a, _, _ = f()  # use only first element in the returned tuple, 'a'
_, b, _ = f()  # use only 'b'
pylint, for example, will suppress unused-variable warnings for names that are just underscores.
Python functions always return exactly 1 value.
In this case:
def myfunc():
    return

that value is None. In this case:
def myfunc():
    return 1, 2, 3
that value is the tuple (1, 2, 3).
So there is nothing for the function to know, really.
As for returning different outputs controlled by parameters, I'm always on the fence about that. It would depend on the actual use case. For a public API that is used by others, it is probably best to provide two separate functions with different return types, that call private code that does take the parameter.
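A minimal sketch of that last pattern, with illustrative names:
def _compute(data, extended=False):
    total = sum(data)
    if extended:
        # the extra calculation is only done when explicitly asked for
        return total, total / len(data)
    return total

def compute_total(data):
    return _compute(data)

def compute_total_and_mean(data):
    return _compute(data, extended=True)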

Best way to "overload" function in python? [duplicate]

This question already has answers here:
Function overloading in Python: Missing [closed]
(5 answers)
Closed 7 years ago.
I am trying to do something like this in Python:
def foo(x, y):
    ...  # do something at position (x, y)

def foo(pos):
    foo(pos.x, pos.y)
So I want to call a different version of foo depending on the number of parameters I provide. This of course doesn't work because I redefined foo.
What would be the most elegant way of achieving this? I would prefer not to use named parameters.
Usually you'd either define two different functions, or do something like:
def foo(x, y=None):
    if y is None:
        x, y = x.x, x.y
    # do something at position (x, y)
The option to define two different functions seems unwieldy if you're used to languages that have overloading, but if you're used to languages like Python or C that don't, then it's just what you do. The main problem with the above code in Python is that it's a bit awkward to document the first parameter, since it doesn't mean the same thing in the two cases.
Another option is to only define the version of foo that takes a pos, but also supply a type for users:
import collections

Pos = collections.namedtuple('Pos', 'x y')
Then anyone who would have written foo(x,y) can instead write foo(Pos(x,y)). Naturally there's a slight performance cost, since an object has to be created.
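For example, callers would then write (illustrative body):
def foo(pos):
    print('doing something at', pos.x, pos.y)

foo(Pos(3, 4))  # instead of foo(3, 4)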
You can use default parameters, but if you want to use overloading in Python, the best way to do it is with pytyp:
from pytyp.spec.dispatch import overload

@overload
def foo(x, y):
    ...

@foo.overload
def foo(x, y, z):
    ...

Pytyp is a Python package (you can download it from https://pypi.python.org/pypi/pytyp). It includes the module pytyp.spec.dispatch, which contains decorators such as overload. When placed on a function, the decorator dispatches each call among all of the overloaded definitions.
There are a lot of things you could do, like named default parameters.
However, what it looks like you want is "multiple dispatch", and there are a number of decorator implementations out there to help you do that sort of thing. Here's one implementation in use:
>>> from collections import namedtuple
>>> Position = namedtuple('Position', 'x y')  # assumed definition
>>> from multipledispatch import dispatch
>>> @dispatch(int, int)
... def add(x, y):
...     return x + y
>>> @dispatch(Position)
... def add(pos):
...     return "%s + %s" % (pos.x, pos.y)
>>> add(1, 2)
3
>>> pos = Position(3, 4)
>>> add(pos)
'3 + 4'
While @functools.singledispatch is coming in Python 3.4, I don't think it will work for your example, since in one case you have multiple arguments.
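For reference, a minimal sketch of @functools.singledispatch (Python 3.4+), which dispatches on the type of the first argument only; all names here are illustrative:
from collections import namedtuple
from functools import singledispatch

Position = namedtuple('Position', 'x y')

@singledispatch
def describe(arg):
    # fallback when no registered type matches
    raise NotImplementedError('unsupported type')

@describe.register(Position)
def _(pos):
    return 'position (%s, %s)' % (pos.x, pos.y)

@describe.register(int)
def _(x):
    return 'plain int %s' % x

print(describe(Position(3, 4)))  # position (3, 4)
print(describe(7))               # plain int 7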
What you're trying to achieve looks like a bad design; consider sticking with different function names:
def do_something_by_coordinates(x, y):
    pass

def do_something_by_position(pos):
    do_something_by_coordinates(pos.x, pos.y)
Or you can use kwargs if you really need to:
Understanding kwargs in Python
One way is by setting a default value:
def foo(x, y=None):
    # now, for example, you can call foo(1) and foo(1, 2)
    ...

Inside foo you can check whether y is None and have different logic for the two cases.
Note that you can have a better design if you separate the functions; don't attempt to have the same function accept both coordinates and a position.

Why does Python pointing change with small changes to this function? [duplicate]

This question already has answers here:
Creating functions (or lambdas) in a loop (or comprehension)
(6 answers)
Closed 8 years ago.
When working with Python, it bothered me that while obj.method() is perfectly fine, method(obj) isn't allowed. So I figured I'd try to write some code to fix that. I came up with the following:
import inspect
import sys

def globalclassfuncs(defobj):
    for i in inspect.getmembers(defobj, predicate=inspect.ismethod):
        def scope():
            var = i[0]
            setattr(sys.modules[__name__], i[0],
                    lambda obj, *args: getattr(obj, var)(*args))
        scope()
However, there's something weird going on here. When I remove def scope(): and scope() so that the body runs directly in the for loop, or when I change the getattr() call to use i[0] directly instead of going through var, somehow all the newly defined functions point to the last defined method instead of the method they should point to. Why does the behaviour change so much with such small changes to the code?
Seems like a case of late-binding closures: the lambdas without the scope() wrapper (or without var) all close over the same loop variable, which is only looked up when they are eventually called.
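A minimal sketch of the effect and of the usual default-argument fix, outside the class machinery:
funcs = []
for i in range(3):
    funcs.append(lambda: i)  # late binding: i is looked up at call time
print([f() for f in funcs])  # [2, 2, 2]

funcs = []
for i in range(3):
    funcs.append(lambda i=i: i)  # the default argument captures the current i
print([f() for f in funcs])  # [0, 1, 2]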

How to create new Python methods by a loop? (Attempting results in all methods acting like the one last defined) [duplicate]

This question already has answers here:
Local variables in nested functions
(4 answers)
Closed 9 years ago.
I'm looking into generating a Python class dynamically from data.
(The purpose is to let users specify some software tests in a simple file, without knowing any Python).
I have run into an effect I did not expect;
as a toy example to quickly check that I can create methods according to a naming scheme I did the following:
import unittest

attrdict = {}
for i in range(3):
    # plain functions bind as methods via the descriptor protocol,
    # so there is no need for types.MethodType here
    attrdict["test%s" % i] = lambda self: i
attrdict["runTest"] = lambda self: [eval("self.test%s()" % i) for i in range(3)]

dynTC = type('dynTC', (unittest.TestCase,), attrdict)
now when I execute
dynTC().runTest()
... I would expect
[0,1,2]
as output, but the actual result is
[2,2,2]
I would expect the lambda definitions to bind a deep copy of the loop index, since it is just a number rather than a more complex structure, but clearly there's stuff going on here that I don't understand.
I have a feeling this may be a common 'gotcha' for new Python programmers, but all the terms I can think of to describe the problem are so generic that my searches only return a deluge of unrelated answers.
Could you please explain to me what is happening here instead of what I expected, and preferably also what I should have done to create the desired effect of several /different/ methods.
The problem is with this line...
attrdict["test%s" % i] = lambda self: i

When you define a lambda that references a variable which isn't one of its arguments, the variable is resolved in the scope in which the lambda was defined, at the point when the lambda is actually called. So you'll always get the current value of i, rather than the value i had at the point when you defined the lambda.
In your case, the value of i will end up as 2 after the for i in range(3) loop completes, so you need to capture the current value of i when creating each lambda, for example with a default argument, by changing the line to...
attrdict["test%s" % i] = lambda self, i=i: i
