Can lambda work with *args as its parameter? [duplicate] - python

This question already has answers here:
Function chaining in Python
(6 answers)
Closed 6 years ago.
I am calculating a sum using lambda like this:
from functools import reduce  # reduce lives in functools in Python 3

def my_func(*args):
    return reduce((lambda x, y: x + y), args)

my_func(1, 2, 3, 4)
and its output is 10.
But I want a lambda function that takes an arbitrary number of arguments and sums all of them. Suppose this is a lambda function:
add = lambda *args: ...  # code for adding all of args
someone should be able to call the add function as:
add(5)(10) # it should output 15
add(1)(15)(20)(4) # it should output 40
That is, one should be able to chain an arbitrary number of parenthesized calls.
Is this possible in Python?

This is not possible with lambda, but it is definitely possible to do this in Python.
To achieve this behaviour you can subclass int and override its __call__ method to return a new instance of the same class with the updated value each time:
class Add(int):
    def __call__(self, val):
        return type(self)(self + val)
Demo:
>>> Add(5)(10)
15
>>> Add(5)(10)(15)
30
>>> Add(5)
5
# Can be used to perform other arithmetic operations as well
>>> Add(5)(10)(15) * 100
3000
If you want to support floats as well then subclass from float instead of int.
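For instance, a float-based variant might look like this (a minimal sketch; the AddF name is my own):
class AddF(float):
    def __call__(self, val):
        # return a new AddF carrying the running total
        return type(self)(self + val)
>>> AddF(5)(10.5)
15.5
>>> AddF(1.5)(2.5)(3.0) * 2
14.0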

The sort of "currying" you're looking for is not possible.
Imagine that add(5)(10) is 15. In that case, add(5)(10)(20) needs to be equivalent to 15(20). But 15 is not callable, and in particular is not the same thing as the "add 15" operation.
You can certainly say lambda *args: sum(args), but then you would need to pass it its arguments in the usual way: add(5, 10, 20, 93)
[EDITED to add:] There are languages in which functions with multiple arguments are handled in this sort of way; Haskell, for instance. But those are functions with a fixed number of multiple arguments, and the whole advantage of doing it that way is that if e.g. add 3 4 is 7 then add 3 is a function that adds 3 to things -- which is exactly the behaviour you're wanting not to get, if you want something like this to take a variable number of arguments.
For a function of fixed arity you can get Haskell-ish behaviour, though the syntax doesn't work so nicely in Python, just by nesting lambdas: after add = lambda x: lambda y: x+y you can say add(3)(4) and get 7, or you can say add(3) and get a function that adds 3 to things.
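A minimal sketch of that in the interpreter:
>>> add = lambda x: lambda y: x + y
>>> add(3)(4)
7
>>> add3 = add(3)  # a function that adds 3 to things
>>> add3(10)
13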
[EDITED again to add:] As Ashwini Chaudhary's ingenious answer shows, you actually can kinda do what you want by arranging for add(5)(10) to be not the actual integer 15 but another object that very closely resembles 15 (and will just get displayed as 15 in most contexts). For me, this is firmly in the category of "neat tricks you should know about but never ever actually do", but if you have an application that really needs this sort of behaviour, that's one way to do it.
(Why shouldn't you do this sort of thing? Mostly because it's brittle and liable to produce unexpected results in edge cases. For instance, what happens if you ask for add(5)(10.5)? With A.C.'s approach that will silently truncate the result to an integer; PM 2Ring's approach will cope OK with that but has different problems; e.g., add(2)(3)==5 will be False. The other reason to avoid this sort of thing is because it's ingenious and rather obscure, and therefore liable to confuse other people reading your code. How much this matters depends on who else will be reading your code. I should add for the avoidance of doubt that I'm quite sure A.C. and PM2R are well aware of this, and that I think their answers are very clever and elegant; I am not criticizing them but offering a warning about what to do with what they've told you.)

You can kind of do this with a class, but I really wouldn't advise using this "party trick" in real code.
class add(object):
    def __init__(self, arg):
        self.arg = arg

    def __call__(self, arg):
        self.arg += arg
        return self

    def __repr__(self):
        return repr(self.arg)
# Test
print(add(1)(15)(20)(4))
output
40
Initially, add(1) creates an add instance, setting its .arg attribute to 1. add(1)(15) invokes the __call__ method, adding 15 to the current value of .arg and returning the instance so we can call it again. The same process is repeated for the subsequent calls. Finally, when the instance is passed to print, its __repr__ method is invoked, which passes the string representation of .arg back to print.


Do Python functions know how many outputs are requested? [duplicate]

This question already has answers here:
nargout in Python
(6 answers)
Closed 7 years ago.
In Python, do functions know how many outputs are requested? For instance, could I have a function that normally returns one output, but if two outputs are requested, it does an additional calculation and returns that too?
Or is this not the standard way to do it? In this case, it would be nice to avoid an extra function argument that says to provide a second input. But I'm interested in learning the standard way to do this.
The real and easy answer is: No.
Python functions/methods do not know how many outputs are requested, as unpacking of a returned tuple happens after the function call.
A common best practice, though, is to use an underscore (_) as a placeholder for returned values you don't need. For example:
def f():
    return 1, 2, 3

a, b, c = f()  # if you want to use all
a, _, _ = f()  # use only the first element of the returned tuple, 'a'
_, b, _ = f()  # use only 'b'
As a bonus, pylint will suppress unused-variable warnings for names assigned to an underscore.
Python functions always return exactly 1 value.
In this case:
def myfunc():
    return
that value is None. In this case:
def myfunc():
    return 1, 2, 3
that value is the tuple (1, 2, 3).
So there is nothing for the function to know, really.
As for returning different outputs controlled by parameters, I'm always on the fence about that. It would depend on the actual use case. For a public API that is used by others, it is probably best to provide two separate functions with different return types, that call private code that does take the parameter.
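A minimal sketch of that arrangement (all names here are invented for illustration):
def _compute(data, extended):
    # private code that takes the controlling parameter
    basic = sum(data)
    if extended:
        return basic, basic / len(data)
    return basic

def total(data):
    # public API, single output
    return _compute(data, extended=False)

def total_and_mean(data):
    # public API, two outputs
    return _compute(data, extended=True)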

Using setattr to freeze some parameters of a method [duplicate]

This question already has answers here:
Creating functions (or lambdas) in a loop (or comprehension)
(6 answers)
Closed 6 months ago.
In order to automatically generate parameterized tests, I am trying to add methods to a class by freezing some parameters of an existing method. Here is the piece of Python 3 code:
class A:
    def f(self, n):
        print(n)

params = range(10)
for i in params:
    name = 'f{0}'.format(i)
    method = lambda self: A.f(self, i)
    setattr(A, name, method)
However, the following lines then produce rather disappointing output
a = A()
a.f0()
prints "9" (instead of "0"). I must be doing something wrong, but I can't see what. Can you help ?
Thanks a lot
Edit: this question is indeed a duplicate. I would like to acknowledge the quality of all comments, which go much deeper than the raw answer.
Try
method = lambda self, i=i: A.f(self, i)
because otherwise the lambda closes over the variable i itself, not its current value; by the time you call the method, i has ended up at 9, its last value in the loop. The i=i default argument captures the value at definition time.
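With that one-line change, the demo from the question behaves as intended:
a = A()
a.f0()  # prints 0
a.f9()  # prints 9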
The best way to "freeze" parameters in Python is to use functools.partial. It's roughly equivalent to warwaruk's lambda version, but if you have a function with lots of arguments yet only want to freeze one or two of them (or if you only know certain arguments and don't care about the rest) using partial is more elegant as you only specify the arguments you want to freeze rather than having to repeat the whole function signature in the lambda.
An example for your program (note: for methods stored on a class, functools.partialmethod is the binding-aware variant; a plain partial set as a class attribute would not receive self):
class A:
    def f(self, n):
        print(n)

from functools import partialmethod

for i in range(10):  # params
    setattr(A, 'f{0}'.format(i), partialmethod(A.f, n=i))
Depending on which version of Python 3 you're using, you may not need to include the 0 in the string format placeholder; starting with 3.1, IIRC, positional fields are numbered automatically.
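A quick check of the result (assuming the partialmethod variant above):
a = A()
a.f0()  # prints 0
a.f7()  # prints 7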

Python Factory Function

Same example from the same book: Python deep nesting factory functions
def maker(N):
    def action(X):
        return X ** N
    return action
I understand the concept behind it and I think it's really neat, but I can't seem to envision when I could use this approach.
I could have easily implemented the above by having maker() take both N and X as arguments instead.
Has anyone used this type of factory function? Can you explain why you went with this approach instead of just taking multiple arguments?
Is it just user preference?
squarer = maker(2)
print(squarer(2)) # outputs 4
print(squarer(4)) # outputs 16
print(squarer(8)) # outputs 64
Essentially, it means you only have to enter the N value once, and then you can't change it later.
I think it's mostly programming style, as there are multiple ways of doing the same thing. However, this way you can only enter the N value once, so you could add code to validate it once instead of checking each time you call the function.
EDIT
Just thought of a possible example (though it's usually handled by using a class):
writer = connectmaker("127.0.0.1")
writer("send this text")
writer("send this other text")
The "maker" method would then connect to the address once and then maintain that value for each call to writer(). But as I said, something like this is usually a class where the __init__ would store the values.
In a certain way, you can see some of the operator module's functions in this light as well.
For example, operator.itemgetter() works this way:
import operator
get1 = operator.itemgetter(1) # creates a function which gets the item #1 of the given object
get1([5,4,3,2,1]) # gives 4
This is often used, e.g., as the key= function of sorting functions and such.
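For instance, with some made-up data:
import operator
pairs = [('b', 2), ('a', 3), ('c', 1)]
sorted(pairs, key=operator.itemgetter(1))  # [('c', 1), ('b', 2), ('a', 3)]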
Similar, more dedicated use cases are easily imaginable if you have a concrete problem which you can solve with that.
In the same league you have these "decorator creators":
def indirect_deco(outer_param):
    def real_deco(func):
        def wrapper(*a, **k):
            return func(outer_param, *a, **k)
        return wrapper
    return real_deco

@indirect_deco(1)
def function(a, b, c):
    print((a, b, c))

function(234, 432)
Here as well, the outer function is a factory function which creates the "real deco" function. This, in turn, even creates another one which replaces the originally given one.

explicitly passing functions in python

Out of curiosity, is it more desirable to explicitly pass functions to other functions, or to let a function call other functions from within? Is this a case of "explicit is better than implicit"?
For example (the following is only to illustrate what I mean):
import functools
import operator

def foo(x, y):
    return 1 if x > y else 0

partialfun = functools.partial(foo, 1)

def bar(xs, ys):
    return partialfun(sum(map(operator.mul, xs, ys)))
>>> bar([1,2,3], [4,5,6])
--or--
def foo(x, y):
    return 1 if x > y else 0

partialfun = functools.partial(foo, 1)

def bar(fn, xs, ys):
    return fn(sum(map(operator.mul, xs, ys)))
>>> bar(partialfun, [1,2,3], [4,5,6])
There's not really any difference between functions and anything else in this situation. You pass something as an argument if it's a parameter that might vary over different invocations of the function. If the function you are calling (bar in your example) is always calling the same other function, there's no reason to pass that as an argument. If you need to parameterize it so that you can use many different functions (i.e., bar might need to call many functions besides partialfun, and needs to know which one to call), then you need to pass it as an argument.
Generally, yes, but as always, it depends. What you are illustrating here is known as dependency injection. Generally, it is a good idea, as it allows separation of variability from the logic of a given function. This means, for example, that it will be extremely easy for you to test such code.
# To test the process performed in bar(), we can "inject" a function
# which simply returns its argument
def dummy(x):
    return x

def bar(fn, xs, ys):
    return fn(sum(map(operator.mul, xs, ys)))
>>> assert bar(dummy, [1,2,3], [4,5,6]) == 32
It depends very much on the context.
Basically, if the function is an argument to bar, then it's the responsibility of the caller to know how to implement that function. bar doesn't have to care. But consequently, bar's documentation has to describe what kind of function it needs.
Often this is very appropriate. The obvious example is the map builtin function. map implements the logic of applying a function to each item in a list, and giving back a list of results. map itself neither knows nor cares about what the items are, or what the function is doing to them. map's documentation has to describe that it needs a function of one argument, and each caller of map has to know how to implement or find a suitable function. But this arrangement is great; it allows you to pass a list of your custom objects, and a function which operates specifically on those objects, and map can go away and do its generic thing.
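For instance, a trivial illustration:
words = ['ham', 'spam', 'eggs']
list(map(len, words))  # [3, 4, 4] -- map neither knows nor cares what len does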
But often this arrangement is inappropriate. A function gives a name to a high level operation and hides the internal implementation details, so you can think of the operation as a unit. Allowing part of its operation to be passed in from outside as a function parameter exposes that it works in a way that uses that function's interface.
A more concrete (though somewhat contrived) example may help. Let's say I've implemented data types representing Person and Job, and I'm writing a function name_and_title for formatting someone's full name and job title into a string, for client code to insert into email signatures or on letterhead or whatever. It's obviously going to take a Person and Job. It could potentially take a function parameter to let the caller decide how to format the person's name: something like lambda firstname, lastname: lastname + ', ' + firstname. But to do this is to expose that I'm representing people's names with a separate first name and last name. If I want to change to supporting a middle name, then either name_and_title won't be able to include the middle name, or I have to change the type of the function it accepts. When I realise that some people have 4 or more names and decide to change to storing a list of names, then I definitely have to change the type of function name_and_title accepts.
So for your bar example, we can't say which is better, because it's an abstract example with no meaning. It depends on whether the call to partialfun is an implementation detail of whatever bar is supposed to be doing, or whether the call to partialfun is something that the caller knows about (and might want to do something else). If it's "part of" bar, then it shouldn't be a parameter. If it's "part of" the caller, then it should be a parameter.
It's worth noting that bar could have a huge number of function parameters. You call sum, map, and operator.mul, which could all be parameterised to make bar more flexible:
def bar(fn, xs, ys, g, h, i):
    return fn(g(h(i, xs, ys)))
And the way in which g is called on the output of h could be abstracted too:
def bar(fn, xs, ys, g, h, i, j):
    return fn(j(g, h(i, xs, ys)))
And we can keep going on and on, until bar doesn't do anything at all, and everything is controlled by the functions passed in, and the caller might as well have just directly done what they want done rather than writing 100 functions to do it and passing those to bar to execute the functions.
So there really isn't a definite answer one way or the other that applies all the time. It depends on the particular code you're writing.

Parameter names in Python functions that take single object or iterable

I have some functions in my code that accept either an object or an iterable of objects as input. I was taught to use meaningful names for everything, but I am not sure how to comply here. What should I call a parameter that can be a single object or an iterable of objects? I have come up with two ideas, but I don't like either of them:
FooOrManyFoos - This expresses what goes on, but I could imagine that someone not used to it could have trouble understanding what it means right away
param - Some generic name. This makes clear that it can be several things, but explains nothing about what the parameter is used for.
Normally I call iterables of objects just the plural of what I would call a single object. I know this might seem a little bit compulsive, but Python is supposed to be (among others) about readability.
I have some functions in my code that accept either an object or an iterable of objects as input.
This is a very exceptional and often very bad thing to do. It's trivially avoidable.
i.e., pass [foo] instead of foo when calling this function.
The only time you can justify doing this is when (1) you have an installed base of software that expects one form (iterable or singleton) and (2) you have to expand it to support the other use case. So. You only do this when expanding an existing function that has an existing code base.
If this is new development, Do Not Do This.
I have come up with two ideas, but I don't like either of them:
[Only two?]
FooOrManyFoos - This expresses what goes on, but I could imagine that someone not used to it could have trouble understanding what it means right away
What? Are you saying you provide NO other documentation, and no other training? No support? No advice? Who is the "someone not used to it"? Talk to them. Don't assume or imagine things about them.
Also, don't use Leading Upper Case Names.
param - Some generic name. This makes clear that it can be several things, but explains nothing about what the parameter is used for.
Terrible. Never. Do. This.
I looked in the Python library for examples. Most of the functions that do this have simple descriptions.
http://docs.python.org/library/functions.html#isinstance
isinstance(object, classinfo)
They call it "classinfo" and it can be a class or a tuple of classes.
You could do that, too.
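For reference, both forms of classinfo work:
>>> isinstance(3, int)           # a single class
True
>>> isinstance(3, (int, float))  # a tuple of classes
True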
You must consider the common use case and the exceptions. Follow the 80/20 rule.
80% of the time, you can replace this with an iterable and not have this problem.
In the remaining 20% of the cases, you have an installed base of software built around an assumption (either iterable or single item) and you need to add the other case. Don't change the name, just change the documentation. If it used to say "foo" it still says "foo" but you make it accept an iterable of "foo's" without making any change to the parameters. If it used to say "foo_list" or "foo_iter", then it still says "foo_list" or "foo_iter" but it will quietly tolerate a singleton without breaking.
80% of the code is the legacy ("foo" or "foo_list")
20% of the code is the new feature ("foo" can be an iterable or "foo_list" can be a single object.)
I guess I'm a little late to the party, but I'm surprised that nobody suggested a decorator.
def withmany(f):
    def many(many_foos):
        for foo in many_foos:
            yield f(foo)
    f.many = many
    return f

@withmany
def process_foo(foo):
    return foo + 1

# assuming foo is a single item and foos is an iterable of items
processed_foo = process_foo(foo)

for processed_foo in process_foo.many(foos):
    print(processed_foo)
I saw a similar pattern in one of Alex Martelli's posts but I don't remember the link off hand.
It sounds like you're agonizing over the ugliness of code like:
def ProcessWidget(widget_thing):
    # Infer if we have a singleton instance and make it a
    # length 1 list for consistency
    if isinstance(widget_thing, WidgetType):
        widget_thing = [widget_thing]
    for widget in widget_thing:
        ...  # process each widget
My suggestion is to avoid overloading your interface to handle two distinct cases. I tend to write code that favors re-use and clear naming of methods over clever dynamic use of parameters:
def ProcessOneWidget(widget):
    ...  # process a single widget

def ProcessManyWidgets(widgets):
    for widget in widgets:
        ProcessOneWidget(widget)
Often, I start with this simple pattern, but then have the opportunity to optimize the "Many" case when there are efficiencies to gain that offset the additional code complexity and partial duplication of functionality. If this convention seems overly verbose, one can opt for names like "ProcessWidget" and "ProcessWidgets", though the difference between the two is a single easily missed character.
You can use *args magic (varargs) to make your params always be iterable.
Pass a single item or multiple known items as normal function args, like func(arg1, arg2, ...), and pass iterable arguments with a leading asterisk, like func(*args).
Example:
# magic *args function
def foo(*args):
    print(args)

# many ways to call it
foo(1)
foo(1, 2, 3)

args1 = (1, 2, 3)
args2 = [1, 2, 3]
args3 = iter((1, 2, 3))

foo(*args1)
foo(*args2)
foo(*args3)
Can you name your parameter in a very high-level way? People who read the code are more interested in knowing what the parameter represents ("clients") than what its type is ("list_of_tuples"); the type can be described in the function's documentation string, which is a good thing since it might change in the future (the type is sometimes an implementation detail).
I would do one thing:
def myFunc(manyFoos):
    if type(manyFoos) not in (list, tuple):
        manyFoos = [manyFoos]
    # do stuff here
so then you don't need to worry anymore about its name.
In a function you should try to have one action, accept the same parameter type, and return the same type.
Instead of filling the functions with ifs you could have 2 functions.
Since you don't care exactly what kind of iterable you get, you could try to get an iterator for the parameter using iter(). If iter() raises a TypeError exception, the parameter is not iterable, so you then create a list or tuple of the one item, which is iterable and Bob's your uncle.
def doIt(foos):
    try:
        iter(foos)
    except TypeError:
        foos = [foos]
    for foo in foos:
        pass  # do something here
The only problem with this approach is if foo is a string. A string is iterable, so passing in a single string rather than a list of strings will result in iterating over the characters in a string. If this is a concern, you could add an if test for it. At this point it's getting wordy for boilerplate code, so I'd break it out into its own function.
def iterfy(iterable):
    if isinstance(iterable, str):  # use basestring in Python 2
        iterable = [iterable]
    try:
        iter(iterable)
    except TypeError:
        iterable = [iterable]
    return iterable

def doIt(foos):
    for foo in iterfy(foos):
        pass  # do something
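A quick demonstration of what iterfy accepts (the example values are my own):
print(iterfy('hello'))     # ['hello'] -- a lone string is wrapped, not split
print(iterfy(42))          # [42] -- a non-iterable becomes a one-item list
print(iterfy(['a', 'b']))  # ['a', 'b'] -- already iterable, passed through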
Unlike some of those answering, I like doing this, since it eliminates one thing the caller could get wrong when using your API. "Be conservative in what you generate but liberal in what you accept."
To answer your original question, i.e. what you should name the parameter, I would still go with "foos" even though you will accept a single item, since your intent is to accept a list. If it's not iterable, that is technically a mistake, albeit one you will correct for the caller since processing just the one item is probably what they want. Also, if the caller thinks they must pass in an iterable even of one item, well, that will of course work fine and requires very little syntax, so why worry about correcting their misapprehension?
I would go with a name explaining that the parameter can be an instance or a list of instances. Say one_or_more_Foo_objects. I find it better than the bland param.
I'm working on a fairly big project now and we're passing maps around and just calling our parameter map. The map contents vary depending on the function that's being called. This probably isn't the best situation, but we reuse a lot of the same code on the maps, so copying and pasting is easier.
I would say instead of naming it what it is, you should name it what it's used for. Also, just be careful that you can't use the in operator on something that isn't iterable.
