How to create multiple functions in a for loop? - python

I want to create multiple functions, namely func1, func2, ..., funci, in a for loop, each with a different definition. I have tested the following methods, but they throw errors. What is the proper way to do it?
What I want:
for i in range(1, 10):
    func1(x, args1):
        # do something
    func2(x, args2):
        # do something
What I did but didn't work:
for ii in range(1, 10):
    def globals()["func{}".format(ii)](t, "args{}".format(ii))

for ii in range(1, 10):
    def "func{}".format(ii)(t, "args{}".format(ii))
For the question I have, I found only the following link, which does not work for my purpose.
How do I loop through functions with similar names in python?

I think this answer using inspect's signature function does what you want. If it doesn't, or you think it's too complicated, the only way I can think of is to use exec (note that eval only accepts expressions, and def is a statement):
for i in range(1, 10):
    exec(f"""def func{i}(arg{i}):
    print(arg{i})""")
Please read here why you shouldn't do that.
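If the goal is simply to get nine distinct, named functions, a dict of closures built by a factory avoids exec entirely. A minimal sketch (the function body here is just a placeholder):

```python
def make_func(i):
    # i is captured by the inner function, so every call to make_func
    # produces a distinct function object
    def func(arg):
        return "func{} got {}".format(i, arg)
    return func

funcs = {"func{}".format(i): make_func(i) for i in range(1, 10)}

print(funcs["func3"]("hello"))  # func3 got hello
```

Each entry in the dict is its own function, so there is no need to inject dynamic names into the module namespace.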

It's not clear what exactly you're trying to accomplish here.
This is a simple example of a way to define a list of functions dynamically (note the char=char default argument, which binds each loop value; without it, every function in the list would return 'z'):
import string
fns = [lambda char=char: char for char in string.ascii_lowercase]
(obviously, I've defined some very simple functions to focus on the creation aspect)
Based on discussion in the comments thread, I believe what the OP is looking for is to pass a list of functions to an optimizing function (specifically, scipy.optimize.least_squares).
The most important thing to know here is that the names of the functions in the local namespace do not matter; what matters is that the functions are passed to the optimizing function under the correct parameter names. (They will then have the correct names in the namespace of the called function, which is what you want.)
However, if you want to dynamically generate the keyword arguments for a function call, the easiest way to do that is to create a dict and then use the ** operator to unpack it:
kwargs = some_func_returning_a_dict_of_kwargs()
result = scipy.optimize.least_squares(**kwargs)
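A minimal runnable sketch of the pattern (fit here is a hypothetical stand-in for the real optimizer, and its parameter names are only illustrative):

```python
# fit is a stand-in for something like scipy.optimize.least_squares;
# the point is only to show ** unpacking of a dynamically built dict.
def fit(x0, bounds=(-1, 1), max_nfev=None):
    return {"x0": x0, "bounds": bounds, "max_nfev": max_nfev}

# kwargs could be assembled at runtime, e.g. inside a loop
kwargs = {"x0": 0.5, "max_nfev": 100}
result = fit(**kwargs)
print(result)
```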

Use a dict and store lambda functions as the values, e.g.:
d = {}
for i in range(0, 101):
    f = "func{0}".format(i)
    if f not in d:
        d[f] = lambda x, args: x * args

print(d['func1'](10, 10))
print(d['func2'](10, 20))
print(d['func3'](10, 30))
This lets you call each key's function with different arguments. You don't need "args{0}".format(i) to generate different argument names; you can keep track of them with your keys. This is the preferable way when readability counts.
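One caveat with the loop above: every key points at a lambda with the same body, and any loop variable referenced inside it is looked up late. To bake a different value into each entry, bind it as a default argument. A small sketch of that variant:

```python
d = {}
for i in range(0, 101):
    # i=i freezes the current loop value as a default argument,
    # so each lambda keeps its own copy of i
    d["func{0}".format(i)] = lambda x, args, i=i: x * args + i

print(d["func1"](10, 10))  # 10 * 10 + 1 = 101
print(d["func2"](10, 20))  # 10 * 20 + 2 = 202
```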

Related

Python: calling a child function using a string

Does anyone know how to call a child function that belongs to a parent function using the dot operator, where the child function's name is stored in a string variable?
def parent_function():
    # do something
    def child_function():
        # do something else
Now imagine I have a string called 'child_function'. Is there a way to do this:
method_name = 'child_function'
parent_function.method_name()
I understand that method_name is a string and so it is not callable. The syntax above is clearly wrong, but I wanted to know if there was a way to do this?
Thank you!
As others have pointed out in comments there will need to be a little more setup to actually call an inner function, such as a parameter like this:
def parent_function(should_call=False):
    # do something
    def child_function():
        print("I'm the child")
    if should_call:
        child_function()
That being said, and to answer your specific question, you technically can call the inner function directly. I should note this is bad and you should not be doing this. You can access the inner function's body via the outer function's code object (the index into co_consts is fragile; it depends on the layout of the function body):
exec(parent_function.__code__.co_consts[1])
As opposed to many comments, you can actually access inner functions even after the outer function has finished executing. This technique in Python is called a closure.
To know more about closure, visit Programiz
Coming to your requirement, you need to call the nested method outside the nesting function.
What are we leveraging?
The closure technique of Python
The locals() method, which returns all the local properties and methods inside an enclosing method
lambda x: x, an anonymous (lambda) function
def parent_function():
    # do something
    def child_function():
        # do something else
        print("child_function got invoked")
    # locals() returns all the properties and nested methods inside an
    # enclosing method. We are filtering out only the methods / functions
    # and not properties / variables.
    return {i: j for i, j in locals().items() if type(j) == type(lambda x: x)}

parent_function()["child_function"]()
The output is:
child_function got invoked
Better Solution:
Instead of using nested methods, leverage the concept of classes provided by Python.
Enclose the nested functions as methods inside a class.
If you include global child_function in parent_function, then once you run parent_function, you can call child_function from the main program. It's not a very clean way of defining functions, though. If you want a function defined in the main program, you should define it in the main program.
Consider the following case:
def parent_function():
a = 1
Can you access a from the global scope? No, because it is a local variable. It is only existent while parent_function runs, and forgotten after.
Now, in Python, a function is stored in a variable just like any other value. child_function is a local variable just like a. Thus, it is in principle not possible to access it from outside parent_function.
Edit: unless you make it available to the outside somehow, e.g. by returning it. But then, still the name child_function is internal to parent_function.
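A minimal sketch of the returning approach:

```python
def parent_function():
    def child_function():
        return "child result"
    return child_function  # hand the function object to the caller

# the caller binds the returned function to any name it likes
child = parent_function()
print(child())  # child result
```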
Edit 2: You can look up functions by name (as a string) using the locals() and globals() dictionaries.
def my_function():
    print("my function!")

func_name = "my_function"
f = globals()[func_name]
f()

Pythonic way to initialise function parameters

Which is more pythonic?
A
def my_function(arg, p=0):
    while arg:
        # do something
        value = arg.pop()
        # save value with a name which uses p for indexing
        p += 1
or B
def my_function(arg):
    p = 0
    while arg:
        # do something
        value = arg.pop()
        # save value with a name which uses p for indexing
        p += 1
A part of me thinks it's silly to include p as an argument to the function in case someone sets it to a weird value. But at the same time I don't like having p = 0 clutter up a function which is already very complicated.
Don't clutter up function parameters with locals. If it is not a parameter the caller should use, don't add it to your function signature.
In other words, you don't want anyone using help(my_function) and be surprised at what p might be for.
This isn't a hard and fast rule, of course; some critical-path functions can be made faster by using locals, so sometimes you'll see something like:
some_global = expensive_function()

def foo(bar, baz, _some_local=some_global):
    # code using _some_local instead of some_global
to make use of faster local name lookups. The _ at the start of the argument then tells you that the name is really an internal implementation detail you should not rely on.
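A runnable sketch of this pattern, with a trivial stand-in for the expensive global:

```python
def expensive_function():
    # stand-in for a computation you only want to run once
    return {"scale": 10}

some_global = expensive_function()

def foo(bar, _some_local=some_global):
    # _some_local is resolved with a fast local lookup; the leading
    # underscore marks it as an internal implementation detail
    return bar * _some_local["scale"]

print(foo(3))  # 30
```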
It depends on if p always has to start at 0 or not. If it does, then definitely go with option B. Don't give users an opportunity to mess with your code if it's not necessary.

Python Factory Function

Same example from the same book: Python deep nesting factory functions
def maker(N):
    def action(X):
        return X ** N
    return action
I understand the concept behind it and I think it's really neat, but I can't seem to envision when I could use this approach.
I could easily have implemented the above by having maker() take both N and X as arguments instead.
Has anyone used this type of factory function, and can you explain why you went with this approach instead of just taking multiple arguments?
Is it just user preference?
squarer = maker(2)
print(squarer(2)) # outputs 4
print(squarer(4)) # outputs 16
print(squarer(8)) # outputs 64
Essentially, it means you only have to supply the N value once, and it can't be changed later.
I think it's mostly programming style, as there are multiple ways of doing the same thing. However, this way you can only enter the N value once, so you could add code to validate it once instead of checking it each time the function is called.
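For instance, a sketch of maker() validating its argument once, at creation time:

```python
def maker(N):
    # validate N once, when the function is built,
    # instead of on every call to action()
    if not isinstance(N, (int, float)):
        raise TypeError("N must be a number")
    def action(X):
        return X ** N
    return action

squarer = maker(2)
print(squarer(4))  # 16
```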
EDIT
just thought of a possible example (though it's usually handled by using a class):
writer = connectmaker("127.0.0.1")
writer("send this text")
writer("send this other text")
The "maker" method would then connect to the address once and then maintain that value for each call to writer(). But as I said, something like this is usually a class where the __init__ would store the values.
In a certain way, you can see some of the operator function as these as well.
For example, operator.itemgetter() works this way:
import operator
get1 = operator.itemgetter(1) # creates a function which gets the item #1 of the given object
get1([5,4,3,2,1]) # gives 4
This is often used, e.g., as the key= function of sorting functions and the like.
Similar, more dedicated use cases are easily imaginable if you have a concrete problem which you can solve with this.
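For example, a small sketch using itemgetter(1) as a sort key:

```python
import operator

pairs = [("apple", 3), ("banana", 1), ("cherry", 2)]
# itemgetter(1) returns a function that fetches element 1 of its
# argument, so sorted() orders the tuples by their second field
by_count = sorted(pairs, key=operator.itemgetter(1))
print(by_count)  # [('banana', 1), ('cherry', 2), ('apple', 3)]
```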
In the same league you have these "decorator creators":
def indirect_deco(outer_param):
    def real_deco(func):
        def wrapper(*a, **k):
            return func(outer_param, *a, **k)
        return wrapper
    return real_deco

@indirect_deco(1)
def function(a, b, c):
    print((a, b, c))

function(234, 432)
Here as well, the outer function is a factory function which creates the "real deco" function. This, in turn, creates yet another one which replaces the originally given one.

explicitly passing functions in python

Out of curiosity, is it more desirable to explicitly pass functions to other functions, or to let the function call other functions from within? Is this a case of "Explicit is better than implicit"?
For example (the following is only to illustrate what I mean):
import functools
import operator

def foo(x, y):
    return 1 if x > y else 0

partialfun = functools.partial(foo, 1)

def bar(xs, ys):
    return partialfun(sum(map(operator.mul, xs, ys)))

>>> bar([1,2,3], [4,5,6])
--or--
def foo(x, y):
    return 1 if x > y else 0

partialfun = functools.partial(foo, 1)

def bar(fn, xs, ys):
    return fn(sum(map(operator.mul, xs, ys)))

>>> bar(partialfun, [1,2,3], [4,5,6])
There's not really any difference between functions and anything else in this situation. You pass something as an argument if it's a parameter that might vary over different invocations of the function. If the function you are calling (bar in your example) is always calling the same other function, there's no reason to pass that as an argument. If you need to parameterize it so that you can use many different functions (i.e., bar might need to call many functions besides partialfun, and needs to know which one to call), then you need to pass it as an argument.
Generally, yes, but as always, it depends. What you are illustrating here is known as dependency injection. Generally, it is a good idea, as it allows separation of variability from the logic of a given function. This means, for example, that it will be extremely easy for you to test such code.
# To test the process performed in bar(), we can "inject" a function
# which simply returns its argument
def dummy(x):
    return x

def bar(fn, xs, ys):
    return fn(sum(map(operator.mul, xs, ys)))

>>> assert bar(dummy, [1,2,3], [4,5,6]) == 32
It depends very much on the context.
Basically, if the function is an argument to bar, then it's the responsibility of the caller to know how to implement that function. bar doesn't have to care. But consequently, bar's documentation has to describe what kind of function it needs.
Often this is very appropriate. The obvious example is the map builtin function. map implements the logic of applying a function to each item in a list, and giving back a list of results. map itself neither knows nor cares about what the items are, or what the function is doing to them. map's documentation has to describe that it needs a function of one argument, and each caller of map has to know how to implement or find a suitable function. But this arrangement is great; it allows you to pass a list of your custom objects, and a function which operates specifically on those objects, and map can go away and do its generic thing.
But often this arrangement is inappropriate. A function gives a name to a high-level operation and hides the internal implementation details, so you can think of the operation as a unit. Allowing part of its operation to be passed in from outside as a function parameter exposes the implementation in a way that ties callers to that function's interface.
A more concrete (though somewhat contrived) example may help. Let's say I've implemented data types representing Person and Job, and I'm writing a function name_and_title for formatting someone's full name and job title into a string, for client code to insert into email signatures or on letterhead or whatever. It's obviously going to take a Person and a Job. It could potentially take a function parameter to let the caller decide how to format the person's name: something like lambda firstname, lastname: lastname + ', ' + firstname. But to do this is to expose that I'm representing people's names with a separate first name and last name. If I want to change to supporting a middle name, then either name_and_title won't be able to include the middle name, or I have to change the type of the function it accepts. When I realise that some people have 4 or more names and decide to change to storing a list of names, then I definitely have to change the type of function name_and_title accepts.
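To make that concrete, here is a hypothetical sketch (Person, Job, and the fmt parameter are all invented for illustration):

```python
# Person and Job are stand-in types invented for this example.
class Person:
    def __init__(self, firstname, lastname):
        self.firstname = firstname
        self.lastname = lastname

class Job:
    def __init__(self, title):
        self.title = title

def name_and_title(person, job, fmt=lambda first, last: first + " " + last):
    # accepting fmt leaks the fact that names are stored as first/last:
    # any caller-supplied fmt must take exactly those two strings
    return fmt(person.firstname, person.lastname) + ", " + job.title

print(name_and_title(Person("Ada", "Lovelace"), Job("Analyst")))
```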
So for your bar example, we can't say which is better, because it's an abstract example with no meaning. It depends on whether the call to partialfun is an implementation detail of whatever bar is supposed to be doing, or whether the call to partialfun is something that the caller knows about (and might want to do something else). If it's "part of" bar, then it shouldn't be a parameter. If it's "part of" the caller, then it should be a parameter.
It's worth noting that bar could have a huge number of function parameters. You call sum, map, and operator.mul, which could all be parameterised to make bar more flexible:
def bar(fn, xs, ys, g, h, i):
    return fn(g(h(i, xs, ys)))
And the way in which g is called on the output of h could be abstracted too:
def bar(fn, xs, ys, g, h, i, j):
    return fn(j(g, h(i, xs, ys)))
And we can keep going on and on, until bar doesn't do anything at all, and everything is controlled by the functions passed in, and the caller might as well have just directly done what they want done rather than writing 100 functions to do it and passing those to bar to execute the functions.
So there really isn't a definite answer one way or the other that applies all the time. It depends on the particular code you're writing.

Wildcards in Python?

Over the years I have noticed the 'wildcard' variable in various bits and pieces of Python I've come across. I assumed it worked like Haskell's: allowing you to put a variable where one was required in the formal parameters, but not binding it.
I've used this, for example, on the left-hand side of a tuple-unpacking assignment when I don't need one of the variables.
For example:
_, extension = os.path.splitext(filename)
So when I wrote something similar to this today:
(lambda (x,_,_): x)((1,2,3))
That is, I tried to bind the underscore twice, and I received a syntax error. I was surprised to see that _ is indeed a real variable:
(lambda (x,_,z): _)((1,2,3))
> 2
Looks like _ is just a variable name like any other.
Is there a bona fide wildcard variable that I can use as I would like (i.e. able to use more than one in a tuple unpacking assignment), as per the first example?
There is no wildcard variable in Python.
I have been trying to dissuade people from using _ as a variable name for quite some time now. You are not the first person to mistake _ for some kind of special syntax, so it's better not to use _ as a variable name at all, to avoid this kind of confusion. If there ever was a "convention" to use _ as a throw-away variable name, this convention was misguided.
There are more problems than just the confusion it causes. For example, _ clashes with _ in the interactive interpreter and the common gettext alias.
Regarding the lambda expression, I'd just use lambda x, *args: ... to ignore all arguments except the first one. In other cases, I'd use names explicitly stating that I don't want to use them, like dummy. In the case of loops over range(), I usually write for i in range(n) and simply don't use i.
Edit: I just noticed (by looking at the other answers) that you use tuple unpacking in the argument list, so lambda x, *args: ... doesn't solve your problem. Tuple unpacking in parameter lists has been removed in Python 3.x because it was considered too obscure a feature. Better go with mipadi's answer instead.
Not really. Python is not Haskell. Map, apply, reduce, and lambda are kind of second-class citizens, though there is some interesting stuff in itertools.
Unless you have some need to use one-line lambdas, the correct way is this:
def f(x, *args, **kwargs):
    return x
The *args argument lets you use any number of unnamed arguments (which will be available as a tuple called args). Extra named arguments will be in a dictionary called kwargs.
I don't think there's any way to do this in a lambda, but there's usually no need. A function declaration can go anywhere. Note that you can do interesting / evil stuff if you put the function definition inside another function (or loop):
def make_func(s):
    def f(*trash, **more_trash):
        print(s)
    return f

f1 = make_func('hello')
f2 = make_func('world')
f1(1, 2, 'ham', 'spam')
f2(1, 2, a=1, b=2)
will output:
hello
world
As #rplnt pointed out, this won't be the same for loops:
funcs = []
for s in ('hello', 'world'):
    def f():
        print(s)
    funcs.append(f)

for f in funcs:
    f()
will output:
world
world
because loops only have one namespace.
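The usual fix is to bind the loop variable as a default argument, so each function keeps its own copy:

```python
funcs = []
for s in ('hello', 'world'):
    def f(s=s):  # the default argument freezes the current value of s
        return s
    funcs.append(f)

print([f() for f in funcs])  # ['hello', 'world']
```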
No, Python doesn't have any equivalent to Haskell's _. The convention in Python is to use _ for "throwaway" variables, but it's an actual variable name, and as you found, you can't use it twice in the same context (like a lambda parameter list).
In the examples you gave, I'd just use indexing:
lambda tup: tup[0]
or
lambda tup: tup[1]
Not as pretty, but one of the better ways to do it.
It is possible, with a little trick:
class _:
    def __eq__(x, y): return True

_ = _()  # always create a new _ instance if used; the class name itself is not needed anymore
[(a, b) for (a, _, _, b) in [(1, 2, 3, 4), (5, 6, 7, 8)]]
gives
[(1, 4), (5, 8)]
I use it often, because it makes code more elegant; a part of Haskell's beauty brought to Python.
Short answer is no. You could just follow your existing convention, that is:
(lambda (x, _1, _2): x)((1,2,3))
(note that tuple parameters like this only work in Python 2).
