Imagine that I want to create a function called "execute()". This function takes the name of another function and its input as parameters and outputs whatever it returns.
Here is an example:
execute(print, "Hello, World!") # "Hello, World!"
execute(str, 68) # "68"
Of course, this function wouldn't be of any use, but I want to grasp the main idea of putting another function in as a parameter.
How could I do this?
Functions can easily be passed into other functions. To accept a variable-length argument list, capture it with *args in the function definition, and use the same * syntax at the call site to expand the arguments back into separate parameters.
def execute(fn, *args):
    return fn(*args)
Note: we are not passing the name of a function to execute(), we are passing the function itself.
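For example, a quick check with the calls from the question:
execute(print, "Hello, World!")   # prints: Hello, World!
print(execute(str, 68))           # prints: 68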
You can just do this:
def execute(func, argv):
    return func(argv)
execute(print, 'test')
prints test
execute(str, 65)
returns '65'
I believe this should work:
def execute(fn, *args, **kwargs):
    return fn(*args, **kwargs)
Here:
args = the positional arguments, captured as a tuple
kwargs = the keyword arguments, captured as a dictionary
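For example, keyword arguments get forwarded along with the positional ones:
execute(print, "a", "b", sep="-", end="!\n")   # prints: a-b!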
If you want to do more, then you can look for Decorators in Python.
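As a minimal sketch of that idea (log_calls is just a made-up name here), a decorator uses exactly the same *args/**kwargs forwarding:
import functools

def log_calls(fn):
    # Wrap fn so every call is reported before being forwarded unchanged.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print("calling {} with {} {}".format(fn.__name__, args, kwargs))
        return fn(*args, **kwargs)
    return wrapper

@log_calls
def add(a, b):
    return a + b

add(2, 3)   # prints: calling add with (2, 3) {}  and returns 5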
Yes, you can pass functions as parameters into another function.
Functions that can accept other functions as arguments are also called higher-order functions.
I hope the following example helps:
def shout(text):
    return text.upper()

def greet(func):
    greeting = func("Hi, I am created by a function passed as an argument.")
    print(greeting)

greet(shout)
The output of the code will be:
HI, I AM CREATED BY A FUNCTION PASSED AS AN ARGUMENT.
Related
I have a python function that only takes in keyword arguments:
def my_func(**kwargs):
I am splitting the keyword arguments among two separate functions, which have their keyword arguments defined explicitly:
def my_subfunc_1(a=None,b=None):
def my_subfunc_2(c=None,d=None):
When I issue help(my_func) I only get the description for my_func(**kwargs). However, ideally I would like the result of this to be my_func(a=None,b=None,c=None,d=None).
I can fetch the arguments of my_subfunc_1 and my_subfunc_2 with inspect.getfullargspec(). However, I am not sure how to use this information to override the part of my_func that the help() function reads from to fetch the displayed **kwargs.
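One possible approach (a sketch, not from this thread; it assumes that what help() shows via inspect.signature(), which honours a __signature__ attribute, is good enough for your purposes) is to merge the signatures of the two sub-functions and attach the result to my_func:
import inspect

def my_subfunc_1(a=None, b=None): ...
def my_subfunc_2(c=None, d=None): ...

def my_func(**kwargs):
    """Split kwargs between the two sub-functions (details omitted)."""

# Build one synthetic signature from the parameters of both sub-functions.
params = []
for sub in (my_subfunc_1, my_subfunc_2):
    params.extend(inspect.signature(sub).parameters.values())
my_func.__signature__ = inspect.Signature(params)

help(my_func)   # shows: my_func(a=None, b=None, c=None, d=None)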
I have two pieces of code that do almost the same thing. Can anyone tell me what is the difference between them or which one is the proper syntax in Python?
Code 1:
p = lambda content: print(content)
p("Hello")
# Prints Hello
Code 2:
p = print
p("Hello")
# Also Prints Hello
Code 1 is defining a new function, which calls the print() function. It would be more pythonic to write it as:
def p(content):
    print(content)
lambda is normally only used for anonymous functions, not named functions.
Code 2 is simply giving another name to the print function. The two names can be used interchangeably.
The lambda function only accepts one argument, while the standard print function allows multiple positional arguments and keyword arguments.
So with Code 2 you can write:
p("Hello", "world", end="")
but if you try this with Code 1 you'll get an error because you gave too many arguments to the function.
If you want to define a new function that can take all the arguments that print() takes, you can use *.
def p(*args, **kwds):
    print(*args, **kwds)
or:
p = lambda *args, **kwds: print(*args, **kwds)
See What does ** (double star/asterisk) and * (star/asterisk) do for parameters?
In the first, you are creating an anonymous function and assigning it to the variable p.
In the second, you are assigning the print function directly to the variable p.
Since functions are first-class citizens in Python, you can perform this kind of operation. Both are valid syntax, but if you only want to give a shorter name to a function, the second is simpler.
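A quick way to see that the second form is only a renaming:
p = print
print(p is print)   # True: both names refer to the same function object
p("Hello")          # Hello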
I have already found various answers to this question (e.g. lambda function accessing outside variable) and they all point to the same hack, namely (e.g.) lambda n=i: n*2, with i a variable in the external scope of the lambda (hoping I'm not misusing the term scope). However, this is not working, and given that all the answers I found are from a couple of years ago, I thought that maybe this has been deprecated and only worked with older versions of Python. Does anybody have an idea or suggestion on how to solve this?
SORRY, forgot the MWE
from inspect import getargspec

params = ['a', 'b']

def test(*args):
    return args[0] * args[1]

func = lambda p=params: test(p)
I expected the signature of func to be ['a','b'] but if I try
func(3,2)
I get a TypeError (<lambda>() takes at most 1 argument (2 given))
and its true signature (from getargspec(func)[0]) is ['p'].
In my real code the thing is more complicated. In short:
def fit(self, **kwargs):
    settings = self.synch()
    freepars = self.loglike.get_args()
    func = lambda p=freepars: self.loglike(p)
    minuit = Minuit(func, **settings)
I need lambda because it's the only way I could think of to create, in place, a function object that depends on a non-hardcoded list of variables (extracted via a method get_params() of the instance self.loglike). So func has to have the correct signature, to match the info inside the dict settings.
The inspector gives ['p'] as the argument of func, not the list of parameters which should go into loglike. Hope you can easily spot my mistake. Thank you.
There's no way to do exactly what you want. The syntax you're trying to use to set the signature of the function you're creating doesn't do what you want. It instead sets a default value for the argument you've defined. Python's function syntax allows you to define a function that accepts an arbitrary number of arguments, but it doesn't let you define a function with argument names in a variable.
What you can do is accept *args (or **kwargs) and then do some processing on the value to match it up with a list of argument names. Here's an example where I turn positional arguments in a specific order into keyword arguments to be passed on to another function:
arg_names = ['a', 'b']

def foo(*args):
    if len(args) != len(arg_names):
        raise ValueError("wrong number of arguments passed to foo")
    args_by_name = dict(zip(arg_names, args))
    some_other_function(**args_by_name)
This example isn't terribly useful, but you could do more sophisticated processing on the args_by_name dict (e.g. combining it with another dict), which might be relevant to your actual use case.
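If the surrounding code really does need a callable whose reported argument names come from a list, one workaround not covered above (only a sketch, reusing the test function from the question; it is essentially the trick collections.namedtuple uses internally) is to build the def statement as a string and exec it:
from inspect import getfullargspec

arg_names = ['a', 'b']

def test(*args):
    return args[0] * args[1]

# Build a real def statement whose parameter names come from arg_names.
source = "def func({params}): return test({params})".format(params=", ".join(arg_names))
namespace = {'test': test}
exec(source, namespace)
func = namespace['func']

print(getfullargspec(func).args)   # ['a', 'b']
print(func(3, 2))                  # 6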
(Python 3)
First of all, I feel my title isn't quite what it should be, so if you stick through the question and come up with a better title, please feel free to edit it.
I have recently learned about Python Decorators and Python Annotations, and so I wrote two little functions to test what I have recently learned.
One of them, called wraps, is supposed to mimic the behaviour of functools.wraps, while the other, called ensure_types, is supposed to check, for a given function and through its annotations, whether the arguments passed to that function have the correct types.
This is the code I have for those functions:
def wraps(original_func):
    """Update the decorated function with some important attributes from the
    one that was decorated so as not to lose good information"""
    def update_attrs(new_func):
        # Update the __annotations__
        for key, value in original_func.__annotations__.items():
            new_func.__annotations__[key] = value
        # Update the __dict__
        for key, value in original_func.__dict__.items():
            new_func.__dict__[key] = value
        # Copy the __name__
        new_func.__name__ = original_func.__name__
        # Copy the docstring (__doc__)
        new_func.__doc__ = original_func.__doc__
        return new_func
    return update_attrs  # return the decorator
def ensure_types(f):
    """Uses f.__annotations__ to check the expected types for the function's
    arguments. Raises a TypeError if there is no match.
    If an argument has no annotation, object is returned and so, regardless of
    the argument passed, isinstance(arg, object) evaluates to True"""
    @wraps(f)  # say that test_types is wrapping f
    def test_types(*args, **kwargs):
        # Loop through the positional args, get their name and check the type
        for i in range(len(args)):
            # function.__code__.co_varnames is a tuple with the names of the
            # arguments in the order they are in the function def statement
            var_name = f.__code__.co_varnames[i]
            if not isinstance(args[i], f.__annotations__.get(var_name, object)):
                raise TypeError("Bad type for function argument named '{}'".format(var_name))
        # Loop through the named args, get their value and check the type
        for key in kwargs.keys():
            if not isinstance(kwargs[key], f.__annotations__.get(key, object)):
                raise TypeError("Bad type for function argument named '{}'".format(key))
        return f(*args, **kwargs)
    return test_types
Supposedly, everything is alright until now. Both wraps and ensure_types are supposed to be used as decorators. The problem comes when I define a third decorator, debug_dec, that is supposed to print to the console when a function is called, along with its arguments. The function:
def debug_dec(f):
    """Does some annoying printing for debugging purposes"""
    @wraps(f)
    def profiler(*args, **kwargs):
        print("{} function called:".format(f.__name__))
        print("\tArgs: {}".format(args))
        print("\tKwargs: {}".format(kwargs))
        return f(*args, **kwargs)
    return profiler
That also works nicely. The problem comes when I try to use debug_dec and ensure_types at the same time.
@ensure_types
@debug_dec
def testing(x: str, y: str = "lol"):
    print(x)
    print(y)
testing("hahaha", 3) # raises no TypeError as expected
But if I change the order in which the decorators are applied, it works just fine.
Can someone please help me understand what is going wrong, and if is there any way of solving the problem besides swapping those two lines?
EDIT
If I add the lines:
print(testing.__annotations__)
print(testing.__code__.co_varnames)
The output is as follows:
#{'y': <class 'str'>, 'x': <class 'str'>}
#('args', 'kwargs', 'i', 'var_name', 'key')
Although wraps maintains the annotations, it doesn't maintain the function signature. You see this when you print out the co_varnames. Since ensure_types does its checking by comparing the names of the arguments with the names in the annotation dict, it fails to match them up, because the wrapped function has no arguments named x and y (it just accepts generic *args and **kwargs).
You could try using the decorator module, which lets you write decorators that act like functools.wrap but also preserve the function signature (including annotations).
There is probably also a way to make it work "manually", but it would be a bit of a pain. Basically what you would have to do is have wraps store the original function's argspec (the names of its arguments), then have ensure_types use this stored argspec instead of the wrapper's argspec when checking the types. Essentially your decorators would pass the argspec along in parallel with the wrapped functions. However, using decorator is probably easier.
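A rough sketch of that manual route (an assumption on my part: the homemade wraps records the original function under __wrapped__, the same attribute functools.wraps uses, so ensure_types can walk back to it with inspect.unwrap):
import inspect

def wraps(original_func):
    def update_attrs(new_func):
        new_func.__name__ = original_func.__name__
        new_func.__doc__ = original_func.__doc__
        new_func.__annotations__.update(original_func.__annotations__)
        new_func.__dict__.update(original_func.__dict__)
        new_func.__wrapped__ = original_func   # remember which function was wrapped
        return new_func
    return update_attrs

def ensure_types(f):
    # Walk back through any wrappers to the original def statement, so the
    # check uses its real parameter names rather than ('args', 'kwargs').
    original = inspect.unwrap(f)
    arg_names = original.__code__.co_varnames[:original.__code__.co_argcount]
    annotations = original.__annotations__

    @wraps(f)
    def test_types(*args, **kwargs):
        for name, value in list(zip(arg_names, args)) + list(kwargs.items()):
            if not isinstance(value, annotations.get(name, object)):
                raise TypeError("Bad type for function argument named '{}'".format(name))
        return f(*args, **kwargs)
    return test_types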
Sorry for the newbie question guys, but I'm relatively new to python. I want to write a function that passes keyword and value arguments into another function:
e.g.
def function_that_passes_arguments(arguments):
    some_other_function(arguments)
so when I call the first function they are passed into the second... e.g.
function_that_passes_arguments(arg1=1, arg2=2)
is effectively
some_other_function(arg1=1, arg2=2)
The argument names will change so it is important that I pass both keyword and value from one function to another.
Accept *args, **kwargs and pass those to the called function:
def function_that_passes_arguments(*args, **kwargs):
    some_other_function(*args, **kwargs)
In both places you can also use regular arguments - the only requirement is that the * and ** arguments are the last ones.
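For example, with a hypothetical some_other_function that accepts keyword arguments:
def some_other_function(arg1=None, arg2=None):
    print(arg1, arg2)

def function_that_passes_arguments(*args, **kwargs):
    some_other_function(*args, **kwargs)

function_that_passes_arguments(arg1=1, arg2=2)   # prints: 1 2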