Using a lambda function in Python

Is there a way to convert a function to a lambda function (automatically) in python? Something like:
def func(arg):
    print("Hi", arg)
==>
lambda arg: compile(func) # <== something like this

Functions in Python are first-class citizens of the language. This means that a function can be manipulated as if it was a data value. Thus, a function can be passed to another function as an argument, or returned from a function as a result. This is referred to as higher-order programming.
Now, both the def and the lambda keywords define such function values. The main difference is that def assigns a name to such a function, whereas lambdas are anonymous functions. Another difference is that a lambda's body is limited to a single expression (although there are languages where this is not the case).
For example, here are two equivalent functions, one defined with def, the other with lambda:
def say_hello(name):
    print('Hi ', name)

lambda name: print('Hi ', name)
Another way to interpret the def keyword is as follows:
say_hello = lambda name: print('Hi ', name)
Although such code is discouraged (PEP 8 recommends a def statement over assigning a lambda to a name).
In essence, these two functions are (nearly) equivalent data values. In the snippet above, the one defined with def can be called later, since it has a name we can refer to. The bare lambda cannot: it has no name, so it is immediately lost unless it is assigned to a variable or passed to another function, say f, as an argument, in which case it is bound to a name in f's local scope via f's parameters.
However, lambdas are useful in cases where we do not want to clutter up our scope with new, short functions that are only passed to another function as an argument:
list(map(lambda x: x * 2, [1, 2, 3]))
# >>> [2, 4, 6]
If you already have a function defined with def that does the multiplication shown above, as would be the case in your example, e.g.:
def double(x):
    return x * 2
Then you can simply refer to this function by name in the call to map:
list(map(double, [1, 2, 3]))
# >>> [2, 4, 6]
In short: There is no need to "compile a function to a lambda", a lambda is just a special case of a function which is not given a name. Refer to your function by name instead.
P.S.: Don't use map and filter, use list comprehensions instead, as they are considered more pythonic.
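For instance, the map example above written as a list comprehension:
[x * 2 for x in [1, 2, 3]]
# >>> [2, 4, 6]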

You can make yours easily:
def auto_lam(f):
    return lambda *args, **kwargs: f(*args, **kwargs)
Then to use it:
mylam = auto_lam(func)
In this form it is not very useful, since it simply calls the same function, but wrapping functions is very common in Python because it lets us modify a function's behaviour. Such wrappers are called decorator functions.
For example:
def logging_func(f):
    def do_work(*args, **kwargs):
        print("You called me")
        res = f(*args, **kwargs)
        print("Finished")
        return res
    return do_work
Now you can wrap your existing function with this one like so:
@logging_func
def func(arg):
    print("Hi", arg)
Now every time you call func, it will log something before and after. I argue this is a better use of function wrappers than one that simply gives you back the same function.
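For example, calling the decorated func defined above:
func("Bob")
# You called me
# Hi Bob
# Finished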

Related

Python: Decorator that reduces number of parameters in a function by fixing others

Let's say that I have the function
def add(a, b, c):
    return a + b + c
I want a decorator that fixes the value of b, say to 5, and return a function with only two parameters a and c.
def add5(a, c):
    return a + c + 5
The function add5 should not have any other parameter. I'm not looking to solve this with a default parameter for b.
You can use functools.partial:
functools.partial(func, /, *args, **keywords)
    Return a new partial object which when called will behave like func
    called with the positional arguments args and keyword arguments keywords.
from functools import partial
def add(a, b, c):
    return a + b + c
If you want to give a fixed value to the first positional argument, you can do
add5 = partial(add, 5)
print(add5(1, 2))
# 8
As the first positional argument (a) will be replaced by 5, you can't do:
print(add5(a=3, b=4))
# TypeError: add() got multiple values for argument 'a'
If you want to control which parameter you fix, use keyword arguments:
add5 = partial(add, b=5)
print(add5(a=1, c=2))
# 8
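If you specifically want this packaged as a decorator, as the question asks, a minimal sketch built on partial could look like this (fix_b is a hypothetical name, not a library function; as above, the remaining arguments after the fixed one must then be passed by keyword):
from functools import partial

def fix_b(value):
    # Hypothetical decorator factory: returns a decorator that fixes b to the given value.
    def decorator(f):
        return partial(f, b=value)
    return decorator

@fix_b(5)
def add(a, b, c):
    return a + b + c

print(add(1, c=2))
# 8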
In Python, functions are first-class objects, which means that:
Functions are objects; they can be referenced, assigned to variables, and returned from other functions.
Functions can be defined inside another function and can also be passed as arguments to another function.
Decorators are a very powerful and useful tool in Python, since they allow programmers to modify the behaviour of a function or class. Decorators let us wrap another function in order to extend the behaviour of the wrapped function, without permanently modifying it.
In a decorator, a function is taken as an argument into another function and then called inside the wrapper function.
In your case:
def my_custom_decorator(f):
    def outer_function(*args):
        res = f(*args)
        return res + 5
    return outer_function

@my_custom_decorator
def A_and_C(a, c):
    return a + c

print(A_and_C(2, 3))
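Running this prints 10: the wrapper takes the result of a + c (here 5) and adds 5 to it.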
You can also do it by:
def add5(*args):
    return sum(args) + 5

print(add5(1, 2))
This will sum all the arguments that you pass to the function and add 5 to that sum.
Output
8

What is the difference between foo = bar(foo) and something = bar(foo) in a decorator in Python?

I read that we can create a reference to any function in Python, but I also read that when creating a decorator we use a special syntax, "@", e.g. @decorator_function,
and that this @decorator_function is equivalent to new_function = decorator_function(new_function).
So my doubt is: in my view, both
anything = decorator_function(new_function)
new_function = decorator_function(new_function)
play the role of a closure, but they produce different output. So what is the big difference between them?
Example code:
def deco(fn):
    def wrapper(*args):
        print('called')
        return fn(*args)
    return wrapper

def something(x):
    if x == 0:
        print('first')
        something(x+1)
    else:
        print('hello')

print('new name')
a = deco(something)
a(0)

print('\nreassigning to the same name')
something = deco(something)
something(0)
The original something function you wrote makes a recursive call to something, not a.
If you assign deco(something) to a, then something is still the original function, and the recursive call will call the original function:
new function calls original function
original function looks up something, finds original function
original function calls original function...
If you assign deco(something) to something, then something is now the new function, and the recursive call will call the new function:
new function calls original function
original function looks up something, finds new function
original function calls new function
new function calls original function...
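Concretely, running the example code above prints:
new name
called
first
hello

reassigning to the same name
called
first
called
hello
Note that 'called' appears once in the first run and twice in the second, because only in the second run does the recursive call go through the wrapper.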
For the first one, a = deco(something)
def deco(fn):
    def wrapper(*args):
        print('called')
        return something(*args)  # Notice here
    return wrapper
The second one, something = deco(something), is just the same, except your original function something has now become the wrapper function that deco returned.
>>> something
<function deco.<locals>.wrapper at 0x7fbae4622f28>
>>> a
<function deco.<locals>.wrapper at 0x7fbae4622ea0>
Both something and a wrap the original something before it was overridden by something = deco(something) assignment. Python internally stored the original something function somewhere in the wrapper functions:
>>> something.__closure__[0].cell_contents
<function something at 0x7fbae4622bf8>
>>> a.__closure__[0].cell_contents
<function something at 0x7fbae4622bf8>
In the last assignment something has become something different:
>>> something
<function deco.<locals>.wrapper at 0x7fbae4622f28>
Both of your assignments using manual calls to the decorator work. But one of them (the one that rebinds something) replaces the original function so that it can't be reached by its original name any more. It's not any different than using any other assignment. For instance:
def foo(x):
    return x + 1

a = 10
a = foo(a)
When you assign the result of foo(a) to a, it replaces the old value of 10 with a new value 11. You can't get the 10 any more.
Decorator syntax does the same thing.
def deco(fn):
    def wrapper(*args):
        print('called')
        return fn(*args)
    return wrapper

def func_1(x):
    pass

func_1 = deco(func_1)  # replace the old func_1 with a decorated version

@deco  # the @ syntax does the same thing!
def func_2(x):
    pass
It's not forbidden to use a decorator to create a differently named function, it's just not normally as useful (and so there's no special syntax for it):
def func_3(x):
    pass

func_4 = deco(func_3)  # this works, creating a new function name without hiding the old one

Can I implement a function or better a decorator that makes func(a1)(a2)(a3)...(an) == func(a1, a2, a3,...,an)? [duplicate]

On Codewars.com I encountered the following task:
Create a function add that adds numbers together when called in succession. So add(1) should return 1, add(1)(2) should return 1+2, ...
While I'm familiar with the basics of Python, I've never encountered a function that is able to be called in such succession, i.e. a function f(x) that can be called as f(x)(y)(z).... Thus far, I'm not even sure how to interpret this notation.
As a mathematician, I'd suspect that f(x)(y) is a function that assigns to every x a function g_{x} and then returns g_{x}(y) and likewise for f(x)(y)(z).
Should this interpretation be correct, Python would allow me to dynamically create functions which seems very interesting to me. I've searched the web for the past hour, but wasn't able to find a lead in the right direction. Since I don't know how this programming concept is called, however, this may not be too surprising.
How do you call this concept and where can I read more about it?
I don't know whether this is function chaining as much as callable chaining, but, since functions are callables, I guess there's no harm done. Either way, there are two ways I can think of doing this:
Sub-classing int and defining __call__:
The first way would be with a custom int subclass that defines __call__ which returns a new instance of itself with the updated value:
class CustomInt(int):
    def __call__(self, v):
        return CustomInt(self + v)
Function add can now be defined to return a CustomInt instance, which, as a callable that returns an updated value of itself, can be called in succession:
>>> def add(v):
...     return CustomInt(v)
>>> add(1)
1
>>> add(1)(2)
3
>>> add(1)(2)(3)(44) # and so on..
50
In addition, as an int subclass, the returned value retains the __repr__ and __str__ behavior of ints. For more complex operations though, you should define other dunders appropriately.
As @Caridorc noted in a comment, add could also be simply written as:
add = CustomInt
Renaming the class to add instead of CustomInt also works similarly.
Define a closure, requires extra call to yield value:
The only other way I can think of involves a nested function that requires an extra empty argument call in order to return the result. I'm not using nonlocal and opt for attaching attributes to the function objects to make it portable between Pythons:
def add(v):
    def _inner_adder(val=None):
        """
        if val is None we return _inner_adder.v
        else we increment and return ourselves
        """
        if val is None:
            return _inner_adder.v
        _inner_adder.v += val
        return _inner_adder
    _inner_adder.v = v  # save value
    return _inner_adder
This continuously returns itself (_inner_adder) which, if a val is supplied, increments it (_inner_adder.v += val) and, if not, returns the value as it is. Like I mentioned, it requires an extra () call in order to return the incremented value:
>>> add(1)(2)()
3
>>> add(1)(2)(3)() # and so on..
6
You can hate me, but here is a one-liner :)
add = lambda v: type("", (int,), {"__call__": lambda self, v: self.__class__(self + v)})(v)
Edit: OK, how does this work? The code is identical to the answer by @Jim, but everything happens on a single line.
type can be used to construct new types: type(name, bases, dict) -> a new type. For name we provide an empty string, as a name is not really needed in this case. For bases (a tuple) we provide (int,), which is identical to inheriting from int. dict holds the class attributes, which is where we attach the __call__ lambda.
self.__class__(self + v) is identical to return CustomInt(self + v)
The new type is constructed and returned within the outer lambda.
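For reference, the class the one-liner builds with type is roughly equivalent to this ordinary class statement (Adder is a placeholder name here; the one-liner uses an empty string as the class name):
class Adder(int):
    def __call__(self, v):
        # return a new instance of the same class with the updated value
        return self.__class__(self + v)

add = Adder
print(add(1)(2)(3))
# 6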
If you want to define a function that can be called multiple times, you first need to return a callable object each time (for example a function); otherwise you have to create your own object that defines a __call__ attribute, in order for it to be callable.
The next point is that you need to preserve all the arguments, which in this case means you might want to use coroutines or a recursive function. Note that coroutines are much more optimized/flexible than recursive functions, especially for such tasks.
Here is a sample function using coroutines that preserves its latest state. Note that it can't be called multiple times, since the return value is an integer, which is not callable, but you might think about turning this into your expected object ;-).
def add():
    current = yield
    while True:
        value = yield current
        current = value + current

it = add()
next(it)
print(it.send(10))
print(it.send(2))
print(it.send(4))
10
12
16
Simply:
class add(int):
    def __call__(self, n):
        return add(self + n)
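For example:
print(add(1)(2)(3))
# 6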
If you are willing to accept an additional () in order to retrieve the result you can use functools.partial:
from functools import partial
def add(*args, result=0):
    return partial(add, result=sum(args) + result) if args else result
For example:
>>> add(1)
functools.partial(<function add at 0x7ffbcf3ff430>, result=1)
>>> add(1)(2)
functools.partial(<function add at 0x7ffbcf3ff430>, result=3)
>>> add(1)(2)()
3
This also allows specifying multiple numbers at once:
>>> add(1, 2, 3)(4, 5)(6)()
21
If you want to restrict it to a single number you can do the following:
def add(x=None, *, result=0):
    return partial(add, result=x + result) if x is not None else result
If you want add(x)(y)(z) to readily return the result and be further callable then sub-classing int is the way to go.
The pythonic way to do this would be to use dynamic arguments:
def add(*args):
    return sum(args)
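For example:
add(1, 2, 3)
# >>> 6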
This is not the answer you're looking for, and you may know this, but I thought I would give it anyway: if someone is wondering about doing this not out of curiosity but for real work, they should probably have the "right thing to do" answer.

Can I change function parameters when passing them as variables?

Excuse my poor wording in the title, but here's a longer explanation:
I have a function which as arguments takes some functions which are used to determine which data to retrieve from a database, as such:
def customer_data(customer_name, *args):
    # initialize dictionary with ids
    codata = dict([(data.__name__, []) for data in args])
    codata['customer_observer_id'] = _customer_observer_ids(customer_name)
    # add values to dictionary using function name as key
    for data in args:
        for coid in codata['customer_observer_id']:
            codata[data.__name__].append(data(coid))
    return codata
Which makes the call to the function looking something like this:
customer_data('customername', target_parts, source_group, ...)
One of these functions is defined with an extra parameter:
def polarization_value(customer_observer_id, timespan='day'):
What I would like is a way to change the timespan variable in a clever way. One obvious way is to include a keyword argument in customer_observer and add an exception when the function name being called is 'polarization_value', but I have a feeling there is a better way to do this.
You can use functools.partial and pass polarization_value as:
functools.partial(polarization_value, timespan='day')
Example:
>>> import functools
>>> def func(x, y=1):
...     print(x, y)
...
>>> new_func = functools.partial(func, y=20)
>>> new_func(100)
100 20
You may also find this helpful: Python: Why is functools.partial necessary?
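Applied to the question, a sketch (assuming the functions from the question are defined): customer_data looks up data.__name__ for each passed function, and plain partial objects have no __name__ attribute, so copy it over before passing the pre-configured function along with the others.
import functools

# pre-configure the timespan, then give the partial a __name__ so
# customer_data can still use it as a dictionary key
pol_week = functools.partial(polarization_value, timespan='week')
pol_week.__name__ = polarization_value.__name__

customer_data('customername', target_parts, source_group, pol_week)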

Set function signature in Python

Suppose I have a generic function f. I want to programmatically create a function f2 that behaves the same as f, but has a customized signature.
More detail
Given a list l and a dictionary d I want to be able to:
Set the non-keyword arguments of f2 to the strings in l
Set the keyword arguments of f2 to the keys in d and the default values to the values of d
ie. Suppose we have
l = ["x", "y"]
d = {"opt": None}
def f(*args, **kwargs):
# My code
Then I would want a function with signature:
def f2(x, y, opt=None):
    # My code
A specific use case
This is just a simplified version of my specific use case. I am giving this as an example only.
My actual use case (simplified) is as follows. We have a generic initiation function:
def generic_init(self, *args, **kwargs):
    """Function to initiate a generic object"""
    for name, arg in zip(self.__init_args__, args):
        setattr(self, name, arg)
    for name, default in self.__init_kw_args__.items():
        if name in kwargs:
            setattr(self, name, kwargs[name])
        else:
            setattr(self, name, default)
We want to use this function in a number of classes. In particular, we want to create a function __init__ that behaves like generic_init, but has the signature defined by some class variables at creation time:
class my_class:
    __init_args__ = ["x", "y"]
    __kw_init_args__ = {"my_opt": None}

__init__ = create_initiation_function(my_class, generic_init)
setattr(my_class, "__init__", __init__)
We want create_initiation_function to create a new function with the signature defined using __init_args__ and __kw_init_args__. Is it possible to write create_initiation_function?
Please note:
If I just wanted to improve the help, I could set __doc__.
We want to set the function signature on creation. After that, it doesn't need to be changed.
Instead of creating a function like generic_init but with a different signature, we could create a new function with the desired signature that just calls generic_init.
We want to define create_initiation_function. We don't want to manually specify the new function!
Related
Preserving signatures of decorated functions: This is how to preserve a signature when decorating a function. We need to be able to set the signature to an arbitrary value
From PEP-0362, there actually does appear to be a way to set the signature in py3.3+, using the fn.__signature__ attribute:
from inspect import signature
from functools import wraps

def shared_vars(*shared_args):
    """Decorator factory that defines shared variables that are
    passed to every invocation of the function"""
    def decorator(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            full_args = shared_args + args
            return f(*full_args, **kwargs)
        # Override signature
        sig = signature(f)
        sig = sig.replace(parameters=tuple(sig.parameters.values())[1:])
        wrapper.__signature__ = sig
        return wrapper
    return decorator
Then:
>>> @shared_vars({"myvar": "myval"})
... def example(_state, a, b, c):
...     return _state, a, b, c
...
>>> example(1, 2, 3)
({'myvar': 'myval'}, 1, 2, 3)
>>> str(signature(example))
'(a, b, c)'
Note: the PEP is not exactly right; Signature.replace moved the params from a positional arg to a kw-only arg.
For your usecase, having a docstring in the class/function should work -- that will show up in help() okay, and can be set programmatically (func.__doc__ = "stuff").
I can't see any way of setting the actual signature. I would have thought the functools module would have done it if it was doable, but it doesn't, at least in py2.5 and py2.6.
You can also raise a TypeError exception if you get bad input.
Hmm, if you don't mind being truly vile, you can use compile()/eval() to do it. If your desired signature is specified by arglist=["foo","bar","baz"], and your actual function is f(*args, **kwargs), you can manage:
argstr = ", ".join(arglist)
fakefunc = "def func(%s):\n return real_func(%s)\n" % (argstr, argstr)
fakefunc_code = compile(fakefunc, "fakesource", "exec")
fakeglobals = {}
eval(fakefunc_code, {"real_func": f}, fakeglobals)
f_with_good_sig = fakeglobals["func"]
help(f) # f(*args, **kwargs)
help(f_with_good_sig) # func(foo, bar, baz)
Changing the docstring and func_name should get you a complete solution. But, uh, eww...
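A minimal follow-up sketch, continuing the snippet above, to carry that metadata over:
# copy the wrapped function's name and docstring onto the compiled stand-in
f_with_good_sig.__name__ = f.__name__
f_with_good_sig.__doc__ = f.__doc__
help(f_with_good_sig)  # now shows f's name and docstring with the new signature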
I wrote a package named forge that solves this exact problem for Python 3.5+:
With your current code looking like this:
l=["x", "y"]
d={"opt":None}
def f(*args, **kwargs):
#My code
And your desired code looking like this:
def f2(x, y, opt=None):
    # My code
Here is how you would solve that using forge:
f2 = forge.sign(
    forge.arg('x'),
    forge.arg('y'),
    forge.arg('opt', default=None),
)(f)
As forge.sign is a wrapper, you could also use it directly:
@forge.sign(
    forge.arg('x'),
    forge.arg('y'),
    forge.arg('opt', default=None),
)
def func(*args, **kwargs):
    # signature becomes: func(x, y, opt=None)
    return (args, kwargs)

assert func(1, 2) == ((), {'x': 1, 'y': 2, 'opt': None})
Have a look at makefun, it was made for that (exposing variants of functions with more or less parameters and accurate signature), and works in python 2 and 3.
Your example would be written like this:
try:  # python 3.3+
    from inspect import signature, Signature, Parameter
except ImportError:
    from funcsigs import signature, Signature, Parameter

from makefun import create_function

def create_initiation_function(cls, gen_init):
    # (1) check which signature we want to create
    params = [Parameter('self', kind=Parameter.POSITIONAL_OR_KEYWORD)]
    for mandatory_arg_name in cls.__init_args__:
        params.append(Parameter(mandatory_arg_name, kind=Parameter.POSITIONAL_OR_KEYWORD))
    for default_arg_name, default_arg_val in cls.__opt_init_args__.items():
        params.append(Parameter(default_arg_name, kind=Parameter.POSITIONAL_OR_KEYWORD, default=default_arg_val))
    sig = Signature(params)
    # (2) create the init function dynamically
    return create_function(sig, gen_init)
# ----- let's use it

def generic_init(self, *args, **kwargs):
    """Function to initiate a generic object"""
    assert len(args) == 0
    for name, val in kwargs.items():
        setattr(self, name, val)

class my_class:
    __init_args__ = ["x", "y"]
    __opt_init_args__ = {"my_opt": None}

my_class.__init__ = create_initiation_function(my_class, generic_init)
and works as expected:
# check
o1 = my_class(1, 2)
assert vars(o1) == {'y': 2, 'x': 1, 'my_opt': None}
o2 = my_class(1, 2, 3)
assert vars(o2) == {'y': 2, 'x': 1, 'my_opt': 3}
o3 = my_class(my_opt='hello', y=3, x=2)
assert vars(o3) == {'y': 3, 'x': 2, 'my_opt': 'hello'}
You can't do this with live code.
That is, you seem to be wanting to take an actual, live function that looks like this:
def f(*args, **kwargs):
    print args[0]
and change it to one like this:
def f(a):
    print a
The reason this can't be done--at least without modifying actual Python bytecode--is because these compile differently.
The former compiles to a function that receives two things: a tuple of positional arguments and a dict of keyword arguments, and the code you're writing operates on that tuple and dict. The second compiles to a function that receives one parameter, which is accessed as a local variable directly. If you changed the function "signature", so to speak, it'd result in a function like this:
def f(a):
    print a[0]
which obviously wouldn't work.
If you want more detail (though it doesn't really help you), a function that takes an *args or *kwargs has one or two bits set in f.func_code.co_flags; you can examine this yourself. The function that takes a regular parameter has f.func_code.co_argcount set to 1; the *args version is 0. This is what Python uses to figure out how to set up the function's stack frame when it's called, to check parameters, etc.
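To see these flags yourself (a sketch using the Python 3 attribute name __code__; the answer above uses the older Python 2 func_code spelling):
import inspect

def takes_varargs(*args, **kwargs):
    pass

def takes_one(a):
    pass

print(takes_one.__code__.co_argcount)       # 1
print(takes_varargs.__code__.co_argcount)   # 0
print(bool(takes_varargs.__code__.co_flags & inspect.CO_VARARGS))      # True
print(bool(takes_varargs.__code__.co_flags & inspect.CO_VARKEYWORDS))  # True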
If you want to play around with modifying the function directly--if only to convince yourself that it won't work--see this answer for how to create a code object and live function from an existing one to modify bits of it. (This stuff is documented somewhere, but I can't find it; it's nowhere in the types module docs...)
That said, you can dynamically change the docstring of a function. Just assign to func.__doc__. Be sure to only do this at load time (from the global context or--most likely--a decorator); if you do it later on, tools that load the module to examine docstrings will never see it.
Maybe I didn't understand the problem well, but if it's about keeping the same behavior while changing the function signature, then you can do something like:
# define a function
def my_func(name, age):
    print("I am %s and I am %s" % (name, age))

# label the function with a backup name
save_func = my_func

# rewrite the function with a different signature
def my_func(age, name):
    # use the backup name to call the old function and keep the old behavior
    save_func(name, age)

# you can use the new signature
my_func(35, "Bob")
This outputs :
I am Bob and I am 35
We want create_initiation_function to change the signature
Please don't do this.
We want to use this function in a number of classes
Please use ordinary inheritance.
There's no value in having the signature "changed" at run time.
You're creating a maintenance nightmare. No one else will ever bother to figure out what you're doing. They'll simply rip it out and replace it with inheritance.
Do this instead. It's simple and obvious and makes your generic init available in all subclasses in an obvious, simple, Pythonic way.
class Super(object):
    def __init__(self, *args, **kwargs):
        # the generic __init__ that we want every subclass to use
        pass

class SomeSubClass(Super):
    def __init__(self, this, that, **kwdefaults):
        super(SomeSubClass, self).__init__(this, that, **kwdefaults)

class AnotherSubClass(Super):
    def __init__(self, x, y, **kwdefaults):
        super(AnotherSubClass, self).__init__(x, y, **kwdefaults)
Edit 1: Answering new question:
You ask how you can create a function with this signature:
def fun(a, b, opt=None):
    pass
The correct way to do that in Python is thus:
def fun(a, b, opt=None):
    pass
Edit 2: Answering explanation:
"Suppose I have a generic function f. I want to programmatically create a function f2 that behaves the same as f, but has a customised signature."
def f(*args, **kw):
    pass
OK, then f2 looks like so:
def f2(a, b, opt=None):
    f(a, b, opt=opt)
Again, the answer to your question is so trivial that you obviously want to know something different from what you are asking. You really do need to stop asking abstract questions and explain your concrete problem.
