Setting numpy slice in lambda function - python

I want to create a lambda function that takes two numpy arrays and sets a slice of the first to the second and returns the newly set numpy array.
Considering you can't assign things in lambda functions is there a way to do something similar to this?
The context of this is that I want to set the centre of a zeros array to another array in a single line, and the only solution I could come up with is to use reduce and lambda functions.
That is, I'm thinking about condensing the following (where b is given):
import numpy

a = numpy.zeros(numpy.array(b.shape) + 2)
a[1:-1, 1:-1] = b
Into a single line. Is this possible?
This is just an exercise in oneliners. I have the code doing what I want it to do, I'm just wondering about this for the fun of it :).

This is ugly; you should not use it. But it is a one-line lambda, as you asked (note that the index needs to be a tuple of slices for current numpy, hence (s,) * a.ndim):
f = lambda b, a=None, s=slice(1, -1): f(b, numpy.zeros(numpy.array(b.shape) + 2)) \
    if a is None else (a.__setitem__((s,) * a.ndim, b), a)[1]
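A quick sanity check of the one-liner (my own example, not from the original answer):

import numpy

b = numpy.arange(4).reshape(2, 2)
a = f(b)
print(a)  # a 4x4 array of zeros with b placed in its centre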
What is __setitem__?
obj.__setitem__(index, value) is equivalent to obj[index] = value in this case. Example:
class A:
    def __setitem__(self, index, value):
        print('index=%s, value=%s' % (index, value))

a = A()
a[1, 2] = 3
It prints:
index=(1, 2), value=3
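In the numpy case above, the slice assignment and the dunder call are the same operation (my own illustration):

import numpy

b = numpy.ones((2, 2))
a = numpy.zeros((4, 4))
a[1:-1, 1:-1] = b                               # ordinary slice assignment
a.__setitem__((slice(1, -1), slice(1, -1)), b)  # the equivalent dunder call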
Why does __setitem__() return None?
There is a general convention in Python that methods which modify an object in place, such as list.extend() and list.append(), should return None. There are exceptions, e.g. list.pop().
Y Combinator in Python
Here's a blog post, On writing Python one-liners, which shows how to write nameless recursive functions using lambdas (the link was suggested by @Peter Hansen).

Related

Multi-argument null coalesce and built-in "or" function in Python

Python has a great syntax for null coalescing:
c = a or b
This sets c to a if a is not False, None, empty, or 0, otherwise c is set to b.
(Yes, technically this is not null coalescing, it's more like bool coalescing, but it's close enough for the purpose of this question.)
There is not an obvious way to do this for a collection of objects, so I wrote a function to do this:
from functools import reduce

def or_func(x, y):
    return x or y

def null_coalesce(*a):
    return reduce(or_func, a)
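For example (my own illustration, not part of the original question), it returns the first truthy argument:

print(null_coalesce(None, '', 0, 'x', 'y'))  # x
print(null_coalesce(None, 42))               # 42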
This works, but writing my own or_func seems suboptimal - surely there is a built-in like __or__? I've attempted to use object.__or__ and operator.__or__, but the first gives an AttributeError and the second refers to the bitwise | (or) operator.
As a result I have two questions:
Is there a built-in function which acts like a or b?
Is there a built-in implementation of such a null coalesce function?
The answer to both seems to be no, but that would be somewhat surprising to me.
It's not exactly a single built-in, but what you want to achieve can be easily done with:
def null_coalesce(*a):
    return next(x for x in a if x)
It's lazy, so it does short-circuit like a or b or c, but unlike reduce.
You can also make it null-specific with:
def null_coalesce(*a):
    return next(x for x in a if x is not None)
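A quick comparison of the two variants (my own example, renamed so both can coexist): the truthiness-based version skips 0 and '', while the None-specific one keeps them.

def first_truthy(*a):
    return next(x for x in a if x)

def first_not_none(*a):
    return next(x for x in a if x is not None)

print(first_truthy(None, 0, '', 3))    # 3
print(first_not_none(None, 0, '', 3))  # 0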
Is there a built-in function which acts like a or b?
No. Quoting from this answer on why:
The or and and operators can't be expressed as functions because of their short-circuiting behavior:
False and some_function()
True or some_function()
in these cases, some_function() is never called.
A hypothetical or_(True, some_function()), on the other hand, would have to call some_function(), because function arguments are always evaluated before the function is called.
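A small demonstration of the difference (my own sketch; or_ is the hypothetical function named above):

def or_(a, b):  # a hypothetical function version of `or`
    return a or b

def some_function():
    print('called!')
    return False

True or some_function()     # prints nothing: `or` short-circuits
or_(True, some_function())  # prints 'called!': both arguments are evaluated first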
Is there a built-in implementation of such a null coalesce function?
No, there isn't. However, the Python documentation page for itertools suggests the following:
def first_true(iterable, default=False, pred=None):
    """Returns the first true value in the iterable.

    If no true value is found, returns *default*

    If *pred* is not None, returns the first item
    for which pred(item) is true.

    """
    # first_true([a,b,c], x) --> a or b or c or x
    # first_true([a,b], x, f) --> a if f(a) else b if f(b) else x
    return next(filter(pred, iterable), default)
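For example (my own usage sketch of the recipe above):

print(first_true([None, '', 0, 42], default=-1))               # 42
print(first_true([None, 0, 3], pred=lambda x: x is not None))  # 0
print(first_true([], default='fallback'))                      # fallback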
Marco has it right, there's no built-in, and itertools has a recipe. You can also pip install boltons to use the boltons.iterutils.first() utility, which is perfect if you want short-circuiting.
from boltons.iterutils import first
c = first([a, b])
There are a few other related and handy reduction tools in iterutils, too, like one().
I've done enough of the above that I actually ended up wanting a higher-level tool that could capture the entire interaction (including the a and b references) in a Python data structure, yielding glom and its Coalesce functionality.
from glom import glom, Coalesce
target = {'b': 1}
spec = Coalesce('a', 'b')
c = glom(target, spec)
# c = 1
(Full disclosure, as hinted above, I maintain glom and boltons, which is good news, because you can bug me if you find bugs.)

What is the [x] in a "Function()[x]" in Python?

I only want to know the meaning of [x] after a function call (in general, not about the specific code I will show), which I always thought of as a list, but I found nothing about it.
I will show two codes that I have seen using it, the first one is using PyTorch Library (Convolution):
Short one:
x.size()[0]
Long one:
def forward(self, x):
    conv_out = self.conv(x).view(x.size()[0], -1)
    return self.fc(conv_out)
The second one is using GYM library for RL, but also part of the code above:
Short one:
assert env.unwrapped.get_action_meanings()[1] == 'FIRE'
Long one:
def __init__(self, env=None):
    """For environments where the user need to press FIRE for the game to start."""
    super(FireResetEnv, self).__init__(env)
    assert env.unwrapped.get_action_meanings()[1] == 'FIRE'
    assert len(env.unwrapped.get_action_meanings()) >= 3
I don't want to know why they are using function()[x]; I only want to know what the [x] is in general.
Thanks for the answer.
[] is the indexing operator in Python.
If you have a list or tuple l, l[n] means the nth element of it.
If you have a dictionary d, d[x] means the element whose key is x.
If you have a string s, s[n] means the nth character in the string.
Some other datatypes define their own indexing functions, but they generally implement the same idea, possibly extending it (Numpy arrays allow you to use a tuple to perform multi-dimensional indexing and slicing).
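A few quick illustrations of each case (my own examples):

import numpy as np

l = [10, 20, 30]
d = {'a': 1, 'b': 2}
s = 'hello'
arr = np.arange(6).reshape(2, 3)

print(l[1])       # 20 -- list indexing
print(d['b'])     # 2  -- dict lookup by key
print(s[0])       # h  -- string indexing
print(arr[1, 2])  # 5  -- numpy multi-dimensional indexing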
If you put [x] after a function call, it performs the indexing on whatever the function returns.
y = function()[x]
is equivalent to
temp = function()
y = temp[x]
The [x] that comes after the several kinds of expression you describe is simply an index reference. It can apply to lists, tuples or dicts, depending on how you use them. For example (note that for a numpy array the shape is the attribute z.shape, not a method call):
import numpy as np

z = np.zeros((2, 3))  # Creates a numpy array of zeros with shape (2, 3)
print(z.shape)        # Outputs (2, 3)
print(z.shape[0])     # Outputs the 0th element of the tuple: 2
print(z.shape[2])     # Raises IndexError as it is out of range
The same applies to lists and dicts, each with its own kind of error. Most of the time, for functions, this is used only when you know the format of the return value and you only need a part of it.
Hope this helps.

creating a dictionary with anonymous functions in Python [duplicate]

This question already has answers here:
What do lambda function closures capture?
(7 answers)
Creating functions (or lambdas) in a loop (or comprehension)
(6 answers)
Closed 3 years ago.
I have a vector with some parameters and I would like to create a dictionary of anonymous (lambda) functions in Python3.
The goal of this is to create a callable function that gives values equal to the sum of these functions with the same argument.
I am struggling even to create a dictionary with the original lambda function objects and get them to behave consistently. I use the following code:
import numpy as np

a = np.linspace(0, 2.0, 10)
func = {}
for i, val in enumerate(a):
    print(i)
    func['f{}'.format(i)] = lambda x: x**val

print(func['f0'](0.5))
print(func['f1'](0.5))
print(func['f2'](0.5))
print(func['f3'](0.5))
The output of the final print statements gives the same value, whereas I would like it to give the values corresponding to x**val with the value of val coming from the originally constructed array a.
I guess what's happening is that the lambda functions always reference the "current" value of val, which, after the loop has executed, is always the last value in the array? This makes sense because the output is:
0
1
2
3
4
5
6
7
8
9
0.25
0.25
0.25
0.25
The output makes sense because it is the result of 0.5**2.0 and the exponent is the last value that val takes on in the loop.
I don't really understand this because I would have thought val would go out of scope after the loop is run, but I'm assuming this is part of the "magic" of lambda functions in that they will keep variables that they need to compute the function in scope for longer.
I guess what I need to do is to insert the "literal" value of val at that point into the lambda function, but I've never done that and don't know how.
I would like to know how to properly insert the literal value of val into the lambda functions constructed at each iteration of the loop. I would also like to know if there is a better way to accomplish what I need to.
EDIT: it has been suggested that this question is a duplicate. I think it is a duplicate of the list comprehension post because the best answer is virtually identical and lambda functions are used.
I think it is not a duplicate of the lexical closures post, although I think it is important that this post was mentioned. That post gives a deeper understanding of the underlying causes for this behavior but the original question specifically states "mindful avoidance of lambda functions," which makes it a bit different. I'm not sure what the purpose of that mindful avoidance is, but the post did teach related lessons on scoping.
The problem with this approach is that the val used inside your lambda function is the live variable from the enclosing scope. When each lambda is called, the value used for val in the formula is the current value of val, so all your results are the same.
The solution is to "freeze" the value of val when creating each lambda function. The way that is easiest to understand is to have an outer lambda function that takes val as an input and returns your desired (inner) lambda, with val frozen in a different scope. Note that the outer function is called and immediately discarded; its return value is the original function you had:
for i, val in enumerate(a):
    print(i)
    func[f'f{i}'] = (lambda val: (lambda x: x**val))(val)
Shorter version
Now, due to the way Python stores default arguments to functions, it is possible to store the "current val value" as a default argument in the lambda and avoid the need for an outer function. But that spoils the lambda's signature, and why that value is there is harder to understand:
for i, val in enumerate(a):
    print(i)
    func[f'f{i}'] = lambda x, val=val: x**val
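With either version, the four print calls from the question now return different values (a quick check of my own; the exact numbers follow from a = np.linspace(0, 2.0, 10)):

print(func['f0'](0.5))  # 1.0, since the exponent is 0.0
print(func['f1'](0.5))  # roughly 0.86 (exponent 2/9)
print(func['f2'](0.5))  # roughly 0.73 (exponent 4/9)
print(func['f3'](0.5))  # roughly 0.63 (exponent 6/9)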

Initialising a list of lambda functions in python

I have a tuple of functions that I want to pre-load with some data. Currently the way I am doing this is below. Essentially, I make a list of the new functions, and add the lambda functions to it one at a time, then reconvert to a tuple. However, when I use these functions in a different part of the code, every one of them acts as if it were the last one in the list.
def newfuncs(data, funcs):
    newfuncs = []
    for f in funcs:
        newf = lambda x: f(x, data)
        newfuncs.append(newf)
    return tuple(newfuncs)
Here is a simple example of the problem
funcs = (lambda x, y: x + y, lambda a, b: a - b)
funcs = newfuncs(10, funcs)
print(funcs[0](5))
print(funcs[1](5))
I would expect the number 15 to be printed, then -5. However, this code prints the number -5 twice. If anyone can help me understand why this is happening, it would be greatly appreciated. Thanks!
As mentioned, the issue is with the variable f, which is the same variable assigned to all lambda functions, so at the end of the loop, every lambda sees the same f.
The solution here is to either use functools.partial, or create a scoped default argument for the lambda:
def newfuncs(data, funcs):
    newfuncs = []
    for f in funcs:
        newf = lambda x, f=f: f(x, data)  # note the f=f bit here
        newfuncs.append(newf)
    return tuple(newfuncs)
Calling these lambdas as before now gives:
15
-5
If you're using Python 3.x, make sure to take a look at this comment by ShadowRanger for a possible safety addition to the scoped default arg approach.
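The answer above also mentions functools.partial; here is a minimal sketch of that route (the helper name apply_with_data is my own, not from the original answer):

from functools import partial

def apply_with_data(f, data, x):
    return f(x, data)

def newfuncs(data, funcs):
    # partial binds f and data immediately, so there is no late-binding problem
    return tuple(partial(apply_with_data, f, data) for f in funcs)

funcs = newfuncs(10, (lambda x, y: x + y, lambda a, b: a - b))
print(funcs[0](5))  # 15
print(funcs[1](5))  # -5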
This is a well-known Python "issue," or should I say "this is just the way Python is."
You created the tuple:
( x => f(x, data), x => f(x, data) )
But what is f? f is not evaluated until you finally call the functions!
First, f was (x, y)=>x+y. Then in your for-loop, f was reassigned to (x, y)=>x-y.
When you finally get around to calling your functions, then, and only then, will the value of f be looked up. What is the value of f at this point? The value is (x, y)=>x-y for all of your functions. All of your functions do subtraction. This is because f is reassigned. There is ONLY ONE f. And the value of that one and only one f is set to the subtraction function, before any of your lambdas ever get called.
ADDENDUM
In case anyone is interested, different languages approach this problem in different ways. JavaScript is rather interesting (some would say confusing) here because it does things the Python way, which the OP found unexpected, as well as a different way, which the OP would have expected. In JavaScript:
> let funcs = []
> for (let f of [(x,y)=>x+y, (x,y)=>x-y]) {
      funcs.push(x=>f(x,10));
  }
> funcs[0](5)
15
> funcs[1](5)
-5
However, if you change let to var above, it behaves like Python and you get -5 for both! This is because with let, you get a different f for each iteration of the for-loop; with var, the whole function shares the same f, which keeps getting reassigned. This is how Python works.
cᴏʟᴅsᴘᴇᴇᴅ has shown you that the way to do what you expect is to make sure you get that different f for each iteration, which Python allows you to do in a pretty neat way, defaulting the second argument of the lambda to an f local to that iteration. It's pretty cool, so their answer should be accepted if it helps you.

How to get part of a list in Python without creating a new list?

I have a list a = [1,2,3,4,5]. And I have a function, say, Func(x). I know that if I do Func(a) then the reference of a will be passed into Func(x). And if I do Func(a[:]), a new list will be created and passed into Func(x).
So my question is: Is it possible to only pass the first three elements into Func(x) by reference, like Func(a) (I don't want to pass the whole list a into the function for certain reasons)? If I do Func(a[:4]), a new list will be created, and that's what I want to avoid.
The only way I can think about is to pass a and the indexes into Func(x), like Func(a, start, end).
There is no way to create a 'window' on a list, no.
Your only options are to create a slice, or to pass in start and end indices to the function and have the function honour those.
The latter is what the bisect module functions do for example; each function takes a lo and hi parameter that default to 0 and len(list) respectively:
def func(lst, lo=0, hi=None):
    if hi is None:
        hi = len(lst)
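A minimal sketch of that convention in use (my own example; window_sum is an illustrative name):

def window_sum(lst, lo=0, hi=None):
    if hi is None:
        hi = len(lst)
    total = 0
    for i in range(lo, hi):  # work on lst[lo:hi] without copying it
        total += lst[i]
    return total

a = [1, 2, 3, 4, 5]
print(window_sum(a, 0, 3))  # 6 -- uses only the first three elements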
Why not create a second argument so that Func(a) becomes Func(a, n), where a is the reference to your list and n is the last position in the list up to which you want to evaluate?
Something like:
Func(a, 2)
With that example, the first three elements are evaluated.
