Attach functions to array elements? - python

Is there a way to attach a function (same function) to all the elements of an array without looping through and attaching it one by one?
So like
# create function foo from some computation
foo # some def
# list
objects # list of objects
# attach same foo function to all elements of objects
# maybe using a decorator?
# loop through list to execute foo
for obj in objects:
    obj.foo()
Let me explain this more:
Of course I can just assign the value of an object like
obj.attr = value
or for an object list:
for obj in objects:
    obj.attr = value
What I am trying to avoid is setting the value of an attr on each single object, but rather applying a function on the entire list/array and each element would execute that function.

You could make a function to wrap it up:
def for_each(l, f):
    for item in l:
        f(item)
Then for a function foo you could do this:
for_each(objects, foo)
For a method foo you could do this:
for_each(objects, lambda item: item.foo())
Or this:
from operator import methodcaller
for_each(objects, methodcaller('foo'))
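Putting it all together, a minimal sketch (the Thing class and its foo method are placeholders of my own, just for illustration):
from operator import methodcaller

class Thing:
    def __init__(self, name):
        self.name = name

    def foo(self):
        print(f"foo called on {self.name}")

def for_each(l, f):
    for item in l:
        f(item)

objects = [Thing("a"), Thing("b"), Thing("c")]
for_each(objects, methodcaller('foo'))  # calls obj.foo() on every element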
In Python 2, you can also use map:
map(foo, objects)
For Python 3, you'll have to wrap that in list(...). In either version, you can use list comprehensions:
[foo(item) for item in objects]
However, if you're calling the function just for its side effect rather than transforming the list somehow, I'd recommend against these last two ways as it's against the Zen of Python:
Explicit is better than implicit.
And frankly, one more line for a for loop isn't that much.

You can use map. It is generally used to build a second list from the return values, which you can simply ignore here.
map(lambda x: x.foo(), objects)
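One caveat: in Python 3, map is lazy, so the calls only run when the map object is consumed. A minimal sketch, assuming each element of objects has a foo() method called only for its side effect:
list(map(lambda x: x.foo(), objects))  # list() forces the lazy map in Python 3
# or, more explicitly:
for obj in objects:
    obj.foo()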

Use numpy's vectorize! It will work perfectly for you!
import numpy as np

def fun(x):
    # do something
    ...

array = np.array(your_list)
vectfun = np.vectorize(fun)
answer = vectfun(array)
So now answer will be a resulting array consisting of all the items in the previous list with the function applied to them!! Here is an example:
>>> your_list = [1, 2, 3, 4, 5]
>>> def fun(x):
...     return x**x
...
>>> array = np.array(your_list)
>>> vectfun = np.vectorize(fun)
>>> answer = vectfun(array)
>>> answer
array([ 1, 4, 27, 256, 3125])
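Tying this back to the original question (calling a method on each element): np.vectorize is essentially a convenience loop rather than a real speedup, but the same pattern works. A minimal sketch, assuming each element of objects has a foo() method:
import numpy as np

call_foo = np.vectorize(lambda obj: obj.foo())
answers = call_foo(np.array(objects, dtype=object))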

Related

Create a list with a value in it using only pure functions - python

The reason this problem has no trivial solution is that it needs to be solved using only pure functions.
Using only pure functions from Python's functional programming page (https://docs.python.org/3/howto/functional.html#), how can one create a list with a value in it? If we'd like to create a list with a value in it, we'd (in code) just do
x = [1]
I do not consider [] to be a part of the functions we're looking at here, since it has no signature and is not callable like any other function.
Using only functions to do this is not so trivial. One thought I had was to create a new list using list() and then append values to it. But list().append is mutable and does not return a new, or the, list with the item in it.
What I really want to do is to turn ["a","b","c"] into [["a"],["b"],["c"]], with above constraints.
Other proposals have been made, like creating my own (pure) function that does what I want:
def create_list(value) -> list:
    return [value]
and then just doing map(create_list, ["a","b","c"]) to get the solution.
But this is a custom-made function, not one of the functions from the Python page mentioned above (https://docs.python.org/3/howto/functional.html).
lst = [1, 2, 3]
# this will print [[1], [2], [3]]
print(list(map(lambda x: [x], lst)))
Single element:
def to_list(elem):
    return list(range(elem, elem + 1))
To convert [1,2,3] into [[1], [2], [3]] with a list comprehension (it can easily be changed to map):
[to_list(el) for el in input_list]
And without (ugly, but works ^^)
import itertools

def make_gen(elem):
    yield elem

def to_list(elem):
    return list(make_gen(elem))

def helper(elem, l):
    return list(itertools.chain(to_list(to_list(elem)), l))

def convert(l):
    if not l:
        return []
    return helper(l[0], convert(l[1:]))

print(convert([1, 2, 3]))
To ensure non-mutability, you probably want to use tuples instead of lists (or be very disciplined with your lists).
Using a list comprehension would be a valid functional approach:
A = [1,2,3]
B = [ [i] for i in A ] # [[1], [2], [3]]
or with tuples:
A = (1,2,3)
B = tuple( (i,) for i in A ) # ((1,), (2,), (3,))
If you must use functions, then map() is probably a good solution to this:
A = [1,2,3]
B = list(map(lambda i:[i],A))
If even [i] is proscribed (but why would it be?), you can use a function to build a list directly from its arguments:
def makeList(*v): return list(v)
A = makeList(1,2,3)
B = makeList(*map(makeList,A))
# combined
makeList(*map(makeList,makeList(1,2,3)))
BTW functional programming is not about "only using functions", it is more about non-mutability of results (and avoidance of side effects). You may want to question whoever is sending you on this wild goose chase.
Using only pure functions from Python's functional programming page
(https://docs.python.org/3/howto/functional.html#), how can one create
a list with a value in it? If we'd like to create a list with number 1
in it
You might exploit generator as generator are described therein as follows
def justone():
    yield 1

lst = list(justone())
print(lst)
output
[1]
justone is a function (which can be checked using inspect.isfunction) and is pure (it does not alter anything outside itself).
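The same idea extends to the question's ["a","b","c"] example; a minimal sketch (the name just is mine, not something from the linked page):
def just(value):
    # a pure generator function: yields its argument exactly once
    yield value

print(list(map(list, map(just, ["a", "b", "c"]))))  # prints [['a'], ['b'], ['c']]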
In the documentation you link, there are references to Iterators and Generators, which are powerful constructs present in Python (and other languages). You can consider a function to build a list as follows:
def list_from_args(*args):
    return [*args]
This is a (superfluous) wrapper around Iterator functionality. You can leverage the Iterator pattern in Python to accomplish a lot, whether that be creating/consuming objects (e.g. lists, tuples, dictionaries), or for processing data (e.g. reading/writing to a file line-by-line, paginating an API or DB Query, etc.)
The code above does the following, for example:
>>> example = list_from_args(1, 'a', 'ham', 'eggs', 44)
>>> example
[1, 'a', 'ham', 'eggs', 44]
The reason I labeled the above function as superfluous: Oftentimes, if you need to create a list on the fly, you can use list comprehensions.
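A quick sketch of the same function applied to the question's ["a","b","c"] example:
>>> list(map(list_from_args, ["a", "b", "c"]))
[['a'], ['b'], ['c']]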
This does it using only functions from https://docs.python.org/3/library/functional.html:
import functools
import itertools

list(map(
    list,
    map(
        functools.partial(
            itertools.repeat,
            times=1,
        ),
        [1, 2, 3],
    ),
))
functools.partial creates a new version of itertools.repeat with its times parameter fixed to 1. Each value in the list is then repeated once and turned into a new list by the list function; the outer list(...) is needed in Python 3 because map is lazy. The result:
[[1], [2], [3]]

Python generator function to loop over iterable sequence while eliminating duplicates

I am trying to create a generator function that loops over an iterable sequence while eliminating duplicates and then returns each result in order one at a time (not as a set or list), but I am having difficulty getting it to work. I have found similar questions here, but the responses pretty uniformly result in a list being produced.
I would like the output to be something like:
>>> next(i)
2
>>> next(i)
8
>>> next(i)
4....
I was able to write it as a regular function that produces a list:
def unique(series):
    new_series = []
    for i in series:
        if i not in new_series:
            new_series.append(i)
    return new_series

series = [2, 8, 4, 5, 5, 6, 6, 6, 2, 1]
print(unique(series))
I then tried rewriting it as a generator function by eliminating the lines that create the blank list and append to it, and using yield instead of return, but I can't get it to work:
def unique(series):
    for i in series:
        if i not in new_series:
            yield new_series
I don't know if I'm leaving something out or putting too much in. Thank you for any assistance.
Well, to put it simply, you need something to "remember" the values you find. In your first function you were using the new list itself, but in the second one you don't have it, so it fails. You can use a set() for this purpose.
def unique(series):
    seen = set()
    for i in series:
        if i not in seen:
            seen.add(i)
            yield i
Also, yield should "yield" a single value at once, not the entire new list.
To print out the elements, you'll have to iterate on the generator. Simply doing print(unique([1, 2, 3])) will print the resulting generator object.
>>> print(unique([1, 1, 2, 3]))
<generator object unique at 0x1023bda98>
>>> print(*unique([1, 1, 2, 3]))
1 2 3
>>> for x in unique([1, 1, 2, 3]):
...     print(x)
...
1
2
3
Note: * in the second example is the iterable unpack operator.
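For example, with the series from the question, the generator behaves exactly as the asker describes:
>>> i = unique([2, 8, 4, 5, 5, 6, 6, 6, 2, 1])
>>> next(i)
2
>>> next(i)
8
>>> next(i)
4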
Try this:
def unique(series):
    new_se = []
    for i in series:
        if i not in new_se:
            new_se.append(i)
    new_se = list(dict.fromkeys(new_se))  # this will remove duplicates
    return new_se

series = [2, 8, 4, 5, 5, 6, 6, 6, 2, 1]
print(unique(series))

python: flat list of dict values

I have a list of dicts like so:
a = [ {'list':[1,2,3]}, {'list':[1,4,5]} ]
Am trying to get a flat set of the values in the list key like {1,2,3,4,5}. What's the quickest way?
You can write a loop like:
result = set()
for row in a:
    result.update(row['list'])
which I think will work reasonably fast.
Or you can simply use set comprehension and that will result in the following one-liner:
result = {x for row in a for x in row['list']}
In case not all elements contain a 'list' key, you can use .get(..) with an empty tuple as the default (an empty tuple is cheaper to construct than an empty list):
result = {x for row in a for x in row.get('list',())}
It is not clear what your definition of "quickest" is, but whether it is speed or number of lines I would use a combination of itertools and a generator.
>>> import itertools
>>> a = [ {'list':[1,2,3]}, {'list':[1,4,5]} ]
>>> b = set(itertools.chain.from_iterable(x['list'] for x in a if 'list' in x))
Note that I have added a guard against any elements that may not contain a 'list' key; you can omit that if you know this will always be true.
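For completeness, with the a above, b evaluates to the flat set the question asks for:
>>> b
{1, 2, 3, 4, 5}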
A flat set can also be built easily with reduce. All you need is the initializer, the third argument of the reduce function:
reduce(
    lambda _set, _dict, key='list': _set.update(
        _dict.get(key) or set()) or _set,
    a,
    set())
The code above works for both Python 2 and Python 3, but in Python 3 you need to import reduce first, with from functools import reduce.
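As a usage sketch, here is a slightly simplified lambda of my own (set union instead of update) applied to the same a:
>>> from functools import reduce
>>> reduce(lambda s, d: s | set(d.get('list', ())), a, set())
{1, 2, 3, 4, 5}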

__getitem__ for a list vs a dict

The Dictionary __getitem__ method does not seem to work the same way as it does for List, and it is causing me headaches. Here is what I mean:
If I subclass list, I can overload __getitem__ as:
class myList(list):
    def __getitem__(self, index):
        if isinstance(index, int):
            # do one thing
            ...
        if isinstance(index, slice):
            # do another thing
            ...
If I subclass dict, however, the __getitem__ does not expose index, but key instead as in:
class myDict(dict):
    def __getitem__(self, key):
        # Here I want to inspect the INDEX, but only have access to key!
        ...
So, my question is how can I intercept the index of a dict, instead of just the key?
Example use case:
a = myDict()
a['scalar'] = 1 # Create dictionary entry called 'scalar', and assign 1
a['vector_1'] = [1,2,3,4,5] # I want all subsequent vectors to be 5 long
a['vector_2'][[0,1,2]] = [1,2,3] # I want to intercept this and force vector_2 to be 5 long
print(a['vector_2'])
[1,2,3,0,0]
a['test'] # This should throw a KeyError
a['test'][[0,2,3]] # So should this
Dictionaries are mappings, not sequences; there is no positional index to pass in. Python uses the same syntax ([..]) and the same magic method (__getitem__) for both lists and dictionaries, but a dict's __getitem__ always receives the key.
When you index a dictionary on an integer like 0, the dictionary treats that like any other key:
>>> d = {'foo': 'bar', 0: 42}
>>> d.keys()
[0, 'foo']
>>> d[0]
42
>>> d['foo']
'bar'
Chained indexing applies to return values; the expression:
a['vector_2'][0, 1, 2]
is executed as:
_result = a['vector_2'] # via a.__getitem__('vector_2')
_result[0, 1, 2] # via _result.__getitem__((0, 1, 2))
so if you want values in your dictionary to behave in a certain way, you must return objects that support those operations.
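A minimal sketch of that advice (FixedVector, MyDict, and the fixed length of 5 are my own illustrative choices, not a standard API): have the dict wrap stored vectors in a value type that knows how to handle a list of indices. It does not auto-create missing keys, which the full use case would also need.
class FixedVector(list):
    """A list padded with zeros to a fixed length; accepts a list of indices."""
    def __init__(self, data, length=5):
        super().__init__(list(data) + [0] * (length - len(data)))

    def __setitem__(self, index, value):
        if isinstance(index, list):               # e.g. v[[0, 1, 2]] = [1, 2, 3]
            for i, x in zip(index, value):
                super().__setitem__(i, x)
        else:
            super().__setitem__(index, value)

class MyDict(dict):
    def __setitem__(self, key, value):
        if isinstance(value, list):
            value = FixedVector(value)            # wrap vectors on the way in
        super().__setitem__(key, value)

a = MyDict()
a['vector_2'] = [1, 2, 3]
a['vector_2'][[0, 1, 2]] = [9, 9, 9]
print(a['vector_2'])                              # [9, 9, 9, 0, 0]
# a['test']                                       # still raises KeyError, as the use case requires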

How to run an operation on a collection in Python and collect the results?

So if I have a list of 100 numbers, and I want to run a function like this for each of them:
Operation ( originalElement, anotherVar ) # returns new number.
and collect the result like so:
result = another list...
How do I do it? Maybe using lambdas?
List comprehensions. In Python they look something like:
a = [f(x) for x in bar]
Where f(x) is some function and bar is a sequence.
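Applied to the question's setup, a minimal sketch (operation and another_var are placeholder names of my own):
def operation(element, another_var):
    return element + another_var        # stand-in for the real computation

numbers = list(range(100))              # the "list of 100 numbers"
another_var = 10
result = [operation(x, another_var) for x in numbers]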
You can define f(x) as a partially applied function with a construct like:
def foo(x):
    return lambda f: f*x
Which will return a function that multiplies the parameter by x. A trivial example of this type of construct used in a list comprehension looks like:
>>> def foo (x):
...     return lambda f: f*x
...
>>> a=[1,2,3]
>>> fn_foo = foo(5)
>>> [fn_foo (y) for y in a]
[5, 10, 15]
Although I don't imagine using this sort of construct in any but fairly esoteric cases. Python is not a true functional language, so it has less scope to do clever tricks with higher order functions than (say) Haskell. You may find applications for this type of construct, but it's not really that pythonic. You could achieve a simple transformation with something like:
>>> y=5
>>> a=[1,2,3]
>>> [x*y for x in a]
[5, 10, 15]
Another (somewhat deprecated) way of doing this is:
def kevin(v):
    return v * v

vals = range(0, 100)
results = list(map(kevin, vals))  # in Python 3, map is lazy, so wrap it in list()
List comprehensions, generator expressions, reduce function.
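Minimal sketches of those three approaches on the same setup (again, operation and the variable names are placeholders):
from functools import reduce

def operation(element, another_var):
    return element * another_var

numbers = list(range(100))
k = 3

by_comprehension = [operation(x, k) for x in numbers]
by_generator = list(operation(x, k) for x in numbers)                    # generator expression, materialized
by_reduce = reduce(lambda acc, x: acc + [operation(x, k)], numbers, [])

assert by_comprehension == by_generator == by_reduce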
