I have found myself frequently iterating over a subset of a list according to some condition that is only needed for that loop, and would like to know if there is a more efficient way to write this.
Take for example the list:
foo = [1, 2, 3, 4, 5]
If I wanted to build a for loop that iterates through every element greater than 2, I would typically do something like this:
for x in [y for y in foo if y > 2]:
    # Do something
However, this seems redundant and isn't very readable in my opinion. I don't think it is particularly inefficient, especially when using a generator instead, as @iota pointed out below. However, I would much rather be able to write something like:
for x in foo if x > 2:
    # Do something
Ideally avoiding the need for a second for and a whole other temporary variable. Is there a syntax for this? I use Python 3, but I assume any such syntax would likely have a Python 2 variant as well.
Note: This is obviously a very simple example that could be better handled by something like range() or sorting & slicing, so assume foo is any arbitrary list that must be filtered by an arbitrary condition.
Not quite the syntax you are after, but you can also use filter with a lambda:
for x in filter(lambda y: y > 2, foo):
    print(x)
Or using a function for readability's sake:
def greaterthantwo(y):
    return y > 2

for x in filter(greaterthantwo, foo):
    print(x)
filter also has the advantage of returning a lazy iterator, so it doesn't evaluate all the values if you exit the loop early (as opposed to building a new list).
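For instance, a quick sketch of that early-exit benefit: filter produces matches one at a time, so breaking out of the loop means the remaining values are never examined.
foo = [1, 2, 3, 4, 5]

for x in filter(lambda y: y > 2, foo):
    print(x)   # prints 3
    break      # 4 and 5 are never produced; a list comprehension would have built them anyway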
There's filter as discussed in @salparadise's answer, but you can also use generators:
def filterbyvalue(seq, value):
    for el in seq:
        if el.attribute == value:
            yield el

for x in filterbyvalue(foo, 2):
    # Do something
It may look longer, but it is useful when you have to do something more complex than simple filtering, and it also performs better than first building a list comprehension and then looping over it.
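For the original foo of plain integers, the same idea would look roughly like this (filterbythreshold is just a made-up name for this sketch):
def filterbythreshold(seq, threshold):
    # yield only the elements greater than the threshold
    for el in seq:
        if el > threshold:
            yield el

foo = [1, 2, 3, 4, 5]
for x in filterbythreshold(foo, 2):
    print(x)   # 3, 4, 5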
I would do it like this.
For efficient code:
data = (x for x in foo if x>2)
print(next(data))
For more readable code:
[print(x) for x in foo if x>2]
Related
I am trying to call a function for a range of values. That function returns a list. The goal is to combine all the returned lists into a list.
Here is a test function that returns a list:
def f(i):
    return [chr(ord('a') + i), chr(ord('b') + i), chr(ord('c') + i)]
Here is a list comprehension that does what I need that I came up with after some experimentation and a lot of StackOverflow reading:
y = [a for x in (f(i) for i in range(5)) for a in x]
However, I do not understand why and how it works when a simple loop that solves this problem looks like this:
y = []
for x in (f(i) for i in range(5)):
    for a in x:
        y.append(a)
Can someone explain?
Thanks!
This may be a better illustration, following Bendik Knapstad's answer:
[
    a                                  # element added to the list
    for x in (f(i) for i in range(5))  # outer loop
    for a in x                         # inner loop that assigns the element to be added to the list
]
Answering to this:
However, I do not understand why and how it works (list comprehensions) when a simple loop that solves this problem looks like this (for loops)
Yes, they both can work but there are some differences.
First, with a list comprehension you generate the list directly (because that's its output) and assign it to a variable in one step, whereas with a for loop you must create the list first (empty or not) before you can append to it or perform any updating/deleting/re-indexing operation.
Second, simplicity. While for loops may be needed for complex tasks where you apply a wide variety of functions (and maybe use RNGs), list comprehensions are usually preferable when dealing with lists and performing fairly basic operations (of course, you can start nesting them and turn them into something more complex).
Third and finally, speed. List comprehensions tend to perform faster than for loops for simple tasks.
More in-depth information on list comprehensions and for loops can be found in Python's official tutorial: https://docs.python.org/3/tutorial/datastructures.html
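If you want to check the speed claim on your own machine, a rough timeit sketch (numbers vary by system) looks like this:
import timeit

setup = "data = list(range(1000))"
print(timeit.timeit("[x * 2 for x in data]", setup=setup, number=10000))                            # list comprehension
print(timeit.timeit("out = []\nfor x in data:\n    out.append(x * 2)", setup=setup, number=10000))  # explicit loop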
Nested list comprehensions are hard to read.
But if you look at the two expressions you'll see that they contain the same logic.
In the list comprehension, the first a is the part you want to keep in the list. It's equivalent to the y.append(a) in the for loop.
The for x in (f(i) for i in range(5)) is the same as in your for loop
The same goes for the next line for a in x
So for x in (f(i) for i in range(5)) iterates over the generator of lists, binding each list to x in turn.
So if we had the list x already we could write
y= [a for a in x]
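Putting it all together, you can check that both spellings give the same flattened result for the f from the question:
def f(i):
    return [chr(ord('a') + i), chr(ord('b') + i), chr(ord('c') + i)]

y1 = [a for x in (f(i) for i in range(5)) for a in x]

y2 = []
for x in (f(i) for i in range(5)):
    for a in x:
        y2.append(a)

print(y1 == y2)   # True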
I love Python. However, one thing that bugs me a bit is that I don't know how to format functional activities in a fluid manner like I can in JavaScript.
Example (made up on the spot): can you help me convert this to Python in a fluent-looking manner?
var even_set = [1, 2, 3, 4, 5]
  .filter(function(x) { return x % 2 === 0; })
  .map(function(x) {
    console.log(x); // prints it for fun
    return x;
  })
  .reduce(function(num_set, val) {
    num_set[val] = true;
    return num_set;
  }, {});
I'd like to know if there are fluid options? Maybe a library.
In general, I've been using list comprehensions for most things, but it's a real problem if I want to print.
e.g., how can I print every even number between 1 and 5 in Python 2.x using a list comprehension (Python 3 has print() as a function, but Python 2 doesn't)? It's also a bit annoying that a list is constructed and returned. I'd rather just use a for loop.
Update: Here's yet another library/option, one that I adapted from a gist and is available on PyPI as infixpy:
from infixpy import *

a = (Seq(range(1, 51))
     .map(lambda x: x * 4)
     .filter(lambda x: x <= 170)
     .filter(lambda x: len(str(x)) == 2)
     .filter(lambda x: x % 20 == 0)
     .enumerate()
     .map(lambda x: 'Result[%d]=%s' % (x[0], x[1]))
     .mkstring(' .. '))
print(a)
pip3 install infixpy
Older
I am looking now at an answer that strikes closer to the heart of the question:
fluentpy https://pypi.org/project/fluentpy/ :
Here is the kind of method chaining for collections that a streams programmer (in Scala, Java, and others) will appreciate:
import fluentpy as _

(
    _(range(1, 50 + 1))
    .map(_.each * 4)
    .filter(_.each <= 170)
    .filter(lambda each: len(str(each)) == 2)
    .filter(lambda each: each % 20 == 0)
    .enumerate()
    .map(lambda each: 'Result[%d]=%s' % (each[0], each[1]))
    .join(',')
    .print()
)
And it works fine:
Result[0]=20,Result[1]=40,Result[2]=60,Result[3]=80
I am just now trying this out. It will be a very good day if this works as shown above.
Update: Look at this: maybe Python can start to be more reasonable for one-line shell scripts:
python3 -m fluentpy "lib.sys.stdin.readlines().map(str.lower).map(print)"
Here it is in action on the command line:
$ echo -e "Hello World line1\nLine 2\nLine 3\nGoodbye" \
  | python3 -m fluentpy "lib.sys.stdin.readlines().map(str.lower).map(print)"
hello world line1
line 2
line 3
goodbye
There is an extra newline that should be cleaned up - but the gist of it is useful (to me anyways).
Generators, iterators, and itertools give added powers to chaining and filtering actions. But rather than remember (or look up) rarely used things, I gravitate toward helper functions and comprehensions.
For example in this case, take care of the logging with a helper function:
def echo(x):
    print(x)
    return x
Selecting even values is easy with the if clause of a comprehension. And since the final output is a dictionary, use that kind of comprehension:
In [118]: d={echo(x):True for x in s if x%2==0}
2
4
In [119]: d
Out[119]: {2: True, 4: True}
Or, to add these values to an existing dictionary, use update:
new_set.update({echo(x):True for x in s if x%2==0})
Another way to write this is with an intermediate generator:
{y:True for y in (echo(x) for x in s if x%2==0)}
Or combine the echo and filter in one generator:
def even(s):
    for x in s:
        if x % 2 == 0:
            print(x)
            yield x
followed by a dict comp using it:
{y:True for y in even(s)}
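For instance, assuming s = [1, 2, 3, 4, 5] as in the question:
s = [1, 2, 3, 4, 5]
print({y: True for y in even(s)})   # even() prints 2 and 4 as it yields, then: {2: True, 4: True}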
Comprehensions are the fluent python way of handling filter/map operations.
Your code would be something like:
def evenize(input_list):
    return [x for x in input_list if x % 2 == 0]
Comprehensions don't work well with side effects like console logging, so do that in a separate loop. Chaining function calls isn't really that common an idiom in python. Don't expect that to be your bread and butter here. Python libraries tend to follow the "alter state or return a value, but not both" pattern. Some exceptions exist.
Edit: On the plus side, python provides several flavors of comprehensions, which are awesome:
List comprehension: [x for x in range(3)] == [0, 1, 2]
Set comprehension: {x for x in range(3)} == {0, 1, 2}
Dict comprehension: {x: x**2 for x in range(3)} == {0: 0, 1: 1, 2: 4}
Generator comprehension (or generator expression): (x for x in range(3)) == <generator object <genexpr> at 0x10fc7dfa0>
With the generator comprehension, nothing has been evaluated yet, so it is a great way to prevent blowing up memory usage when pipelining operations on large collections.
For instance, if you try to do the following, even with Python 3 semantics for range:
for number in [x**2 for x in range(10000000000000000)]:
    print(number)
you will get a memory error trying to build the initial list. On the other hand, change the list comprehension into a generator comprehension:
for number in (x**2 for x in range(10**20)):
    print(number)
and there is no memory issue (it just takes forever to run). What happens is that the range object gets built (which only stores the start, stop and step values: 0, 10**20, and 1), and then the for-loop begins iterating over the genexp object. Effectively, the for-loop calls
GENEXP_ITERATOR = iter(genexp)
number = next(GENEXP_ITERATOR)
# run the loop one time
number = next(GENEXP_ITERATOR)
# run the loop one time
# etc.
(Note the GENEXP_ITERATOR object is not visible at the code level)
next(GENEXP_ITERATOR) tries to pull the first value out of genexp, which then starts iterating on the range object, pulls out one value, squares it, and yields out the value as the first number. The next time the for-loop calls next(GENEXP_ITERATOR), the generator expression pulls out the second value from the range object, squares it and yields it out for the second pass on the for-loop. The first set of numbers are no longer held in memory.
This means that no matter how many items in the generator comprehension, the memory usage remains constant. You can pass the generator expression to other generator expressions, and create long pipelines that never consume large amounts of memory.
import os
from pathlib import Path  # standing in for the original path.path helper from the path.py library

def pipeline(filenames):
    basepath = Path('/usr/share/stories')
    fullpaths = (basepath / fn for fn in filenames)
    realfiles = (fn for fn in fullpaths if os.path.exists(fn))
    openfiles = (open(fn) for fn in realfiles)

    def read_and_close(file):
        output = file.read(100)
        file.close()
        return output

    prefixes = (read_and_close(file) for file in openfiles)
    noncliches = (prefix for prefix in prefixes if not prefix.startswith('It was a dark and stormy night'))
    return {prefix[:32]: prefix for prefix in noncliches}
At any time, if you need a data structure for something, you can pass the generator comprehension to another comprehension type (as in the last line of this example), at which point, it will force the generators to evaluate all the data they have left, but unless you do that, the memory consumption will be limited to what happens in a single pass over the generators.
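A minimal sketch of that last point: the generator does no work until another comprehension (or any other consumer) pulls from it.
squares = (x ** 2 for x in range(5))   # nothing evaluated yet
as_dict = {n: n for n in squares}      # consuming it here forces the evaluation
print(as_dict)                         # {0: 0, 1: 1, 4: 4, 9: 9, 16: 16}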
The biggest dealbreaker to the code you wrote is that Python doesn't support multiline anonymous functions. In Python 2 the return value of filter or map is a list (in Python 3 they return iterators), so you can continue to chain them either way if you so desire. However, you'll either have to define the functions ahead of time, or use a lambda.
Arguments against doing this notwithstanding, here is a translation into Python of your JS code.
from __future__ import print_function
from functools import reduce

def print_and_return(x):
    print(x)
    return x

def iseven(x):
    return x % 2 == 0

def add_to_dict(d, x):
    d[x] = True
    return d

even_set = list(reduce(add_to_dict,
                       map(print_and_return,
                           filter(iseven, [1, 2, 3, 4, 5])), {}))
It should work on both Python 2 and Python 3.
There's a library that already does exactly what you are looking for: fluid syntax, lazy evaluation, and an order of operations that matches how it's written, plus plenty of other good stuff like multiprocess or multithreaded Map/Reduce.
It's named pyxtension and it's production-ready and maintained on PyPI.
Your code would be rewritten in this form:
from pyxtension.streams import stream

def console_log(x):
    print(x)
    return x

even_set = stream([1, 2, 3, 4, 5])\
    .filter(lambda x: x % 2 == 0)\
    .map(console_log)\
    .reduce(lambda num_set, val: num_set.__setitem__(val, True))
Replace map with mpmap for multiprocessed map, or fastmap for multithreaded map.
We can use Pyterator for this (disclaimer: I am the author).
We define the function that prints and returns (which I believe you can omit completely however).
def print_and_return(x):
    print(x)
    return x
then
from pyterator import iterate

even_dict = (
    iterate([1, 2, 3, 4, 5])
    .filter(lambda x: x % 2 == 0)
    .map(print_and_return)
    .map(lambda x: (x, True))
    .to_dict()
)
# {2: True, 4: True}
where I have converted your reduce into a sequence of tuples that can be converted into a dictionary.
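For comparison, a sketch of the same result with no third-party library, using a plain dict comprehension over filter:
def print_and_return(x):
    print(x)
    return x

even_dict = {print_and_return(x): True for x in [1, 2, 3, 4, 5] if x % 2 == 0}
# prints 2 and 4, then even_dict == {2: True, 4: True}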
The following is a simplified example of my code.
>>> def action(num):
...     print "Number is", num
...
>>> items = [1, 3, 6]
>>> for i in [j for j in items if j > 4]:
...     action(i)
...
Number is 6
My question is the following: is it bad practice (for reasons such as code clarity) to simply replace the for loop with a comprehension which will still call the action function? That is:
>>> (action(j) for j in items if j > 2)
Number is 6
This shouldn't use a generator or comprehension at all.
def action(num):
    print "Number is", num

items = [1, 3, 6]

for j in items:
    if j > 4:
        action(j)
Generators evaluate lazily. The expression (action(j) for j in items if j > 2) will merely return a generator expression to the caller. Nothing will happen in it unless you explicitly exhaust it. List comprehensions evaluate eagerly, but, in this particular case, you are left with a list with no purpose. Just use a regular loop.
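To see that laziness concretely (using Python 3 print syntax for this sketch):
def action(num):
    print("Number is", num)

items = [1, 3, 6]

gen = (action(j) for j in items if j > 4)   # nothing is printed yet
list(gen)                                   # exhausting the generator runs action: prints "Number is 6"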
This is bad practice. Firstly, your code fragment does not produce the desired output. You would instead get something like: <generator object <genexpr> at 0x03D826F0>.
Secondly, a list comprehension is for creating sequences, and generators are for creating streams of objects. Typically, they do not have side effects. Your action function is a prime example of a side effect -- it prints its input and returns nothing. Rather, for each item it generates, a generator should take an input and compute some output. e.g.
doubled_odds = [x*2 for x in range(10) if x % 2 != 0]
By using a generator you are obfuscating the purpose of your code, which is to mutate global state (printing something), and not to create a stream of objects.
Whereas, just using a for loop makes the code slightly longer (basically just more whitespace), but immediately you can see that the purpose is to apply function to a selection of items (as opposed to creating a new stream/list of items).
for i in items:
    if i > 4:
        action(i)
Remember that generators are still looping constructs and that the underlying bytecode is more or less the same (if anything, generators are marginally less efficient), and you lose clarity. Generators and list comprehensions are great, but this is not the right situation for them.
While I personally favour Tigerhawk's solution, there might be a middle ground between his and willywonkadailyblah's solution (now deleted).
One of willywonkadailyblah's points was:
Why create a new list instead of just using the old one? You already have the condition to filter out the correct elements, so why put them away in memory and come back for them?
One way to avoid this problem is to use lazy evaluation of the filtering i.e. have the filtering done only when iterating using the for loop by making the filtering part of a generator expression rather than a list comprehension:
for i in (j for j in items if j > 4):
    action(i)
Output
Number is 6
In all honesty, I think Tigerhawk's solution is the best for this, though. This is just one possible alternative.
The reason that I proposed this is that it reminds me a lot of LINQ queries in C#, where you define a lazy way to extract, filter and project elements from a sequence in one statement (the LINQ expression) and can then use a separate for each loop with that query to perform some action on each element.
In Python you can do list.pop(i), which removes and returns the element at index i, but is there a built-in function like list.remove(e) that removes and returns the first element equal to e?
Thanks
I mean, there is list.remove, yes.
>>> x = [1,2,3]
>>> x.remove(1)
>>> x
[2, 3]
I don't know why you need it to return the removed element, though. You've already passed it to list.remove, so you know what it is... I guess if you've overloaded __eq__ on the objects in the list so that it doesn't actually correspond to some reasonable notion of equality, you could have problems. But don't do that, because that would be terrible.
If you have done that terrible thing, it's not difficult to roll your own function that does this:
def remove_and_return(lst, item):
    return lst.pop(lst.index(item))
Is there a builtin? No. Probably because if you already know the element you want to remove, then why bother returning it?1
The best you can do is get the index, and then pop it. Ultimately, this isn't such a big deal -- Chaining 2 O(n) algorithms is still O(n), so you still scale roughly the same ...
def extract(lst, item):
    idx = lst.index(item)
    return lst.pop(idx)
1Sure, there are pathological cases where the item returned might not be the item you already know... but they aren't important enough to warrant a new method which takes only 3 lines to write yourself :-)
Strictly speaking, you would need something like:
def remove(lst, e):
    i = lst.index(e)
    # error if e not in lst
    a = lst[i]
    lst.pop(i)
    return a
Which would make sense only if e == a is true, but e is a is false, and you really need a instead of e.
In most cases, though, I would say that this suggests something suspicious in your code.
A short version would be:
a = lst.pop(lst.index(e))
I have a list comprehension which approximates to:
[f(x) for x in l if f(x)]
Where l is a list and f(x) is an expensive function which returns a list.
I want to avoid evaluating f(x) twice for every non-empty occurrence of f(x). Is there some way to save its output within the list comprehension?
I could remove the final condition, generate the whole list and then prune it, but that seems wasteful.
Edit:
Two basic approaches have been suggested:
An inner generator comprehension:
[y for y in (f(x) for x in l) if y]
or memoization.
I think the inner generator comprehension is elegant for the problem as stated. In actual fact I simplified the question to make it clear; what I really want is:
[g(x, f(x)) for x in l if f(x)]
For this more complicated situation, I think memoization produces a cleaner end result.
[y for y in (f(x) for x in l) if y]
Will do.
Starting with Python 3.8 and the introduction of assignment expressions (PEP 572, the := operator), it's possible to use a local variable within a list comprehension in order to avoid calling the same function twice:
In our case, we can name the evaluation of f(x) as a variable y while using the result of the expression to filter the list but also as the mapped value:
[y for x in l if (y := f(x))]
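As a quick usage sketch (f and l here are made up for illustration; f returns a possibly empty, i.e. falsy, list):
def f(x):
    return list(range(x)) if x % 2 == 0 else []

l = [1, 2, 3, 4]
print([y for x in l if (y := f(x))])   # [[0, 1], [0, 1, 2, 3]]; f is called only once per element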
A solution (the best one if you have repeated values of x) would be to memoize the function f, i.e. to create a wrapper function that saves the result for each argument the function is called with and returns it if the same value is asked for again.
A really simple implementation is the following:
storage = {}

def memoized(value):
    if value not in storage:
        storage[value] = f(value)
    return storage[value]

[memoized(x) for x in l if memoized(x)]
Then use this function in the list comprehension. This approach is valid under two conditions, one theoretical and one practical. The first is that the function f should be deterministic, i.e. return the same results given the same input; the other is that the object x can be used as a dictionary key. If the first does not hold, then you should recompute f each time by definition, while if the second fails, it is possible to use some slightly more robust approaches.
You can find a lot of implementations of memoization around the net, and newer versions of Python include something for this too (functools.lru_cache).
On a side note, never use a lowercase L as a variable name; it is a bad habit, as it can be confused with an i or a 1 on some terminals.
EDIT:
As commented, a possible solution using a generator comprehension (to avoid creating useless duplicate temporaries) would be this expression:
[g(x, fx) for x, fx in ((x,f(x)) for x in l) if fx]
You need to weigh your choice given the computational cost of f, the number of duplicates in the original list, and the memory at your disposal. Memoization makes a space-speed tradeoff, meaning that it keeps track of each result by saving it, so if you have huge lists it can become costly on the memory front.
You should use a memoize decorator. Here is an interesting link.
Using memoization from the link and your 'code':
def memoize(f):
    """ Memoization decorator for functions taking one or more arguments. """
    class memodict(dict):
        def __init__(self, f):
            self.f = f
        def __call__(self, *args):
            return self[args]
        def __missing__(self, key):
            ret = self[key] = self.f(*key)
            return ret
    return memodict(f)
@memoize
def f(x):
    # your code

[f(x) for x in l if f(x)]
[y for y in [f(x) for x in l] if y]
For your updated problem, this might be useful:
[g(x,y) for x in l for y in [f(x)] if y]
Nope. There's no (clean) way to do this. There's nothing wrong with a good-old-fashioned loop:
output = []
for x in l:
    result = f(x)
    if result:
        output.append(result)
If you find that hard to read, you can always wrap it in a function.
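For example, a minimal sketch of wrapping it (the helper name is made up):
def keep_truthy_results(func, values):
    # apply func to each value and keep only the truthy results
    output = []
    for x in values:
        result = func(x)
        if result:
            output.append(result)
    return output

# usage: output = keep_truthy_results(f, l)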
As the previous answers have shown, you can use a double comprehension or use memoization. For reasonably-sized problems it's a matter of taste (and I agree that memoization looks cleaner, since it hides the optimization). But if you're examining a very large list, there's a huge difference: Memoization will store every single value you've calculated, and can quickly blow out your memory. A double comprehension with a generator (round parens, not square brackets) only stores what you want to keep.
To come to your actual problem:
[g(x, f(x)) for x in series if f(x)]
To calculate the final value you need both x and f(x). No problem, pass them both like this:
[g(x, y) for (x, y) in ( (x, f(x)) for x in series ) if y ]
Again: this should be using a generator (round parens), not a list comprehension (square brackets). Otherwise you will build the whole list before you start filtering the results. This is the list comprehension version:
[g(x, y) for (x, y) in [ (x, f(x)) for x in series ] if y ] # DO NOT USE THIS
There have been a lot of answers regarding memoizing. The Python 3 standard library now has lru_cache, a Least Recently Used cache. So you can:
from functools import lru_cache
@lru_cache()
def f(x):
    # function body here
This way your function will only be called once per distinct argument. You can also specify the size of the lru_cache; by default this is 128. The problem with the memoize decorators shown above is that the size of the cache can grow well out of hand.
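A sketch of how that fits the original comprehension (the body of f is made up; maxsize=None lets the cache hold every result):
from functools import lru_cache

@lru_cache(maxsize=None)   # or lru_cache() for the default size of 128
def f(x):
    return [x] if x % 2 else []   # hypothetical expensive body

l = [1, 2, 3, 4]
result = [f(x) for x in l if f(x)]   # the second f(x) per element is a cache hit
print(result)                        # [[1], [3]]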
You can use memoization. It is a technique which is used in order to avoid doing the same computation twice by saving somewhere the result for each calculated value.
I saw that there is already an answer that uses memoization, but I would like to propose a generic implementation, using python decorators:
def memoize(func):
    def wrapper(*args):
        if args in wrapper.d:
            return wrapper.d[args]
        ret_val = func(*args)
        wrapper.d[args] = ret_val
        return ret_val
    wrapper.d = {}
    return wrapper

@memoize
def f(x):
    ...
Now f is a memoized version of itself.
With this implementation you can memoize any function using the @memoize decorator.
Use map() !!
comp = [x for x in map(f, l) if x]
f is the function f(x) and l is the list.
map() will return the result of f(x) for each x in the list.
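For instance, with a hypothetical f whose empty-list result counts as falsy:
def f(x):
    return [x, x] if x > 2 else []   # made-up expensive function

l = [1, 2, 3, 4]
comp = [x for x in map(f, l) if x]
print(comp)   # [[3, 3], [4, 4]]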
Here is my solution:
filter(None, [f(x) for x in l])
How about defining:
def truths(L):
"""Return the elements of L that test true"""
return [x for x in L if x]
So that, for example
> [wife.children for wife in henry8.wives]
[[Mary1], [Elizabeth1], [Edward6], [], [], []]
> truths(wife.children for wife in henry8.wives)
[[Mary1], [Elizabeth1], [Edward6]]