I have some code like:
good = [x for x in mylist if x in goodvals]
bad = [x for x in mylist if x not in goodvals]
The goal is to split up the contents of mylist into two other lists, based on whether or not they meet a condition.
How can I do this more elegantly? Can I avoid doing two separate iterations over mylist? Can I improve performance by doing so?
Iterate manually, using the condition to select a list to which each element will be appended:
good, bad = [], []
for x in mylist:
    (bad, good)[x in goodvals].append(x)
good = [x for x in mylist if x in goodvals]
bad = [x for x in mylist if x not in goodvals]
How can I do this more elegantly?
That code is already perfectly elegant.
There might be slight performance improvements using sets, but the difference is trivial. Set-based approaches will also discard duplicates and will not preserve the order of elements. I find the list comprehension far easier to read, too.
In fact, we could even more simply just use a for loop:
good, bad = [], []
for x in mylist:
    if x in goodvals:
        good.append(x)
    else:
        bad.append(x)
This approach makes it easier to add additional logic. For example, the code is easily modified to discard None values:
good, bad = [], []
for x in mylist:
    if x is None:
        continue
    if x in goodvals:
        good.append(x)
    else:
        bad.append(x)
Here's the lazy iterator approach:
from itertools import tee
def split_on_condition(seq, condition):
    l1, l2 = tee((condition(item), item) for item in seq)
    return (i for p, i in l1 if p), (i for p, i in l2 if not p)
It evaluates the condition once per item and returns two generators: the first yields the values from the sequence for which the condition is true, the second those for which it is false.
Because it's lazy you can use it on any iterator, even an infinite one:
from itertools import count, islice
def is_prime(n):
    return n > 1 and all(n % i for i in range(2, n))
primes, not_primes = split_on_condition(count(), is_prime)
print("First 10 primes", list(islice(primes, 10)))
print("First 10 non-primes", list(islice(not_primes, 10)))
Usually though the non-lazy list returning approach is better:
def split_on_condition(seq, condition):
    a, b = [], []
    for item in seq:
        (a if condition(item) else b).append(item)
    return a, b
Edit: For your more specific use case of splitting items into different lists by some key, here's a generic function that does that:
DROP_VALUE = lambda _: _

def split_by_key(seq, resultmapping, keyfunc, default=DROP_VALUE):
    """Split a sequence into lists based on a key function.

    seq - input sequence
    resultmapping - a dictionary that maps from target lists to keys that go to that list
    keyfunc - function to calculate the key of an input value
    default - the target where items that don't have a corresponding key go, by default they are dropped
    """
    result_lists = dict((key, []) for key in resultmapping)
    appenders = dict((key, result_lists[target].append) for target, keys in resultmapping.items() for key in keys)

    if default is not DROP_VALUE:
        result_lists.setdefault(default, [])
        default_action = result_lists[default].append
    else:
        default_action = DROP_VALUE

    for item in seq:
        appenders.get(keyfunc(item), default_action)(item)

    return result_lists
Usage:
def file_extension(f):
    return f[2].lower()
split_files = split_by_key(files, {'images': IMAGE_TYPES}, keyfunc=file_extension, default='anims')
print split_files['images']
print split_files['anims']
The problem with all of the proposed solutions is that they scan the list and apply the filtering function twice. I'd make a simple small function like this:
def split_into_two_lists(lst, f):
    a = []
    b = []
    for elem in lst:
        if f(elem):
            a.append(elem)
        else:
            b.append(elem)
    return a, b
That way you are not processing anything twice and also are not repeating code.
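For instance (a small usage sketch of the function above; the even/odd predicate is just an illustration, not from the original post):

evens, odds = split_into_two_lists(range(10), lambda n: n % 2 == 0)
# evens -> [0, 2, 4, 6, 8]
# odds  -> [1, 3, 5, 7, 9]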
My take on it. I propose a lazy, single-pass partition function,
which preserves relative order in the output subsequences.
1. Requirements
I assume that the requirements are:
maintain elements' relative order (hence, no sets and dictionaries)
evaluate the condition only once for every element (hence not using (i)filter or groupby)
allow for lazy consumption of either sequence (if we can afford to precompute them, then the naïve implementation is likely to be acceptable too)
2. split library
My partition function (introduced below) and other similar functions
have made it into a small library:
python-split
It's installable normally via PyPI:
pip install --user split
To split a list based on a condition, use the partition function:
>>> from split import partition
>>> files = [ ('file1.jpg', 33L, '.jpg'), ('file2.avi', 999L, '.avi') ]
>>> image_types = ('.jpg','.jpeg','.gif','.bmp','.png')
>>> images, other = partition(lambda f: f[-1] in image_types, files)
>>> list(images)
[('file1.jpg', 33L, '.jpg')]
>>> list(other)
[('file2.avi', 999L, '.avi')]
3. partition function explained
Internally we need to build two subsequences at once, so consuming
only one output sequence will force the other one to be computed
too. And we need to keep state between user requests (store processed
but not yet requested elements). To keep state, I use two double-ended
queues (deques):
from collections import deque
SplitSeq class takes care of the housekeeping:
class SplitSeq:
    def __init__(self, condition, sequence):
        self.cond = condition
        self.goods = deque([])
        self.bads = deque([])
        self.seq = iter(sequence)
Magic happens in its .getNext() method. It is almost like the .next() of an iterator, but allows us to specify which kind of element we want this time. Behind the scenes it doesn't discard the rejected elements; instead it puts them in one of the two queues:
    def getNext(self, getGood=True):
        if getGood:
            these, those, cond = self.goods, self.bads, self.cond
        else:
            these, those, cond = self.bads, self.goods, lambda x: not self.cond(x)
        if these:
            return these.popleft()
        else:
            while 1:  # exit on StopIteration
                n = self.seq.next()
                if cond(n):
                    return n
                else:
                    those.append(n)
The end user is supposed to use the partition function. It takes a
condition function and a sequence (just like map or filter), and
returns two generators. The first generator builds a subsequence of
elements for which the condition holds, the second one builds the
complementary subsequence. Iterators and generators allow for lazy
splitting of even long or infinite sequences.
def partition(condition, sequence):
    cond = condition if condition else bool  # evaluate as bool if condition == None
    ss = SplitSeq(cond, sequence)
    def goods():
        while 1:
            yield ss.getNext(getGood=True)
    def bads():
        while 1:
            yield ss.getNext(getGood=False)
    return goods(), bads()
I chose the test function to be the first argument to facilitate
partial application in the future (similar to how map and filter
have the test function as the first argument).
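As a small illustration of that point (my own sketch, not part of the original answer), functools.partial can pre-bind the condition once it sits in the first position:

from functools import partial

split_odd_even = partial(partition, lambda x: x % 2)  # condition bound up front
odds, evens = split_odd_even(range(10))               # two lazy generators, as before
# next(odds), next(odds)   -> 1, 3
# next(evens), next(evens) -> 0, 2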
I basically like Anders' approach as it is very general. Here's a version that puts the categorizer first (to match filter syntax) and uses a defaultdict (assumed imported).
def categorize(func, seq):
"""Return mapping from categories to lists
of categorized items.
"""
d = defaultdict(list)
for item in seq:
d[func(item)].append(item)
return d
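A quick usage sketch (my own; the even/odd key is only an illustration):

from collections import defaultdict

d = categorize(lambda x: x % 2 == 0, [1, 2, 3, 4, 5])
# d[True]  -> [2, 4]
# d[False] -> [1, 3, 5]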
First go (pre-OP-edit): Use sets:
mylist = [1,2,3,4,5,6,7]
goodvals = [1,3,7,8,9]
myset = set(mylist)
goodset = set(goodvals)
print list(myset.intersection(goodset)) # [1, 3, 7]
print list(myset.difference(goodset)) # [2, 4, 5, 6]
That's good for both readability (IMHO) and performance.
Second go (post-OP-edit):
Create your list of good extensions as a set:
IMAGE_TYPES = set(['.jpg','.jpeg','.gif','.bmp','.png'])
and that will increase performance. Otherwise, what you have looks fine to me.
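For example (my sketch of the OP's comprehensions with that change; membership tests against a set are O(1) on average, versus O(n) for a list or tuple):

IMAGE_TYPES = set(['.jpg', '.jpeg', '.gif', '.bmp', '.png'])
images = [f for f in files if f[2].lower() in IMAGE_TYPES]
anims = [f for f in files if f[2].lower() not in IMAGE_TYPES]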
itertools.groupby almost does what you want, except it requires the items to be sorted to ensure that you get a single contiguous range, so you need to sort by your key first (otherwise you'll get multiple interleaved groups for each type). E.g.
def is_good(f):
    return f[2].lower() in IMAGE_TYPES

files = [ ('file1.jpg', 33L, '.jpg'), ('file2.avi', 999L, '.avi'), ('file3.gif', 123L, '.gif')]

for key, group in itertools.groupby(sorted(files, key=is_good), key=is_good):
    print key, list(group)
gives:
False [('file2.avi', 999L, '.avi')]
True [('file1.jpg', 33L, '.jpg'), ('file3.gif', 123L, '.gif')]
Similar to the other solutions, the key func can be defined to divide into any number of groups you want.
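For instance (my own sketch, not from the original answer), sorting and grouping on the extension itself gives one group per file type; the helper ext is hypothetical:

from itertools import groupby

def ext(f):
    return f[2].lower()

by_type = {key: list(group) for key, group in groupby(sorted(files, key=ext), key=ext)}
# {'.avi': [('file2.avi', 999L, '.avi')], '.gif': [...], '.jpg': [...]}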
Elegant and Fast
Inspired by DanSalmo's comment, here is a solution that is concise, elegant, and at the same time is one of the fastest solutions.
good_set = set(goodvals)
good, bad = [], []
for item in my_list:
    good.append(item) if item in good_set else bad.append(item)
Tip: Turning goodvals into a set gives us an easy speed boost.
Fastest
For maximum speed, we take the fastest answer and turbocharge it by turning good_list into a set. That alone gives us a 40%+ speed boost, and we end up with a solution that is more than 5.5x as fast as the slowest solution, even while it remains readable.
good_list_set = set(good_list) # 40%+ faster than a tuple.
good, bad = [], []
for item in my_origin_list:
    if item in good_list_set:
        good.append(item)
    else:
        bad.append(item)
A little shorter
This is a more concise version of the previous answer.
good_list_set = set(good_list) # 40%+ faster than a tuple.
good, bad = [], []
for item in my_origin_list:
    out = good if item in good_list_set else bad
    out.append(item)
Elegance can be somewhat subjective, but some of the Rube Goldberg style solutions that are cute and ingenious are quite concerning and should not be used in production code in any language, let alone Python, which is elegant at heart.
Benchmark results:
filter_BJHomer 80/s -- -3265% -5312% -5900% -6262% -7273% -7363% -8051% -8162% -8244%
zip_Funky 118/s 4848% -- -3040% -3913% -4450% -5951% -6085% -7106% -7271% -7393%
two_lst_tuple_JohnLaRoy 170/s 11332% 4367% -- -1254% -2026% -4182% -4375% -5842% -6079% -6254%
if_else_DBR 195/s 14392% 6428% 1434% -- -882% -3348% -3568% -5246% -5516% -5717%
two_lst_compr_Parand 213/s 16750% 8016% 2540% 967% -- -2705% -2946% -4786% -5083% -5303%
if_else_1_line_DanSalmo 292/s 26668% 14696% 7189% 5033% 3707% -- -331% -2853% -3260% -3562%
tuple_if_else 302/s 27923% 15542% 7778% 5548% 4177% 343% -- -2609% -3029% -3341%
set_1_line 409/s 41308% 24556% 14053% 11035% 9181% 3993% 3529% -- -569% -991%
set_shorter 434/s 44401% 26640% 15503% 12303% 10337% 4836% 4345% 603% -- -448%
set_if_else 454/s 46952% 28358% 16699% 13349% 11290% 5532% 5018% 1100% 469% --
The full benchmark code for Python 3.7 (modified from FunkySayu):
good_list = ['.jpg','.jpeg','.gif','.bmp','.png']
import random
import string
my_origin_list = []
for i in range(10000):
    fname = ''.join(random.choice(string.ascii_lowercase) for i in range(random.randrange(10)))
    if random.getrandbits(1):
        fext = random.choice(list(good_list))
    else:
        fext = "." + ''.join(random.choice(string.ascii_lowercase) for i in range(3))
    my_origin_list.append((fname + fext, random.randrange(1000), fext))
# Parand
def two_lst_compr_Parand(*_):
    return [e for e in my_origin_list if e[2] in good_list], [e for e in my_origin_list if not e[2] in good_list]
# dbr
def if_else_DBR(*_):
    a, b = list(), list()
    for e in my_origin_list:
        if e[2] in good_list:
            a.append(e)
        else:
            b.append(e)
    return a, b
# John La Rooy
def two_lst_tuple_JohnLaRoy(*_):
    a, b = list(), list()
    for e in my_origin_list:
        (b, a)[e[2] in good_list].append(e)
    return a, b
# # Ants Aasma
# def f4():
# l1, l2 = tee((e[2] in good_list, e) for e in my_origin_list)
# return [i for p, i in l1 if p], [i for p, i in l2 if not p]
# My personal way to do
def zip_Funky(*_):
    a, b = zip(*[(e, None) if e[2] in good_list else (None, e) for e in my_origin_list])
    return list(filter(None, a)), list(filter(None, b))
# BJ Homer
def filter_BJHomer(*_):
    return list(filter(lambda e: e[2] in good_list, my_origin_list)), list(filter(lambda e: not e[2] in good_list, my_origin_list))
# ChaimG's answer; as a list.
def if_else_1_line_DanSalmo(*_):
    good, bad = [], []
    for e in my_origin_list:
        _ = good.append(e) if e[2] in good_list else bad.append(e)
    return good, bad
# ChaimG's answer; as a set.
def set_1_line(*_):
    good_list_set = set(good_list)
    good, bad = [], []
    for e in my_origin_list:
        _ = good.append(e) if e[2] in good_list_set else bad.append(e)
    return good, bad
# ChaimG set and if else list.
def set_shorter(*_):
    good_list_set = set(good_list)
    good, bad = [], []
    for e in my_origin_list:
        out = good if e[2] in good_list_set else bad
        out.append(e)
    return good, bad
# ChaimG's best answer; if else as a set.
def set_if_else(*_):
    good_list_set = set(good_list)
    good, bad = [], []
    for e in my_origin_list:
        if e[2] in good_list_set:
            good.append(e)
        else:
            bad.append(e)
    return good, bad
# ChaimG's answer; if else as a tuple.
def tuple_if_else(*_):
    good_list_tuple = tuple(good_list)
    good, bad = [], []
    for e in my_origin_list:
        if e[2] in good_list_tuple:
            good.append(e)
        else:
            bad.append(e)
    return good, bad
def cmpthese(n=0, functions=None):
    results = {}
    for func_name in functions:
        args = ['%s(range(256))' % func_name, 'from __main__ import %s' % func_name]
        t = Timer(*args)
        results[func_name] = 1 / (t.timeit(number=n) / n)  # passes/sec

    functions_sorted = sorted(functions, key=results.__getitem__)
    for f in functions_sorted:
        diff = []
        for func in functions_sorted:
            if func == f:
                diff.append("--")
            else:
                diff.append(f"{results[f]/results[func]*100 - 100:5.0%}")
        diffs = " ".join(f'{x:>8s}' for x in diff)
        print(f"{f:27s} \t{results[f]:,.0f}/s {diffs}")
if __name__=='__main__':
    from timeit import Timer

    cmpthese(1000, 'two_lst_compr_Parand if_else_DBR two_lst_tuple_JohnLaRoy zip_Funky filter_BJHomer if_else_1_line_DanSalmo set_1_line set_if_else tuple_if_else set_shorter'.split(" "))
good.append(x) if x in goodvals else bad.append(x)
This elegant and concise answer by #dansalmo showed up buried in the comments, so I'm just reposting it here as an answer so it can get the prominence it deserves, especially for new readers.
Complete example:
good, bad = [], []
for x in my_list:
    good.append(x) if x in goodvals else bad.append(x)
bad = []
good = [x for x in mylist if x in goodvals or bad.append(x)]
append returns None, so it works.
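A tiny worked example of the same trick (mine, with an even/odd test standing in for the goodvals lookup):

odds = []
evens = [n for n in range(10) if n % 2 == 0 or odds.append(n)]
# evens -> [0, 2, 4, 6, 8]
# odds  -> [1, 3, 5, 7, 9]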
Personally, I like the version you cited, assuming you already have a list of goodvals hanging around. If not, something like:
good = filter(lambda x: is_good(x), mylist)
bad = filter(lambda x: not is_good(x), mylist)
Of course, that's really very similar to using a list comprehension like you originally did, but with a function instead of a lookup:
good = [x for x in mylist if is_good(x)]
bad = [x for x in mylist if not is_good(x)]
In general, I find the aesthetics of list comprehensions to be very pleasing. Of course, if you don't actually need to preserve ordering and don't need duplicates, using the intersection and difference methods on sets would work well too.
If you want to make it in FP style:
good, bad = [sum(x, []) for x in zip(*(([y], []) if y in goodvals else ([], [y]) for y in mylist))]
Not the most readable solution, but at least iterates through mylist only once.
Sometimes, it looks like list comprehension is not the best thing to use!
I made a little test based on the answers people gave in this topic, run on a randomly generated list. Here is the generation of the list (there's probably a better way to do it, but that's not the point):
good_list = ('.jpg','.jpeg','.gif','.bmp','.png')
import random
import string
my_origin_list = []
for i in xrange(10000):
    fname = ''.join(random.choice(string.lowercase) for i in range(random.randrange(10)))
    if random.getrandbits(1):
        fext = random.choice(good_list)
    else:
        fext = "." + ''.join(random.choice(string.lowercase) for i in range(3))
    my_origin_list.append((fname + fext, random.randrange(1000), fext))
And here we go
from itertools import tee  # needed for f4 below

# Parand
def f1():
    return [e for e in my_origin_list if e[2] in good_list], [e for e in my_origin_list if not e[2] in good_list]

# dbr
def f2():
    a, b = list(), list()
    for e in my_origin_list:
        if e[2] in good_list:
            a.append(e)
        else:
            b.append(e)
    return a, b

# John La Rooy
def f3():
    a, b = list(), list()
    for e in my_origin_list:
        (b, a)[e[2] in good_list].append(e)
    return a, b

# Ants Aasma
def f4():
    l1, l2 = tee((e[2] in good_list, e) for e in my_origin_list)
    return [i for p, i in l1 if p], [i for p, i in l2 if not p]

# My personal way to do
def f5():
    a, b = zip(*[(e, None) if e[2] in good_list else (None, e) for e in my_origin_list])
    return list(filter(None, a)), list(filter(None, b))

# BJ Homer
def f6():
    return filter(lambda e: e[2] in good_list, my_origin_list), filter(lambda e: not e[2] in good_list, my_origin_list)
Using the cmpthese function, the best result is the dbr answer:
f1 204/s -- -5% -14% -15% -20% -26%
f6 215/s 6% -- -9% -11% -16% -22%
f3 237/s 16% 10% -- -2% -7% -14%
f4 240/s 18% 12% 2% -- -6% -13%
f5 255/s 25% 18% 8% 6% -- -8%
f2 277/s 36% 29% 17% 15% 9% --
from itertools import filterfalse, tee

def partition(pred, iterable):
    'Use a predicate to partition entries into false entries and true entries'
    # partition(is_odd, range(10)) --> 0 2 4 6 8 and 1 3 5 7 9
    t1, t2 = tee(iterable)
    return filterfalse(pred, t1), filter(pred, t2)
Check this
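A usage sketch for the partition recipe above (my addition): the false entries come first, and both results are lazy iterators.

falses, trues = partition(lambda x: x % 2, range(10))
# list(falses) -> [0, 2, 4, 6, 8]
# list(trues)  -> [1, 3, 5, 7, 9]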
I think a generalization of splitting an iterable based on N conditions is handy:
from collections import OrderedDict
def partition(iterable, *conditions):
    '''Returns a list with the elements that satisfy each condition.
    Conditions are assumed to be exclusive'''
    d = OrderedDict((i, list()) for i in range(len(conditions)))
    for e in iterable:
        for i, condition in enumerate(conditions):
            if condition(e):
                d[i].append(e)
                break
    return d.values()
For instance:
ints, floats, other = partition([2, 3.14, 1, 1.69, [], None],
                                lambda x: isinstance(x, int),
                                lambda x: isinstance(x, float),
                                lambda x: True)

print " ints: {}\n floats:{}\n other:{}".format(ints, floats, other)
ints: [2, 1]
floats:[3.14, 1.69]
other:[[], None]
If the element may satisfy multiple conditions, remove the break.
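A sketch of that non-exclusive variant (my own, reusing the OrderedDict import above; the name partition_all is hypothetical):

def partition_all(iterable, *conditions):
    '''Like partition above, but an element is appended to every bucket whose condition it satisfies.'''
    d = OrderedDict((i, list()) for i in range(len(conditions)))
    for e in iterable:
        for i, condition in enumerate(conditions):
            if condition(e):
                d[i].append(e)  # no break: keep checking the remaining conditions
    return d.values()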
Yet another solution to this problem. I needed a solution that is as fast as possible. That means only one iteration over the list and preferably O(1) for adding data to one of the resulting lists. This is very similar to the solution provided by sastanin, except much shorter:
from collections import deque
def split(iterable, function):
    dq_true = deque()
    dq_false = deque()

    # deque - the fastest way to consume an iterator and append items
    deque((
        (dq_true if function(item) else dq_false).append(item) for item in iterable
    ), maxlen=0)

    return dq_true, dq_false
Then, you can use the function in the following way:
lower, higher = split([0,1,2,3,4,5,6,7,8,9], lambda x: x < 5)
selected, other = split([0,1,2,3,4,5,6,7,8,9], lambda x: x in {0,4,9})
If you're not fine with the resulting deque objects, you can easily convert them to list, set, or whatever you like (for example list(lower)). The conversion is much faster than constructing the lists directly.
This method keeps the order of the items, as well as any duplicates.
If you don't mind using an external library, there are two I know of that natively implement this operation:
>>> files = [ ('file1.jpg', 33, '.jpg'), ('file2.avi', 999, '.avi')]
>>> IMAGE_TYPES = ('.jpg','.jpeg','.gif','.bmp','.png')
iteration_utilities.partition:
>>> from iteration_utilities import partition
>>> notimages, images = partition(files, lambda x: x[2].lower() in IMAGE_TYPES)
>>> notimages
[('file2.avi', 999, '.avi')]
>>> images
[('file1.jpg', 33, '.jpg')]
more_itertools.partition
>>> from more_itertools import partition
>>> notimages, images = partition(lambda x: x[2].lower() in IMAGE_TYPES, files)
>>> list(notimages) # returns a generator so you need to explicitly convert to list.
[('file2.avi', 999, '.avi')]
>>> list(images)
[('file1.jpg', 33, '.jpg')]
For example, splitting a list into even and odd numbers:
from functools import reduce  # reduce is a built-in on Python 2; the import is needed on Python 3

arr = range(20)
even, odd = reduce(lambda res, next: res[next % 2].append(next) or res, arr, ([], []))
Or in general:
def split(predicate, iterable):
    return reduce(lambda res, e: res[predicate(e)].append(e) or res, iterable, ([], []))
Advantages:
Shortest possible way
The predicate is applied only once to each element
Disadvantages:
Requires knowledge of the functional programming paradigm
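To make the trick concrete (my illustration): bool subclasses int, so res[predicate(e)] picks res[0] when the predicate is false and res[1] when it is true, and since list.append returns None, the trailing or res always hands the accumulator on to the next step.

falses, trues = split(lambda x: x % 2 == 1, range(10))
# falses -> [0, 2, 4, 6, 8]   (predicate False -> res[0])
# trues  -> [1, 3, 5, 7, 9]   (predicate True  -> res[1])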
Inspired by #gnibbler's great (but terse!) answer, we can apply that approach to map to multiple partitions:
from collections import defaultdict
def splitter(l, mapper):
"""Split an iterable into multiple partitions generated by a callable mapper."""
results = defaultdict(list)
for x in l:
results[mapper(x)] += [x]
return results
splitter can then be used as follows:
>>> l = [1, 2, 3, 4, 2, 3, 4, 5, 6, 4, 3, 2, 3]
>>> split = splitter(l, lambda x: x % 2 == 0) # partition l into odds and evens
>>> split.items()
[(False, [1, 3, 3, 5, 3, 3]), (True, [2, 4, 2, 4, 6, 4, 2])]
This works for more than two partitions with a more complicated mapping (and on iterators, too):
>>> import math
>>> l = xrange(1, 23)
>>> split = splitter(l, lambda x: int(math.log10(x) * 5))
>>> split.items()
[(0, [1]),
(1, [2]),
(2, [3]),
(3, [4, 5, 6]),
(4, [7, 8, 9]),
(5, [10, 11, 12, 13, 14, 15]),
(6, [16, 17, 18, 19, 20, 21, 22])]
Or using a dictionary to map:
>>> map = {'A': 1, 'X': 2, 'B': 3, 'Y': 1, 'C': 2, 'Z': 3}
>>> l = ['A', 'B', 'C', 'C', 'X', 'Y', 'Z', 'A', 'Z']
>>> split = splitter(l, map.get)
>>> split.items()
[(1, ['A', 'Y', 'A']), (2, ['C', 'C', 'X']), (3, ['B', 'Z', 'Z'])]
solution
from itertools import tee
def unpack_args(fn):
    return lambda t: fn(*t)

def separate(fn, lx):
    return map(
        unpack_args(
            lambda i, ly: filter(
                lambda el: bool(i) == fn(el),
                ly)),
        enumerate(tee(lx, 2)))
test
[even, odd] = separate(
    lambda x: bool(x % 2),
    [1, 2, 3, 4, 5])
print(list(even) == [2, 4])
print(list(odd) == [1, 3, 5])
If the list is made of groups and intermittent separators, you can use:
def split(items, p):
    groups = [[]]
    for i in items:
        if p(i):
            groups.append([])
        groups[-1].append(i)
    return groups
Usage:
split(range(1,11), lambda x: x % 3 == 0)
# gives [[1, 2], [3, 4, 5], [6, 7, 8], [9, 10]]
Use Boolean logic to assign data to two arrays
>>> images, anims = [[i for i in files if t ^ (i[2].lower() in IMAGE_TYPES) ] for t in (0, 1)]
>>> images
[('file1.jpg', 33, '.jpg')]
>>> anims
[('file2.avi', 999, '.avi')]
For performance, try itertools.
The itertools module standardizes a core set of fast, memory efficient tools that are useful by themselves or in combination. Together, they form an “iterator algebra” making it possible to construct specialized tools succinctly and efficiently in pure Python.
See itertools.ifilter or imap.
itertools.ifilter(predicate, iterable)
Make an iterator that filters elements from iterable returning only those for which the predicate is True
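Note that ifilter and imap exist only in Python 2; in Python 3 the built-in filter and map are already lazy, and itertools.filterfalse covers the negated side. A rough equivalent there (my sketch):

from itertools import filterfalse

images = filter(lambda f: f[2].lower() in IMAGE_TYPES, files)       # lazy iterator
others = filterfalse(lambda f: f[2].lower() in IMAGE_TYPES, files)  # lazy complement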
If you insist on clever, you could take Winden's solution and add just a bit of spurious cleverness:
def splay(l, f, d=None):
    d = d or {}
    for x in l: d.setdefault(f(x), []).append(x)
    return d
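For example (my usage sketch):

splay(range(10), lambda x: x % 3)
# -> {0: [0, 3, 6, 9], 1: [1, 4, 7], 2: [2, 5, 8]}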
Sometimes you won't need that other half of the list.
For example:
import sys
from itertools import ifilter
trustedPeople = sys.argv[1].split(',')
newName = sys.argv[2]
myFriends = ifilter(lambda x: x.startswith('Shi'), trustedPeople)
print '%s is %smy friend.' % (newName, newName not in myFriends and 'not ' or '')
Already quite a few solutions here, but yet another way of doing that would be -
anims = []
images = [f for f in files if (lambda t: True if f[2].lower() in IMAGE_TYPES else anims.append(t) and False)(f)]
Iterates over the list only once, and looks a bit more pythonic and hence readable to me.
>>> files = [ ('file1.jpg', 33L, '.jpg'), ('file2.avi', 999L, '.avi'), ('file1.bmp', 33L, '.bmp')]
>>> IMAGE_TYPES = ('.jpg','.jpeg','.gif','.bmp','.png')
>>> anims = []
>>> images = [f for f in files if (lambda t: True if f[2].lower() in IMAGE_TYPES else anims.append(t) and False)(f)]
>>> print '\n'.join([str(anims), str(images)])
[('file2.avi', 999L, '.avi')]
[('file1.jpg', 33L, '.jpg'), ('file1.bmp', 33L, '.bmp')]
>>>
I'd take a 2-pass approach, separating evaluation of the predicate from filtering the list:
def partition(pred, iterable):
    xs = list(zip(map(pred, iterable), iterable))
    return [x[1] for x in xs if x[0]], [x[1] for x in xs if not x[0]]
What's nice about this, performance-wise (in addition to evaluating pred only once on each member of iterable), is that it moves a lot of logic out of the interpreter and into highly-optimized iteration and mapping code. This can speed up iteration over long iterables, as described in this answer.
Expressivity-wise, it takes advantage of expressive idioms like comprehensions and mapping.
Not sure if this is a good approach but it can be done in this way as well
IMAGE_TYPES = ('.jpg','.jpeg','.gif','.bmp','.png')
files = [ ('file1.jpg', 33L, '.jpg'), ('file2.avi', 999L, '.avi')]
images, anims = reduce(lambda (i, a), f: (i + [f], a) if f[2] in IMAGE_TYPES else (i, a + [f]), files, ([], []))
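Note that the lambda (i, a), f: ... form relies on tuple parameter unpacking, which Python 3 removed (PEP 3113); a rough Python 3 rewrite (my sketch) indexes the accumulator instead and imports reduce from functools:

from functools import reduce

images, anims = reduce(
    lambda acc, f: (acc[0] + [f], acc[1]) if f[2] in IMAGE_TYPES else (acc[0], acc[1] + [f]),
    files,
    ([], []))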
Yet another answer, short but "evil" (for list-comprehension side effects).
digits = list(range(10))
odd = [digits.pop(i) for i, x in enumerate(digits) if x % 2]
>>> odd
[1, 3, 5, 7, 9]
>>> digits
[0, 2, 4, 6, 8]
Related
So I wrote this code to print out how many times each number appears in the list that I provided. The output works, but I want to put all the values that I get into a single list. How can I do that?
This is my code...
i = [5,5,7,9,9,9,9,9,8,8]
def num_list(i):
    return [(i.count(x), x) for x in set(i)]

for tv in num_list(i):
    if tv[1] > 1:
        print(tv)
The output that I get is
(2, 8)
(5, 9)
(2, 5)
(1, 7)
but I want the output to be like
[2,8,5,9,2,5,1,7]
How can I do that??
Just do:
tvlist = []
for tv in num_list(i):
    if tv[1] > 1:
        tvlist.extend(tv)
print(tvlist)
Or a list comprehension:
tvlist = [x for tv in num_list(i) if tv[1] > 1 for x in tv]
Also your function could just simply be collections.Counter:
from collections import Counter
def num_list(i):
    return Counter(i).items()
import itertools

flattened_iter = itertools.chain.from_iterable(num_list(i))
print(list(flattened_iter))
is how I would flatten a list.
As mentioned by everyone else, collections.Counter is likely to give significantly better performance for large lists.
If you would rather implement it yourself, you can do so pretty easily:
def myCounter(a_list):
    counter = {}
    for item in a_list:
        # in modern python versions order is preserved in dicts
        counter[item] = counter.get(item, 0) + 1
    for unique_item in counter:
        # make it a generator just for ease
        # we will just yield twice to create a flat list
        yield counter[unique_item]
        yield unique_item
i = [5,5,7,9,9,9,9,9,8,8]
print(list(myCounter(i)))
Using a collections.Counter is more efficient. This paired with itertools.chain will get you your desired result:
from collections import Counter
from itertools import chain
i = [5,5,7,9,9,9,9,9,8,8]
r = list(chain(*((v, k) for k, v in Counter(i).items() if v > 1)))
print(r)
[2, 5, 5, 9, 2, 8]
Without itertools.chain
r = []
for k, v in Counter(i).items():
    if v > 1:
        r.extend((v, k))
I have a string and a list:
seq = '01202112'
l = [(0,1,0),(1,1,0)]
I would like a pythonic way of replacing each '2' with the value at the corresponding index in the list l such that I obtain two new strings:
list_seq = ['01001110', '01101110']
By using .replace(), I could iterate through l, but I wondered is there a more pythonic way to get list_seq?
I might do something like this:
out = [''.join(c if c != '2' else str(next(f, c)) for c in seq) for f in map(iter, l)]
The basic idea is that we call iter to turn the tuples in l into iterators. At that point every time we call next on them, we get the next element we need to use instead of the '2'.
If this is too compact, the logic might be easier to read as a function:
def replace(seq, to_replace, fill):
    fill = iter(fill)
    for element in seq:
        if element != to_replace:
            yield element
        else:
            yield next(fill, element)
giving
In [32]: list(replace([1,2,3,2,2,3,1,2,4,2], to_replace=2, fill="apple"))
Out[32]: [1, 'a', 3, 'p', 'p', 3, 1, 'l', 4, 'e']
Thanks to #DanD in the comments for noting that I had assumed I'd always have enough characters to fill from! We'll follow his suggestion to keep the original characters if we run out, but modifying this approach to behave differently is straightforward and left as an exercise for the reader. :-)
[''.join([str(next(digit, 0)) if x == '2' else x for x in seq])
 for digit in map(iter, l)]
I don't know if this solution is 'more pythonic' but:
def my_replace(s, c=None, *other):
    return s if c is None else my_replace(s.replace('2', str(c), 1), *other)
seq = '01202112'
l = [(0,1,0),(1,1,0)]
list_req = [my_replace(seq, *x) for x in l]
seq = '01202112'
li = [(0,1,0),(1,1,0)]
def grunch(s, tu):
    it = map(str, tu)
    return ''.join(next(it) if c == '2' else c for c in s)
list_seq = [grunch(seq,tu) for tu in li]
I know this is probably an easy answer but I can't figure it out. What is the best way in Python to keep the duplicates in a list:
x = [1,2,2,2,3,4,5,6,6,7]
The output should be:
[2,6]
I found this link: Find (and keep) duplicates of sublist in python, but I'm still relatively new to Python and I can't get it to work for a simple list.
I'd use a collections.Counter:
from collections import Counter
x = [1, 2, 2, 2, 3, 4, 5, 6, 6, 7]
counts = Counter(x)
output = [value for value, count in counts.items() if count > 1]
Here's another version which keeps the order of when each item was first duplicated. It only assumes that the sequence passed in contains hashable items, and it works back to whenever set and yield were introduced to the language (whenever that was).
def keep_dupes(iterable):
    seen = set()
    dupes = set()
    for x in iterable:
        if x in seen and x not in dupes:
            yield x
            dupes.add(x)
        else:
            seen.add(x)
print list(keep_dupes([1,2,2,2,3,4,5,6,6,7]))
This is a short way to do it if the list is sorted already:
x = [1,2,2,2,3,4,5,6,6,7]
from itertools import groupby
print [key for key,group in groupby(x) if len(list(group)) > 1]
List Comprehension in combination with set() will do exactly what you want.
list(set([i for i in x if x.count(i) >= 2]))
[2, 6]
keepin' it simple:
array2 = []
aux = 0
aux2 = 0
for i in x:
    aux2 = i
    if aux2 == aux:
        array2.append(i)
    aux = i

list(set(array2))
That should work
Not efficient but just to get the output, you could try:
import numpy as np
def check_for_repeat(check_list):
    repeated_list = []
    for idx in range(len(check_list)):
        elem = check_list[idx]
        check_list[idx] = None
        if elem in check_list:
            repeated_list.append(elem)
    repeated_list = np.array(repeated_list)
    return list(np.unique(repeated_list))
I'm doing this but it feels this can be achieved with much less code. It is Python after all. Starting with a list, I split that list into subsets based on a string prefix.
# Splitting a list into subsets
# expected outcome:
# [['sub_0_a', 'sub_0_b'], ['sub_1_a', 'sub_1_b']]
mylist = ['sub_0_a', 'sub_0_b', 'sub_1_a', 'sub_1_b']
def func(l, newlist=[], index=0):
    newlist.append([i for i in l if i.startswith('sub_%s' % index)])
    # create a new list without the items in newlist
    l = [i for i in l if i not in newlist[index]]
    if len(l):
        index += 1
        func(l, newlist, index)
func(mylist)
You could use itertools.groupby:
>>> import itertools
>>> mylist = ['sub_0_a', 'sub_0_b', 'sub_1_a', 'sub_1_b']
>>> for k,v in itertools.groupby(mylist,key=lambda x:x[:5]):
...     print k, list(v)
...
sub_0 ['sub_0_a', 'sub_0_b']
sub_1 ['sub_1_a', 'sub_1_b']
or exactly as you specified it:
>>> [list(v) for k,v in itertools.groupby(mylist,key=lambda x:x[:5])]
[['sub_0_a', 'sub_0_b'], ['sub_1_a', 'sub_1_b']]
Of course, the common caveats apply (Make sure your list is sorted with the same key you're using to group), and you might need a slightly more complicated key function for real world data...
In [28]: mylist = ['sub_0_a', 'sub_0_b', 'sub_1_a', 'sub_1_b']
In [29]: lis=[]
In [30]: for x in mylist:
   ....:     i = x.split("_")[1]
   ....:     try:
   ....:         lis[int(i)].append(x)
   ....:     except:
   ....:         lis.append([])
   ....:         lis[-1].append(x)
   ....:
In [31]: lis
Out[31]: [['sub_0_a', 'sub_0_b'], ['sub_1_a', 'sub_1_b']]
Use itertools' groupby:
from itertools import groupby

def get_field_sub(x): return x.split('_')[1]

mylist = sorted(mylist, key=get_field_sub)
[(x, list(y)) for x, y in groupby(mylist, get_field_sub)]
There must be a simpler, more pythonic way of doing this.
Given this list of pairs:
pp = [('a',1),('b',1),('c',1),('d',2),('e',2)]
How do I most easily find the first item in adjacent pairs where the second item changes (here, from 1 to 2). Thus I'm looking for ['c','d']. Assume there will only be one change in pair[1] for the entire list, but that it may be a string.
This code works but seems excruciatingly long and cumbersome.
for i, pair in enumerate(pp):
    if i == 0:
        pInitial = pair[0]
        sgInitial = pair[1]
    pNext = pair[0]
    sgNext = pair[1]
    if sgInitial == sgNext:
        sgInitial = sgNext
        pInitial = pNext
    else:
        pOne = pInitial
        pTwo = pNext
        x = [pOne, pTwo]
        print x
        break
Thanks
Tim
import itertools as it
pp = [('a',1),('b',1),('c',1),('d',2),('e',2)]
# with normal zip and slicing
for a, b in zip(pp, pp[1:]):
    if a[1] != b[1]:
        x = (a[0], b[0])
        print x
        break
# with generators and izip
iterfirst = (b for a,b in pp)
itersecond = (b for a,b in pp[1:])
iterfirstsymbol = (a for a,b in pp)
itersecondsymbol = (a for a,b in pp[1:])
iteranswer = it.izip(iterfirstsymbol, itersecondsymbol, iterfirst, itersecond)
print next((symbol1, symbol2)
           for symbol1, symbol2, first, second in iteranswer
           if first != second)
Added my readable generator version.
You could try something like:
[[pp[i][0],pp[i+1][0]] for i in xrange(len(pp)-1) if pp[i][1]!=pp[i+1][1]][0]
(using list comprehension)
try comparing pp[:-1] to pp[1:], something like
[a for a in zip(pp[:-1], pp[1:]) if a[0][1] != a[1][1]]
(look at zip(pp[:-1], pp[1:]) first to see what's going on)
edit:
I guess you'd need
([a[0][0], a[1][0]] for a in zip(pp[:-1], pp[1:]) if a[0][1] != a[1][1]).next()
>>> import itertools
>>> pp = [('a',1),('b',1),('c',1),('d',2),('e',2)]
>>> gb = itertools.groupby(pp, key=lambda x: x[1])
>>> f = lambda x: list(next(gb)[1])[x][0]
>>> f(-1), f(0)
('c', 'd')
Here is something (simple?) with recursion:
def first_diff( seq, key=lambda x:x ):
""" returns the first items a,b of `seq` with `key(a) != key(b)` """
it = iter(seq)
def test(last): # recursive function
cur = next(it)
if key(last) != key(cur):
return last, cur
else:
return test(cur)
return test(next(it))
print first_diff( pp, key=lambda x:x[1]) # (('c', 1), ('d', 2))
pp = [('a',1),('b',1),('c',1),('d',2),('e',2)]
def find_first(pp):
    for i, (a, b) in enumerate(pp):
        if i == 0: oldb = b
        else:
            if b != oldb: return i
    return None
print find_first(pp)
>>> pp = [('a',1),('b',1),('c',1),('d',2),('e',2)]
>>> [[t1, t2] for ((t1, v1), (t2, v2)) in zip(pp, pp[1:]) if v1 != v2] [0]
['c', 'd']
>>>
I like this for clarity...if you find list comprehensions clear. It does create two temporary lists: pp[1:] and the zip() result. Then it compares all the adjacent pairs and gives you the first change it found.
This similar-looking generator expression doesn't create temporary lists and stops processing when it reaches the first change:
>>> from itertools import islice, izip
>>> ([t1, t2] for ((t1, v1), (t2, v2)) in izip(pp, islice(pp, 1, None))
... if v1 != v2
... ).next()
['c', 'd']
>>>
Everybody's examples on this page are more compact than they would be if you wanted to catch errors.