I have a number of lists containing integers. I store them in a list (a list of lists) that I call biglist.
Then I have a second list, e.g. [1, 2].
Now I want to find all lists in biglist that start with the same items as the second list. The lists I want to find must have at least all the items of the second list, in the same order, at the start.
I was thinking this could be done recursively, and came up with this working example:
def find_lists_starting_with(start, biglist, depth=0):
    if not biglist:  # biglist is empty
        return biglist
    try:
        new_big_list = []
        for smallist in biglist:
            if smallist[depth] == start[depth]:
                if not len(start) > len(smallist):
                    new_big_list.append(smallist)
        new_big_list = find_lists_starting_with(start,
                                                new_big_list,
                                                depth=depth+1)
        return new_big_list
    except IndexError:
        return biglist
biglist = [[1, 2, 3], [2, 3, 4], [1, 3, 5], [1, 2], [1]]
start = [1, 2]
print(find_lists_starting_with(start, biglist))  # prints [[1, 2, 3], [1, 2]]
However, I am not very satisfied with the code.
Do you have suggestions on how to improve:
- understandability of the code
- efficiency
You can do it with a list comprehension, like so:
[x for x in big_list if x[:len(start_list)] == start_list]
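For example, with the data from the question (big_list and start_list here being your biglist and start):

big_list = [[1, 2, 3], [2, 3, 4], [1, 3, 5], [1, 2], [1]]
start_list = [1, 2]
print([x for x in big_list if x[:len(start_list)] == start_list])
# prints [[1, 2, 3], [1, 2]]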
Here's how I would write it:
def find_lists_starting_with(start, biglist):
    for small_list in biglist:
        if start == small_list[:len(start)]:
            yield small_list
This returns a generator instead, but you can call list() on its result to get a list.
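For example, a small demonstration using the data from the question:

biglist = [[1, 2, 3], [2, 3, 4], [1, 3, 5], [1, 2], [1]]
start = [1, 2]
print(list(find_lists_starting_with(start, biglist)))
# prints [[1, 2, 3], [1, 2]]

Because it is a generator, no work is done until you iterate over it, which is handy if you only need the first few matches.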
For either of the two solutions (@mortezaipo's and @francisco-couzo's) proposed so far, space efficiency could be improved with a custom startswith function that avoids constructing a new list in small_list[:len(start_list)]. For example:
def startswith(lst, start):
    if len(start) > len(lst):  # start can't be a prefix of a shorter list
        return False
    for i in range(len(start)):
        if start[i] != lst[i]:
            return False
    return True
and then
[lst for lst in big_list if startswith(lst, start_list)]
(modeled after @mortezaipo's solution).
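A quick sanity check with the question's data (the length guard added in the sketch above is what keeps the short list [1] from raising an IndexError):

big_list = [[1, 2, 3], [2, 3, 4], [1, 3, 5], [1, 2], [1]]
start_list = [1, 2]
print([lst for lst in big_list if startswith(lst, start_list)])
# prints [[1, 2, 3], [1, 2]]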
I recently looked for a way to flatten a nested Python list, like this: [[1,2,3],[4,5,6]], into this: [1,2,3,4,5,6].
Stack Overflow was helpful as ever, and I found a post with this ingenious list comprehension:
l = [[1,2,3],[4,5,6]]
flattened_l = [item for sublist in l for item in sublist]
I thought I understood how list comprehensions work, but apparently I haven't got the faintest idea. What puzzles me most is that besides the comprehension above, this also runs (although it doesn't give the same result):
exactly_the_same_as_l = [item for item in sublist for sublist in l]
Can someone explain how Python interprets these things? Based on the second comprehension, I would expect Python to interpret it back to front, but apparently that is not always the case. If it were, the first comprehension should throw an error, because sublist does not exist yet. My mind is completely warped, help!
Let's take a look at your list comprehensions then, but first let's start with list comprehension at its easiest.
l = [1,2,3,4,5]
print [x for x in l] # prints [1, 2, 3, 4, 5]
You can look at this the same as a for loop structured like so:
for x in l:
    print x
Now let's look at another one:
l = [1,2,3,4,5]
a = [x for x in l if x % 2 == 0]
print a # prints [2,4]
That is the exact same as this:
a = []
l = [1,2,3,4,5]
for x in l:
    if x % 2 == 0:
        a.append(x)
print a # prints [2,4]
Now let's take a look at the examples you provided.
l = [[1,2,3],[4,5,6]]
flattened_l = [item for sublist in l for item in sublist]
print flattened_l # prints [1,2,3,4,5,6]
For nested list comprehensions, start at the leftmost for loop and work your way in. The variable item, in this case, is what will be appended. The comprehension is equivalent to this:
l = [[1,2,3],[4,5,6]]
flattened_l = []
for sublist in l:
    for item in sublist:
        flattened_l.append(item)
Now for the last one:
exactly_the_same_as_l = [item for item in sublist for sublist in l]
Using the same knowledge we can create a for loop and see how it would behave:
for item in sublist:
    for sublist in l:
        exactly_the_same_as_l.append(item)
Now, the only reason the above one works is that when flattened_l was created, it also created sublist as a side effect; it is a scoping quirk that keeps it from throwing an error. If you ran it without defining flattened_l first, you would get a NameError. (This leaking of the comprehension variable into the enclosing scope is Python 2 behaviour; in Python 3, list comprehensions get their own scope, so the second comprehension raises a NameError regardless.)
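A minimal demonstration of that leak (again, Python 2 behaviour):

l = [[1,2,3],[4,5,6]]
flattened_l = [item for sublist in l for item in sublist]
print sublist  # prints [4, 5, 6] -- the loop variable leaked out of the comprehension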
The for loops are evaluated from left to right. Any list comprehension can be re-written as a for loop, as follows:
l = [[1,2,3],[4,5,6]]
flattened_l = []
for sublist in l:
    for item in sublist:
        flattened_l.append(item)
The above is the correct code for flattening a list, whether you choose to write it concisely as a list comprehension, or in this extended version.
The second list comprehension you wrote will raise a NameError, as 'sublist' has not yet been defined. You can see this by writing the list comprehension as a for loop:
l = [[1,2,3],[4,5,6]]
flattened_l = []
for item in sublist:
    for sublist in l:
        flattened_l.append(item)
The only reason you didn't see the error when you ran your code was because you had previously defined sublist when implementing your first list comprehension.
For more information, you may want to check out Guido's tutorial on list comprehensions.
For the lazy dev that wants a quick answer:
>>> a = [[1,2], [3,4]]
>>> [i for g in a for i in g]
[1, 2, 3, 4]
While this approach definitely works for flattening lists, I wouldn't recommend it unless your sublists are known to be very small (1 or 2 elements each).
I've done a bit of profiling with timeit and found that this takes roughly 2-3 times longer than using a single loop and calling extend…
def flatten(l):
    flattened = []
    for sublist in l:
        flattened.extend(sublist)
    return flattened
While it's not as pretty, the speedup is significant. I suppose this works so well because extend can more efficiently copy the whole sublist at once instead of copying each element, one at a time. I would recommend using extend if you know your sublists are medium-to-large in size. The larger the sublist, the bigger the speedup.
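If you want to check the numbers on your own machine, a rough timeit sketch along these lines will do (the list sizes here are made up for illustration):

import timeit

setup = "l = [list(range(10)) for _ in range(1000)]"
comprehension = "[item for sublist in l for item in sublist]"
extend_loop = """
flattened = []
for sublist in l:
    flattened.extend(sublist)
"""
print(timeit.timeit(comprehension, setup=setup, number=1000))
print(timeit.timeit(extend_loop, setup=setup, number=1000))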
One final caveat: obviously, this only holds true if you need to eagerly form this flattened list. Perhaps you'll be sorting it later, for example. If you're ultimately going to just loop through the list as-is, this will not be any better than using the nested loops approach outlined by others. But for that use case, you want to return a generator instead of a list for the added benefit of laziness…
def flatten(l):
    return (item for sublist in l for item in sublist)  # note the parens
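For example:

l = [[1,2,3],[4,5,6]]
gen = flatten(l)    # no work has happened yet
print(list(gen))    # prints [1, 2, 3, 4, 5, 6]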
Note, of course, that this sort of comprehension will only "flatten" a list of lists (or a list of other iterables). Also, if you pass it a list of strings, you'll "flatten" it into a list of characters.
To generalize this in a meaningful way you first want to be able to cleanly distinguish between strings (or bytearrays) and other types of sequences (or other Iterables). So let's start with a simple function:
import collections.abc  # in Python < 3.3 this class lived directly in collections

def non_str_seq(p):
    '''p is putatively a sequence and not a string nor bytearray'''
    return isinstance(p, collections.abc.Iterable) and not isinstance(p, (str, bytearray))
Using that, we can then build a recursive function to flatten any nested sequence:
def flatten(s):
    '''Recursively flatten any sequence of objects'''
    results = list()
    if non_str_seq(s):
        for each in s:
            results.extend(flatten(each))
    else:
        results.append(s)
    return results
There are probably more elegant ways to do this, but this works for all the Python built-in types that I know of. Simple objects (numbers, strings, instances of None, True, False) are all returned wrapped in a list. Dictionaries are returned as lists of their keys (in their iteration order).
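For instance, a quick check with a mix of nesting, strings and tuples:

>>> flatten([1, [2, [3, [4]]], 'ab', (5, 6)])
[1, 2, 3, 4, 'ab', 5, 6]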
Maybe this is just mental exhaustion, but I cannot for the life of me figure this out, even though I used the same principle in another program I created.
I have two lists:
compare_list = [0,1,1,2,3,3,4,7,5,8,9,9]
master_list = [0,1,2,3,4,8,9]
As you can see, both lists contain some numbers that are the same, and compare_list has values that are duplicated.
What I want is to compare both lists and delete an item from compare_list whenever it is found in master_list.
This is the code I have so far:
for x in compare_list:
    for y in master_list:
        if x == y:
            compare_list.remove(x)
The result is that some items do get deleted from compare_list, but I still have some duplicates left.
output:
print(compare_list)
[1,3,7,5,9]
How do I get it right so that it deletes all instances of the values found in master_list, and compare_list ends up containing only numbers that aren't in master_list?
Seems like a straightforward use case for filter (note that the example below is Python 2; in Python 3, filter returns an iterator, so wrap the call in list() to get the list):
>>> compare_list = [0,1,1,2,3,3,4,7,5,8,9,9]
>>> master_list = [0,1,2,3,4,8,9]
>>> filter(lambda i: i not in master_list, compare_list)
[7, 5]
compare_list = [x for x in compare_list if x not in master_list]
If master_list has more than a few items, it will be more efficient to use a set
master_set = set(master_list)
compare_list = [x for x in compare_list if x not in master_set]
As the official Python documentation says of list.remove:
"Remove the first item from the list whose value is x. It is an error if there is no such item."
So if you have duplicates in compare_list, a single remove call won't remove them all. On top of that, removing items from a list while iterating over it makes the iteration skip elements, which is why some duplicates survive.
If you already have a list and you need to remove duplicates from it, I think you need to convert your list into a set:
compare_list = [1,2,2,3,4,5,5,5]
my_set = set(compare_list)
If you need to filter compare_list by removing some specific elements, you can do that after transforming the list into a set, using the set method remove.
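For example, using set difference as a compact alternative to calling remove repeatedly (note that this also drops the duplicates inside compare_list itself and does not preserve order):

compare_list = [0, 1, 1, 2, 3, 3, 4, 7, 5, 8, 9, 9]
master_list = [0, 1, 2, 3, 4, 8, 9]
print(sorted(set(compare_list) - set(master_list)))
# prints [5, 7]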
Why not use collections.Counter:
import collections
compare_list = [0,1,1,2,3,3,4,7,5,8,9,9]
master_list = [0,1,2,3,4,8,9]
comp_set = collections.Counter(compare_list)
master_set = collections.Counter(master_list)
print comp_set-master_set
Producing:
Counter({1: 1, 3: 1, 9: 1, 5: 1, 7: 1})
Please beware that Counter's subtract() method (unlike the - operator used above, which automatically drops non-positive counts) can leave zero or negative counts. In any case, to turn the result into a plain list of the remaining values, you can add an extra pass:
print [k for k,v in (comp_set-master_set).items() if v > 0]
# ^^^^^^^^
Producing:
[1, 3, 9, 5, 7]
I need to be able to find the first common list (which is a list of coordinates in this case) among a variable number of lists.
i.e. this list
>>> [[[1,2],[3,4],[6,7]],[[3,4],[5,9],[8,3],[4,2]],[[3,4],[9,9]]]
should return
>>> [3,4]
If it's easier, I can work with a list of all the common lists (coordinates) between the lists that contain them.
I can't use sets or dictionaries because lists are not hashable (I think?).
Correct, list objects are not hashable because they are mutable; tuple objects are hashable (provided that all their elements are hashable). Since your innermost lists contain just integers, that provides a wonderful opportunity to work around the non-hashability of lists:
>>> lists = [[[1,2],[3,4],[6,7]],[[3,4],[5,9],[8,3],[4,2]],[[3,4],[9,9]]]
>>> sets = [set(tuple(x) for x in y) for y in lists]
>>> set.intersection(*sets)
set([(3, 4)])
Here I give you a set which contains tuples of the coordinates which are present in all the sublists. To get a list of list like you started with:
[list(x) for x in set.intersection(*sets)]
does the trick.
To address the concern raised by @wim: if you really want a reference to the first element in the intersection (where "first" is defined by being first in lists[0]), the easiest way is probably this:
# ... stuff as before
intersection = set.intersection(*sets)
reference_to_first = next((x for x in lists[0] if tuple(x) in intersection), None)
This will return None if the intersection is empty.
If you are looking for the first child list that is common amongst all parent lists, the following will work.
def first_common(lst):
    first = lst[0]
    rest = lst[1:]
    for x in first:
        if all(x in r for r in rest):
            return x
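For example, with the list from the question:

lists = [[[1,2],[3,4],[6,7]], [[3,4],[5,9],[8,3],[4,2]], [[3,4],[9,9]]]
print(first_common(lists))  # prints [3, 4]

Unlike the set-based approach, this returns the actual element from the first sublist rather than a converted copy, and it implicitly returns None when nothing is common.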
A solution with a recursive function. :)
This gets the first duplicated element.
def get_duplicated_element(array):
    global result, checked_elements
    checked_elements = []
    result = -1

    def array_recursive_check(array):
        global result, checked_elements
        if result != -1:
            return
        for i in array:
            if type(i) == list:
                if i in checked_elements:
                    result = i
                    return
                checked_elements.append(i)
                array_recursive_check(i)

    array_recursive_check(array)
    return result
>>> get_duplicated_element([[[1,2],[3,4],[6,7]],[[3,4],[5,9],[8,3],[4,2]],[[3,4],[9,9]]])
[3, 4]
You can achieve this with a list comprehension:
>>> l = [[[1,2],[3,4],[6,7]],[[3,4],[5,9],[8,3],[4,2]],[[3,4],[9,9]]]
>>> lcombined = sum(l, [])
>>> [k[0] for k in [(i,lcombined.count(i)) for i in lcombined] if k[1] > 1][0]
[3, 4]
Given: a list of lists, such as [[3,2,1], [3,2,1,4,5], [3,2,1,8,9], [3,2,1,5,7,8,9]]
Todo: Find the longest common prefix of all sublists.
Exists: In another thread, "Common elements between two lists not using sets in Python", it is suggested to use Counter, which is available in Python 2.7 and above. However, our current project was written in Python 2.6, so Counter is not used.
I currently code it like this:
l = [[3,2,1], [3,2,1,4,5], [3,2,1,8,9], [3,2,1,5,7,8,9]]
newl = l[0]
if len(l) > 1:
    for li in l[1:]:
        newl = [x for x in newl if x in li]
But I find it not very Pythonic; is there a better way of coding it?
Thanks!
New edit: Sorry, I should mention: in my case, the shared elements of the lists in l have the same order and always start from the 0th item, so you won't have cases like [[1,2,5,6],[2,1,7]].
os.path.commonprefix() works well for lists :)
>>> x = [[3,2,1], [3,2,1,4,5], [3,2,1,8,9], [3,2,1,5,7,8,9]]
>>> import os
>>> os.path.commonprefix(x)
[3, 2, 1]
I am not sure how Pythonic it is:
from itertools import takewhile, izip

x = [[3,2,1], [3,2,1,4,5], [3,2,1,8,9], [3,2,1,5,7,8,9]]

def allsame(x):
    return len(set(x)) == 1

r = [i[0] for i in takewhile(allsame, izip(*x))]
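Running it against the sample data gives:

print r  # prints [3, 2, 1]

(This is Python 2 code; in Python 3, izip is gone and the built-in zip is already lazy, so you would just use zip.)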
Here's an alternative way using itertools:
>>> import itertools
>>> L = [[3,2,1,4], [3,2,1,4,5], [3,2,1,8,9], [3,2,1,5,7,8,9]]
>>> common_prefix = []
>>> for i in itertools.izip(*L):
...     if i.count(i[0]) == len(i):
...         common_prefix.append(i[0])
...     else:
...         break
...
>>> common_prefix
[3, 2, 1]
Not sure how "pythonic" it might be considered though.
Given your example code, you seem to want a version of reduce(set.intersection, map(set, l)) that preserves the initial order of the first list.
This requires algorithmic improvements, not stylistic improvements; "pythonic" code alone won't do you any good here. Think about the situation that must hold for all values that occur in every list:
Given a list of lists, a value occurs in every list if and only if it occurs in nlist lists, where nlist is the total number of lists.
If we can guarantee that each value occurs only once in every list, then the above can be rephrased:
Given a list of lists of unique items, a value occurs in every list if and only if it occurs nlist times total.
We can use sets to guarantee that the items in our lists are unique, so we can combine this latter principle with a simple counting strategy:
>>> l = [[3,2,1], [3,2,1,4,5], [3,2,1,8,9], [3,2,1,5,7,8,9]]
>>> import itertools
>>> count = {}
>>> for i in itertools.chain.from_iterable(map(set, l)):
...     count[i] = count.get(i, 0) + 1
...
Now all we have to do is filter the original list:
>>> [i for i in l[0] if count[i] == len(l)]
[3, 2, 1]
It is inefficient, as it doesn't early-out as soon as a mismatch is found, but it's tidy. Here a and b are two lists, and the expression gives the index of their first mismatch, i.e. the length of their common prefix:
([i for i, (j, k) in enumerate(zip(a, b)) if j != k] or [0])[0]
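For example, taking a and b to be two of the sample lists (note the caveat: the expression also yields 0 when no mismatch is found at all, e.g. when one list is a prefix of the other):

a = [3, 2, 1, 4, 5]
b = [3, 2, 1, 8, 9]
print(([i for i, (j, k) in enumerate(zip(a, b)) if j != k] or [0])[0])
# prints 3, the length of the common prefix [3, 2, 1]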