I would like to create a new nested list from an already existing nested list. The new list should contain, for each sublist, the 1-based indices of its elements.
Example:
my_list = [[20, 45, 80],[56, 29],[76],[38,156,11,387]]
Result:
my_new_list = [[1,2,3],[1,2],[1],[1,2,3,4]]
How can I create such a list?
To save a Python loop, force the range iteration inside a list comprehension by wrapping it in list() (required in Python 3, where range is lazy). This is typically faster than a classical double nested comprehension:
my_list = [[20, 45, 80],[56, 29],[76],[38,156,11,387]]
index_list = [list(range(1,len(x)+1)) for x in my_list]
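A rough way to check that claim is timeit (just a sketch; timings vary by machine and Python version):
import timeit

my_list = [[20, 45, 80], [56, 29], [76], [38, 156, 11, 387]] * 1000

# list(range(...)) per sublist
t_range = timeit.timeit(
    "[list(range(1, len(x) + 1)) for x in my_list]",
    globals=globals(), number=100)

# classical double nested comprehension
t_nested = timeit.timeit(
    "[[i + 1 for i in range(len(x))] for x in my_list]",
    globals=globals(), number=100)

print(t_range, t_nested)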
There are a few ways to do this, but the first that comes to mind is to enumerate the elements with a starting index of 1 in a nested list comprehension.
>>> [[index for index, value in enumerate(sub, 1)] for sub in my_list]
[[1, 2, 3], [1, 2], [1], [1, 2, 3, 4]]
Another solution could be:
new_list = [list(range(1,len(item)+1)) for item in my_list]
Here are some simple solutions:
>>> lst = [[20, 45, 80],[56, 29],[76],[38,156,11,387]]
>>> out = [[x+1 for x,_ in enumerate(y)] for y in lst]
>>> out
[[1, 2, 3], [1, 2], [1], [1, 2, 3, 4]]
>>> out = [[x+1 for x in range(len(y))] for y in lst]
>>> out
[[1, 2, 3], [1, 2], [1], [1, 2, 3, 4]]
Using nested list comprehension
First, you want a list with every number from 1 to the length of your sublist. There are several ways to do this with a list comprehension.
For example:
[i for i in range(1, len(sublist) + 1)]
or
[i + 1 for i in range(len(sublist))]
Second, you want to do this for every sublist inside your my_list, so you nest the list comprehension:
>>> my_list = [[20, 45, 80],[56, 29],[76],[38,156,11,387]]
>>> my_new_list = [[i+1 for i in range(len(sublist))] for sublist in my_list]
>>> my_new_list
[[1, 2, 3], [1, 2], [1], [1, 2, 3, 4]]
Using list comprehension with range
Another way would be to use the range built-in function directly to build each sublist:
>>> [list(range(1, len(sublist) + 1)) for sublist in my_list]
[[1, 2, 3], [1, 2], [1], [1, 2, 3, 4]]
Using map with range
Or you can use the map built-in function:
>>> list(map(
... lambda sublist: list(range(1, len(sublist) + 1)),
... my_list
... ))
[[1, 2, 3], [1, 2], [1], [1, 2, 3, 4]]
Related
How can I sort a list of lists so that the first element of each sublist follows the order of the elements in another list? For example:
list1 = [3, 7, 1, 10, 4]
list2 = [[1,0],[3,2],[4,11],[7,9],[10,1]]
newlist = [[3,2],[7,9],[1,0],[10,1],[4,11]]
You can use list.index:
list1 = [3, 7, 1, 10, 4]
list2 = [[1, 0], [3, 2], [4, 11], [7, 9], [10, 1]]
newlist = sorted(list2, key=lambda l: list1.index(l[0]))
print(newlist)
Prints:
[[3, 2], [7, 9], [1, 0], [10, 1], [4, 11]]
If the lists are large, I'd suggest creating a mapping from list1 and using that in the sorting key function:
m = {v: i for i, v in enumerate(list1)}
newlist = sorted(list2, key=lambda l: m[l[0]])
print(newlist)
If the first elements in list2 are always a permutation of list1 (i.e., if that's not a misleading example):
firsts = next(zip(*list2))
first2full = dict(zip(firsts, list2))
newlist = [*map(first2full.get, list1)]
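For the example lists above, a quick check of the result:
list1 = [3, 7, 1, 10, 4]
list2 = [[1, 0], [3, 2], [4, 11], [7, 9], [10, 1]]
firsts = next(zip(*list2))             # (1, 3, 4, 7, 10)
first2full = dict(zip(firsts, list2))  # maps first element -> full pair
newlist = [*map(first2full.get, list1)]
print(newlist)  # [[3, 2], [7, 9], [1, 0], [10, 1], [4, 11]]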
You can do it like this, but be careful when the sublists in list2 have repeated first elements!
a = []
for i in range(len(list1)):
    for j in range(len(list2)):
        if list1[i] == list2[j][0]:
            a.append(list2[j])
print(a)
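A small illustration of that caveat (made-up lists where two sublists share the first element 3): every match is appended, so the result grows longer than list1.
list1 = [3, 7]
list2 = [[3, 2], [3, 5], [7, 9]]
a = []
for i in range(len(list1)):
    for j in range(len(list2)):
        if list1[i] == list2[j][0]:
            a.append(list2[j])
print(a)  # [[3, 2], [3, 5], [7, 9]] -- both sublists starting with 3 are kept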
I have a list of lists, but some lists are "sublists" of other lists. What I want to do is remove the sublists from the larger list so that we only have the largest unique sublists.
For example:
>>> some_list = [[1], [1, 2], [1, 2, 3], [1, 4]]
>>> ideal_list = [[1, 2, 3], [1, 4]]
The code that I've written right now is:
new_list = []
for i in range(len(some_list)):
    for j in range(i + 1, len(some_list)):
        count = 0
        for k in some_list[i]:
            if k in some_list[j]:
                count += 1
        if count == len(some_list[i]):
            new_list.append(some_list[j])
The basic algorithm I had in mind is to check whether a list's elements are all contained in one of the following sublists, and if so, keep that larger sublist instead. It doesn't give the desired output (it actually gives [[1, 2], [1, 2, 3], [1, 4], [1, 2, 3]]), and I'm wondering what I could do to achieve what I want.
I don't want to use sets because duplicate elements matter.
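For example (just to illustrate why), with sets the duplicate information disappears:
a = [1, 1, 2]
b = [1, 2, 3]
# With sets, a looks like a "sublist" of b even though b has only one 1:
print(set(a) <= set(b))  # True -- the duplicate 1 is lost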
Same idea as with sets, but using Counter instead so duplicates are respected. The sublist check should be a lot more efficient than brute force:
from collections import Counter

new_list = []
counters = []
for arr in sorted(some_list, key=len, reverse=True):
    arr_counter = Counter(arr)
    if any((c & arr_counter) == arr_counter for c in counters):
        continue  # it is a sublist of something else
    new_list.append(arr)
    counters.append(arr_counter)
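The key step is the Counter intersection, which takes the minimum count of each element and therefore acts as a multiset-containment test; a small illustration:
from collections import Counter

big = Counter([1, 2, 3])
small = Counter([1, 2])
dup = Counter([1, 1])
print((big & small) == small)  # True  -- [1, 2] is a sub-multiset of [1, 2, 3]
print((big & dup) == dup)      # False -- [1, 1] is not; the duplicate matters
For the question's input this keeps [[1, 2, 3], [1, 4]].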
With some inspiration from @mkrieger1's comment, one possible solution would be:
def merge_sublists(some_list):
    new_list = []
    for i in range(len(some_list)):
        true_or_false = []
        for j in range(len(some_list)):
            if some_list[j] == some_list[i]:
                continue
            true_or_false.append(all([x in some_list[j] for x in some_list[i]]))
        if not any(true_or_false):
            new_list.append(some_list[i])
    return new_list
As is stated in the comment, a brute-force solution would be to loop through each element and check if it's a sublist of any other sublist. If it's not, then append it to the new list.
Test cases:
>>> merge_sublists([[1], [1, 2], [1, 2, 3], [1, 4]])
[[1, 2, 3], [1, 4]]
>>> merge_sublists([[1, 2, 3], [4, 5], [3, 4]])
[[1, 2, 3], [4, 5], [3, 4]]
Input:
l = [[1], [1, 2], [1, 2, 3], [1, 4]]
One way here:
l1 = l.copy()
for i in l:
    for j in l:
        if set(i).issubset(set(j)) and i != j:
            l1.remove(i)
            break
This prints:
print(l1)
[[1, 2, 3], [1, 4]]
EDIT: (Taking care of duplicates as well)
l1 = [list(tupl) for tupl in {tuple(item) for item in l}]
l2 = l1.copy()
for i in l1:
    for j in l1:
        if set(i).issubset(set(j)) and i != j:
            l2.remove(i)
            break
Suppose I have a list like this:
[[1, 2], [2, 3], [5, 4]]
What I want is two different lists from the above list, with the first elements in one list and the second elements in the other.
The result will look like this:
[1,2,5] and [2,3,4]
Is there any way to do it using list slicing?
Use zip() to pair up the elements of the input lists:
lista, listb = zip(*inputlist)
The * applies the elements in inputlist as separate arguments, as if you called zip() as zip([1, 2], [2, 3], [5, 4]). zip() takes the first element of each argument and returns those together, and then the second element, etc.
This produces tuples, not lists, really, but that's easy to remedy:
lista, listb = map(list, zip(*inputlist))
Demo:
>>> inputlist = [[1, 2], [2, 3], [5, 4]]
>>> list(zip(*inputlist))
[(1, 2, 5), (2, 3, 4)]
>>> lista, listb = map(list, zip(*inputlist))
>>> lista
[1, 2, 5]
>>> listb
[2, 3, 4]
How about using a numpy array?
import numpy as np
np.array(myList).transpose()
# array([[1, 2, 5],
# [2, 3, 4]])
Or
np.array(myList)[:, 0]
# array([1, 2, 5])
np.array(myList)[:, 1]
# array([2, 3, 4])
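If you want plain Python lists back rather than numpy arrays, tolist() converts the result (assuming myList is the input list from the question):
import numpy as np

myList = [[1, 2], [2, 3], [5, 4]]
lista, listb = np.array(myList).transpose().tolist()
print(lista)  # [1, 2, 5]
print(listb)  # [2, 3, 4]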
I've seen some closely related questions here, but their answers don't work for me. I have a list of lists where some sublists are repeated but their elements may be in a different order. For example:
g = [[1, 2, 3], [3, 2, 1], [1, 3, 2], [9, 0, 1], [4, 3, 2]]
The output should be, naturally according to my question:
g = [[1,2,3],[9,0,1],[4,3,2]]
I've tried with set, but it only removes lists that are exactly equal (I thought it should work because sets are by definition unordered). Other questions I've visited only have examples with lists that are exact duplicates, like this one: Python : How to remove duplicate lists in a list of list?. For now, the order of the output (for the list and the sublists) is not a problem.
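For illustration, hashing the sublists as-is (which is roughly what a plain set-based attempt does) keeps the reordered copies:
g = [[1, 2, 3], [3, 2, 1], [1, 3, 2], [9, 0, 1], [4, 3, 2]]
print(set(map(tuple, g)))
# all five tuples survive, because (1, 2, 3) and (3, 2, 1) compare unequal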
(ab)using side-effects version of a list comp:
seen = set()
[x for x in g if frozenset(x) not in seen and not seen.add(frozenset(x))]
Out[4]: [[1, 2, 3], [9, 0, 1], [4, 3, 2]]
For those (unlike myself) who don't like using side-effects in this manner:
res = []
seen = set()
for x in g:
    x_set = frozenset(x)
    if x_set not in seen:
        res.append(x)
        seen.add(x_set)
The reason that you add frozensets to the set is that you can only add hashable objects to a set, and vanilla sets are not hashable.
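A quick demonstration of that point:
s = set()
try:
    s.add({1, 2, 3})             # a plain set is unhashable
except TypeError as e:
    print(e)                     # unhashable type: 'set'
s.add(frozenset({1, 2, 3}))      # a frozenset is hashable, so this works
print(s)                         # {frozenset({1, 2, 3})}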
If you don't care about the order for lists and sublists (and all items in sublists are unique):
result = set(map(frozenset, g))
If a sublist may have duplicates, e.g., [1, 2, 1, 3], then you could use tuple(sorted(sublist)) instead of frozenset(sublist), since frozenset removes duplicates within a sublist.
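A small sketch of that variant (with a made-up sublist containing a duplicate):
g = [[1, 2, 1, 3], [3, 1, 2, 1], [1, 2, 3]]
print({tuple(sorted(sub)) for sub in g})
# {(1, 1, 2, 3), (1, 2, 3)} (order may vary) -- the duplicate 1 is preserved;
# frozenset would collapse all three sublists into one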
If you want to preserve the order of sublists:
def del_dups(seq, key=frozenset):
    seen = {}
    pos = 0
    for item in seq:
        if key(item) not in seen:
            seen[key(item)] = True
            seq[pos] = item
            pos += 1
    del seq[pos:]
Example:
del_dups(g, key=lambda x: tuple(sorted(x)))
See In Python, what is the fastest algorithm for removing duplicates from a list so that all elements are unique while preserving order?
What about using frozenset (mentioned by roippi) this way:
>>> g = [list(x) for x in set(frozenset(i) for i in [set(i) for i in g])]
>>> g
[[0, 9, 1], [1, 2, 3], [2, 3, 4]]
I would convert each element in the list to a frozenset (which is hashable), then create a set out of it to remove duplicates:
>>> g = [[1, 2, 3], [3, 2, 1], [1, 3, 2], [9, 0, 1], [4, 3, 2]]
>>> set(map(frozenset, g))
{frozenset({0, 9, 1}), frozenset({1, 2, 3}), frozenset({2, 3, 4})}
If you need to convert the elements back to lists:
>>> list(map(list, set(map(frozenset, g))))
[[0, 9, 1], [1, 2, 3], [2, 3, 4]]
Let's say I have a list:
List = [1,2,3,4,5]
I want to use a comprehension to output a list of lists with, for every element i in List, a sublist containing 1, 2, ..., i. So the comprehension would output:
[[1],[1,2],[1,2,3],[1,2,3,4],[1,2,3,4,5]]
The same would work for List = [1,3,5], where the output would be:
[[1],[1,2,3],[1,2,3,4,5]]
I do not want to use any modules like numpy or itertools.
Any help would be much appreciated!
Sure:
>>> [list(range(1, i+1)) for i in List]
[[1], [1, 2], [1, 2, 3], [1, 2, 3, 4], [1, 2, 3, 4, 5]]
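And for the second example from the question:
>>> List = [1, 3, 5]
>>> [list(range(1, i + 1)) for i in List]
[[1], [1, 2, 3], [1, 2, 3, 4, 5]]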