Python Sorting with lambda

I am new to Python and I have a question.
A = [3,2,4,1]
N = len(A)
B = sorted(range(N), key = lambda i: A[i])
print(B)
output: [3, 1, 0, 2]
For input A = [7, 2, 4, 1], the output is [3, 1, 2, 0].
I do not understand the output. Can anyone explain it to me?

Let's talk about the specific example you have used
A = [3, 2, 4, 1]
N = len(A)  # N = 4
B = sorted(range(N), key = lambda i: A[i]) # sorted([0,1,2,3], key= lambda i:A[i])
Basically you are trying to sort [0,1,2,3] based on the values A[i] which are [3,2,4,1]
Now, A[3] < A[1] < A[0] < A[2]
And so you get the answer as [3, 1, 0, 2]

In the sorted function, the first argument is the iterable you would like to sort; here it is range(4), which is equivalent to a list of size 4. The sorted function orders those values according to the key given by the anonymous function.
In your case - A = [3, 2, 4, 1]
List to sort: [0, 1, 2, 3]. Keys for each element: [3, 2, 4, 1]. Basically you can imagine sorting [(0, 3), (1, 2), (2, 4), (3, 1)] according to the second element and then keeping only the first, which gives the [3, 1, 0, 2] you get.
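The pairing idea from the explanation above can be written out as an explicit decorate-sort-undecorate sketch (the intermediate names are illustrative):

```python
A = [3, 2, 4, 1]

# Pair each index with its value, sort the pairs by value,
# then keep only the indices.
pairs = sorted(enumerate(A), key=lambda pair: pair[1])
B = [index for index, value in pairs]
print(B)  # [3, 1, 0, 2]
```

This is exactly what `sorted(range(N), key=lambda i: A[i])` does, just with the (index, value) pairs made visible.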

Related

Filter a list based on another list with multiple choices of elements

Suppose I have a list a=[1, 3, 4, 5, 7] and another list b=[0,0,1,1,3].
Now I want to filter a to make a new list where the corresponding position in b is 0 or 3. If I only want b to be 0, it's simply a = a[b==0], but now it's filtering based on a subset.
What I did is :
subset = [0, 1]
a = a[b in subset]
which is not correct, and the error is: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
I wonder what is the correct way to do so. Thanks!
If I understand your question right then you're looking for np.isin:
import numpy as np

a = np.array([1, 3, 4, 5, 7])
b = np.array([0, 0, 1, 1, 3])
print(a[np.isin(b, [0, 3])])
Prints:
[1 3 7]
new_list = [i[0] for i in zip(a,b) if i[1] in [0,1]]
This uses a concept known as list comprehension.
It first creates a zip object that looks like
[(1, 0), (3, 0), (4, 1), (5, 1), (7,3)]
The list comprehension cycles through all the tuples, keeps the ones where the second element is either 0 or 1, and returns the first part of each.
You can do it with filter (note this assumes the elements of a are unique, since they become dictionary keys):
In [18]: d = dict(zip(a,b))
In [19]: list(filter(lambda x:d.get(x) in (0,1), d))
Out[19]: [1, 3, 4, 5]
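A plain-Python variant (a sketch, not from the original answers) that avoids both the NumPy dependency and the unique-keys assumption of the dict approach, using the [0, 3] subset from the np.isin answer:

```python
a = [1, 3, 4, 5, 7]
b = [0, 0, 1, 1, 3]
subset = {0, 3}

# Pair each element of a with its flag in b, keep the ones whose flag
# is in the subset.
filtered = [x for x, flag in zip(a, b) if flag in subset]
print(filtered)  # [1, 3, 7]
```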

How can I reverse a range of list in python? [duplicate]

This question already has answers here:
How do I reverse a part (slice) of a list in Python?
(8 answers)
Closed 3 years ago.
I have 2 test [0, 2] and [3, 4]
and my list is [1, 2, 3, 4, 5]
how can I reverse only in range from 0-2 and 3-4 to my list
[1, 2, 3, 4, 5] -> [3, 2, 1, 4, 5] -> [3, 2, 1, 5, 4]
This is my code. It only works the first time; the second time it does not work!
How can I fix it?
def solution(arr):
    test1 = [1, 3]
    test2 = [4, 5]
    totalTest = [test1, test2]
    print(arr)
    for x in totalTest:
        a = []
        for i in x:
            a.append(i - 1)
        lenght = (a[1] - a[0] + 1) / 2
        index = a[1] - a[0]
        # print(lenght)
        for i in range(a[0], lenght):  # i is the low index pointer
            arr[index], arr[i] = arr[i], arr[index]
            index -= 1
        print(arr)

arr = [1, 2, 3, 4, 5]
solution(arr)
The above code outputs the result:
[3, 2, 1, 4, 5]
[3, 2, 1, 4, 5]
You can use list slicing as follows. Keep in mind that the lower bound is inclusive but the upper bound is not, so if you want to slice a list from index 0 to index 2 you would do mylist[0:3]:
mylist = [1, 2, 3, 4, 5]
mylist = mylist[0:3][::-1] + mylist[3:5][::-1]
# [3, 2, 1, 5, 4]
You can make a more general function to do this:
def reverse_parts(L, ranges):
    """Reverse the parts of list L indicated by pairs of indices in
    list ranges.
    """
    for start, end in ranges:
        L = L[:start] + L[start:end+1][::-1] + L[end+1:]
    return L

print(reverse_parts([1, 2, 3, 4, 5], [(0, 2), (3, 4)]))  # [3, 2, 1, 5, 4]
Alternatively, to reverse the list in-place:
def reverse_parts(L, ranges):
    for start, end in ranges:
        L[start:end+1] = reversed(L[start:end+1])

my_list = [1, 2, 3, 4, 5]
reverse_parts(my_list, [(0, 2), (3, 4)])
print(my_list)  # [3, 2, 1, 5, 4]
You should break your list in two and reverse each part separately:
li = [1,2,3,4,5]
print(li[0:3][::-1] + li[3:5][::-1])
cheers!

Grouping same elements in the list efficiently

n,m=map(int,input().split())
arr=[i%m for i in (map(int,(input().split())))]
Suppose n = 5 and m = 3 and the input array is [3, 2, 1, 4, 5], so arr = [0, 2, 1, 1, 2] in this case. Now I want to store elements of equal value in a list efficiently, i.e. [1, 1] and [2, 2]. What's the best way to group them together efficiently? I also want their indices at the end, so
output : [[1,1],[2,2]] from index (2,3) and index(1,4)
What I am looking for is the indices of the original array elements (before taking the mod) that have the same value after performing the mod operation.
A set keeps only unique values:
arr=[0, 2, 1, 1, 2]
arr = [(s, arr.count(s)) for s in set(arr)]
# [(0, 1), (1, 2), (2, 2)]
Update (thanks to @JonClements):
s = {}
for i, v in enumerate(arr):
    s.setdefault(v % 3, []).append(i)
print(s)
# {0: [0], 1: [2, 3], 2: [1, 4]}
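To also produce the output shape the question asks for (the repeated values together with their index pairs), a small follow-up sketch built on the same setdefault idea, starting from the original array before the mod (variable names are illustrative):

```python
original = [3, 2, 1, 4, 5]
m = 3

# Group the indices of the original array by residue mod m.
groups = {}
for i, v in enumerate(original):
    groups.setdefault(v % m, []).append(i)

# Keep only the residues that occur more than once.
for residue, indices in groups.items():
    if len(indices) > 1:
        print([residue] * len(indices), "from indices", tuple(indices))
# [2, 2] from indices (1, 4)
# [1, 1] from indices (2, 3)
```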

Compare occurrences of shared items between lists

Ok, for a fun project I'm working on in order to learn some Python, I'm hitting a wall with what should be a basic task: I need to compare lists for how often the items shared among the lists occur in each list. Using
shared_items = set(alist).intersection(blist)
gives me the items shared between the lists, but it does not tell me how often those items occur in each list.
I tried loops like this for example:
def the_count(alist, blist):
    c = 0
    for x in alist:
        for y in blist:
            if x == y:
                c += 1
    return c
but that doesn't do the trick.
Another attempt was to use Counter:
c = Counter(alist)
b = Counter(blist)
But trying to loop over the Counter results failed too, last try was
a = Counter(alist)
b = Counter(blist)
for key, val in a:
    if key in b:
        val1 = b[key]
        if val < val1:
            print b[key]
        else:
            print a[key]
You almost had it using the set intersection. Since that gives you the common elements amongst both lists, all you have to do now is loop over that and count the elements. One way could be:
list1 = [0, 1, 2, 3, 1, 2, 3, 4, 3, 2]
list2 = [1, 4, 3, 5, 2, 1, 0, 2, 7, 8]
shared = set(list1).intersection(list2)
# Now, loop over the elements and create a dictionary using a generator.
# The key will be the shared element, and the value would be a tuple
# which corresponds to the counts of the first list and the second list, respectively
counts = {num:(list1.count(num), list2.count(num)) for num in shared}
counts now contains:
{
0: (1, 1),
1: (2, 2),
2: (3, 2),
3: (3, 1),
4: (1, 1)
}
This can further be abstracted into a function similar to:
def count_shared_elements(list1, list2):
    shared = set(list1).intersection(list2)
    return {num: (list1.count(num), list2.count(num)) for num in shared}
Using a dict comprehension, as jrd1 pointed out:
>>> list1 = [0, 1, 2, 3, 1, 2, 3, 4, 3, 2]
>>> list2 = [1, 4, 3, 5, 2, 1, 0, 2, 7, 8]
>>> {i:(list1.count(i), list2.count(i)) for i in set(list1) & set(list2)}
{0: (1, 1), 1: (2, 2), 2: (3, 2), 3: (3, 1), 4: (1, 1)}
Take a look at the answers linked in the question comments; another way to do this would be like this:
c = 0
for a in alist:
    c += blist.count(a)
The best way is to get the distinct items from the two lists and check the count of each in both lists:
for distinct_num in set(alist + blist):
    print(alist.count(distinct_num))
    print(blist.count(distinct_num))

Python Random List Comprehension

I have a list similar to:
[1 2 1 4 5 2 3 2 4 5 3 1 4 2]
I want to create a list of x random elements from this list where none of the chosen elements are the same. The difficult part is that I would like to do this by using list comprehension...
So possible results if x = 3 would be:
[1 2 3]
[2 4 5]
[3 1 4]
[4 5 1]
etc...
Thanks!
I should have specified that I cannot convert the list to a set. Sorry!
I need the randomly selected numbers to be weighted. So if 1 appears 4 times in the list and 3 appears 2 times in the list, then 1 is twice as likely to be selected...
Disclaimer: the "use a list comprehension" requirement is absurd.
Moreover, if you want to use the weights, there are many excellent approaches listed at Eli Bendersky's page on weighted random sampling.
The following is inefficient, doesn't scale, etc., etc.
That said, it has not one but two (TWO!) list comprehensions, returns a list, never duplicates elements, and respects the weights in a sense:
>>> s = [1, 2, 1, 4, 5, 2, 3, 2, 4, 5, 3, 1, 4, 2]
>>> [x for x in random.choice([p for c in itertools.combinations(s, 3) for p in itertools.permutations(c) if len(set(c)) == 3])]
[3, 1, 2]
>>> [x for x in random.choice([p for c in itertools.combinations(s, 3) for p in itertools.permutations(c) if len(set(c)) == 3])]
[5, 3, 4]
>>> [x for x in random.choice([p for c in itertools.combinations(s, 3) for p in itertools.permutations(c) if len(set(c)) == 3])]
[1, 5, 2]
.. or, as simplified by FMc:
>>> [x for x in random.choice([p for p in itertools.permutations(s, 3) if len(set(p)) == 3])]
[3, 5, 2]
(I'll leave the x for x in there, even though it hurts not to simply write list(random.choice(..)) or just leave it as a tuple..)
Generally, you don't want to do this sort of thing in a list comprehension -- It'll lead to much harder to read code. However, if you really must, we can write a completely horrible 1 liner:
>>> values = [random.randint(0,10) for _ in xrange(12)]
>>> values
[1, 10, 6, 6, 3, 9, 0, 1, 8, 9, 1, 2]
>>> # This is the 1 liner -- The other line was just getting us a list to work with.
>>> [(lambda x=random.sample(values,3):any(values.remove(z) for z in x) or x)() for _ in xrange(4)]
[[6, 1, 8], [1, 6, 10], [1, 0, 2], [9, 3, 9]]
Please never use this code -- I only post it for fun/academic reasons.
Here's how it works:
I create a function inside the list comprehension with a default argument of 3 randomly selected elements from the input list. Inside the function, I remove the elements from values so that they aren't available to be picked again. Since list.remove returns None, I can use any(lst.remove(x) for x in ...) to remove the values and return False. Since any returns False, we hit the or clause, which just returns x (the default value with 3 randomly selected items) when the function is called. All that is left then is to call the function and let the magic happen.
The one catch here is that you need to make sure that the number of groups you request (here I chose 4) multiplied by the number of items per group (here I chose 3) is less than or equal to the number of values in your input list. It may seem obvious, but it's probably worth mentioning anyway...
Here's another version where I pull shuffle into the list comprehension:
>>> lst = [random.randint(0,10) for _ in xrange(12)]
>>> lst
[3, 5, 10, 9, 10, 1, 6, 10, 4, 3, 6, 5]
>>> [lst[i*3:i*3+3] for i in xrange(shuffle(lst) or 4)]
[[6, 10, 6], [3, 4, 10], [1, 3, 5], [9, 10, 5]]
This is significantly better than my first attempt, however, most people would still need to stop, scratch their head a bit before they figured out what this code was doing. I still assert that it would be much better to do this in multiple lines.
If I'm understanding your question properly, this should work:
def weighted_sample(L, x):
    # might consider raising some kind of exception if len(set(L)) < x
    while True:
        ans = random.sample(L, x)
        if len(set(ans)) == x:
            return ans
Then if you want many such samples you can just do something like:
[weighted_sample(L, x) for _ in range(num_samples)]
I have a hard time conceiving of a comprehension for the sampling logic that isn't just obfuscated. The logic is a bit too complicated. It sounds like something randomly tacked on to a homework assignment to me.
If you don't like infinite looping, I haven't tried it but I think this will work:
def weighted_sample(L, x):
    ans = []
    c = collections.Counter(L)
    while len(ans) < x:
        r = random.randint(0, sum(c.values()))
        for k in c:
            if r < c[k]:
                ans.append(k)
                del c[k]
                break
            else:
                r -= c[k]
        else:
            # maybe throw an exception since this should never happen on valid input
            pass
    return ans
First of all, I assume your list is something like
[1, 2, 1, 4, 5, 2, 3, 2, 4, 5, 3, 1, 4, 2]
If you want to print the size-3 permutations of the distinct values in the list, you can do the following.
import itertools

l = [1, 2, 1, 4, 5, 2, 3, 2, 4, 5, 3, 1, 4, 2]
for permutation in itertools.permutations(list(set(l)), 3):
    print permutation,
Output:
(1, 2, 3) (1, 2, 4) (1, 2, 5) (1, 3, 2) (1, 3, 4) (1, 3, 5) (1, 4, 2) (1, 4, 3) (1, 4, 5) (1, 5, 2) (1, 5, 3) (1, 5, 4) (2, 1, 3) (2, 1, 4) (2, 1, 5) (2, 3, 1) (2, 3, 4) (2, 3, 5) (2, 4, 1) (2, 4, 3) (2, 4, 5) (2, 5, 1) (2, 5, 3) (2, 5, 4) (3, 1, 2) (3, 1, 4) (3, 1, 5) (3, 2, 1) (3, 2, 4) (3, 2, 5) (3, 4, 1) (3, 4, 2) (3, 4, 5) (3, 5, 1) (3, 5, 2) (3, 5, 4) (4, 1, 2) (4, 1, 3) (4, 1, 5) (4, 2, 1) (4, 2, 3) (4, 2, 5) (4, 3, 1) (4, 3, 2) (4, 3, 5) (4, 5, 1) (4, 5, 2) (4, 5, 3) (5, 1, 2) (5, 1, 3) (5, 1, 4) (5, 2, 1) (5, 2, 3) (5, 2, 4) (5, 3, 1) (5, 3, 2) (5, 3, 4) (5, 4, 1) (5, 4, 2) (5, 4, 3)
Hope this helps. :)
>>> from random import shuffle
>>> L = [1, 2, 1, 4, 5, 2, 3, 2, 4, 5, 3, 1, 4, 2]
>>> x=3
>>> shuffle(L)
>>> zip(*[L[i::x] for i in range(x)])
[(1, 3, 2), (2, 2, 1), (4, 5, 3), (1, 4, 4)]
You could also use a generator expression instead of the list comprehension
>>> zip(*(L[i::x] for i in range(x)))
[(1, 3, 2), (2, 2, 1), (4, 5, 3), (1, 4, 4)]
Starting with a way to do it without list comprehensions:
import random
import itertools
alphabet = [1, 2, 1, 4, 5, 2, 3, 2, 4, 5, 3, 1, 4, 2]
def alphas():
    while True:
        yield random.choice(alphabet)

def filter_unique(iter):
    found = set()
    for a in iter:
        if a not in found:
            found.add(a)
            yield a

def dice(x):
    while True:
        yield itertools.islice(
            filter_unique(alphas()),
            x
        )

for i, output in enumerate(dice(3)):
    print list(output)
    if i > 10:
        break
The part where list comprehensions have trouble is filter_unique(), since a list comprehension has no 'memory' of what it has already output. A possible solution is to generate many outputs until one of good quality is found, as @DSM suggested.
The slow, naive approach is:
import random
def pick_n_unique(l, n):
    res = set()
    while len(res) < n:
        res.add(random.choice(l))
    return list(res)
This will pick elements and only quit when it has n unique ones:
>>> pick_n_unique([1, 2, 1, 4, 5, 2, 3, 2, 4, 5, 3, 1, 4, 2], 3)
[2, 3, 4]
>>> pick_n_unique([1, 2, 1, 4, 5, 2, 3, 2, 4, 5, 3, 1, 4, 2], 3)
[3, 4, 5]
However it can get slow if, for example, you have a list with thirty 1s and one 2, since once it has a 1 it'll keep spinning until it finally hits a 2. A better approach is to count the number of occurrences of each unique element, choose a random one weighted by its occurrence count, remove that element from the count list, and repeat until you have the desired number of elements:
def weighted_choice(item__counts):
    total_counts = sum(count for item, count in item__counts.items())
    which_count = random.random() * total_counts
    for item, count in item__counts.items():
        which_count -= count
        if which_count < 0:
            return item
    raise ValueError("Should never get here")

def pick_n_unique(items, n):
    item__counts = collections.Counter(items)
    if len(item__counts) < n:
        raise ValueError(
            "Can't pick %d values with only %d unique values" % (
                n, len(item__counts)))
    res = []
    for i in xrange(n):
        choice = weighted_choice(item__counts)
        res.append(choice)
        del item__counts[choice]
    return tuple(res)
Either way, this is a problem not well-suited to list comprehensions.
def sample(self, population, k):
    n = len(population)
    if not 0 <= k <= n:
        raise ValueError("sample larger than population")
    result = [None] * k
    try:
        selected = set()
        selected_add = selected.add
        for i in xrange(k):
            j = int(random.random() * n)
            while j in selected:
                j = int(random.random() * n)
            selected_add(j)
            result[i] = population[j]
    except (TypeError, KeyError):  # handle (at least) sets
        if isinstance(population, list):
            raise
        return self.sample(tuple(population), k)
    return result
Above is a simplified version of the sample function from Lib/random.py; I only removed some optimization code for small data sets. The code shows directly how to implement a customized sample function:
get a random number
if the number has appeared before, just abandon it and get a new one
repeat the above steps until you get all the sample numbers you want.
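The three steps above amount to rejection sampling over indices; a minimal sketch (using random.randrange in place of int(random.random() * n); the function name is illustrative):

```python
import random

def sample_indices(population, k):
    n = len(population)
    selected = set()   # indices already used
    result = []
    while len(result) < k:
        j = random.randrange(n)        # step 1: get a random number
        if j in selected:              # step 2: seen before -> abandon it
            continue
        selected.add(j)
        result.append(population[j])   # step 3: repeat until k samples
    return result
```

Because indices (not values) are deduplicated, values that occur more often in the list are proportionally more likely to be drawn, which is exactly how duplicates act as weights here.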
Then the real problem turns out to be how to get a random value from a list by weight. This could be done with the original random.sample(population, 1) from the Python standard library (a little overkill here, but simple).
Below is an implementation. Because duplicates represent weight in your given list, we can use int(random.random() * array_length) to get a random index into your array.
import random
arr = [1, 2, 1, 4, 5, 2, 3, 2, 4, 5, 3, 1, 4, 2]
def sample_by_weight(population, k):
    n = len(population)
    if not 0 <= k <= len(set(population)):
        raise ValueError("sample larger than population")
    result = [None] * k
    try:
        selected = set()
        selected_add = selected.add
        for i in xrange(k):
            j = population[int(random.random() * n)]
            while j in selected:
                j = population[int(random.random() * n)]
            selected_add(j)
            result[i] = j
    except (TypeError, KeyError):  # handle (at least) sets
        if isinstance(population, list):
            raise
        return sample_by_weight(tuple(population), k)
    return result

[sample_by_weight(arr, 3) for i in range(10)]
With the setup:
from random import shuffle
from collections import deque
l = [1, 2, 1, 4, 5, 2, 3, 2, 4, 5, 3, 1, 4, 2]
This code:
def getSubLists(l, n):
    shuffle(l)  # shuffle l so the elements are in 'random' order
    l = deque(l, len(l))  # create a structure with O(1) insert/pop at both ends
    while l:  # while there are still elements to choose
        sample = set()  # use a set O(1) to check for duplicates
        while len(sample) < n and l:  # until the sample is n long or l is exhausted
            top = l.pop()  # get the top value in l
            if top in sample:
                l.appendleft(top)  # add it to the back of l for a later sample
            else:
                sample.add(top)  # it isn't in sample already so use it
        yield sample  # yield the sample
You end up with:
for s in getSubLists(l,3):
print s
>>>
set([1, 2, 5])
set([1, 2, 3])
set([2, 4, 5])
set([2, 3, 4])
set([1, 4])
