I have a numpy array as follows,
arr = np.array([0.166667, 0., 0., 0.333333, 0., 0.166667, 0.166667, np.nan])
I wish to rank the above array in descending order so that the highest value gets rank 1 and np.nan gets the last rank, but without incrementing the rank for repeated values!
Expectation:
ranks = [2, 3, 3, 1, 3, 2, 2, 4]
i.e.
>>>>
1 0.333333
2 0.166667
2 0.166667
2 0.166667
3 0.0
3 0.0
3 0.0
4 -inf
What I have accomplished so far is below,
I used np.argsort twice and replaced np.nan with the lowest possible float, but the ranks still increment even for equal values!
# The Logic
arr = np.nan_to_num(arr, nan=float('-inf'))
ranks = list(np.argsort(np.argsort(arr)[::-1]) + 1)
# Pretty Print
sorted_ = sorted([(r, a) for a, r in zip(arr, ranks)], key=lambda v: v[0])
for r, a in sorted_:
    print(r, a)
>>>>
1 0.333333
2 0.166667
3 0.166667
4 0.166667
5 0.0
6 0.0
7 0.0
8 -inf
Any idea on how to keep the rank from incrementing across repeated values?
https://repl.it/#MilindDalvi/MidnightblueUnselfishCategories
Here's a pandas approach using Series.rank with method="min" and na_option='bottom':
import pandas as pd

s = pd.Series(arr).rank(method="min", na_option='bottom', ascending=False)
u = np.sort(s.unique())
s.map(dict(zip(u, range(len(u))))).add(1).values
# array([2, 3, 3, 1, 3, 2, 2, 4], dtype=int64)
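As an aside, method="dense" should produce the gap-free ranks in one step, making the remapping through unique unnecessary (a sketch, same arr and import as above):
pd.Series(arr).rank(method="dense", na_option='bottom', ascending=False).astype(int).values
# array([2, 3, 3, 1, 3, 2, 2, 4])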
Try something like this before the last loop (the tuples have to be rebuilt, since tuples are immutable):
k = 1
sorted_[0] = (k, sorted_[0][1])
for i in range(1, len(sorted_)):
    if sorted_[i][1] != sorted_[i - 1][1]:
        k += 1
    sorted_[i] = (k, sorted_[i][1])
Not necessarily a better way - just another way of approaching this issue (be careful, though: sorting a list that contains NaN is unreliable, since every comparison with NaN is False)
arr = sorted(np.array([0.166667, 0., 0., 0.333333, 0., 0.166667, 0.166667, np.nan]), reverse=True)
count = 1
mydict = {}
for a in arr:
    if a not in mydict:
        mydict[a] = count
        count += 1
for i in arr:
    print(mydict[i], i)
Here's one approach:
v = sorted(arr, reverse=True)
for i, j in enumerate(set(v)):
    if np.isnan(j):
        k = i + 1
print([list(set(v)).index(i) + 1 if not np.isnan(i) else k for i in arr])
Output
[2, 3, 3, 1, 3, 2, 2, 4]
numpy.unique sorts the unique values in ascending order, so using -arr gives you the correct (descending) order. The inverse index returned by return_inverse is then exactly your rank (minus one).
arr_u, inv = np.unique(-arr, return_inverse=True)
rank = inv + 1
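A quick check on the question's array (this relies on NumPy sorting NaN to the end; negating NaN still yields NaN, so it still lands last):
import numpy as np

arr = np.array([0.166667, 0., 0., 0.333333, 0., 0.166667, 0.166667, np.nan])
arr_u, inv = np.unique(-arr, return_inverse=True)
print(inv + 1)  # [2 3 3 1 3 2 2 4]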
Related
I'm new with Python and have a quite simple problem on paper but difficult to me in Python.
I have two samples of values (which are lists) :
X = [2, 2, 4, 6]
Y = [1, 3, 4, 5]
I have a concatenated list which is sorted as
Z     = [1, 2,   2,   3, 4,   4,   5, 6]
#rank:   1  2.5  2.5  4  5.5  5.5  7  8
I would like to get the sum of the ranks of the X values in Z. For this example, the ranks of 2, 2, 4 and 6 in Z are 2.5, 2.5, 5.5 and 8, so the sum is 18.5.
(The sum of the ranks of the Y values in Z is 1 + 4 + 5.5 + 7 = 17.5.)
Here is what I've done, but it doesn't work with these lists X and Y (it works if each value appears only once):
def funct(X, Z):
    rank = []
    for i in range(len(Z)):
        for j in range(len(X)):
            if Z[i] == X[j]:
                rank = rank + [(i + 1)]
    print(sum(rank))
    return
I would like to solve my problem without overly complicated functions (only loops and fairly simple ways to get a solution).
You can use a dictionary to keep track of the rank sums and counts once you've sorted the combined list.
X = [2, 2, 4, 6]
Y = [1, 3, 4, 5]
Z = sorted(X + Y)
ranksum = {}
counts = {}
for i, v in enumerate(Z):
    ranksum[v] = ranksum.get(v, 0) + (i + 1)  # add this element's rank
    counts[v] = counts.get(v, 0) + 1          # increment its count
Then, when you want to look up the average rank of an element, you need ranksum[v] / counts[v].
r = [ranksum[x] / counts[x] for x in X]
print(r)
# Out: [2.5, 2.5, 5.5, 8]
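For the question's rank sums this gives:
print(sum(r))  # 18.5
print(sum(ranksum[y] / counts[y] for y in Y))  # 17.5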
Here's a solution for how to build the list of ranks:
X = ...
Y = ...
Z = sorted(X + Y)
rank = [1]
z = Z[:1]
for i, e in enumerate(Z[1:], start=2):
    if e == z[-1]:
        rank[-1] += 0.5  # each extra tie shifts the shared rank up by 0.5
    else:
        rank.append(i)
        z.append(e)
Now you can convert that into a dictionary:
ranks = dict(zip(z, rank))
That will make lookup easier:
sum(ranks[e] for e in X)
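For the question's data both sums check out:
print(sum(ranks[e] for e in X))  # 18.5
print(sum(ranks[e] for e in Y))  # 17.5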
Here's another option where you build a dictionary of the rank indexes and then create a rank dictionary from there:
from collections import defaultdict
X = [2, 2, 4, 6]
Y = [1, 3, 4, 5]
Z = sorted(X + Y)
rank_indexes = defaultdict(list)
for i, v in enumerate(Z):
    rank_indexes[v].append(i + 1)
ranks = {k: sum(v) / len(v) for k, v in rank_indexes.items()}
print("Sum of X ranks:", sum([ranks[v] for v in X]))
print("Sum of Y ranks:", sum([ranks[v] for v in Y]))
Output:
Sum of X ranks: 18.5
Sum of Y ranks: 17.5
You can do the same thing without defaultdict, but it's slightly slower and I'd argue less Pythonic:
rank_indexes = {}
for i, v in enumerate(Z):
    rank_indexes.setdefault(v, []).append(i + 1)
ranks = {k: sum(v) / len(v) for k, v in rank_indexes.items()}
Consider array1 and array2, with:
array1 = [a1 a2 NaN ... an]
array2 = [[NaN b12 b13 ... b1n],
          [b21 NaN b23 ... b2n],
          ...]
Both arrays are numpy arrays. There is an easy way to compute the Euclidean distance between array1 and each row of array2:
EuclideanDistance = np.sqrt(((array1 - array2)**2).sum(axis=1))
What messes up this computation are the NaN values. Of course, I could easily replace NaN with some number. But instead, I want to do the following:
When I compare array1 with row_x of array2, I count the columns in which one of the arrays has NaN and the other doesn't. Let's assume the count is 3. I will then delete these columns from both arrays and compute the Euclidean distance between the two. In the end, I add a minus_value * count to the calculated distance.
Now, I cannot think of a fast and efficient way to do this. Can somebody help me?
Here are a few of my ideas:
minus = 1000
dist = np.zeros(shape=(array1.shape[0])) # this array will store the distance of array1 to each row of array2
array1 = np.repeat(array1, array2.shape[0], axis=0) # now array1 has the same dimensions as array2
for i in range(0, array1.shape[0]):
    boolarray = np.logical_or(np.isnan(array1[i]), np.isnan(array2[i]))
    count = boolarray.sum()
    deleteIdxs = boolarray.nonzero()  # this should give the indices where boolarray is True
    dist[i] = np.sqrt(((np.delete(array1[i], deleteIdxs, axis=0) - np.delete(array2[i], deleteIdxs, axis=0))**2).sum(axis=0))
    dist[i] = dist[i] + count*minus
These lines look more than ugly to me, however. Also, I keep getting an index error: apparently deleteIdxs contains an index that is out of range for array1. I don't know how this can even happen.
You can find all the indices where the value is nan using:
indices_1 = np.isnan(array1)
indices_2 = np.isnan(array2)
Which you can combine to:
indices_total = indices_1 + indices_2
And you can keep all the non-NaN values using:
array_1_not_nan = array1[~indices_total]
array_2_not_nan = array2[~indices_total]
I would write a function to handle the distance calculation. I am sure there is a faster and more efficient way to write this (list comprehensions, aggregations, etc.), but readability counts, right? :)
import numpy as np
def calculate_distance(fixed_arr, var_arr, penalty):
    s_sum = 0.0
    counter = 0
    for num_1, num_2 in zip(fixed_arr, var_arr):
        if np.isnan(num_1) or np.isnan(num_2):
            counter += 1
        else:
            s_sum += (num_1 - num_2) ** 2
    return np.sqrt(s_sum) + penalty * counter, counter
array1 = np.array([1, 2, 3, np.nan, 5, 6])
array2 = np.array(
    [
        [3, 4, 9, 3, 4, 8],
        [3, 4, np.nan, 3, 4, 8],
        [np.nan, 9, np.nan, 3, 4, 8],
        [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan],
    ]
)
dist = np.zeros(len(array2))
minus = 10
for index, arr in enumerate(array2):
    dist[index], _ = calculate_distance(array1, arr, minus)
print(dist)
You have to think about the value of the minus variable very carefully. Is adding an arbitrary value really useful?
As @Nathan suggested, a more resource-efficient version can easily be implemented.
fixed_arr = array1
penalty = minus
dist = [
    (
        lambda indices=(np.isnan(fixed_arr) | np.isnan(var_arr)): np.linalg.norm(
            fixed_arr[~indices] - var_arr[~indices]
        )
        + indices.sum() * penalty
    )()
    for var_arr in array2
]
print(dist)
However I would only try to implement something like this if I absolutely needed to (if it's the bottleneck). For all other times I would be happy to sacrifice some resources in order to gain some readability and extensibility.
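For completeness, here's a broadcasting-only sketch of the same penalty idea, reusing array1, array2 and minus from above (NaN propagates through subtraction, so one mask covers both arrays):
diff = array1 - array2                 # NaN wherever either operand is NaN
nan_mask = np.isnan(diff)              # per row: the columns to drop and penalize
diff = np.where(nan_mask, 0.0, diff)   # zero out those columns
dist = np.sqrt((diff ** 2).sum(axis=1)) + nan_mask.sum(axis=1) * minus
print(dist)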
You can filter out the columns containing nan with:
mask1 = np.isnan(arr1)
mask2 = np.isnan(arr2).any(0)
mask = ~(mask1 | mask2)
# the two filtered arrays
arr1[mask], arr2[:, mask]
I have two lists of elements
a = [1,2,3,2,3,1,1,1,1,1]
b = [3,1,2,1,2,3,3,3,3,3]
and I am trying to uniquely match the elements from a to b; my expected result is like this:
1: 3
2: 1
3: 2
So I tried to construct an assignment matrix and then use scipy.optimize.linear_sum_assignment
a = [1,2,3,2,3,1,1,1,1,1]
b = [3,1,2,1,2,3,3,3,3,3]
total_true = np.unique(a)
total_pred = np.unique(b)
matrix = np.zeros(shape=(len(total_pred), len(total_true)))
for n, i in enumerate(total_true):
    for m, j in enumerate(total_pred):
        matrix[n, m] = sum(1 for item in b if item==(i))
I expected the matrix to be:
1 2 3
1 0 2 0
2 0 0 2
3 6 0 0
But the output is:
[[2. 2. 2.]
[2. 2. 2.]
[6. 6. 6.]]
What mistake did I make here? Thank you very much.
You don't even need pandas to process this. Try using zip and dict:
In [42]: a = [1,2,3,2,3,1,1,1,1,1]
...: b = [3,1,2,1,2,3,3,3,3,3]
...:
In [43]: c =zip(a,b)
In [44]: dict(c)
Out[44]: {1: 3, 2: 1, 3: 2}
UPDATE: as the OP said, if we need to store all the values with the same key, we can use defaultdict:
In [58]: from collections import defaultdict
In [59]: d = defaultdict(list)
In [60]: for k, v in zip(a, b):  # note: c was already consumed by dict(c) above
    ...:     d[k].append(v)
    ...:
In [61]: d
Out[61]: defaultdict(list, {1: [3, 3, 3, 3, 3, 3], 2: [1, 1], 3: [2, 2]})
This row:
matrix[n, m] = sum(1 for item in b if item==(i))
counts the occurrences of i in b and saves the result to matrix[n, m]. Each cell of the matrix will contain either the number of 1's in b (i.e. 2) or the number of 2's in b (i.e. 2) or the number of 3's in b (i.e. 6). Notice that this value is completely independent of j, which means that the values in one row will always be the same.
In order to take j into consideration, try to replace the row with:
matrix[n, m] = sum(1 for x, y in zip(a, b) if (x, y) == (j, i))
Regarding your expected output: a matrix entry a(i, j) has i as the row index and j as the column index. Looking at a(3, 1) in your matrix, the result is 6, which means the combination (3, 1) occurs 6 times, with 3 coming from b and 1 from a. We can find all the matches from the two lists.
matches = list(zip(b, a))
Then we can find how many matches there are of a specific combination, for example a(3, 1).
result = matches.count((3,1))
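If you still want the assignment-matrix route from the question, here is a minimal sketch (assuming SciPy is available; linear_sum_assignment minimizes, so the counts are negated to maximize co-occurrence):
import numpy as np
from scipy.optimize import linear_sum_assignment

a = [1, 2, 3, 2, 3, 1, 1, 1, 1, 1]
b = [3, 1, 2, 1, 2, 3, 3, 3, 3, 3]
ua, ub = np.unique(a), np.unique(b)

# counts[i, j] = how often the pair (ua[i], ub[j]) occurs in (a, b)
counts = np.zeros((len(ua), len(ub)), dtype=int)
for x, y in zip(a, b):
    counts[np.searchsorted(ua, x), np.searchsorted(ub, y)] += 1

row, col = linear_sum_assignment(-counts)  # maximize total matched pairs
print(dict(zip(ua[row].tolist(), ub[col].tolist())))  # {1: 3, 2: 1, 3: 2}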
Consider the following variable length 2D array
[
[1, 2, 3],
[4, 5],
[6, 7, 8, 9]
]
How can I find the mean of the values along each column?
I want something like [(1+4+6)/3,(2+5+7)/3, (3+8)/2, 9/1]
So the end result would be [3.667, 4.667, 5.5, 9]
Is this possible using numpy?
I tried np.mean(x, axis=0), but numpy expects arrays of the same dimension.
Right now, I am popping the elements of each column and finding the mean. Is there a better way to achieve the result?
You could use pandas:
import pandas as pd
a = [[1, 2, 3],
     [4, 5],
     [6, 7, 8, 9]]
df = pd.DataFrame(a)
# 0 1 2 3
# 0 1 2 3 NaN
# 1 4 5 NaN NaN
# 2 6 7 8 9
df.mean()
# 0 3.666667
# 1 4.666667
# 2 5.500000
# 3 9.000000
# dtype: float64
Here is another solution that only uses numpy:
import numpy as np
nrows = len(a)
ncols = max(len(row) for row in a)
arr = np.zeros((nrows, ncols))
arr.fill(np.nan)
for jrow, row in enumerate(a):
    for jcol, col in enumerate(row):
        arr[jrow, jcol] = col
print(np.nanmean(arr, axis=0))
# array([ 3.66666667, 4.66666667, 5.5 , 9. ])
Very simple alternative approach using itertools.izip_longest() (Python 2; note that filter(None, sub_list) also drops zeros, so this assumes the lists contain no zeros):
>>> mean_list = []
>>> for sub_list in izip_longest(*my_list):
... filtered_list = filter(None, sub_list)
... mean_list.append(sum(filtered_list)/(len(filtered_list)*1.0))
...
>>> mean_list
[3.6666666666666665, 4.666666666666667, 5.5, 9.0]
where my_list equals:
[
[1, 2, 3],
[4, 5],
[6, 7, 8, 9]
]
Listed in this post is an almost vectorized approach using NumPy. We assign each element of the input list an ID based on its position. These IDs are then fed to np.bincount, which performs ID-based summations. Finally, we divide the sums by the length of each ID group to get the final average values.
Thus, we would have an implementation like so -
def variable_mean(a):
    vals = np.concatenate(a)
    lens = np.array([len(x) for x in a])  # list needed on Python 3, where map is lazy
    id_arr = np.ones(vals.size, dtype=int)
    id_arr[0] = 0
    id_arr[lens.cumsum()[:-1]] = -lens[:-1] + 1
    IDs = id_arr.cumsum()
    return np.bincount(IDs, vals) / np.bincount(IDs)
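A quick sanity check on the question's list:
a = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
print(variable_mean(a))  # [3.66666667 4.66666667 5.5        9.        ]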
Runtime test -
In [298]: # Setup input
...: N = 1000 # number of elems in input list
...: minL = 3 # min len of an element (list) in input list
...: maxL = 10 # max len of an element (list) in input list
...: a = [list(np.random.randint(0,9,(i))) \
...: for i in np.random.randint(minL,maxL,(N))]
...:
In [299]: %timeit pd.DataFrame(a).mean() ##Julien Spronck's pandas soln
100 loops, best of 3: 3.33 ms per loop
In [300]: %timeit variable_mean(a)
100 loops, best of 3: 2.36 ms per loop
In [301]: # Setup input
...: N = 1000 # number of elems in input list
...: minL = 3 # min len of an element (list) in input list
...: maxL = 100 # max len of an element (list) in input list
...: a = [list(np.random.randint(0,9,(i))) \
...: for i in np.random.randint(minL,maxL,(N))]
...:
In [302]: %timeit pd.DataFrame(a).mean() ##Julien Spronck's pandas soln
10 loops, best of 3: 27.1 ms per loop
In [303]: %timeit variable_mean(a)
100 loops, best of 3: 9.58 ms per loop
If you want to do it manually, here is what I would do:
Figure out the max array length:
max_length = 0
for array in arrays:
    if len(array) > max_length:
        max_length = len(array)
Pad all arrays to that length with None:
for array in arrays:
    while len(array) < max_length:
        array.append(None)
zip will group the columns:
columns = list(zip(*arrays))
columns == [(1, 4, 6), (2, 5, 7), (3, None, 8), (None, None, 9)]
Calculate the average as you would for any list:
for col in columns:
    count = 0
    total = 0.0
    for num in col:
        if num is not None:
            count += 1
            total += float(num)
    print("%s: Avg %s" % (col, total / count))
Or as a list comprehension after padding the arrays:
[sum(v for v in col if v is not None) / len([v for v in col if v is not None]) for col in zip(*arrays)]
Output:
(1, 4, 6): Avg 3.6666666666666665
(2, 5, 7): Avg 4.666666666666667
(3, None, 8): Avg 5.5
(None, None, 9): Avg 9.0
In Py3, zip_longest takes a fillvalue parameter:
In [1208]: ll=[
...: [1, 2, 3],
...: [4, 5],
...: [6, 7, 8, 9]
...: ]
In [1209]: list(itertools.zip_longest(*ll, fillvalue=np.nan))
Out[1209]: [(1, 4, 6), (2, 5, 7), (3, nan, 8), (nan, nan, 9)]
By filling with nan, I can use np.nanmean to take the mean ignoring the nan. nanmean turns its input (here _ from the previous line) into an array:
In [1210]: np.nanmean(_, axis=1)
Out[1210]: array([ 3.66666667, 4.66666667, 5.5 , 9. ])
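The same thing as a single expression (same np.nan fill value):
In [1211]: np.nanmean(list(itertools.zip_longest(*ll, fillvalue=np.nan)), axis=1)
Out[1211]: array([ 3.66666667,  4.66666667,  5.5       ,  9.        ])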
Assume I have the following arrays:
N = 8
M = 4
a = np.zeros(M)
b = np.random.randint(M, size=N) # contains indices for a
c = np.random.rand(N) # contains random values
I want to sum the values of c according to the indices provided in b, and store them in a. Writing a loop for this is trivial:
for i, v in enumerate(b):
    a[v] += c[i]
Since N can get quite big in my real-world problem I'd like to avoid using python loops, but I can't figure out how to write it as a numpy-statement. Can anyone help me out?
OK, here are some example values:
In [27]: b
Out[27]: array([0, 1, 2, 0, 2, 3, 1, 1])
In [28]: c
Out[28]:
array([ 0.15517108, 0.84717734, 0.86019899, 0.62413489, 0.24357903,
0.86015187, 0.85813481, 0.7071174 ])
In [30]: a
Out[30]: array([ 0.77930596, 2.41242955, 1.10377802, 0.86015187])
import numpy as np
N = 8
M = 4
b = np.array([0, 1, 2, 0, 2, 3, 1, 1])
c = np.array([ 0.15517108, 0.84717734, 0.86019899, 0.62413489, 0.24357903, 0.86015187, 0.85813481, 0.7071174 ])
a = ((np.mgrid[:M,:N] == b)[0] * c).sum(axis=1)
returns
array([ 0.77930597, 2.41242955, 1.10377802, 0.86015187])
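For reference, two common alternatives for this kind of index-based summation (a sketch using the same b, c and M; both avoid materializing the full M×N grid that mgrid builds):
# np.bincount sums the weights in c per index value in b
a = np.bincount(b, weights=c, minlength=M)

# or accumulate in place; np.add.at is unbuffered, so repeated indices in b all contribute
a = np.zeros(M)
np.add.at(a, b, c)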