I have two arrays:
index = [2,1,0,0,1,1,1,2]
values = [1,2,3,4,5,4,3,2]
I would like to produce:
[sum(v for i, v in zip(index, values) if i == ui) for ui in sorted(set(index))]
in the most efficient way possible.
my values are computed via autograd
doing a groupby in pandas is really not efficient because of the point above
I have to do it hundreds of times on the same index but with different values
len(values) ~ 10**7
len(set(index)) ~ 10**6
Counter(index).most_common(1)[0][1] ~ 1000
I think a pure numpy solution would be the best.
I tried to precompute the reduced version of index, and then do:
[values[l].sum() for l in reduced_index]
but it is not efficient enough.
Here is a minimal code sample:
import numpy as np
import autograd.numpy as anp
from autograd import grad
import pandas as pd
EASY = True
if EASY:
    index = np.random.randint(10, size=10**3)
    values = anp.random.rand(10**3) * 2 - 1
else:
    index = np.random.randint(1000, size=10**7)
    values = anp.random.rand(10**7) * 2 - 1

# doesn't work
def f1(values):
    return anp.exp(anp.bincount(index, weights=values)).sum()

index_unique = sorted(set(index))
index_map = {j: i for i, j in enumerate(index_unique)}
index_mapped = [index_map[i] for i in index]
index_lists = [[] for _ in range(len(index_unique))]
for i, j in enumerate(index_mapped):
    index_lists[j].append(i)

def f2(values):
    s = anp.array([values[l].sum() for l in index_lists])
    return anp.exp(s).sum()

ans = grad(f2)(values)
If your indices are non-negative integers, you can use np.bincount with values as weights:
np.bincount(index, weights=values)
# array([ 7., 14., 3.])
This gives the sum at each position from 0 to max(index).
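For instance, a minimal sketch with the arrays from the question (assuming the indices start at 0; positions that never occur can be dropped afterwards):
import numpy as np
index = np.array([2, 1, 0, 0, 1, 1, 1, 2])
values = np.array([1, 2, 3, 4, 5, 4, 3, 2])
sums = np.bincount(index, weights=values)   # sums[k] = sum of values where index == k
# array([ 7., 14.,  3.])
# if some integers between 0 and max(index) never appear, keep only those that do
present = np.unique(index)
sums_present = sums[present]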
I have a 1-D array arr and I need to compute the variance of all possible contiguous subvectors that begin at position 0. It may be easier to understand with a for loop:
np.random.seed(1)
arr = np.random.normal(size=100)
res = []
for i in range(1, arr.size + 1):
    subvector = arr[:i]
    var = np.var(subvector)
    res.append(var)
Is there any way to compute res without the for loop?
Yes. Since var = sum_of_squares / N - mean**2 and mean = sum / N, you can use cumsum to get the cumulative sums:
cumsum = np.cumsum(arr)
cummean = cumsum/(np.arange(len(arr)) + 1)
sq = np.cumsum(arr**2)
# this is the population variance (ddof=0); adjust here if you need a different ddof
cumvar = sq/(np.arange(len(arr))+1) - cummean**2
np.allclose(res, cumvar)
# True
With pandas, you could use expanding:
import pandas as pd
pd.Series(arr).expanding().var(ddof=0).values
NB: one advantage is that you can use var's parameters (ddof defaults to 1), and of course you can apply many other expanding methods.
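If you do want the sample variance (ddof=1) from the pure NumPy version, here is a minimal sketch of the correction (my own, based on the formula above):
import numpy as np
import pandas as pd
np.random.seed(1)
arr = np.random.normal(size=100)
n = np.arange(1, len(arr) + 1)
cummean = np.cumsum(arr) / n
cumvar0 = np.cumsum(arr**2) / n - cummean**2   # population variance (ddof=0)
# Bessel's correction: multiply by n / (n - 1); undefined for a single observation
cumvar1 = np.full_like(cumvar0, np.nan)
cumvar1[1:] = cumvar0[1:] * n[1:] / (n[1:] - 1)
# pandas' expanding().var() uses ddof=1 by default and gives NaN for the first point
np.allclose(cumvar1[1:], pd.Series(arr).expanding().var().values[1:])
# True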
Following up on this question: Compute the pairwise distance in scipy with missing values
Test case: I want to compute the pairwise distance of series of different lengths that are grouped together, and I have to do it in the most efficient way possible (using the Euclidean distance).
One way that makes it work could be this:
import pandas as pd
import numpy as np
from scipy.spatial.distance import pdist
a = pd.DataFrame(np.random.rand(10, 4), columns=['a','b','c','d'])
a.loc[0, 'a'] = np.nan
a.loc[1, 'a'] = np.nan
a.loc[0, 'c'] = np.nan
a.loc[1, 'c'] = np.nan
def dropna_on_the_fly(x, y):
    return np.sqrt(np.nansum((x - y)**2))

pdist(a.values, dropna_on_the_fly)
but I feel this could be very inefficient, as the built-in metrics of pdist are internally optimized whereas a custom function is simply called back from Python.
I have a hunch that there is a vectorized solution in numpy in which I broadcast the subtraction and then use np.nansum for a NaN-resistant sum, but I am unsure how to proceed.
Inspired by this post, here are two solutions.
Approach #1 : The vectorized solution would be -
ar = a.values
r,c = np.triu_indices(ar.shape[0],1)
out = np.sqrt(np.nansum((ar[r] - ar[c])**2,1))
Approach #2 : The memory-efficient and more performant one for large arrays would be -
ar = a.values
b = np.where(np.isnan(ar), 0, ar)
mask = ~np.isnan(ar)

n = b.shape[0]
N = n*(n-1)//2
idx = np.concatenate(([0], np.arange(n-1, 0, -1).cumsum()))
start, stop = idx[:-1], idx[1:]

out = np.empty(N, dtype=b.dtype)
for j, i in enumerate(range(n-1)):
    dif = b[i, None] - b[i+1:]
    mask_j = (mask[i] & mask[i+1:])
    masked_vals = mask_j * dif
    out[start[j]:stop[j]] = np.einsum('ij,ij->i', masked_vals, masked_vals)
    # or simply: ((mask_j * dif)**2).sum(1)
out = np.sqrt(out)
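As a sanity check, a minimal sketch (reusing the DataFrame a and the dropna_on_the_fly metric from the question) compares Approach #1 against the slow pdist call:
ref = pdist(a.values, dropna_on_the_fly)   # slow reference with the Python-level metric

r, c = np.triu_indices(a.shape[0], 1)
out1 = np.sqrt(np.nansum((a.values[r] - a.values[c])**2, axis=1))

print(np.allclose(ref, out1))   # expected: True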
I am coming from a java background and new to numpy and pandas.
I want to translate the following pseudo code into python.
theta[0...D] - numpy
input[1...D][0...N-1] - Pandas data frame
PSEUDO CODE:
mean = theta[0]
for (row = 0 to N-1)
    for (col = 1 to D)
        mean += theta[col] * input[row][col]
Implementation:
class simulator:
    theta = np.array([])
    stddev = 0

    def __init__(self, v_coefficents, v_stddev):
        self.theta = v_coefficents
        self.stddev = v_stddev

    def sim(self, input):
        mean = self.theta[0]
        D = input.shape[0]
        N = input.shape[1]
        for index, row in input.iterrows():
            mean = self.theta[0]
            for i in range(D):
                mean += self.theta[i+1] * row['y']
I am concerned with the iteration in the last line of code:
mean += self.theta[i+1] * row['y']
Since you are working with NumPy, I would suggest extracting the pandas dataframe as an array and then we would have the luxury of working with theta and the extracted version of input both as arrays.
Thus, starting off we would have the array as -
input_arr = input.values
Then, the translation of the pseudo code would be -
mean = theta[0]
for row in range(N):
    for col in range(1, D+1):
        mean += theta[col] * input_arr[row, col]
To perform the sum-reductions, with NumPy supporting vectorized operations and broadcasting, we would have the output with simply -
mean = theta[0] + (theta[1:D+1]*input_arr[:,1:D+1]).sum()
This could be optimized further with np.dot as a matrix-multiplication, like so -
mean = theta[0] + np.dot(input_arr[:,1:D+1], theta[1:D+1]).sum()
Please note that if you meant that input has a length of D-1, then we need a few edits:
Loopy code would have: input_arr[row, col-1] instead of input_arr[row, col].
Vectorized code would have: input_arr instead of input_arr[:, 1:D+1].
Sample run based on comments -
In [71]: df = {'y' : [1,2,3,4,5]}
...: data_frame = pd.DataFrame(df)
...: test_coefficients = np.array([1,2,3,4,5,6])
...:
In [79]: input_arr = data_frame.values
...: theta = test_coefficients
...:
In [80]: theta[0] + np.dot(input_arr[:,0], theta[1:])
Out[80]: 71
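Folding the vectorized expression back into the class, a hypothetical sketch (class and argument names are mine; it assumes, as in the general translation above, that columns 1..D of the extracted array line up with theta[1..D]) could look like:
import numpy as np

class Simulator:
    def __init__(self, v_coefficients, v_stddev):
        self.theta = np.asarray(v_coefficients)
        self.stddev = v_stddev

    def sim(self, input_df):
        # extract the DataFrame once, then reduce with a single dot product
        input_arr = input_df.values
        D = len(self.theta) - 1
        return self.theta[0] + np.dot(input_arr[:, 1:D+1], self.theta[1:D+1]).sum()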
Is there an easy/built-in way to get the element-wise maximum of two (or ideally more) sparse matrices? I.e. a sparse equivalent of np.maximum.
This did the trick:
def maximum(A, B):
    BisBigger = A - B
    BisBigger.data = np.where(BisBigger.data < 0, 1, 0)
    return A - A.multiply(BisBigger) + B.multiply(BisBigger)
No, there's no built-in way to do this in scipy.sparse. The easy solution is
np.maximum(X.A, Y.A)
but this is obviously going to be very memory-intensive when the matrices have large dimensions and it might crash your machine. A memory-efficient (but by no means fast) solution is
from scipy.sparse import coo_matrix

# convert to COO, if necessary
X = X.tocoo()
Y = Y.tocoo()

Xdict = dict(((i, j), v) for i, j, v in zip(X.row, X.col, X.data))
Ydict = dict(((i, j), v) for i, j, v in zip(Y.row, Y.col, Y.data))

keys = list(set(Xdict.keys()).union(Ydict.keys()))

XmaxY = [max(Xdict.get((i, j), 0), Ydict.get((i, j), 0)) for i, j in keys]
XmaxY = coo_matrix((XmaxY, list(zip(*keys))))
Note that this uses pure Python instead of vectorized idioms. You can try shaving some of the running time off by vectorizing parts of it.
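For instance, here is a vectorized sketch of my own (not from the answer above) that keeps the same "compete against an implicit zero" semantics, assuming COO inputs of equal shape:
import numpy as np
from scipy.sparse import coo_matrix

def sparse_elementwise_max(X, Y):
    X, Y = X.tocoo(), Y.tocoo()
    # stack the coordinates and values of both matrices
    rows = np.concatenate([X.row, Y.row])
    cols = np.concatenate([X.col, Y.col])
    vals = np.concatenate([X.data, Y.data])
    # group duplicate coordinates and take the maximum per group,
    # starting from 0 so that implicit zeros take part in the maximum
    coords = np.stack([rows, cols], axis=1)
    uniq, inv = np.unique(coords, axis=0, return_inverse=True)
    out = np.zeros(len(uniq), dtype=vals.dtype)
    np.maximum.at(out, inv.ravel(), vals)
    return coo_matrix((out, (uniq[:, 0], uniq[:, 1])), shape=X.shape)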
Here's another memory-efficient solution that should be a bit quicker than larsmans'. It's based on finding the set of unique indices for the nonzero elements in the two arrays using code from Jaime's excellent answer here.
import numpy as np
from scipy import sparse
def sparsemax(X, Y):
    # the indices of all non-zero elements in both arrays
    idx = np.hstack((X.nonzero(), Y.nonzero()))

    # find the set of unique non-zero indices
    idx = tuple(unique_rows(idx.T).T)

    # take the element-wise max over only these indices
    X[idx] = np.maximum(X[idx].A, Y[idx].A)

    return X

def unique_rows(a):
    void_type = np.dtype((np.void, a.dtype.itemsize * a.shape[1]))
    b = np.ascontiguousarray(a).view(void_type)
    idx = np.unique(b, return_index=True)[1]
    return a[idx]
Testing:
def setup(n=1000, fmt='csr'):
    return sparse.rand(n, n, format=fmt), sparse.rand(n, n, format=fmt)

X, Y = setup()
Z = sparsemax(X, Y)
print(np.all(Z.A == np.maximum(X.A, Y.A)))
# True

%%timeit X, Y = setup()
sparsemax(X, Y)
# 100 loops, best of 3: 4.92 ms per loop
The latest scipy (0.13.0) defines element-wise booleans for sparse matrices. So:
BisBigger = B>A
A - A.multiply(BisBigger) + B.multiply(BisBigger)
np.maximum does not (yet) work because it uses np.where, which is still trying to get the truth value of an array.
Curiously B>A returns a boolean dtype, while B>=A is float64.
Here is a function that returns a sparse matrix that is element-wise maximum of two sparse matrices. It implements the answer by hpaulj:
def sparse_max(A, B):
    """
    Return the element-wise maximum of sparse matrices `A` and `B`.
    """
    AgtB = (A > B).astype(int)
    M = AgtB.multiply(A - B) + B
    return M
Testing:
A = sparse.csr_matrix(np.random.randint(-9,10, 25).reshape((5,5)))
B = sparse.csr_matrix(np.random.randint(-9,10, 25).reshape((5,5)))
M = sparse_max(A, B)
M2 = sparse_max(B, A)
# Test symmetry:
print((M.A == M2.A).all())
# Test that M is larger or equal to A and B, element-wise:
print((M.A >= A.A).all())
print((M.A >= B.A).all())
from scipy import sparse
from numpy import array
I = array([0,3,1,0])
J = array([0,3,1,2])
V = array([4,5,7,9])
A = sparse.coo_matrix((V,(I,J)),shape=(4,4))
A.data.max()
9
If you haven't already, you should try out IPython. You could have saved yourself time by making your sparse matrix A and then simply typing A. followed by Tab; this will print a list of methods that you can call on A. From this you would see that A.data gives you the non-zero entries as an array, and hence you just want the maximum of this.
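One hedged caveat (mine, not part of the answer): A.data only holds the stored entries, so if every stored value is negative the true element-wise maximum is 0 whenever the matrix also has implicit zeros. A small sketch that accounts for this:
def sparse_overall_max(A):
    # maximum over stored entries, letting implicit zeros compete if any exist
    stored_max = A.data.max() if A.nnz else 0
    has_implicit_zeros = A.nnz < A.shape[0] * A.shape[1]
    return max(stored_max, 0) if has_implicit_zeros else stored_max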
Assume you have an array of values that will need to be summed together
d = [1,1,1,1,1]
and a second array specifying which elements need to be summed together
i = [0,0,1,2,2]
The result will be stored in a new array of size max(i)+1. So for example i=[0,0,0,0,0] would be equivalent to summing all the elements of d and storing the result at position 0 of a new array of size 1.
I tried to implement this using
c = zeros(max(i)+1)
c[i] += d
However, the += operation adds each element only once, thus giving the unexpected result of
[1,1,1]
instead of
[2,1,2]
How would one correctly implement this kind of summation?
If I understand the question correctly, there is a fast function for this (as long as the data array is 1d)
>>> i = np.array([0,0,1,2,2])
>>> d = np.array([0,1,2,3,4])
>>> np.bincount(i, weights=d)
array([ 1., 2., 7.])
np.bincount returns an array covering all integers in range(max(i) + 1), even if some counts are zero.
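For example, a small illustration of the zero-padding (indices 1 and 2 never occur here):
>>> np.bincount(np.array([0, 0, 3]), weights=np.array([1.5, 2.5, 4.0]))
array([4., 0., 0., 4.])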
Juh_'s comment is the most efficient solution. Here's working code:
import numpy as np
import scipy.ndimage as ni
i = np.array([0,0,1,2,2])
d = np.array([0,1,2,3,4])
n_indices = i.max() + 1
print(ni.sum(d, i, np.arange(n_indices)))
This solution should be more efficient for large arrays (it iterates over the possible index values instead of the individual entries of i):
import numpy as np
i = np.array([0,0,1,2,2])
d = np.array([0,1,2,3,4])
i_max = i.max()
c = np.empty(i_max + 1)
for j in range(i_max + 1):
    c[j] = d[i == j].sum()

print(c)
# [1. 2. 7.]
def zeros(ilen):
    r = []
    for i in range(0, ilen):
        r.append(0)
    return r

i_list = [0,0,1,2,2]
d = [1,1,1,1,1]

result = zeros(max(i_list) + 1)
for pos, index in enumerate(i_list):
    result[index] += d[pos]

print(result)
In the general case, when you want to sum submatrices by labels, you can use the following code:
import numpy as np
from scipy.sparse import coo_matrix
def labeled_sum1(x, labels):
    P = coo_matrix((np.ones(x.shape[0]), (labels, np.arange(len(labels)))))
    res = P.dot(x.reshape((x.shape[0], np.prod(x.shape[1:]))))
    return res.reshape((res.shape[0],) + x.shape[1:])

def labeled_sum2(x, labels):
    res = np.empty((np.max(labels) + 1,) + x.shape[1:], x.dtype)
    for i in np.ndindex(x.shape[1:]):
        res[(...,) + i] = np.bincount(labels, x[(...,) + i])
    return res
The first method uses sparse matrix multiplication. The second one is a generalization of user333700's answer. Both methods have comparable speed:
x = np.random.randn(100000, 10, 10)
labels = np.random.randint(0, 1000, 100000)
%time res1 = labeled_sum1(x, labels)
%time res2 = labeled_sum2(x, labels)
np.all(res1 == res2)
Output:
Wall time: 73.2 ms
Wall time: 68.9 ms
True