Efficient double iteration over array - python

I have the following code, where points is an N-by-3 (many rows, 3 columns) list of lists, coorRadius is the radius within which I want to look for local coordinate maxima, and localCoordinateMaxima is a list where I store the indices i of these maxima:
for i,x in enumerate(points):
    check = 1
    for j,y in enumerate(points):
        if linalg.norm(x-y) <= coorRadius and x[2] < y[2]:
            check = 0
    if check == 1:
        localCoordinateMaxima.append(i)
print localCoordinateMaxima
Unfortunately, this takes forever when I have several thousand points, so I am looking for a way to speed it up. I tried to do it with an if all(...) condition, but I didn't manage it, and I am not even sure it would be more efficient. Could you guys propose a way to make it faster?
Best!

Your search for neighbors is best done using a KDTree.
import numpy as np
from scipy.spatial import cKDTree
tree = cKDTree(points)
pairs = tree.query_pairs(coorRadius)
Now pairs is a set of two-item tuples (i, j), where i < j and points[i] and points[j] are within coorRadius of each other. You can simply iterate over these, which will likely be a much smaller set than the len(points)**2 pairs you are currently iterating over:
is_maximum = [True] * len(points)
for i, j in pairs:
    if points[i][2] < points[j][2]:
        is_maximum[i] = False
    elif points[j][2] < points[i][2]:
        is_maximum[j] = False
localCoordinateMaxima, = np.nonzero(is_maximum)
This can be further sped up by vectorizing it:
points = np.asarray(points)  # the fancy indexing below needs an array
pairs = np.array(list(pairs))
pairs = np.vstack((pairs, pairs[:, ::-1]))
pairs = pairs[np.argsort(pairs[:, 0])]
is_z_smaller = points[pairs[:, 0], 2] < points[pairs[:, 1], 2]
bins, = np.nonzero(pairs[:-1, 0] != pairs[1:, 0])
bins = np.concatenate(([0], bins + 1))
# a point is a maximum if none of its neighbors has a larger z
is_maximum = np.logical_and.reduceat(~is_z_smaller, bins)
localCoordinateMaxima, = np.nonzero(is_maximum)
The above code assumes that every point has at least one neighbor within coorRadius. If that is not the case, you need to slightly complicate things:
points = np.asarray(points)
pairs = np.array(list(pairs))
pairs = np.vstack((pairs, pairs[:, ::-1]))
pairs = pairs[np.argsort(pairs[:, 0])]
is_z_smaller = points[pairs[:, 0], 2] < points[pairs[:, 1], 2]
bins, = np.nonzero(pairs[:-1, 0] != pairs[1:, 0])
bins = np.concatenate(([0], bins + 1))
has_neighbors = pairs[bins, 0]  # points with at least one neighbor within coorRadius
is_maximum = np.ones((len(points),), bool)
is_maximum[has_neighbors] &= np.logical_and.reduceat(~is_z_smaller, bins)
localCoordinateMaxima, = np.nonzero(is_maximum)

Here is your code, just tightened up a bit:
for i, x in enumerate(points):
    x2 = x[2]
    for y in points:
        if linalg.norm(x-y) <= coorRadius and x2 < y[2]:
            break
    else:
        localCoordinateMaxima.append(i)
print localCoordinateMaxima
Changes:
Factor out the x[2] lookup.
Drop the unused j variable.
Add a break for an early-out.
Use a for-else construct instead of a flag variable.

With numpy this is not too hard. You can do it with a single (long) expression, if you want:
import numpy as np
points = np.array(points)
localCoordinateMaxima, = np.where(np.all((np.linalg.norm(points[:, None] - points[None, :], axis=-1) >
                                           coorRadius) |
                                          (points[:, None, 2] >= points[None, :, 2]),
                                         axis=-1))
The algorithm your current code implements is essentially where(not(any(w <= x and y < z))). If you distribute the not through the logical operations inside of it (using De Morgan's laws), you can avoid one level of nesting by flipping the inequalities, getting where(all(w > x or y >= z)).
w is the matrix of norms of the pairwise differences of the points (obtained by broadcasting). x is the constant coorRadius. y and z are both arrays holding the third coordinates of the points, shaped so that they broadcast together into the same shape as w.


Using python/numpy to create a complex matrix

Using python/numpy, I would like to create a 2D matrix M whose components are:
M_{i,j} = \sum_{k=g(i,j)}^{h(i,j)} f(i,j,k)
I know I can do this with a bunch of for loops but is there a better way to do this by using numpy (not using for loops)?
This is how I tried it, which ends up giving me a ValueError.
I tried to first define a function that takes the sum over k:
def sum_function(i,j):
    initial_array = np.arange(g(i,j), h(i,j)+1)
    applied_array = f(i,j,initial_array)
    return applied_array.sum()
then I tried to create the M matrix with np.mgrid as follows:
ii, jj = np.mgrid[start:fin, start:fin]
M_matrix = sum_function(ii,jj)
--
(Edited)
Let me write down the concrete form of a matrix as an example:
M_{i,j} = \sum_{k=min(i,j)}^{i+j}\sin{\left( (i+j)^k \right)}
If i,j = 0,1, then this matrix is 2 by 2 and its form will be
\bigl(\begin{smallmatrix}
\sin(0) & \sin(1) \\
\sin(1) & \sin(2)+\sin(4)
\end{smallmatrix}\bigr)
Now if the matrix gets really big, how would I create this matrix without using for loops?
To simplify thinking, let's ravel the i,j dimensions into one ij dimension. Can we evaluate 3 arrays:
G = g(ij) # for all ij values
H = h(ij)
F = f(ij, kk) # for all ij, and all kk
In other words, can g,h,f be evaluated at multiple values, to produce whole-arrays?
If the G and H values were the same for all ij, or subsets (preferably slices), then
F[:, G:H].sum(axis=1)
would be the value for all ij.
If the H-G difference, the size of each slice, were the same, then we could construct a 2D indexing array GH such that
F[:, GH].sum(axis=1)
In other words, we are summing constant-size windows of the F rows (a small sketch of this case follows below).
But if the H-G differences vary across ij, I think we are stuck with doing the sum for each ij element separately - with Python-level loops, or ones compiled with numba or cython.
I think I found an answer to this myself. I first create a 3D array F_{i,j,k} = f(i,j,k), and then create a mask_array whose component is True if g(i,j) <= k <= h(i,j) and False otherwise. Then I compute the element-wise product of these two arrays, F*mask_array, and take the sum over the k axis.
For example, this matrix can be efficiently created by the following code.
M_{i,j} = \sum_{k=min(i,j)}^{i+j}\sin{\left( (i+j)^k \right)}
# in this example, g(i,j) = min(i,j), h(i,j) = i+j, and f(i,j,k) = sin((i+j)^k)
# 0<= i, j <= 2
#kk should range from min g(i,j) to max h(i,j)
ii, jj, kk = np.mgrid[0:3,0:3,0:5]
# k >= g(i,j)
frm1 = kk >= jj
frm2 = kk >= ii
frm = np.logical_or(frm1,frm2)
# k <= h(i,j)
to = kk <= ii+jj
#mask
k_mask = np.logical_and(frm,to)
def f(i,j,k):
    return np.sin((i+j)**k)
M_before_mask = f(ii,jj,kk)
#Matrix created
M_matrix = (M_before_mask*k_mask).sum(axis=2)

Faster way to threshold a 4-D numpy array

I have a 4D numpy array of size (98,359,256,269) that I want to threshold.
Right now, I have two separate lists that keep the coordinates of the first 2 dimensions and the last 2 dimensions (mag_ang for the first 2 and indices for the last 2).
size of indices : (61821,2)
size of mag_ang : (35182,2)
Currently, my code looks like this:
inner_points = []
for k in indices:
    x = k[0]
    y = k[1]
    for i,ctr in enumerate(mag_ang):
        mag = ctr[0]
        ang = ctr[1]
        if X[mag][ang][x][y] > 10:
            inner_points.append((y,x))
This code works, but it's pretty slow, and I wonder if there's a more pythonic/faster way to do this?
(EDIT: added a second alternate method)
Use numpy multi-array indexing:
import time
import numpy as np
n_mag, n_ang, n_x, n_y = 10, 12, 5, 6
shape = n_mag, n_ang, n_x, n_y
X = np.random.random_sample(shape) * 20
nb_indices = 100 # 61821
indices = np.c_[np.random.randint(0, n_x, nb_indices), np.random.randint(0, n_y, nb_indices)]
nb_mag_ang = 50 # 35182
mag_ang = np.c_[np.random.randint(0, n_mag, nb_mag_ang), np.random.randint(0, n_ang, nb_mag_ang)]
# original method
inner_points = []
start = time.time()
for x, y in indices:
    for mag, ang in mag_ang:
        if X[mag][ang][x][y] > 10:
            inner_points.append((y, x))
end = time.time()
print(end - start)
# faster method 1:
inner_points_faster1 = []
start = time.time()
for x, y in indices:
    if np.any(X[mag_ang[:, 0], mag_ang[:, 1], x, y] > 10):
        inner_points_faster1.append((y, x))
end = time.time()
print(end - start)
# faster method 2:
start = time.time()
# note: depending on the real size of mag_ang and indices, you may wish to do this the other way round ?
found = X[:, :, indices[:, 0], indices[:, 1]][mag_ang[:, 0], mag_ang[:, 1], :] > 10
# 'found' shape is (nb_mag_ang x nb_indices)
assert found.shape == (nb_mag_ang, nb_indices)
matching_indices_mask = found.any(axis=0)
inner_points_faster2 = indices[matching_indices_mask, :]
end = time.time()
print(end - start)
# finally assert equality of findings
inner_points = np.unique(np.array(inner_points))
inner_points_faster1 = np.unique(np.array(inner_points_faster1))
inner_points_faster2 = np.unique(inner_points_faster2)
assert np.array_equal(inner_points, inner_points_faster1)
assert np.array_equal(inner_points, inner_points_faster2)
yields
0.04685807228088379
0.0
0.0
(of course if you increase the shape the time will not be zero for the second and third)
Final note: here I use "unique" at the end, but it might be wise to do it upfront for the indices and mag_ang arrays (unless you are sure that they are unique already), for example:
Use numpy directly. If indices and mag_ang are numpy arrays of two columns each for the appropriate coordinate:
(x, y), (mag, ang) = indices.T, mag_ang.T
index_matrix = np.array(np.meshgrid(mag, ang, x, y, indexing='ij')).T.reshape(-1, 4)
hits = X[index_matrix[:, 0], index_matrix[:, 1], index_matrix[:, 2], index_matrix[:, 3]] > 10
inner_mag, inner_ang, inner_x, inner_y = index_matrix[hits].T
Now the inner_... variables hold arrays for each coordinate. To get a single list of pairs, you can zip inner_y and inner_x.
Here are a few vectorized ways leveraging broadcasting -
thresh = 10
mask = X[mag_ang[:,0],mag_ang[:,1],indices[:,0,None],indices[:,1,None]]>thresh
r = np.where(mask)[0]
inner_points_out = indices[r][:,::-1]
For larger arrays, we can compare first and then index to get the mask -
mask = (X>thresh)[mag_ang[:,0],mag_ang[:,1],indices[:,0,None],indices[:,1,None]]
If you are only interested in the unique coordinates of indices, use the mask directly -
inner_points_out = indices[mask.any(1)][:,::-1]
For large arrays, we can also leverage multi-cores with numexpr module.
Thus, first off import the module -
import numexpr as ne
Then, replace (X>thresh) with ne.evaluate('X>thresh') in the computation(s) listed earlier.
Use np.where
inner = np.where(X > 10)
a, b, x, y = inner
inner_points = np.vstack([y, x]).T

How to do vector-matrix multiplication with conditions?

I want to obtain a list (or array, doesn't matter) of A from the following formula:
A_i = X_(k!=i) * S_(k!=i) * X'_(k!=i)
where:
X is a vector (and X' is the transpose of X), S is a matrix, and the subscript k is defined as {k=1,2,3,...n| k!=i}.
X = [x1, x2, ..., xn]
S = [[s11,s12,...,s1n],
     [s21,s22,...,s2n],
     [ .., .., ..., ..],
     [sn1,sn2,...,snn]]
I take the following as an example:
X = [0.1,0.2,0.3,0.5]
S = [[0.4,0.1,0.3,0.5],
     [2,1.5,2.4,0.6],
     [0.4,0.1,0.3,0.5],
     [2,1.5,2.4,0.6]]
So, eventually, I would get a list of four values for A.
I did this:
import numpy as np
x = np.array([0.1,0.2,0.3,0.5])
s = np.matrix([[0.4,0.1,0.3,0.5],[2,1.5,2.4,0.6],[0.4,0.1,0.3,0.5],[2,1.5,2.4,0.6]])
for k in range(x) if k!=i
A = (x.dot(s)).dot(np.transpose(x))
print (A)
I am confused about how to use a conditional 'for' loop. Could you please help me solve it? Thanks.
EDIT:
Just to explain more. If you take i=1, then the formula will be:
A_1 = X_(k!=1) * S_(k!=1) * X'_(k!=1)
So any array (or value) associated with subscript 1 will be deleted in X and S, like:
X = [0.2,0.3,0.5]
S = [[1.5,2.4,0.6],
     [0.1,0.3,0.5],
     [1.5,2.4,0.6]]
Step 1: correctly calculate A_i
Step 2: collect them into A
I assume what you want to calculate is
A_i = \sum_{k \neq i} \sum_{l \neq i} x_k s_{kl} x_l
An easy way to do so is to mask away the entries using masked arrays. This way we don't need to delete or copy any matrices.
# sample
x = np.array([1,2,3,4])
s = np.diag([4,5,6,7])
# we will use masked arrays to remove k=i
vec_mask = np.zeros_like(x)
matrix_mask = np.zeros_like(s)
i = 0 # start
# set masks
vec_mask[i] = 1
matrix_mask[i] = matrix_mask[:,i] = 1
s_mask = np.ma.array(s, mask=matrix_mask)
x_mask = np.ma.array(x, mask=vec_mask)
# reduced product; remember to use np.ma.inner instead of np.inner
Ai = np.ma.inner(np.ma.inner(x_mask, s_mask), x_mask.T)
vec_mask[i] = 0
matrix_mask[i] = matrix_mask[:,i] = 0
As terms of 0 don't add to the sum, we can actually skip masking the matrix and just mask the vector:
# we will use masked arrays to remove k=i
mask = np.zeros_like(x)
i = 0 # start
# set masks
mask[i] = 1
x_mask = np.ma.array(x, mask=mask)
# reduced product
Ai = np.ma.inner(np.ma.inner(x_mask, s), x_mask.T)
# unset mask
mask[i] = 0
The final step is to assemble A out of the A_is, so in total we get
x = np.array([1,2,3,4])
s = np.diag([4,5,6,7])
mask = np.zeros_like(x)
x_mask = np.ma.array(x, mask=mask)
A = []
for i in range(len(x)):
    x_mask.mask[i] = 1
    Ai = np.ma.inner(np.ma.inner(x_mask, s), x_mask.T)
    A.append(Ai)
    x_mask.mask[i] = 0
A_vec = np.array(A)
Implementing a matrix/vector product using loops will be rather slow in Python. Therefore, I suggest actually deleting the rows/columns/elements at the given index and performing the fast built-in dot product without any explicit loops:
i = 0 # don't forget Python's indices are zero-based
x_ = np.delete(X, i) # remove element
s_ = np.delete(S, i, axis=0) # remove row
s_ = np.delete(s_, i, axis=1) # remove column
result = x_.dot(s_).dot(x_) # no need to transpose a 1-D array

Python: how to improve a normalization algorithm?

I have a list X containing the data produced by N different users, so the user index is i=0,1,....,N-1. Each entry Xi has a different length.
I want to normalize the values of each user Xi over the global dataset X.
This is what I am doing. First of all I create a 1D list containing all the data, so:
tmp = list()
for i in range(0,len(X)):
    tmp.extend(X[i])
then I convert it to an array and I remove outliers and NaN.
A = np.array(tmp)
A = A[~np.isnan(A)] #remove NaN
tr = np.percentile(A,95)
A = A[A < tr] #remove outliers
and then I create the histogram of this dataset
p, x = np.histogram(A, bins=10) # bin it into n = N/10 bins
finally, I normalize the values of each user over the histogram I created, so:
Xn = list()
for i in range(0,len(X)):
    tmp = np.array(X[i])
    tmp = tmp[tmp < tr]
    tmp = np.histogram(tmp, x)
    Xn.append(tmp[0]/sum(tmp[0]))
My data set is very large and this process could take a while. I am wondering if there is a better way to do that, or a package for it.
For the first part, if each element X[i] of X is a list, you may be able to use sum, and then convert directly to an array, or use concatenate:
# Example X
X = [list(range(i)) for i in range(3, 19)] + [[2., np.NaN]]
# Build array with sum
A = np.array(sum(X, []))
# Build array with concatenate
A = np.concatenate(X)
The latter is more readable.
For the second part, I would store indices of the user to which each data point belongs.
idx = np.concatenate([np.full(len(x), i, int) for i,x in enumerate(X)])
tr = np.nanpercentile(A,95)
ok = A < tr # this excludes outliers, +Inf and NaN
idx = idx[ok]
A = A[ok]
Finally, you can compute x from the range of A, then use digitize on A and get the bins of each element. Then each pair (idx,bin-1) identifies the datum of a given user belonging to a given bin. You can then sum all these contributions using the at method of the ufunc add (see documentation). Finally, you divide by the sum over bins to normalize.
x = np.linspace(A.min(), A.max(), 10+1)
bin = np.digitize(A, x)
Xn = np.zeros((len(X), len(x)))
np.add.at(Xn, (idx,bin-1), 1)
Xn /= Xn.sum(axis=1)[:,np.newaxis]

How to calculate the numpy.histogramdd bins coordinates

I'm using numpy.histogramdd to bin 3D points. My goal is to count how many bins contain items and whose distance from a specific point is less than a constant. numpy.histogramdd returns binedges, which is "a list of D arrays describing the bin edges for each dimension", but it's not straightforward to find the bin coordinates from it. Is there a better way (with fewer loops) to do it than my following approach?
hist_bin_centers_list = [binedges[d][:-1] + (binedges[d][1:] - binedges[d][:-1])/2. for d in range(len(binedges))]
indices3d = itertools.product(*[range(len(binedges[d])-1) for d in range(len(binedges))])
ret_indices = []
for i,j,k in indices3d:
    bin_center = [bin[ind] for bin,ind in zip(hist_bin_centers_list, (i, j, k))]
    if hist[i,j,k]>0 and cdist([pos], [bin_center])[0] < max_dist:
        ret_indices.append((i,j,k))
return len(ret_indices)
Thanks to @DilithiumMatrix's proposal, here is a better implementation:
bin_centers = list(itertools.product(*hist_bin_centers_list))
dists = cdist([pos], bin_centers)[0].reshape(hist.shape)
hits = len(np.where((hist > 0) & (dists < max_dist))[0])
How about just using numpy.where?
e.g. combine the two conditions together:
nonzero = numpy.where( (hist > 0) & (binDist < max_dist) )
where you calculate an array of distances binDist, such that binDist[i,j,k] is the distance from bin i,j,k to pos.
