I have a series of 2d arrays where the rows are points in some space. Many similar points occur across all arrays, but in a different row order. I want to sort the rows so that they appear in the most similar order across arrays. The points are too dissimilar for clustering with k-means or DBSCAN. The problem can also be cast like this: if I stack the arrays into a 3d array, how do I permute the rows to minimize the average standard deviation (SD) along the 2nd axis? What's a good sorting algorithm for this problem?
I've tried the following approaches.
Create a reference 2d array and sort the rows in each array to minimize the mean Euclidean distance to it. I'm afraid this gives biased results.
Sort rows in arrays pairwise, then pairs of pair-medians, then pairs of those, and so on. This doesn't really work and I'm not sure why.
A third approach could be brute-force optimization, but I try to avoid that since I have multiple sets of arrays to perform the procedure on.
This is my code for the 2nd approach (Python):
import numpy as np

def reorder_to(A, B):
    """Reorder rows in A to best match rows in B.
    Input
    -----
    A : N x M numpy.array
    B : N x M numpy.array
    Output
    ------
    perm_order : permutation order
    """
    if A.shape != B.shape:
        print("A and B must have the same shape")
        return None
    N = A.shape[0]
    # Create a matrix of Euclidean distances between rows in A and rows in B
    distance_matrix = np.ones((N, N)) * np.inf
    for i, a in enumerate(A):
        for ii, b in enumerate(B):
            ba = b - a
            distance_matrix[i, ii] = np.sqrt(np.dot(ba, ba))
    # Greedily build the permutation order, smallest distances first
    perm_order = [None] * N
    for _ in range(N):
        i, ii = np.unravel_index(np.argmin(distance_matrix), (N, N))
        perm_order[ii] = i
        # Block row i of A and row ii of B from being matched again
        distance_matrix[i, :] = np.inf
        distance_matrix[:, ii] = np.inf
    return perm_order
def permute_tensor_rows(A):
    """Permute 1d rows in a 3d array along the 0th axis to minimize average SD along the 2nd axis.
    Input
    -----
    A : numpy.3darray
        Each "slice" in the 2nd direction is an independent array whose rows can be permuted
        to decrease the average SD in the 2nd direction.
    Output
    ------
    A : numpy.3darray
        A with sorted rows in each "slice".
    """
    step = 2
    while step <= A.shape[2]:
        for k in range(0, A.shape[2], step):
            # If this is an incomplete block at the end, reorder it to the
            # previous block's first half (A_k left over from the last pass)
            if k + step > A.shape[2]:
                A_kk = A[:, :, k:(k + step)]
                kk_order = reorder_to(np.median(A_kk, axis=2), np.median(A_k, axis=2))
                A[:, :, k:(k + step)] = A[kk_order, :, k:(k + step)]
                continue
            # Split the block in half and reorder the first half's rows
            # to match the second half's, comparing slice medians
            k_0, k_1 = k, k + step // 2
            kk_0, kk_1 = k + step // 2, k + step
            A_k = A[:, :, k_0:k_1]
            A_kk = A[:, :, kk_0:kk_1]
            order = reorder_to(np.median(A_k, axis=2), np.median(A_kk, axis=2))
            A[:, :, k_0:k_1] = A[order, :, k_0:k_1]
        print("Step:", step, "\t ... Average SD:", np.mean(np.std(A, axis=2)))
        step *= 2
    return A
Sorry, I should have looked at your code sample; that was very informative.
It seems like this gives an out-of-the-box solution to your problem:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.linear_sum_assignment.html#scipy.optimize.linear_sum_assignment
Only really feasible for a few hundred points at most, though, in my experience.
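To make that concrete, here is a minimal sketch of how linear_sum_assignment could stand in for the greedy loop in reorder_to; the name reorder_to_lsa and the distance construction are illustrative, not from the original code:

import numpy as np
from scipy.optimize import linear_sum_assignment

def reorder_to_lsa(A, B):
    # Cost matrix of Euclidean distances between rows of A and rows of B
    cost = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # N x N
    # Globally optimal assignment, unlike the greedy smallest-first loop
    row_ind, col_ind = linear_sum_assignment(cost)
    perm_order = np.empty(len(B), dtype=int)
    perm_order[col_ind] = row_ind  # same convention as reorder_to
    return perm_order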
Related
Let's assume we have two numpy arrays, A (n1 x m) and B (n2 x m), and we want to apply a certain mathematical operation between the rows of both arrays.
For example, let's say that we want to calculate the Euclidean distance between each row of A and each row of B and store it in a new numpy array C (n1 x n2).
The simple for-loop approach would be something like the following:
C = np.zeros((A.shape[0],B.shape[0]))
for i in range(A.shape[0]):
for j in range(B.shape[0]):
C[i,j] = np.linalg.norm(A[i]-B[j])
However, the above implementation is not the most efficient. How could I write this differently, using vectorization to speed up the implementation?
You can broadcast over a new axis:
# A[:, :, None] is (n1, m, 1); B[:, :, None].T is (1, m, n2),
# so the difference broadcasts to (n1, m, n2)
diff = A[:, :, None] - B[:, :, None].T
# n1 x n2 after summing the squared differences across the m axis
dists = np.sqrt((diff * diff).sum(1))
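If SciPy is available, scipy.spatial.distance.cdist computes the same matrix in one call and avoids materializing the (n1, m, n2) intermediate; a quick check with made-up shapes:

import numpy as np
from scipy.spatial.distance import cdist

A = np.random.rand(5, 3)  # n1 x m
B = np.random.rand(4, 3)  # n2 x m
C = cdist(A, B)           # n1 x n2 Euclidean distances, same as the loop version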
I have a 3-dimensional array a of shape (n, m, l). I extract one column j from its last axis and compute the index of the maximum along the first axis as follows:
sub = a[:, :, j] # shape (n, m)
wheremax = np.argmax(sub, axis=0) # this has shape (m,)
Now I'd like to slice the original array a to get all the information based on the index where column j is maximal. I.e., I'd like a NumPythonic way to do the following using array broadcasting or numpy functions:
new_arr = np.zeros((m, l))
for i, idx in enumerate(wheremax):
new_arr[i, :] = a[idx, i, :]
a = new_arr
Is there one?
As @hpaulj mentioned in the comments, using a[wheremax, np.arange(m)] did the trick.
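Put together as a small self-contained example (shapes are made up for illustration):

import numpy as np

n, m, l, j = 4, 3, 5, 2
a = np.random.rand(n, m, l)
wheremax = np.argmax(a[:, :, j], axis=0)  # shape (m,)
# pairs wheremax[i] with column index i: picks a[wheremax[i], i, :] for each i
new_arr = a[wheremax, np.arange(m), :]    # shape (m, l)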
I have a matrix A with n rows and k columns. I want a 3D tensor of dimension k*n*n consisting of the k diagonal matrices formed from the columns of A. In other words, every column of A should be converted into a diagonal matrix, and all those matrices together should form a 3D tensor.
This is quite easy to do with a for loop, but I want to avoid that to improve speed.
I came up with a bad and inefficient way which works, but I hope someone can help me find a better way that allows for large A matrices.
# I use python
import numpy as np

n = A.shape[0]  # A is an n x k matrix
k = A.shape[1]
holding_matrix = np.repeat(np.identity(k), repeats=n, axis=1)  # k rows with n*k columns
identity_stack = np.tile(np.identity(n), k)  # k n x n identity matrices stacked together
B = np.array((A @ holding_matrix) * identity_stack)
B = np.array(np.hsplit(B, k))  # desired result: k n x n diagonal matrices in a tensor
n = A.shape[0]  # A.shape == (n, k)
k = A.shape[1]
B = np.zeros_like(A, shape=(k, n*n))  # to preserve dtype and order of A
B[:, ::(n+1)] = A.T  # a stride of n+1 lands exactly on the diagonal of each flattened n x n block
B = B.reshape(k, n, n)
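An equivalent sketch using fancy indexing on the diagonal, if the strided assignment above reads as too magic (the example A is made up):

import numpy as np

A = np.arange(12).reshape(4, 3)         # an example n x k matrix
n, k = A.shape
B = np.zeros((k, n, n), dtype=A.dtype)
idx = np.arange(n)
B[:, idx, idx] = A.T                    # column c of A fills the diagonal of B[c]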
I'm running into confusingly large memory requirements for a relatively simple problem.
I have an ordered array of length N (index corresponds to sample ID) containing either an integer value or NaN.
I want to generate an indicator matrix of dimension N by N such that, if two samples i and j both have a non-NaN value in the original list, then position (i, j) in the matrix is 1, and 0 otherwise. (Because the matrix is symmetric, I do not care about position (j, i).)
To pare back the memory requirements, I've implemented the following code, which instead of generating a square matrix creates an array representing the condensed square matrix (i.e. what squareform would generate). But for an initial list of 66,000 entries, this script requires over 80GB of memory! I think it's failing because of the map line in get_condensed_indeces, but I don't know how to fix it. If anyone has suggestions for reducing memory use, please share!
The code is below; it should work with any input array.
import itertools
import numpy as np

def ind_matrix(x):
    ind = np.zeros(len(x) * (len(x) - 1) // 2, dtype=np.float32)
    mask = np.where(~np.isnan(x))[0]
    targets = get_condensed_indeces(len(x), mask)
    ind[targets] += 1
    return ind

def get_condensed_indeces(n, desired_elements):
    # args:
    # n - number of cells in the current cluster
    # desired_elements - list of numpy indeces that specify
    #                    cells in a given cluster
    return list(map(
        index_converter,
        [[n, x[0], x[1]] for x in itertools.combinations(desired_elements, 2)]
    ))

def index_converter(x):
    # mapping from position (i, j) in the square matrix to the
    # index in the condensed (squareform) 1D array
    n, i, j = x[0], x[1], x[2]
    return n * i - (i * (i + 1)) // 2 + j - 1 - i
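For what it's worth, here is a sketch of one way to vectorize the pair generation with numpy instead of itertools.combinations plus map; it keeps the same condensed layout, though for very large masks you would still want to process the pairs in chunks:

import numpy as np

def ind_matrix_vectorized(x):
    n = len(x)
    ind = np.zeros(n * (n - 1) // 2, dtype=np.float32)
    mask = np.flatnonzero(~np.isnan(x))
    i, j = np.triu_indices(len(mask), k=1)  # all pairs (i < j) within the mask
    i, j = mask[i], mask[j]                 # map back to original sample IDs
    # same formula as index_converter, applied to whole arrays at once
    ind[n * i - (i * (i + 1)) // 2 + j - 1 - i] = 1
    return ind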
I have a question about accessing a matrix position that, in fact, does not exist.
First, I have a matrix with rows rows and cols columns. From this matrix, I have to get sets of n x n submatrices. For example, to get 3 x 3 submatrices, I do the following:
for x, y in product(range(1, matrix.rows-1), range(1, matrix.cols-1)):
bootstrap_3x3 = npr.choice(matrix.data[x-1:x+2, y-1:y+2].flatten(), size=(3, 3), replace=True)
But, as can be seen, I'm not considering the extremes, and I have to. For x = 0 and y = 0, for example, I should consider matrix.data[x:x+2, y:y+2] (the center should be the current x and y), returning a 3 x 3 matrix whose first row/column is 0.
I know that I can achieve this with some if statements, but I guess Python has a cleverer way to do this properly.
Thank you in advance.
I would make a new matrix, padded with (n-1)/2 zeros around it:
import numpy as np
rows, cols = 4, 6
n = 3
d = (n - 1) // 2  # integer pad width (n assumed odd)
data = np.arange(rows*cols).reshape(rows, cols)
padded = np.pad(data, d, mode='constant')
for x, y in np.indices(data.shape).reshape(2, -1).T:
sub = padded[x:x+n, y:y+n]
print(sub)
bootstrap_nxn = np.random.choice(sub.ravel(), (n, n))
This assumes n is odd and that the submatrix center is always within the original data matrix. If n is even, the center of the submatrix isn't well defined.
If you actually want the submatrix to overlap the data matrix by as little as one row, then you'd need to pad with n-1 zeros instead (and in that case even vs. odd n won't matter), as sketched below.
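A sketch of that variant, padding with n - 1 zeros so that every window overlapping the data by at least one cell is generated (rows, cols and n made up as before):

import numpy as np

rows, cols, n = 4, 6, 3
data = np.arange(rows * cols).reshape(rows, cols)
padded = np.pad(data, n - 1, mode='constant')
# every n x n window that overlaps data by at least one cell
for x in range(rows + n - 1):
    for y in range(cols + n - 1):
        sub = padded[x:x+n, y:y+n]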