Quickly generate numpy array - python

I want to generate a sparse numpy ndarray using a vector of row indices, a vector of column indices, and a vector of values.
For example, if I have
row_index=np.array([0,1,2])
column_index=np.array([2,1,0])
value=np.array([4,5,6])
Then I want a matrix
[0,0,4
0,5,0
6,0,0]
Is there a function in numpy that can do something similar to scipy.sparse.csc_matrix((data, (row_ind, col_ind)), [shape=(M, N)]) from scipy.sparse? If not, is there a way to generate the matrix without for loops? I want to speed up the code, but scipy.sparse is quite slow during the calculation and the matrix I want is not that large.

If the matrix you want is not very large, it might be faster to just create a regular (non-sparse) ndarray. For example, you can use the following code to generate a dense matrix using only numpy:
row_index = np.array([0, 1, 2])
column_index = np.array([2, 1, 0])
values = np.array([4, 5, 6])
# numpy dense
M = np.zeros((np.max(row_index) + 1, np.max(column_index) + 1))
M[row_index, column_index] = values
On my machine, creating the matrix (the last two lines) takes approximately 6.3 μs to run. I compared it to the following code, which uses scipy.sparse:
# scipy sparse
M = scipy.sparse.csc_matrix((values, (row_index, column_index)),
                            shape=(np.max(row_index) + 1, np.max(column_index) + 1))
This takes approximately 80 μs to run. Because you asked for a method to create a sparse array, I changed the first implementation to the following code, so that the created ndarray is converted into a sparse array:
# numpy sparse
M = np.zeros((np.max(row_index) + 1, np.max(column_index) + 1))
M[row_index, column_index] = values
M = scipy.sparse.csc_matrix(M)
This takes approximately 82 μs to run. The bottleneck in this code is clearly the operation of creating a sparse matrix.
Note that the scipy.sparse method scales very well as a function of matrix size, and eventually becomes the fastest for larger matrices (on my machine, starting from approximately 360×360). See the figure below for an indication of the speed of each method as a function of matrix size, from a 10×10 matrix up to a 1000×1000 matrix. Some outliers in the figure are most likely due to other programs on my machine interfering. Furthermore, I am not sure of the technical details behind the 'jumps' in the numpy dense method at ~360×360 and ~510×510. I have also added the code I used to run this comparison, so that you can run it on your own machine.
import timeit
import matplotlib.pyplot as plt
import numpy as np
import scipy.sparse

def generate_indices(num_values):
    row_index = np.arange(num_values)
    column_index = np.arange(num_values)[::-1]
    values = np.arange(num_values)
    return row_index, column_index, values

def numpy_dense(N, row_index, column_index, values):
    start = timeit.default_timer()
    for _ in range(N):
        M = np.zeros((np.max(row_index) + 1, np.max(column_index) + 1))
        M[row_index, column_index] = values
    end = timeit.default_timer()
    return (end - start) / N

def numpy_sparse(N, row_index, column_index, values):
    start = timeit.default_timer()
    for _ in range(N):
        M = np.zeros((np.max(row_index) + 1, np.max(column_index) + 1))
        M[row_index, column_index] = values
        M = scipy.sparse.csc_matrix(M)
    end = timeit.default_timer()
    return (end - start) / N

def scipy_sparse(N, row_index, column_index, values):
    start = timeit.default_timer()
    for _ in range(N):
        M = scipy.sparse.csc_matrix((values, (row_index, column_index)),
                                    shape=(np.max(row_index) + 1, np.max(column_index) + 1))
    end = timeit.default_timer()
    return (end - start) / N

ns = np.arange(10, 1001, 10)  # matrix sizes to try
runtimes_numpy_dense, runtimes_numpy_sparse, runtimes_scipy_sparse = [], [], []
for n in ns:
    print(n)
    indices = generate_indices(n)
    # number of iterations for timing
    # ideally, you want this to be as high as possible,
    # but I didn't want to wait very long for this plot
    N = 1000 if n < 500 else 100
    runtimes_numpy_dense.append(numpy_dense(N, *indices))
    runtimes_numpy_sparse.append(numpy_sparse(N, *indices))
    runtimes_scipy_sparse.append(scipy_sparse(N, *indices))

fig, ax = plt.subplots()
ax.plot(ns, runtimes_numpy_dense, 'x-', markersize=4, label='numpy dense')
ax.plot(ns, runtimes_numpy_sparse, 'x-', markersize=4, label='numpy sparse')
ax.plot(ns, runtimes_scipy_sparse, 'x-', markersize=4, label='scipy sparse')
ax.set_yscale('log')
ax.set_xlabel('Matrix size')
ax.set_ylabel('Runtime (s)')
ax.legend()
plt.show()

You can create your (sparse) array in coordinate format, where you pass:
values to be put at specified coordinates,
row coordinates,
column coordinates.
The code to do it can be:
import scipy.sparse as ss
arr = ss.coo_matrix((value, (row_index, column_index)))
Note that your row_index and column_index are already 0-based, as array indices
should be, so they can be passed as coordinates directly.
When you print arr.toarray(), you will get just what you wanted:
array([[0, 0, 4],
       [0, 5, 0],
       [6, 0, 0]])
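If you then want the CSC format mentioned in the question, the COO matrix converts cheaply (a quick aside of mine; scipy.sparse matrices provide .tocsc() and .tocsr()):
arr_csc = arr.tocsc()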

Related

build numpy d-dim array from iterator of (d-1)-dim array

I have a use case, which I have simplified to the following question:
import numpy as np
def get_matrix(i):  # get an N * M matrix
    return (
        (i, i + 1, i + 1.2),
        (i + 1, i / 2, i * 3.2),
        (i / 3, i * 2, i / 4),
        (i / 5, i * 2.1, i + 2.2),
    )

K = 10000
# build an n-d array of shape K * N * M
arr = np.array(
    tuple(get_matrix(i) for i in range(K)),
    np.float32,
)
However, when I want to get a K*N*M numpy array, I need to create a temporary tuple of shape K*N*M. The tuple can only be garbage collected once the numpy array has been built, so the above construction uses O(K*N*M) extra space.
If I could create the numpy array from the iterator (get_matrix(i) for i in range(K)), then each N*M matrix could be garbage collected as soon as it has been used, so the extra space would be O(N*M).
I found there is a method numpy.fromiter(), but I don't know how to write the dtype; my attempt is shown in the last code block below.
import numpy as np
K = 10000
# build a n-d array K * N * M
arr = np.fromiter(
    (get_matrix(i) for i in range(K)),
    dtype=np.float32,  # this raises an error
)
Ah, so this is a new feature of np.fromiter (support for subarray dtypes was added in NumPy 1.23). Just going by the example in the docs, the following worked:
K = 10000
N = 4
M = 3
# build an n-d array of shape K * N * M
arr = np.fromiter(
    (get_matrix(i) for i in range(K)),
    dtype=np.dtype((np.float32, (N, M))),
    count=K,
)
Note, I used the count argument for good measure, but it works without it.
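As a quick sanity check of my own (using get_matrix from the question), the result has the expected shape and matches the direct construction:
print(arr.shape)  # (10000, 4, 3)
print(np.allclose(arr[2], np.array(get_matrix(2), dtype=np.float32)))  # True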

SKlearn Minimum Covariance Determinant (MCD) Function yields different results if applied to whole data array vs looped

I have a repeated experiment (n=K) which measures time series of equal length N, i.e. my data matrix has the shape NxK. I now want to compute a robust estimate of the covariance between the experiments, for which I use the Minimum Covariance Determinant algorithm implemented in scikit-learn.
One way to apply the algorithm is to directly apply the function to the data array D, i.e.:
import numpy as np
from sklearn.covariance import MinCovDet
N = 300  # number of rows
K = 40  # number of columns
D = np.random.normal(0, 1, size=(N, K))  # create random data
mcd = MinCovDet().fit(D)  # yields a KxK matrix
cov_mat = mcd.covariance_  # covariances between the columns
Another way is to loop over pairs of experiments:
cov_loop = np.zeros((K, K))
for i in range(0, K):
    for j in range(i, K):
        temp_arr = np.zeros((N, 2))
        temp_arr[:, 0] = D[:, i]
        temp_arr[:, 1] = D[:, j]
        mcd_temp = MinCovDet().fit(temp_arr)
        cov_temp = mcd_temp.covariance_  # yields a 2x2 matrix; we only need the [0, 1] element
        cov_loop[i, j] = cov_temp[0, 1]
        cov_loop[j, i] = cov_loop[i, j]
print(cov_loop / cov_mat)
The results differ significantly, which is why I wanted to ask what went wrong here.

python numpy : roll column wise with different values [duplicate]

I have a matrix (2d numpy ndarray, to be precise):
A = np.array([[4, 0, 0],
              [1, 2, 3],
              [0, 0, 5]])
And I want to roll each row of A independently, according to roll values in another array:
r = np.array([2, 0, -1])
That is, I want to do this:
print(np.array([np.roll(row, x) for row, x in zip(A, r)]))
[[0 0 4]
 [1 2 3]
 [0 5 0]]
Is there a way to do this efficiently? Perhaps using fancy indexing tricks?
Sure, you can do it using advanced indexing; whether it is the fastest way probably depends on your array size (if your rows are large, it may not be):
rows, column_indices = np.ogrid[:A.shape[0], :A.shape[1]]
# Always use a negative shift, so that column_indices are valid.
# (could also use a modulo operation)
r[r < 0] += A.shape[1]
column_indices = column_indices - r[:, np.newaxis]
result = A[rows, column_indices]
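For example (my own quick check on the question's sample data; note that the snippet above modifies r in place), this reproduces the desired output:
A = np.array([[4, 0, 0],
              [1, 2, 3],
              [0, 0, 5]])
r = np.array([2, 0, -1])
rows, column_indices = np.ogrid[:A.shape[0], :A.shape[1]]
r[r < 0] += A.shape[1]
print(A[rows, column_indices - r[:, np.newaxis]])
# [[0 0 4]
#  [1 2 3]
#  [0 5 0]]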
numpy.lib.stride_tricks.as_strided stricks (abbrev pun intended) again!
Speaking of fancy indexing tricks, there's the infamous np.lib.stride_tricks.as_strided. The idea/trick would be to get a sliced portion starting from the first column until the second last one and concatenate at the end. This ensures that we can stride in the forward direction as needed to leverage np.lib.stride_tricks.as_strided and thus avoid the need to actually roll back. That's the whole idea!
Now, in terms of actual implementation we would use scikit-image's view_as_windows to elegantly use np.lib.stride_tricks.as_strided under the hood. Thus, the final implementation would be -
from skimage.util.shape import view_as_windows as viewW
def strided_indexing_roll(a, r):
    # Concatenate with a sliced copy to cover all rolls
    a_ext = np.concatenate((a, a[:, :-1]), axis=1)
    # Get sliding windows; use advanced indexing to select the appropriate ones
    n = a.shape[1]
    return viewW(a_ext, (1, n))[np.arange(len(r)), (n - r) % n, 0]
Here's a sample run -
In [327]: A = np.array([[4, 0, 0],
     ...:               [1, 2, 3],
     ...:               [0, 0, 5]])
In [328]: r = np.array([2, 0, -1])
In [329]: strided_indexing_roll(A, r)
Out[329]:
array([[0, 0, 4],
       [1, 2, 3],
       [0, 5, 0]])
Benchmarking
# @seberg's solution
def advindexing_roll(A, r):
    rows, column_indices = np.ogrid[:A.shape[0], :A.shape[1]]
    r[r < 0] += A.shape[1]
    column_indices = column_indices - r[:, np.newaxis]
    return A[rows, column_indices]
Let's do some benchmarking on an array with a large number of rows and columns -
In [324]: np.random.seed(0)
...: a = np.random.rand(10000,1000)
...: r = np.random.randint(-1000,1000,(10000))
# @seberg's solution
In [325]: %timeit advindexing_roll(a, r)
10 loops, best of 3: 71.3 ms per loop
# Solution from this post
In [326]: %timeit strided_indexing_roll(a, r)
10 loops, best of 3: 44 ms per loop
In case you want a more general solution (dealing with any shape and any axis), I modified @seberg's solution:
def indep_roll(arr, shifts, axis=1):
    """Apply an independent roll for each slice along a single axis.

    Parameters
    ----------
    arr : np.ndarray
        Array of any shape.
    shifts : np.ndarray
        How many positions to shift each slice; for a 2D array with
        axis=1 this means one shift per row, i.e. shape (arr.shape[0],).
    axis : int
        Axis along which elements are shifted.
    """
    arr = np.swapaxes(arr, axis, -1)
    all_idcs = np.ogrid[[slice(0, n) for n in arr.shape]]
    # Convert to a positive shift
    shifts[shifts < 0] += arr.shape[-1]
    all_idcs[-1] = all_idcs[-1] - shifts[:, np.newaxis]
    result = arr[tuple(all_idcs)]
    arr = np.swapaxes(result, -1, axis)
    return arr
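For example (my own quick check on the 2D sample from this question; a copy of r is passed because the function modifies its shifts argument in place):
A = np.array([[4, 0, 0],
              [1, 2, 3],
              [0, 0, 5]])
r = np.array([2, 0, -1])
print(indep_roll(A, r.copy(), axis=1))
# [[0 0 4]
#  [1 2 3]
#  [0 5 0]]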
I implemented a pure numpy.lib.stride_tricks.as_strided solution as follows:
from numpy.lib.stride_tricks import as_strided

def custom_roll(arr, r_tup):
    m = np.asarray(r_tup)
    arr_roll = arr[:, [*range(arr.shape[1]), *range(arr.shape[1] - 1)]].copy()  # need `copy`
    strd_0, strd_1 = arr_roll.strides
    n = arr.shape[1]
    result = as_strided(arr_roll, (*arr.shape, n), (strd_0, strd_1, strd_1))
    return result[np.arange(arr.shape[0]), (n - m) % n]
A = np.array([[4, 0, 0],
              [1, 2, 3],
              [0, 0, 5]])
r = np.array([2, 0, -1])
out = custom_roll(A, r)
# out:
# array([[0, 0, 4],
#        [1, 2, 3],
#        [0, 5, 0]])
By using a fast Fourier transform we can apply a transformation in the frequency domain and then use the inverse fast Fourier transform to obtain the row shift.
So this is a pure numpy solution that takes only one line:
import numpy as np
from numpy.fft import fft, ifft
# The row-shift function using the fast Fourier transform
# rshift(A, r), where A is a 2D array and r is the row shift vector
def rshift(A, r):
    return np.real(ifft(fft(A, axis=1)
                        * np.exp(2 * 1j * np.pi / A.shape[1]
                                 * r[:, None] * np.r_[0:A.shape[1]][None, :]),
                        axis=1).round())
This will apply a left shift, but we can simply negate the exponent of the exponential to turn the function into a right-shift function:
ifft(fft(...)*np.exp(-2*1j...)
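For instance, a complete right-shifting variant would look like this (a sketch of mine; the name rshift_right is not from the original answer, and the imports above are assumed):
def rshift_right(A, r):
    # Same as rshift, but with a negated exponent: rolls each row right by r
    return np.real(ifft(fft(A, axis=1)
                        * np.exp(-2 * 1j * np.pi / A.shape[1]
                                 * r[:, None] * np.r_[0:A.shape[1]][None, :]),
                        axis=1).round())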
The left-shifting rshift can be used like this:
# Example:
A = np.array([[1, 2, 3, 4],
              [1, 2, 3, 4],
              [1, 2, 3, 4]])
r = np.array([1, -1, 3])
print(rshift(A, r))
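Up to floating-point formatting, this should print each row rolled left by the corresponding entry of r:
[[2. 3. 4. 1.]
 [4. 1. 2. 3.]
 [4. 1. 2. 3.]]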
Building on Divakar's excellent answer, you can apply this logic to a 3D array easily (which is the problem that brought me here in the first place). Here's an example: basically, flatten your data, roll it, and reshape it afterwards:
from skimage.util.shape import view_as_windows as viewW

def applyroll_30(cube, threshold=25, offset=500):
    flattened_cube = cube.copy().reshape(cube.shape[0] * cube.shape[1], cube.shape[2])
    roll_matrix = calc_roll_matrix_flattened(flattened_cube, threshold, offset)
    rolled_cube = strided_indexing_roll(flattened_cube, roll_matrix, cube_shape=cube.shape)
    return rolled_cube

def calc_roll_matrix_flattened(cube_flattened, threshold, offset):
    """Calculates the number of positions along the time axis we need to shift
    elements in order to trigger the data.
    Returns a 1D numpy array of X*Y elements.
    """
    # argmax(...) finds the position in the cube where we are above threshold
    roll_matrix = np.argmax(cube_flattened > threshold, axis=1) + offset
    # ensure we don't have an index out of bounds
    roll_matrix[roll_matrix > cube_flattened.shape[1]] = cube_flattened.shape[1]
    return roll_matrix

def strided_indexing_roll(cube_flattened, roll_matrix_flattened, cube_shape):
    # Negate the shifts, otherwise we shift in the wrong direction for my application
    roll_matrix_flattened = -1 * roll_matrix_flattened
    # Concatenate with a sliced copy to cover all rolls
    a_ext = np.concatenate((cube_flattened, cube_flattened[:, :-1]), axis=1)
    # Get sliding windows; use advanced indexing to select the appropriate ones
    n = cube_flattened.shape[1]
    result = viewW(a_ext, (1, n))[np.arange(len(roll_matrix_flattened)), (n - roll_matrix_flattened) % n, 0]
    result = result.reshape(cube_shape)
    return result
Divakar's answer doesn't do justice to how much more efficient this is on a large cube of data. I've timed it on 400x400x2000 data formatted as int8. An equivalent for-loop takes ~5.5 seconds, Seberg's answer ~3.0 seconds, and strided_indexing_roll ~0.5 seconds.

Local reduce with specified slices over a single axis in tensorflow

I am trying to perform a local reduce with specified slices over a single axis on a 2D array.
I achieved this using numpy's numpy.ufunc.reduceat or numpy.add.reduceat, but I would like to do the same in tensorflow, as the input to this reduce operation is an output from a tensorflow convolution.
I came across tf.math.reduce_sum but I am not sure how this can be used in my case.
It would be great if I could do the reduceat operation in tensorflow, as I could then take advantage of a GPU.
You can do almost the same using tf.math.segment_sum:
import tensorflow as tf
import numpy as np
def add_reduceat_tf(a, indices, axis=0):
    a = tf.convert_to_tensor(a)
    indices = tf.convert_to_tensor(indices)
    # Transpose if necessary
    transpose = not (isinstance(axis, int) and axis == 0)
    if transpose:
        axis = tf.convert_to_tensor(axis)
        ndims = tf.cast(tf.rank(a), axis.dtype)
        a = tf.transpose(a, tf.concat([[axis], tf.range(axis),
                                       tf.range(axis + 1, ndims)], axis=0))
    # Make segment ids
    r = tf.range(tf.shape(a, out_type=indices.dtype)[0])
    segments = tf.searchsorted(indices, r, side='right')
    # Compute segmented sum and discard first unused segment
    out = tf.math.segment_sum(a, segments)[1:]
    # Transpose back if necessary
    if transpose:
        out = tf.transpose(out, tf.concat([tf.range(1, axis + 1), [0],
                                           tf.range(axis + 1, ndims)], axis=0))
    return out
# Test
np.random.seed(0)
a = np.random.rand(5, 10).astype(np.float32)
indices = [2, 4, 7]
axis = 1
# NumPy computation
out_np = np.add.reduceat(a, indices, axis=axis)
# TF computation (TF 1.x graph mode)
with tf.Graph().as_default(), tf.Session() as sess:
    out = add_reduceat_tf(a, indices, axis=axis)
    out_tf = sess.run(out)
# Check result
print(np.allclose(out_np, out_tf))
# True
You can replace tf.math.segment_sum above with whatever reduction function you want to use. The only difference between this and the actual np.ufunc.reduceat is the special case where indices[i] >= indices[i + 1]: the posted function requires indices to be sorted, and if there were a case where indices[i] == indices[i + 1], the corresponding position i in the output would be zero, not a[indices[i]].
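To see how the segment ids line up with the reduceat slices, here is the intermediate computation from the test case above, reproduced in plain NumPy (my own illustration):
segments = np.searchsorted([2, 4, 7], np.arange(10), side='right')
print(segments)  # [0 0 1 1 2 2 2 3 3 3]
# Segment 0 (positions 0 and 1) is the unused prefix discarded by [1:];
# segments 1, 2 and 3 correspond to the slices a[:, 2:4], a[:, 4:7] and a[:, 7:].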

Standard deviation from center of mass along Numpy array axis

I am trying to find a well-performing way to calculate the standard deviation from the center of mass/gravity along an axis of a Numpy array.
In formula this is (with weights $w_i$ running along the chosen axis and values $A_i$):

$$\mu = \frac{\sum_i w_i A_i}{\sum_i A_i}, \qquad \sigma = \sqrt{\frac{\sum_i w_i^2 A_i}{\sum_i A_i} - \mu^2}$$
The best I could come up with is this:
def weighted_com(A, axis, weights):
    average = np.average(A, axis=axis, weights=weights)
    return average * weights.sum() / A.sum(axis=axis).astype(float)

def weighted_std(A, axis):
    weights = np.arange(A.shape[axis])
    w1com2 = weighted_com(A, axis, weights)**2
    w2com1 = weighted_com(A, axis, weights**2)
    return np.sqrt(w2com1 - w1com2)
In weighted_com, I need to correct the normalization from the sum of weights to the sum of values (which is an ugly workaround, I guess). weighted_std is probably fine.
To avoid the XY problem: I am asking for what I actually want (a better weighted_std), not for a better version of my weighted_com.
The .astype(float) is a safety measure, as I'll apply this to histograms containing ints, which caused problems due to integer division when not in Python 3 or when from __future__ import division is not active.
You want to take the mean, variance and standard deviation of the vector [1, 2, 3, ..., n] (where n is the dimension of the input matrix A along the axis of interest), with weights given by the matrix A itself.
For concreteness, say you want to consider these center-of-mass statistics along the vertical axis (axis=0); this is what corresponds to the formulas you wrote. For a fixed column j, you would do:
n = A.shape[0]
r = np.arange(1, n+1)
mu = np.average(r, weights=A[:,j])
var = np.average(r**2, weights=A[:,j]) - mu**2
std = np.sqrt(var)
In order to put all of the computations for the different columns together, you have to stack together a bunch of copies of r (one per column) to form a matrix (that I have called R in the code below). With a bit of care, you can make things work for both axis=0 and axis=1.
import numpy as np

def com_stats(A, axis=0):
    A = A.astype(float)  # if you are worried about int vs. float
    n = A.shape[axis]
    m = A.shape[(axis - 1) % 2]
    r = np.arange(1, n + 1)
    R = np.vstack([r] * m)
    if axis == 0:
        R = R.T
    mu = np.average(R, axis=axis, weights=A)
    var = np.average(R**2, axis=axis, weights=A) - mu**2
    std = np.sqrt(var)
    return mu, var, std
For example,
A = np.array([[1, 1, 0], [1, 2, 1], [1, 1, 1]])
print(A)
# [[1 1 0]
# [1 2 1]
# [1 1 1]]
print(com_stats(A))
# (array([ 2. , 2. , 2.5]), # centre-of-mass mean by column
# array([ 0.66666667, 0.5 , 0.25 ]), # centre-of-mass variance by column
# array([ 0.81649658, 0.70710678, 0.5 ])) # centre-of-mass std by column
EDIT:
One can avoid creating in-memory copies of r to build R by using numpy.lib.stride_tricks: swap the line
R = np.vstack([r] * m)
above with
from numpy.lib.stride_tricks import as_strided
R = as_strided(r, strides=(0, r.itemsize), shape=(m, n))
The resulting R is a (strided) ndarray whose underlying array is the same as r's — absolutely no copying of any values occurs.
from numpy.lib.stride_tricks import as_strided

FMT = '''\
Shape: {}
Strides: {}
Position in memory: {}
Size in memory (bytes): {}
'''

def find_base_nbytes(obj):
    if obj.base is not None:
        return find_base_nbytes(obj.base)
    return obj.nbytes

def stats(obj):
    return FMT.format(obj.shape,
                      obj.strides,
                      obj.__array_interface__['data'][0],
                      find_base_nbytes(obj))

n = 10
m = 1000
r = np.arange(1, n + 1)
R = np.vstack([r] * m)
S = as_strided(r, strides=(0, r.itemsize), shape=(m, n))
print(stats(r))
print(stats(R))
print(stats(S))
Output:
Shape: (10,)
Strides: (8,)
Position in memory: 4299744576
Size in memory (bytes): 80
Shape: (1000, 10)
Strides: (80, 8)
Position in memory: 4304464384
Size in memory (bytes): 80000
Shape: (1000, 10)
Strides: (0, 8)
Position in memory: 4299744576
Size in memory (bytes): 80
Credit to this SO answer and this one for explanations on how to get the memory address and size of the underlying array of a strided ndarray.
