I have a 4D numpy array temperature with the measured temperature at points x, y, z and time t. Assuming I have an array indices with the indices where the first instance of a condition is met, say temperature < 0, how do I extract a 3D array with the first temperatures satisfying this condition? That is, I'm looking for the equivalent of numpy's 1D version (import numpy as np tacitly assumed):
>>> temperatures = np.arange(10,-10,-1)
>>> ind = np.argmax(temperatures < 0)
>>> T = temperatures[ind]
I have tried the analogous
In [1]: temperatures = np.random.random((11,8,5,200)) * 1000
In [2]: temperatures.shape
Out[2]: (11, 8, 5, 200)
In [3]: indices= np.argmax(temperatures > 900,axis=3)
In [4]: indices.shape
Out[4]: (11, 8, 5)
In [5]: T = temperatures[:,:,:,indices]
In [6]: T.shape
Out[6]: (11, 8, 5, 11, 8, 5)
However, T then has 6 dimensions.
I could of course do it with a for loop:
indices = np.argmax(temperatures > 900,axis=3)
x,y,z = temperatures.shape[:-1]
T = np.zeros((x,y,z))
for indx in range(x):
    for indy in range(y):
        for indz in range(z):
            T[indx,indy,indz] = temperatures[indx,indy,indz,indices[indx,indy,indz]]
but I'm looking for something more elegant and more Pythonic. Is there someone more skilled with numpy out there who can help me out on this?
P.S. For the sake of clarity, I'm not just looking for the temperature at the points given by indices, I'm also looking for other quantities in arrays of the same shape as temperatures, e.g. the time derivative. Also, in reality the arrays are much larger than in this minimal example.
Numpy advanced indexing always works:
import numpy as np
temperatures = np.random.random((11,8,5, 200)) * 1000
indices = np.argmax(temperatures > 900, axis=3)
x, y, z = temperatures.shape[:-1]
T = temperatures[np.arange(x)[:, np.newaxis, np.newaxis],
                 np.arange(y)[np.newaxis, :, np.newaxis],
                 np.arange(z)[np.newaxis, np.newaxis, :],
                 indices]
As jdehesa pointed out, this can be made more concise:
x, y, z = np.ogrid[:x, :y, :z]
T = temperatures[x, y, z, indices]
I think you need:
axis = 3
indices = np.argmax(temperatures > 900, axis=axis)
result = np.take_along_axis(temperatures, np.expand_dims(indices, axis), axis)
result = result.squeeze(axis)
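Since the question's P.S. asks about pulling other quantities (e.g. a time derivative) at the same points, the indices computed above can be reused on any array of the same shape; a small sketch (dTdt is just an illustrative name here):
# illustrative array with the same shape as temperatures, e.g. a time derivative
dTdt = np.gradient(temperatures, axis=axis)
# reuse the indices found from the temperature condition
dTdt_first = np.take_along_axis(dTdt, np.expand_dims(indices, axis), axis).squeeze(axis)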
I have an array with m rows whose values are arrays of column indices, bounded by a large number n.
E.g.:
Y = [[1,34,203,2032],...,[2984]]
Now I want an efficient way to initialize a sparse matrix X with dimensions (m, n) and values corresponding to Y (X[i,j] = 1 if j is in Y[i], and 0 otherwise).
Your data are already close to csr format, so I suggest using that:
import numpy as np
from scipy import sparse
from itertools import chain
# create an example
m, n = 20, 10
X = np.random.random((m, n)) < 0.1
Y = [list(np.where(y)[0]) for y in X]
# construct the sparse matrix
indptr = np.fromiter(chain((0,), map(len, Y)), int, len(Y) + 1).cumsum()
indices = np.fromiter(chain.from_iterable(Y), int, indptr[-1])
data = np.ones_like(indices)
S = sparse.csr_matrix((data, indices, indptr), (m, n))
# or, letting the shape be inferred from indptr and indices
# (only safe if the trailing columns of X are not all empty)
S = sparse.csr_matrix((data, indices, indptr))
# check
assert np.all(S==X)
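For comparison, the same matrix can also be built by expanding Y into explicit (row, column) pairs and using COO format; a minimal sketch reusing the variables above:
rows = np.repeat(np.arange(len(Y)), [len(y) for y in Y])  # row index for every entry
cols = np.fromiter(chain.from_iterable(Y), int)           # flattened column indices
S2 = sparse.coo_matrix((np.ones(len(cols)), (rows, cols)), shape=(m, n)).tocsr()
assert (S != S2).nnz == 0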
Computing the sum of pairwise mins between vectors is very popular in natural language processing (NLP) and is used in computing the histogram intersection kernel [1]. However, in NLP we frequently deal with sparse matrices.
Here is an inefficient way that uses the slow for loops to compute this operation:
import numpy as np
from scipy.sparse import csr_matrix
# Initialize sparse matrices
A = csr_matrix(np.clip(np.random.randn(100, 64) - 1, 0, np.inf))
B = csr_matrix(np.clip(np.random.randn(64, 100) - 1, 0, np.inf))
# For each row, col vector i,j in A and B respectively
G = np.zeros((100, 100))
for i in range(A.shape[0]):
    for j in range(B.shape[1]):
        # transpose the column so both operands have shape (1, 64)
        G[i, j] = A[i].minimum(B[:, j].T).sum()
Is there a way to do this without the for loop?
I wouldn't mind a for loop if it can be compiled, for example with jit in numba.
A fast dense version of this is given here: Numpy: an efficient way to implement sum of pairwise mins operation
Thanks.
[1] http://blog.datadive.net/histogram-intersection-for-change-detection/
Here is an implementation that should be reasonably efficient, leveraging sparseness as best as it can. There is a loop, but only along one dimension, so it should not be too bad.
import numpy as np
from scipy.sparse import csr_matrix, csc_matrix
M, N, K = 640, 100, 650
B1 = csr_matrix(np.clip(np.random.randn(N, K) - 1, 0, np.inf))
B2 = csr_matrix(np.clip(np.random.randn(N, K) - 1, 0, np.inf))
B = B1-B2
A1 = csc_matrix(np.clip(np.random.randn(M, N) - 1, 0, np.inf))
A2 = csc_matrix(np.clip(np.random.randn(M, N) - 1, 0, np.inf))
A = A1-A2
result = np.zeros((M, K))
for j in range(N):
    ia = A.indices[A.indptr[j] : A.indptr[j+1]]
    ib = B.indices[B.indptr[j] : B.indptr[j+1]]
    IA, IB = np.ix_(ia, ib)
    da = A.data[A.indptr[j] : A.indptr[j+1]]
    db = B.data[B.indptr[j] : B.indptr[j+1]]
    # both nonzero
    result[IA, IB] += np.minimum.outer(da, db)
    # one negative ...
    am = da < 0
    iam, dam = ia[am], da[am]
    bm = db < 0
    ibm, dbm = ib[bm], db[bm]
    # ... the other zero
    za = np.ones((M,), dtype=bool)
    za[ia] = False
    zb = np.ones((K,), dtype=bool)
    zb[ib] = False
    IA, IB = np.ix_(iam, zb)
    result[IA, IB] += dam[:, None]
    IA, IB = np.ix_(za, ibm)
    result[IA, IB] += dbm
# compare with dense method
print(np.allclose(result, np.minimum(A.A[..., None], B.A).sum(axis=1)))
Prints
True
Well, at least in recent versions of SciPy there is a method scipy.sparse.csr_matrix.minimum (see the documentation), which is the equivalent of numpy.minimum in terms of element-wise minimum. However, I don't know how computationally efficient it is.
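For what it's worth, a minimal sketch of that method on two sparse matrices of the same shape:
import numpy as np
from scipy.sparse import csr_matrix
P = csr_matrix(np.clip(np.random.randn(5, 4), 0, None))
Q = csr_matrix(np.clip(np.random.randn(5, 4), 0, None))
R = P.minimum(Q)  # element-wise minimum, returned as a sparse matrix
assert np.allclose(R.toarray(), np.minimum(P.toarray(), Q.toarray()))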
A Cauchy matrix (Wikipedia article) is a matrix determined by two vectors (arrays of numbers). Given two vectors x and y, the Cauchy matrix C generated by them is defined entry-wise as
C[i][j] := 1/(x[i] - y[j])
Given two Numpy arrays x and y, what is an efficient way to generate a Cauchy matrix?
This is the most efficient way I found, using array broadcasting to take advantage of vectorization.
1.0 / (x.reshape((-1,1)) - y)
Edit: @HYRY and @shx2 have suggested that, instead of x.reshape((-1,1)), you can use x[:,np.newaxis], which gives a view of the same array. @HYRY also suggests 1.0/np.subtract.outer(x,y), which is slightly slower for me but maybe more explicit.
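For completeness, a small sketch checking that the three variants agree:
import numpy as np
x = np.array([1., 2., 3., 4.])
y = np.array([5., 6., 7.])
c1 = 1.0 / (x.reshape((-1, 1)) - y)   # reshape x to a column vector
c2 = 1.0 / (x[:, np.newaxis] - y)     # same, via a new axis
c3 = 1.0 / np.subtract.outer(x, y)    # explicit outer difference
assert np.allclose(c1, c2) and np.allclose(c1, c3)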
Example:
>>> x = numpy.array([1,2,3,4]) #x
>>> y = numpy.array([5,6,7]) #y
>>>
>>> #transpose x, to nx1
... x = x.reshape((-1,1))
>>> x
array([[1],
       [2],
       [3],
       [4]])
>>>
>>> #array of differences x[i] - y[j]
... #an nx1 array minus a 1xm array is an nxm array
... diff_matrix = x-y
>>> diff_matrix
array([[-4, -5, -6],
       [-3, -4, -5],
       [-2, -3, -4],
       [-1, -2, -3]])
>>>
>>> #apply the multiplicative inverse to each entry
... cauchym = 1.0/diff_matrix
>>> cauchym
array([[-0.25      , -0.2       , -0.16666667],
       [-0.33333333, -0.25      , -0.2       ],
       [-0.5       , -0.33333333, -0.25      ],
       [-1.        , -0.5       , -0.33333333]])
I tried a few other methods, all of which were significantly slower.
This is the naive approach, which uses a nested list comprehension:
cauchym = numpy.array([[ 1.0/(x_i-y_j) for y_j in y] for x_i in x])
This one generates the matrix as a 1-dimensional array (saving the cost of nested Python lists) and reshapes it to a matrix afterward. It also moves the division to a single Numpy operation:
cauchym = 1.0/numpy.array([(x_i-y_j) for x_i in x for y_j in y]).reshape([len(x),len(y)])
Using numpy.repeat and numpy.tile (which respectively repeat each element and tile the whole array). This approach makes unnecessary copies:
lenx = len(x)
leny = len(y)
xm = numpy.repeat(x,leny) # after the final reshape, the i'th row is all x_i
ym = numpy.tile(y,lenx)
cauchym = (1.0/(xm-ym)).reshape([lenx,leny]);
I created a function; hope it helps you understand it in a better way.
# Creating a function in order to form a Cauchy matrix
def cauchy_matrix(arr1, arr2):
    """
    Enter two arrays in order to get a Cauchy matrix. The input arrays should be 1-D.
    arr1 = first 1-D array
    arr2 = second 1-D array
    It returns the Cauchy matrix having shape m*n, where m is the size of arr1 and n is the size of arr2.
    """
    my_list = []
    try:
        for i in range(len(arr1)):
            for j in range(len(arr2)):
                z = 1/(arr1[i]-arr2[j])
                my_list.append(z)
        return np.array(my_list).reshape(arr1.shape[0], arr2.shape[0])
    except ZeroDivisionError:
        print("Check whether the arrays share a common element: division by zero occurs whenever arr1[i] equals arr2[j].")
Numpy's meshgrid is very useful for converting two vectors to a coordinate grid. What is the easiest way to extend this to three dimensions? So given three vectors x, y, and z, construct 3x3D arrays (instead of 2x2D arrays) which can be used as coordinates.
Numpy (as of 1.8 I think) now supports higher-than-2D generation of position grids with meshgrid. One important addition which really helped me is the ability to choose the indexing order (either xy or ij for Cartesian or matrix indexing, respectively), which I verified with the following example:
import numpy as np
x_ = np.linspace(0., 1., 10)
y_ = np.linspace(1., 2., 20)
z_ = np.linspace(3., 4., 30)
x, y, z = np.meshgrid(x_, y_, z_, indexing='ij')
assert np.all(x[:,0,0] == x_)
assert np.all(y[0,:,0] == y_)
assert np.all(z[0,0,:] == z_)
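If memory is a concern, meshgrid also accepts sparse=True, which returns open (broadcastable) grids instead of fully expanded arrays; a small sketch reusing the vectors above:
xs, ys, zs = np.meshgrid(x_, y_, z_, indexing='ij', sparse=True)
assert xs.shape == (10, 1, 1) and ys.shape == (1, 20, 1) and zs.shape == (1, 1, 30)
assert (xs + ys + zs).shape == (10, 20, 30)  # broadcasting expands them on demand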
Here is the source code of meshgrid:
def meshgrid(x,y):
    """
    Return coordinate matrices from two coordinate vectors.

    Parameters
    ----------
    x, y : ndarray
        Two 1-D arrays representing the x and y coordinates of a grid.

    Returns
    -------
    X, Y : ndarray
        For vectors `x`, `y` with lengths ``Nx=len(x)`` and ``Ny=len(y)``,
        return `X`, `Y` where `X` and `Y` are ``(Ny, Nx)`` shaped arrays
        with the elements of `x` and y repeated to fill the matrix along
        the first dimension for `x`, the second for `y`.

    See Also
    --------
    index_tricks.mgrid : Construct a multi-dimensional "meshgrid"
                         using indexing notation.
    index_tricks.ogrid : Construct an open multi-dimensional "meshgrid"
                         using indexing notation.

    Examples
    --------
    >>> X, Y = np.meshgrid([1,2,3], [4,5,6,7])
    >>> X
    array([[1, 2, 3],
           [1, 2, 3],
           [1, 2, 3],
           [1, 2, 3]])
    >>> Y
    array([[4, 4, 4],
           [5, 5, 5],
           [6, 6, 6],
           [7, 7, 7]])

    `meshgrid` is very useful to evaluate functions on a grid.

    >>> x = np.arange(-5, 5, 0.1)
    >>> y = np.arange(-5, 5, 0.1)
    >>> xx, yy = np.meshgrid(x, y)
    >>> z = np.sin(xx**2+yy**2)/(xx**2+yy**2)
    """
    x = asarray(x)
    y = asarray(y)
    numRows, numCols = len(y), len(x)  # yes, reversed
    x = x.reshape(1,numCols)
    X = x.repeat(numRows, axis=0)
    y = y.reshape(numRows,1)
    Y = y.repeat(numCols, axis=1)
    return X, Y
It is fairly simple to understand. I extended the pattern to an arbitrary number of dimensions; the code is by no means optimized (and not thoroughly error-checked either), but you get what you pay for. Hope it helps:
def meshgrid2(*arrs):
    arrs = tuple(reversed(arrs))  # edit
    lens = list(map(len, arrs))
    dim = len(arrs)

    sz = 1
    for s in lens:
        sz *= s

    ans = []
    for i, arr in enumerate(arrs):
        slc = [1]*dim
        slc[i] = lens[i]
        arr2 = np.asarray(arr).reshape(slc)
        for j, sz in enumerate(lens):
            if j != i:
                arr2 = arr2.repeat(sz, axis=j)
        ans.append(arr2)

    return tuple(ans)
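A short usage check (note that, because of the reversed() call, the grids come back in the reverse order of the inputs):
x = np.array([1, 2, 3, 4])
y = np.array([5, 6, 7])
z = np.array([9, 10])
Z, Y, X = meshgrid2(x, y, z)
print(X.shape)  # (2, 3, 4), i.e. (len(z), len(y), len(x))
assert np.all(X[0, 0, :] == x)
assert np.all(Y[0, :, 0] == y)
assert np.all(Z[:, 0, 0] == z)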
Can you show us how you are using np.meshgrid? There is a very good chance that you really don't need meshgrid because numpy broadcasting can do the same thing without generating a repetitive array.
For example,
import numpy as np
x=np.arange(2)
y=np.arange(3)
[X,Y] = np.meshgrid(x,y)
S=X+Y
print(S.shape)
# (3, 2)
# Note that meshgrid associates y with the 0-axis, and x with the 1-axis.
print(S)
# [[0 1]
# [1 2]
# [2 3]]
s=np.empty((3,2))
print(s.shape)
# (3, 2)
# x.shape is (2,).
# y.shape is (3,).
# x's shape is broadcasted to (3,2)
# y varies along the 0-axis, so to get its shape broadcasted, we first upgrade it to
# have shape (3,1), using np.newaxis. Arrays of shape (3,1) can be broadcasted to
# arrays of shape (3,2).
s=x+y[:,np.newaxis]
print(s)
# [[0 1]
# [1 2]
# [2 3]]
The point is that S=X+Y can and should be replaced by s=x+y[:,np.newaxis] because
the latter does not require (possibly large) repetitive arrays to be formed. It also generalizes to higher dimensions (more axes) easily. You just add np.newaxis where needed to effect broadcasting as necessary.
See http://www.scipy.org/EricsBroadcastingDoc for more on numpy broadcasting.
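For the three-vector case asked about here, the same idea extends directly; a small sketch:
x = np.arange(2)
y = np.arange(3)
z = np.arange(4)
# z varies along axis 0, y along axis 1, x along axis 2
s = x + y[:, np.newaxis] + z[:, np.newaxis, np.newaxis]
print(s.shape)
# (4, 3, 2)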
I think what you want is
X, Y, Z = numpy.mgrid[-10:10:100j, -10:10:100j, -10:10:100j]
for example.
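Note that the imaginary step 100j tells mgrid to produce that many points (endpoints included) rather than to use a step size; a quick shape check:
import numpy as np
X, Y, Z = np.mgrid[-10:10:100j, -10:10:100j, -10:10:100j]
print(X.shape, Y.shape, Z.shape)
# (100, 100, 100) (100, 100, 100) (100, 100, 100)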
Here is a multidimensional version of meshgrid that I wrote:
def ndmesh(*args):
    args = map(np.asarray, args)
    return np.broadcast_arrays(*[x[(slice(None),)+(None,)*i] for i, x in enumerate(args)])
Note that the returned arrays are views of the original array data, so changing the original arrays will affect the coordinate arrays.
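A quick usage sketch (assuming numpy is imported as np):
x = np.array([1, 2, 3, 4])
y = np.array([5, 6, 7])
z = np.array([9, 10])
X, Y, Z = ndmesh(x, y, z)
print(X.shape)  # (2, 3, 4), i.e. (len(z), len(y), len(x))
assert np.all(X[0, 0, :] == x)
assert np.all(Y[0, :, 0] == y)
assert np.all(Z[:, 0, 0] == z)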
Instead of writing a new function, numpy.ix_ should do what you want.
Here is an example from the documentation:
>>> ixgrid = np.ix_([0,1], [2,4])
>>> ixgrid
(array([[0],
       [1]]), array([[2, 4]]))
>>> ixgrid[0].shape, ixgrid[1].shape
((2, 1), (1, 2))
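For the three-vector case in this question, a minimal sketch:
import numpy as np
x = np.array([1, 2, 3, 4])
y = np.array([5, 6, 7])
z = np.array([9, 10])
X, Y, Z = np.ix_(x, y, z)   # open grids of shapes (4, 1, 1), (1, 3, 1), (1, 1, 2)
print((X + Y + Z).shape)
# (4, 3, 2)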
You can achieve that by changing the order:
import numpy as np
xx = np.array([1,2,3,4])
yy = np.array([5,6,7])
zz = np.array([9,10])
y, z, x = np.meshgrid(yy, zz, xx)
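A quick check of the resulting layout (with the default indexing='xy', every output has shape (len(zz), len(yy), len(xx))):
assert x.shape == y.shape == z.shape == (2, 3, 4)
assert np.all(x[0, 0, :] == xx)
assert np.all(y[0, :, 0] == yy)
assert np.all(z[:, 0, 0] == zz)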