Simplify Matrix by Averaging Multiple Cells - python

I have a large 2D numpy matrix that needs to be made smaller (e.g. convert from 100x100 to 10x10).
My goal is essentially: break the nxn matrix into smaller mxm blocks, average the cells in each mxm block, and then construct a new (smaller) matrix out of these block averages.
I'm thinking about using something like matrix[a::b, c::d] to extract the smaller matrices and then averaging those values, but this seems overly complex. Is there a better way to accomplish this?

You could split your array into blocks with the view_as_blocks function (in scikit-image).
For a 2D array, this returns a 4D array with the blocks ordered row-wise:
>>> import skimage.util as ski
>>> import numpy as np
>>> a = np.arange(16).reshape(4,4) # 4x4 array
>>> ski.view_as_blocks(a, (2,2))
array([[[[ 0,  1],
         [ 4,  5]],

        [[ 2,  3],
         [ 6,  7]]],


       [[[ 8,  9],
         [12, 13]],

        [[10, 11],
         [14, 15]]]])
Taking the mean along the last two axes returns a 2D array with the mean in each block:
>>> ski.view_as_blocks(a, (2,2)).mean(axis=(2,3))
array([[ 2.5,  4.5],
       [10.5, 12.5]])
Note: view_as_blocks returns a view of the array by modifying the strides (it also works with arrays with more than two dimensions). It is implemented purely in NumPy using as_strided, so if you don't have access to the scikit-image library you can copy the code from here.
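One practical caveat, assuming I'm reading the scikit-image behaviour correctly: view_as_blocks requires the block shape to divide the array shape exactly and raises an error otherwise, so for awkward sizes you may want to trim the array first. A minimal sketch (the 11x13 array and the block size m are just illustrative):
import skimage.util as ski
import numpy as np

a = np.arange(11 * 13).reshape(11, 13)   # shape is not a multiple of the block size
m = 2                                    # block size (illustrative)

# drop the trailing rows/columns so both dimensions are divisible by m
trimmed = a[:a.shape[0] - a.shape[0] % m, :a.shape[1] - a.shape[1] % m]
small = ski.view_as_blocks(trimmed, (m, m)).mean(axis=(2, 3))
print(small.shape)   # (5, 6)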

Without scikit-image, you can simply reshape and take the mean over the appropriate axes.
M=np.arange(10000).reshape(100,100)
M1=M.reshape(10,10,10,10)
M2=M1.mean(axis=(1,3))
A quick check to see if I got the right axes:
In [127]: M2[0,0]
Out[127]: 454.5
In [128]: M[:10,:10].mean()
Out[128]: 454.5
In [131]: M[-10:,-10:].mean()
Out[131]: 9544.5
In [132]: M2[-1,-1]
Out[132]: 9544.5
Adding .transpose([0,2,1,3]) puts the 2 averaging dimensions at the end, as view_as_blocks does.
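For reference, a small sanity check that this transposed layout gives the same block means (reusing the same M as above):
import numpy as np

M = np.arange(10000).reshape(100, 100)
M1 = M.reshape(10, 10, 10, 10)

# axes of M1 are (block row, row within block, block column, column within block);
# transposing to (0, 2, 1, 3) puts the two within-block axes last, like view_as_blocks
blocks = M1.transpose(0, 2, 1, 3)
print(np.allclose(blocks.mean(axis=(2, 3)), M1.mean(axis=(1, 3))))   # True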
For this (100,100) case, the reshape approach is about 2x faster than the as_strided approach, but both are quite fast; the direct strided solution isn't much slower than reshaping (the strides below are in bytes and assume a C-contiguous int64 array):
from numpy.lib.stride_tricks import as_strided
as_strided(M, shape=(10,10,10,10), strides=(8000,80,800,8)).mean((2,3))
as_strided(M, shape=(10,10,10,10), strides=(8000,800,80,8)).mean((1,3))
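If you'd rather not hard-code the byte strides, a sketch that derives them from the array itself (so it works for other dtypes and block sizes; the names m, s0, s1 are just illustrative) might look like this:
import numpy as np
from numpy.lib.stride_tricks import as_strided

M = np.arange(10000).reshape(100, 100)
m = 10                          # block size
s0, s1 = M.strides              # byte strides of the original array

blocks = as_strided(M,
                    shape=(M.shape[0] // m, M.shape[1] // m, m, m),
                    strides=(s0 * m, s1 * m, s0, s1))
small = blocks.mean(axis=(2, 3))
print(np.allclose(small, M.reshape(10, 10, 10, 10).mean(axis=(1, 3))))   # True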

I'm coming in late but I'd recommend scipy.ndimage.zoom() as an off-the-shelf solution for this. It does down-sizing (or upsizing) using spline interpolations of arbitrary order from 0 to 5. Sounds like order 0 would be sufficient for you based on your question.
from scipy import ndimage as ndi
import numpy as np
M=np.arange(1000000).reshape(1000,1000)
shrinkby=10
Mfilt = ndi.uniform_filter(input=M, size=shrinkby)
Msmall = ndi.zoom(input=Mfilt, zoom=1./shrinkby, order=0)
That's all you need. It's perhaps slightly less convenient to specify a zoom rather than a desired output size, but at least for order=0 this method is very fast.
The output size is 10% of the input in each dimension, i.e.
print(M.shape, Msmall.shape)
gives (1000, 1000) (100, 100), and the speed you can get from
%timeit Mfilt = ndi.uniform_filter(input=M, size=shrinkby)
%timeit Msmall = ndi.zoom(input=Mfilt, zoom=1./shrinkby, order=0)
which on my machine gave 10 loops, best of 3: 20.5 ms per loop for the uniform_filter call and 1000 loops, best of 3: 1.67 ms per loop for the zoom call.
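If it feels more natural to think in terms of a target output shape, one way to derive the zoom factor from it is sketched below (target_shape is an illustrative name; zoom rounds the output shape, so awkward ratios can be off by one):
from scipy import ndimage as ndi
import numpy as np

M = np.arange(1000000).reshape(1000, 1000)
target_shape = (100, 100)                          # desired output size
shrinkby = M.shape[0] // target_shape[0]

zoom = np.array(target_shape) / np.array(M.shape)  # per-axis zoom factors
Mfilt = ndi.uniform_filter(input=M, size=shrinkby)
Msmall = ndi.zoom(input=Mfilt, zoom=zoom, order=0)
print(Msmall.shape)                                # (100, 100)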

Related

Extend 1d numpy array in multiple dimensions

I have a 1d numpy array, e.g. a=[10,12,15] and I want to extend it so that I end up with a numpy array b with the shape (3,10,15,20) filled with a so that e.g. b[:,1,1,1] is [10,12,15].
I thought of using np.repeat, but it's not clear to me how to do it.
tile will do it for you. Internally this does a repeat for each axis.
In [114]: a = np.array([10,12,15])
In [115]: A = np.tile(a.reshape(3,1,1,1),(1,10,15,20))
In [116]: A.shape
Out[116]: (3, 10, 15, 20)
In [117]: A[:,1,1,1]
Out[117]: array([10, 12, 15])
For some purposes it might be enough to just do the reshape and let broadcasting expand the dimensions as needed (without actually expanding memory use).
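A minimal sketch of that reshape-and-broadcast idea, with np.broadcast_to (which returns a read-only view) in case you do want the full (3,10,15,20) shape without copying:
import numpy as np

a = np.array([10, 12, 15])
col = a.reshape(3, 1, 1, 1)            # broadcasts against shape (3, 10, 15, 20)

# most operations broadcast automatically, e.g. against another array:
other = np.ones((3, 10, 15, 20))
print((col * other).shape)             # (3, 10, 15, 20)

# if you need the expanded array itself, as a read-only view:
b = np.broadcast_to(col, (3, 10, 15, 20))
print(b[:, 1, 1, 1])                   # [10 12 15]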
Code:
import numpy as np
a = np.arange(1800).reshape((10,12,15))
b = np.repeat(a, repeats=5, axis=0).reshape((3, 10, 15, 20))
You can change axis if you want to repeat in a different fashion. To understand repeat, use smaller shapes, e.g. a of shape (3,5,4) and b of shape (2,3,5,4), and repeat along different axes.

Applying matrix functions like scipy.linalg.eigh to higher dimensional arrays

I am new to numpy but have been using python for quite a while as an engineer.
I am writing a program that currently stores stress tensors as 3x3 numpy arrays within another NxM array which represents values through time and through the thickness of a wall, so overall it is an NxMx3x3 numpy array. I want to efficiently calculate the eigenvalues and eigenvectors of each 3x3 array within this larger array. So far I have tried using "fromiter", but this doesn't seem to work because the function returns 2 arrays. I have also tried apply_along_axis, which also doesn't work because it says the inner 3x3 is not a square matrix. I can do it with a list comprehension, but resorting to lists doesn't seem ideal.
Example just calculating the eigenvalues using a list comprehension:
import numpy as np
from scipy import linalg
a=np.random.random((2,2,3,3))
f=linalg.eigvalsh
ans=np.asarray([f(x) for x in a.reshape((4,3,3))])
ans.shape=(2,2,3)
I thought something like this would work but I have played around with it and can't get it working:
np.apply_along_axis(f,0,a)
BTW, the 2x2 bit could be up to 5000x100, and this code is repeated ~50x50x200 times, hence the need for efficiency. Any help would be greatly appreciated.
You can use numpy.linalg.eigh. It accepts a stacked array like your example a and operates on the last two dimensions.
Here's an example. First, create an array of 3x3 symmetric arrays:
In [96]: a = np.random.random((2, 2, 3, 3))
In [97]: a = a + np.transpose(a, axes=(0, 1, 3, 2))
In [98]: a[0, 0]
Out[98]:
array([[0.61145048, 0.85209618, 0.03909677],
       [0.85209618, 1.79309413, 1.61209077],
       [0.03909677, 1.61209077, 1.55432465]])
Compute the eigenvalues and eigenvectors of all the 3x3 arrays:
In [99]: evals, evecs = np.linalg.eigh(a)
In [100]: evals.shape
Out[100]: (2, 2, 3)
In [101]: evecs.shape
Out[101]: (2, 2, 3, 3)
Take a look at the result for a[0, 0]:
In [102]: evals[0, 0]
Out[102]: array([-0.31729364, 0.83148477, 3.44467813])
In [103]: evecs[0, 0]
Out[103]:
array([[-0.55911658,  0.79634401,  0.23070516],
       [ 0.63392772,  0.23128064,  0.73800062],
       [-0.53434473, -0.55887877,  0.63413738]])
Verify that it is the same as computing the eigenvalues and eigenvectors for a[0, 0] separately:
In [104]: np.linalg.eigh(a[0, 0])
Out[104]:
(array([-0.31729364,  0.83148477,  3.44467813]),
 array([[-0.55911658,  0.79634401,  0.23070516],
        [ 0.63392772,  0.23128064,  0.73800062],
        [-0.53434473, -0.55887877,  0.63413738]]))
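As a side note, if you only need the eigenvalues, np.linalg.eigvalsh broadcasts over the leading dimensions in the same way (on any reasonably recent NumPy) and skips the eigenvector computation; a quick sketch with an array like a:
import numpy as np

a = np.random.random((2, 2, 3, 3))
a = a + np.transpose(a, axes=(0, 1, 3, 2))    # make each 3x3 block symmetric

evals = np.linalg.eigvalsh(a)                 # eigenvalues only, shape (2, 2, 3)
print(evals.shape)
print(np.allclose(evals[0, 0], np.linalg.eigvalsh(a[0, 0])))   # True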

Large point-matrix array multiplication in numpy

Given two large numpy arrays, one for a list of 3D points and another for a list of transformation matrices, and assuming there is a 1-to-1 correspondence between the two lists, I'm looking for the best way to calculate the result array of each point transformed by its corresponding matrix.
My solution was to use slicing (see "test4" in the example code below), which worked fine with small arrays but fails with large arrays because of how memory-wasteful my method is :)
import numpy as np
COUNT = 100
matrix = np.random.random_sample((3,3,)) # A single matrix
matrices = np.random.random_sample((COUNT,3,3,)) # Many matrices
point = np.random.random_sample((3,)) # A single point
points = np.random.random_sample((COUNT,3,)) # Many points
# Test 1, result of a single point multiplied by a single matrix
# This is as easy as it gets
test1 = np.dot(point,matrix)
print('done')
# Test 2, result of a single point multiplied by many matrices
# This works well and returns a transformed point for each matrix
test2 = np.dot(point,matrices)
print('done')
# Test 3, result of many points multiplied by a single matrix
# This works also just fine
test3 = np.dot(points,matrix)
print('done')
# Test 4, this is the case i'm trying to solve. Assuming there's a 1-1
# correspondence between the point and matrix arrays, the result i want
# is an array of points, where each point has been transformed by it's
# corresponding matrix
test4 = np.zeros((COUNT,3))
for i in range(COUNT):
    test4[i] = np.dot(points[i], matrices[i])
print('done')
With a small array this works fine, but with large arrays (COUNT=1000000) Test #4 works but gets rather slow.
Is there a way to make Test #4 faster, preferably without using a loop?
You can use numpy.einsum. Here's an example with 5 matrices and 5 points:
In [49]: matrices.shape
Out[49]: (5, 3, 3)
In [50]: points.shape
Out[50]: (5, 3)
In [51]: p = np.einsum('ijk,ik->ij', matrices, points)
In [52]: p[0]
Out[52]: array([ 1.16532051, 0.95155227, 1.5130032 ])
In [53]: matrices[0].dot(points[0])
Out[53]: array([ 1.16532051, 0.95155227, 1.5130032 ])
In [54]: p[1]
Out[54]: array([ 0.79929572, 0.32048587, 0.81462493])
In [55]: matrices[1].dot(points[1])
Out[55]: array([ 0.79929572, 0.32048587, 0.81462493])
The above is doing matrices[i].dot(points[i]) (i.e. multiplying with the point on the right), but I just reread the question and noticed that your code uses points[i].dot(matrices[i]). You can do that by switching the indices and arguments of einsum:
In [76]: lp = np.einsum('ij,ijk->ik', points, matrices)
In [77]: lp[0]
Out[77]: array([ 1.39510822, 1.12011057, 1.05704609])
In [78]: points[0].dot(matrices[0])
Out[78]: array([ 1.39510822, 1.12011057, 1.05704609])
In [79]: lp[1]
Out[79]: array([ 0.49750324, 0.70664634, 0.7142573 ])
In [80]: points[1].dot(matrices[1])
Out[80]: array([ 0.49750324, 0.70664634, 0.7142573 ])
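On NumPy versions with the @ operator (1.10+), np.matmul also broadcasts over stacked matrices, so a possible alternative to einsum is sketched below (each point is treated as a 1x3 row vector to reproduce points[i].dot(matrices[i])):
import numpy as np

COUNT = 5
matrices = np.random.random_sample((COUNT, 3, 3))
points = np.random.random_sample((COUNT, 3))

# points[:, None, :] has shape (COUNT, 1, 3); matmul broadcasts over the first axis,
# giving (COUNT, 1, 3), which is squeezed back to (COUNT, 3)
lp = (points[:, None, :] @ matrices)[:, 0, :]
print(np.allclose(lp, np.einsum('ij,ijk->ik', points, matrices)))   # True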
It doesn't make much sense to have multiple transform matrices. You can combine transform matrices as in this question:
If I want to apply matrix A, then B, then C, I will multiply the matrices in reverse order np.dot(C,np.dot(B,A))
So you can save some memory space by precomputing that matrix. Then applying a bunch of vectors to one transform matrix should be easily handled (within reason).
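If your transforms really are one fixed chain (apply A, then B, then C) applied to every point, a sketch of precomposing them once and then applying the single combined matrix to all points (np.linalg.multi_dot just picks an efficient multiplication order; a plain np.dot(C, np.dot(B, A)) works too):
import numpy as np

A, B, C = np.random.random_sample((3, 3, 3))      # three 3x3 transforms
points = np.random.random_sample((1000, 3))

combined = np.linalg.multi_dot([C, B, A])         # apply A, then B, then C

# apply the single combined transform to every point (points as column vectors)
transformed = points @ combined.T
print(np.allclose(transformed[0], combined @ points[0]))   # True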
I don't know why you would need one million transformations on one million vectors, but I would suggest buying more RAM.
Edit:
There isn't a way to reduce the number of operations, no. Unless your transformation matrices have a special property such as sparsity or diagonality, you're going to have to run all the multiplications and summations. However, the way you process these operations can be optimized across cores and/or with vector operations on GPUs.
Also, Python is notably slow. You can try spreading the work across your cores using NumExpr, or use a BLAS framework in C++ (notably quick ;))

Numpy: transpose result of advanced indexing

>>> import numpy as np
>>> X = np.arange(27).reshape(3, 3, 3)
>>> x = [0, 1]
>>> X[x, x, :]
array([[ 0,  1,  2],
       [12, 13, 14]])
I need to sum it along the 0 dimension but in the real world the matrix is huge and I would prefer to be summing it along -1 dimension which is faster due to memory layout. Hence I would like the result to be transposed:
array([[ 0, 12],
       [ 1, 13],
       [ 2, 14]])
How do I do that? I would like the result of numpy's "advanced indexing" to be implicitly transposed. Transposing it explicitly with .T at the end is even slower and is not an option.
Update1: in the real world advanced indexing is unavoidable and the subscripts are not guaranteed to be the same.
>>> x = [0, 0, 1]
>>> y = [0, 1, 1]
>>> X[x, y, :]
array([[ 0,  1,  2],
       [ 3,  4,  5],
       [12, 13, 14]])
Update2: To clarify that this is not an XY problem, here is the actual problem:
I have a large matrix X which contains elements x coming from some probability distribution. The probability distribution of the element depends on the neighbourhood of the element. This distribution is unknown so I follow the Gibbs sampling procedure to build a matrix which has elements from this distribution. In a nutshell it means that I make some initial guess for matrix X and then I keep iterating over the elements of matrix X updating each element x with a formula that depends on the neighbouring values of x. So, for any element of a matrix I need to get its neighbours (advanced indexing) and perform some operation on them (summation in my example). I have used line_profiler to see that the line which takes most of the time in my code is taking the sum of an array with respect to dimension 0 rather than -1. Hence I would like to know if there is a way to produce an already-transposed matrix as a result of advanced indexing.
I would like to sum it along the 0 dimension but in the real world the matrix is huge and I would prefer to be summing it along -1 dimension which is faster due to memory layout.
I'm not totally sure what you mean by this. If the underlying array is row-major (the default, i.e. X.flags.c_contiguous == True), then it may be slightly faster to sum it along the 0th dimension. Simply transposing an array using .T or np.transpose() does not, in itself, change how the array is laid out in memory.
For example:
# X is row-major
print(X.flags.c_contiguous)
# True
# Y is just a transposed view of X
Y = X.T
# the indices of the elements in Y are transposed, but their layout in memory
# is the same as in X, therefore Y is column-major rather than row-major
print(Y.flags.c_contiguous)
# False
You can convert from row-major to column-major, for example by using np.asfortranarray(X), but there is no way to perform this conversion without making a full copy of X in memory. Unless you're going to be performing lots of operations over the columns of X then it almost certainly won't be worthwhile doing the conversion.
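For concreteness, a quick sketch of that conversion (note that it makes a full copy):
import numpy as np

X = np.arange(27).reshape(3, 3, 3)       # row-major by default

Xf = np.asfortranarray(X)                # column-major copy
print(X.flags.c_contiguous)              # True
print(Xf.flags.f_contiguous)             # True
print(np.shares_memory(X, Xf))           # False: the data was copied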
If you want to store the result of your summation in a column-major array, you could use the out= kwarg to X.sum(), e.g.:
result = np.empty((3, 3), order='F') # Fortran-order, i.e. column-major
X.sum(0, out=result)
In your case the difference between summing over rows vs columns is likely to be very minimal, though - since you are already going to be indexing non-adjacent elements in X you will already be losing the benefit of spatial locality of reference that would normally make summing over rows slightly faster.
For example:
X = np.random.randn(100, 100, 100)
# summing over whole rows is slightly faster than summing over whole columns
%timeit X.sum(0)
# 1000 loops, best of 3: 438 µs per loop
%timeit X.T.sum(0)
# 1000 loops, best of 3: 486 µs per loop
# however, the locality advantage disappears when you are addressing
# non-adjacent elements using fancy indexing
%timeit X[[0, 0, 1], [0, 1, 1], :].sum()
# 100000 loops, best of 3: 4.72 µs per loop
%timeit X.T[[0, 0, 1], [0, 1, 1], :].sum()
# 100000 loops, best of 3: 4.63 µs per loop
Update
@senderle has mentioned in the comments that using numpy v1.6.2 he sees the opposite order for the timing, i.e. X.sum(-1) is faster than X.sum(0) for a row-major array. This seems to be related to the version of numpy he is using - using v1.6.2 I can reproduce the order that he observes, but using two newer versions (v1.8.2 and 1.10.0.dev-8bcb756) I observe the opposite (i.e. X.sum(0) is faster than X.sum(-1) by a small margin). Either way, I don't think changing the memory order of the array is likely to help much in the OP's case.

Calculating long expressions using Numpy (coordinate transform)?

In Python's NumPy module, is there a function that can calculate long/advanced math expressions on an array? I've heard of the numexpr module but want to steer clear of further dependencies.
Better yet, can I limit these expressions to only, say, the first or second element of the sub-arrays within my array, without having to unpack them as separate arrays?
Here is my specific problem. I have an array of arrays containing geographic point coordinates looking like this: [[x1,y1],[x2,y2],[x3,y3],etc...]. What I want is to transform these geo-coords to pixel coordinates so they can be drawn on an image. I therefore want to run the following expression/calculation on the first element of each sub-array, i.e. the xs:
((180+X)/360)*screenwidthpixels
And on the second element, ie the ys:
((-90+Y)/180)*-screenheightpixels
These expressions would work in a Python for-loop, but that is too slow, which is why I'm turning to NumPy. I know I can (and have tried to) chain NumPy's single math operator functions one after another, but that is still too slow, and besides, to do that I first had to unpack all the xs and ys into separate arrays and repack them together after the calculation, making it even slower.
So I guess I'm looking for a more direct Numpy way using less steps to transform my coordinate array using the expressions above. Any ideas?
import numpy as np
points = np.random.rand(10, 2)                            # [[x1, y1], [x2, y2], ...]
translation = np.array([180, -90])                        # added to each (x, y)
scaling = np.array([1024, -768]) / np.array([360, 180])   # assumes a 1024x768 screen
transformed_points = (points + translation) * scaling
This will do what you are looking for. It relies on numpy broadcasting rules to achieve expressiveness and performance.
But rather than explaining exactly how that works, I think you are better off finding yourself a good numpy primer, and starting at the top. numpy is one of the best things about python, and you cant go wrong learning a little more about it. Suffice to say, numpy is certainly up to the kind of task you are facing.
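To make the broadcasting explicit, here is a quick check of that snippet against the formulas in the question for a single point (a 1024x768 screen is assumed, as above):
import numpy as np

screen_w, screen_h = 1024, 768
points = np.random.rand(10, 2)
translation = np.array([180, -90])
scaling = np.array([screen_w, -screen_h]) / np.array([360, 180])
transformed_points = (points + translation) * scaling

x, y = points[0]
print(np.isclose(transformed_points[0, 0], ((180 + x) / 360) * screen_w))    # True
print(np.isclose(transformed_points[0, 1], ((-90 + y) / 180) * -screen_h))   # True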
I'm a little confused because I'm not sure exactly what you're saying you already tried, or what the speed condition for success is.
Are you saying you already tried something like the following, but it is too slow?
arr = whatever
arr[:,0] = (arr[:,0] + 180) / (360 * screenwidthpixels)
arr[:,1] = 180 - (arr[:,1] - 90) / (180 * screenheightpixels)
I'm not sure what you mean by "having to unpack" to X and Y. Here's how you avoid unpacking (if I understand correctly):
arr = np.array([ [x1,y1], [x2,y2], [x3,y3] ])
arr.shape
=> (3, 2)
X = arr[:,0] # fast, creates a view
Y = arr[:,1] # fast too
((X+180)/360)/screenwidthpixels
Further speed up can be achieved by rewriting/simplifying your expressions.
((X+180)/360)/s => (X+180)/(360*s)
(180-((Y+90)/180))/s => (180/s-1/(2*s)) - y/(180*s)
In the first rewrite, you get 2 traverses of the array, instead of 3, and in the second, the array is only traversed twice, instead of 4 times.
In [235]: xs = np.arange(1000)
In [236]: ys = np.arange(1, 1001)
In [237]: a = np.array([xs, ys]).T
In [238]: a
Out[238]:
array([[   0,    1],
       [   1,    2],
       [   2,    3],
       ...,
       [ 997,  998],
       [ 998,  999],
       [ 999, 1000]])
In [240]: a[:, 0]=(a[:, 0]+180)/360/1024
The a[:, 0] expression offers a view of the first column of a; it's fast and memory-saving. See the numpy docs here.
