An efficient way to iterate over a multidimensional array?

I'm trying to find a way to perform operations on each element across multiple 2D arrays without having to loop over them, or at least without needing two for loops. My code calculates the standard deviation of each pixel over a series of images (arrays). The number of images is not the problem; it is the size of the arrays that makes the code extremely slow. The following is a working example of what I have.
import numpy as np

# reshape(# of images (arrays), # of rows, # of cols)
a = np.arange(32).reshape(2, 4, 4)

stddev_arr = np.array([])
for i in range(4):
    for j in range(4):
        pixel = a[0:, i, j]
        stddev = np.std(pixel)
        stddev_arr = np.append(stddev_arr, stddev)
My actual data is 2000x2000, making this code loop 4000000 times. Is there a better way to do this?
Any advice is extremely appreciated.

You're already using numpy. numpy's std() function takes an axis argument that tells it which axis to operate on (in this case the zeroth axis). Because this offloads the calculation to numpy's C backend (possibly using SIMD instructions that vectorize many operations), it is much faster than iterating in Python. The other time-consuming operation in your code is the append to stddev_arr: appending to numpy arrays is slow because the entire array is copied into new memory before the new element is added. Since you already know how big that array needs to be, you might as well preallocate it.
a = np.arange(32).reshape(2, 4, 4)
stdev = np.std(a, axis=0)
This gives a 4x4 array:
array([[8., 8., 8., 8.],
       [8., 8., 8., 8.],
       [8., 8., 8., 8.],
       [8., 8., 8., 8.]])
To flatten this into a 1D array, do flat_stdev = stdev.flatten() (or stdev.ravel(), which avoids a copy when possible).
Comparing the execution times:
# Using only numpy
def fun1(arr):
    return np.std(arr, axis=0).flatten()

# Your function
def fun2(arr):
    stddev_arr = np.array([])
    for i in range(arr.shape[1]):
        for j in range(arr.shape[2]):
            pixel = arr[0:, i, j]
            stddev = np.std(pixel)
            stddev_arr = np.append(stddev_arr, stddev)
    return stddev_arr

# Your function, but pre-allocating stddev_arr
def fun3(arr):
    stddev_arr = np.zeros((arr.shape[1] * arr.shape[2],))
    x = 0
    for i in range(arr.shape[1]):
        for j in range(arr.shape[2]):
            pixel = arr[0:, i, j]
            stddev = np.std(pixel)
            stddev_arr[x] = stddev
            x += 1
    return stddev_arr
First, let's make sure all these functions are equivalent:
a = np.random.random((3, 10, 10))
assert np.all(fun1(a) == fun2(a))
assert np.all(fun1(a) == fun3(a))
Yup, all give the same result. Now, let's try with a bigger array.
import timeit

a = np.random.random((3, 100, 100))
x = timeit.timeit('fun1(a)', setup='from __main__ import fun1, a', number=10)
# x: 0.003302899989648722
y = timeit.timeit('fun2(a)', setup='from __main__ import fun2, a', number=10)
# y: 5.495519500007504
z = timeit.timeit('fun3(a)', setup='from __main__ import fun3, a', number=10)
# z: 3.6250679999939166
Wow! We get a ~1.5x speedup just by preallocating.
Even more wow: using numpy's std() with the axis argument gives a > 1000x speedup, and this is just for the 100x100 array! With bigger arrays, you can expect to see even bigger speedups.

Based on what you have provided, you can reshape your array another way to vectorize the computation and replace your two loops. Then you only have to call np.std once, on the axis you want.
a = np.arange(32).reshape(2, 4, 4)
a = a.reshape(2, -1).transpose()
stddev_arr = np.std(a, axis=1)
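As a quick check (a sketch using the same array as above), this matches the flattened axis-0 result from the first answer:
import numpy as np

a = np.arange(32).reshape(2, 4, 4)
stddev_arr = np.std(a.reshape(2, -1).transpose(), axis=1)

# agrees with the axis-based approach, element for element
assert np.allclose(stddev_arr, np.std(a, axis=0).flatten())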

Related

How to produce the following three dimensional array?

I have a cube of size N * N * N, say N=8. Each dimension of the cube is discretised in steps of 1, so that I have labelled points (0,0,0), (0,0,1), ..., (N,N,N). At each labelled point, I would like to assign a random value, producing an array which stores a value at each vertex; for example val[0,0,0]=1, val[0,0,1]=1.2, val[0,1,0]=1.3, ...
How do I write Python code to achieve this?
Did you mean this:
import numpy as np

n = 5
val = np.zeros((n, n, n))  # create a 3-D array filled with zeros (np.empty would leave it uninitialized)
val[0, 0, 0] = 11
val[0, 0, 1] = 33
print(val[0, 0])
# [11. 33.  0.  0.  0.]
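Since the question asks for a random value at every vertex, a vectorized sketch (assuming uniformly distributed values are acceptable) is:
import numpy as np

n = 8
# one random value per labelled point (0..n in each dimension)
val = np.random.rand(n + 1, n + 1, n + 1)
print(val[0, 0, 0], val[0, 0, 1])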
You could simply generate lists of lists. While not in any way efficient, it would allow you to access your cube like val[0][0][0].
arr = [[[] for _ in range(8)] for _ in range(8)]
arr[0][0].append(1)
For large matrices, look into using numpy. This is the problem it's designed to solve.

Numpy convert scalars to arrays

I am evaluating arbitrary expressions in terms of an x array, such as 3*x**2 + 4. This normally results in an array with x's shape. However, if the expression is just a constant, it returns a scalar. What is the best way to ensure it has x's shape without explicitly checking the shape? Multiplying by numpy.ones(x.shape) works, but I think that uses unnecessary computations.
Edit:
To be clear, I don't just want it to be an array with size one; I want it to be the same shape and size as x.
I'm evaluating a string using NumExpr which can contain an arbitrary function of x:
x = numpy.linspace(min, max, num)
y = numexpr.evaluate(expr, {'x': x}, {})
I want to get an array of y-values that could be plotted against x through matplotlib. Currently I am doing this, which works fine:
y = numpy.ones(x.size) * y
But I'm worried that this is wasteful for large sizes. Is there a better way?
See atleast_1d:
Convert inputs to arrays with at least one dimension.
>>> import numpy as np
>>> x = 42 # x is a scalar
>>> np.atleast_1d(x)
array([42])
>>> x_is_array = np.array(42) # A zero dim array
>>> np.atleast_1d(x_is_array)
array([42])
>>> x_is_another_array = np.array([42]) # A 1d array
>>> np.atleast_1d(x_is_another_array)
array([42])
>>> np.atleast_1d(np.ones((3, 3))) # Any other numpy array
array([[ 1., 1., 1.],
       [ 1., 1., 1.],
       [ 1., 1., 1.]])
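Note that atleast_1d only guarantees one dimension, not x's full shape. If you need the result to have x's shape without paying for the multiplication by ones, np.broadcast_to returns a read-only view (a sketch):
import numpy as np

x = np.linspace(0.0, 1.0, 5)
y = 4.0                              # result of a constant expression
y_arr = np.broadcast_to(y, x.shape)  # no data is copied; the view is read-only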
When I'm unsure whether x will be a scalar, list/tuple or array, I've been using:
x = np.asarray(x).reshape(1, -1)[0,:]
Alternatively by (ab)using the broadcasting rules, you could equally write:
x = np.asarray(x) * np.ones(1)
Perhaps a slightly more streamlined syntax is to make use of the extra arguments on the array constructor:
x = np.array(x, ndmin=1, copy=False)
Which will ensure that the array has at least one dimension.
But this is one of those things that seems a bit clumsy in numpy.
You can use reshape: np.reshape(x, (1,1))
Here's a demonstration:
>>> x = 4
>>> a = np.reshape(x, (1,1))
>>> a[0]
array([4])
>>> a[0][0]
4
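If what you need is a writable array with x's full shape filled with the constant, np.full is another direct option (a small sketch):
import numpy as np

x = np.linspace(0.0, 1.0, 5)
y = np.full(x.shape, 4.0)  # shape (5,), every element 4.0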

Fast inverse and transpose matrix in Python

I have a large matrix A of shape (n, n, 3, 3), where n is about 5000. Now I want to find the inverse and transpose of matrix A:
import numpy as np

A = np.random.rand(1000, 1000, 3, 3)
identity = np.identity(3, dtype=A.dtype)
Ainv = np.zeros_like(A)
Atrans = np.zeros_like(A)
for i in range(1000):
    for j in range(1000):
        Ainv[i, j] = np.linalg.solve(A[i, j], identity)
        Atrans[i, j] = np.transpose(A[i, j])
Is there a faster, more efficient way to do this?
This is taken from a project of mine, where I also do vectorized linear algebra on many 3x3 matrices.
Note that there is only a loop over 3, not a loop over n, so the code is vectorized in the important dimensions. I don't want to vouch for how this compares, performance-wise, to a C/numba extension doing the same thing; that is likely to be substantially faster still, but at least this blows the loops over n out of the water.
def adjoint(A):
    """compute inverse without division by det; ...x3x3 input, i.e. an array of matrices assumed"""
    AI = np.empty_like(A)
    for i in range(3):
        AI[..., i, :] = np.cross(A[..., i-2, :], A[..., i-1, :])
    return AI

def inverse_transpose(A):
    """
    efficiently compute the inverse-transpose for a stack of 3x3 matrices
    """
    I = adjoint(A)
    det = dot(I, A).mean(axis=-1)
    return I / det[..., None, None]

def inverse(A):
    """inverse of a stack of 3x3 matrices"""
    return np.swapaxes(inverse_transpose(A), -1, -2)

def dot(A, B):
    """dot arrays of vecs; contract over last indices"""
    return np.einsum('...i,...i->...', A, B)

A = np.random.rand(2, 2, 3, 3)
I = inverse(A)
print(np.einsum('...ij,...jk', A, I))
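Rather than eyeballing the printed product, a quick assertion (a sketch, using the functions defined above) confirms each 3x3 block multiplies to the identity:
# each 3x3 product A @ inverse(A) should be the identity matrix
assert np.allclose(np.einsum('...ij,...jk', A, I), np.identity(3))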
For the transpose: testing a bit in IPython showed:
In [1]: import numpy
In [2]: x = numpy.ones((5,6,3,4))
In [3]: numpy.transpose(x,(0,1,3,2)).shape
Out[3]: (5, 6, 4, 3)
so you can just do
Atrans = numpy.transpose(A, (0, 1, 3, 2))
to swap the last two dimensions (while leaving dimensions 0 and 1 unchanged).
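Equivalently (a sketch; both forms are standard numpy), you can avoid hard-coding the number of leading dimensions:
import numpy as np

A = np.random.rand(5, 6, 3, 4)
Atrans = A.transpose(0, 1, 3, 2)    # method form of numpy.transpose
Atrans2 = np.swapaxes(A, -1, -2)    # swaps the last two axes for any rank
assert np.array_equal(Atrans, Atrans2)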
For the inversion: see the last example of http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.inv.html#numpy.linalg.inv
Inverses of several matrices can be computed at once:
>>> from numpy.linalg import inv
>>> a = np.array([[[1., 2.], [3., 4.]], [[1., 3.], [3., 5.]]])
>>> inv(a)
array([[[-2.  ,  1.  ],
        [ 1.5 , -0.5 ]],
       [[-1.25,  0.75],
        [ 0.75, -0.25]]])
So I guess in your case the inversion can be done with just
Ainv = inv(A)
and it will know that the last two dimensions are the ones it is supposed to invert over, and that the first dimensions are just how you stacked your data. This should be much faster.
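Putting the two pieces together for the original (n, n, 3, 3) case, a minimal sketch (the batched inv assumes numpy 1.8 or later):
import numpy as np

A = np.random.rand(1000, 1000, 3, 3)
Ainv = np.linalg.inv(A)                  # inverts each trailing 3x3 block at once
Atrans = np.transpose(A, (0, 1, 3, 2))   # transposes each trailing 3x3 block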
Speed difference:
For the transpose: your method needs 3.77557015419 s, and mine needs 2.86102294922e-06 s (a speedup of over a million times; numpy.transpose returns a view, which is why it is essentially instantaneous).
For the inversion: I guess my numpy version is not high enough to try the numpy.linalg.inv trick with an (n, n, 3, 3) shape and see the speedup there (my version is 1.6.2, and the docs I based my solution on are for 1.8), but it should work on 1.8 if someone else can test it.
Numpy arrays have the .T property, which is a shortcut for transpose (note that on an N-d array it reverses all axes, so for a stack of matrices use transpose or swapaxes with explicit axes instead).
For inversions, you use np.linalg.inv(A).
As posted by wim, A.I also works on a numpy matrix, e.g.
print(A.I)
For a numpy matrix object, use matrix.getI, e.g.
A = numpy.matrix('1 3; 5 6')
print(A.getI())
(Note that numpy.matrix is deprecated in favour of regular ndarrays.)

Pixel interpolation (binning?)

I am trying to write a pixel interpolation (binning?) algorithm: I want to, for example, take four pixels, take their average, and produce that average as a new pixel. I've had success with stride tricks to speed up the "partitioning" process, but the actual calculation is really slow. For a 256x512 16-bit grayscale image, the averaging code takes 7 s on my machine, and I have to process from 2k to 20k images depending on the data set. The purpose is to make the image less noisy (I am aware my proposed method decreases resolution, but this might not be a bad thing for my purposes).
import numpy as np
from numpy.lib.stride_tricks import as_strided
from scipy.misc import imread
import matplotlib.pyplot as pl
import time

def sliding_window(arr, footprint):
    """Construct a sliding window view of the array."""
    t0 = time.time()
    arr = np.asarray(arr)
    footprint = int(footprint)
    if arr.ndim != 2:
        raise ValueError("need 2-D input")
    if not (footprint > 0):
        raise ValueError("need a positive window size")
    shape = (arr.shape[0] - footprint + 1,
             arr.shape[1] - footprint + 1, footprint, footprint)
    if shape[0] <= 0:
        shape = (1, shape[1], arr.shape[0], shape[3])
    if shape[1] <= 0:
        shape = (shape[0], 1, shape[2], arr.shape[1])
    strides = (arr.shape[1] * arr.itemsize, arr.itemsize,
               arr.shape[1] * arr.itemsize, arr.itemsize)
    t1 = time.time()
    print("strides")
    print(t1 - t0)
    return as_strided(arr, shape=shape, strides=strides)
def binning(w, footprint):
    # the averaging block
    # preallocate memory
    binned = np.zeros((w.shape[0], w.shape[1]))
    t2 = time.time()
    for i in range(w.shape[0]):
        for j in range(w.shape[1]):
            binned[i, j] = w[i, j].sum() / (footprint * footprint + 0.0)
    t3 = time.time()
    print(t3 - t2)
    return binned
Output:
5.60283660889e-05
7.00565886497
Is there some built-in/optimized function that would do the same thing I want, or should I just try to make a C extension (or even something else)?
Below is the additional part of the code, just for completeness, since I think the functions are the most important here. Image plotting is slow, but I think there is a way to improve it, for example here
for i in range(2000):
    arr = imread("./png/frame_" + "%05d" % (i + 1) + ".png").astype(np.float64)
    w = sliding_window(arr, footprint)
    binned = binning(w, footprint)
    pl.imshow(binned, interpolation="nearest")
    pl.gray()
    pl.savefig("binned_" + str(i) + ".png")
What I am looking for could be called interpolation; I just used the term the person who advised me to do this used. Probably that is why I kept finding histogram-related material!
Apart from median_filter, I tried generic_filter from scipy.ndimage, but those did not give me the results I wanted (they had no "valid" mode as in convolution, i.e. they rely on going out of bounds of the array when moving the kernel around). I asked on Code Review and it seems Stack Overflow would be a more suitable place for this question.
Without diving into your code, I think what you want is just to resize the image with interpolation. You should use an image library for this operation, as it will have heavily optimized code.
Since you are using SciPy, you might want to start with PIL, the Python Imaging Library. Use the resize method, where you can pass the desired interpolation parameter, probably Image.BILINEAR in your case.
It should look something like this:
from PIL import Image

im = Image.fromarray(your_numpy)
im = im.resize((w // 2, h // 2), Image.BILINEAR)  # resize returns a new image
Edit: I just noticed, you can do it even with scipy itself; look at the documentation for scipy.misc.imresize (note that imresize has since been removed from recent SciPy versions):
import numpy as np
import scipy.misc

a = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]], dtype=np.uint8)
res = scipy.misc.imresize(a, (3, 2), interp="bilinear")
For the average, look at scipy.ndimage.filters.uniform_filter. In scipy.ndimage.filters, you have lots of kernels for convolution that are much faster than a direct convolution with scipy.convolve.
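For instance, a 2x2 moving average (a sketch; newer SciPy exposes the function as scipy.ndimage.uniform_filter) would be:
import numpy as np
from scipy.ndimage import uniform_filter

arr = np.random.rand(256, 512)
smoothed = uniform_filter(arr, size=2)  # mean over a 2x2 neighbourhood per pixel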
For completeness, there's an interesting solution on scipy blog. The idea is to reshape the array to higher dimensions, then apply mean over one dimension and again, reshape into a smaller array. Supposedly it's fast as well.
https://scipython.com/blog/binning-a-2d-array-in-numpy/
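A minimal sketch of that reshape-and-mean trick (assuming the image dimensions divide evenly by the bin factor):
import numpy as np

def rebin(arr, factor):
    """Average non-overlapping factor x factor blocks of a 2-D array."""
    h, w = arr.shape
    return arr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

image = np.arange(16, dtype=float).reshape(4, 4)
print(rebin(image, 2))  # [[ 2.5  4.5]
                        #  [10.5 12.5]]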
For those looking for true binning, rather than interpolation or decimation: this is also provided in the Pillow module by the function Image.reduce. The output of Image.reduce is equal to the rebin method from scipython.com linked by @Tilen K.
from PIL import Image

image = np.arange(16).astype(float).reshape(4, 4)
array([[ 0.,  1.,  2.,  3.],
       [ 4.,  5.,  6.,  7.],
       [ 8.,  9., 10., 11.],
       [12., 13., 14., 15.]])
np.asarray(Image.fromarray(image).reduce(2))
array([[ 2.5,  4.5],
       [10.5, 12.5]], dtype=float32)

Create a dynamic 2D numpy array on the fly

I am having a hard time creating a numpy 2D array on the fly.
So basically I have a for loop something like this.
for ele in huge_list_of_lists:
    instance = np.array(ele)
This creates a 1D numpy array of the list, and now I want to append it to a numpy array; so basically, how do I convert a list of lists to an array of arrays?
I have checked the manual, but np.append() doesn't work here, since it needs two arguments to append together.
Any clues?
Create the 2D array up front, and fill the rows while looping:
my_array = numpy.empty((len(huge_list_of_lists), row_length))
for i, x in enumerate(huge_list_of_lists):
    my_array[i] = create_row(x)
where create_row() returns a list or 1D NumPy array of length row_length.
Depending on what create_row() does, there might be even better approaches that avoid the Python loop altogether.
Just pass the list of lists to numpy.array. Keep in mind that numpy arrays are ndarrays, so the concept of a list of lists doesn't translate to an array of arrays; it translates to a 2D array.
>>> import numpy as np
>>> a = [[1., 2., 3.], [4., 5., 6.]]
>>> b = np.array(a)
>>> b
array([[ 1., 2., 3.],
       [ 4., 5., 6.]])
>>> b.shape
(2, 3)
Also, ndarrays have n-d indexing, so [1][1] becomes [1, 1] in numpy:
>>> a[1][1]
5.0
>>> b[1, 1]
5.0
Did I misunderstand your question?
You definitely don't want to use numpy.append for something like this. Keep in mind that numpy.append has O(n) run time, so if you call it n times, once for each row of your array, you end up with an O(n^2) algorithm. If you need to create the array before you know what all the content is going to be, but you know the final size, it's best to create the array using numpy.zeros(shape, dtype) and fill it in later, similar to Sven's answer.
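If the final size isn't known up front, a common O(n) pattern (a sketch) is to collect the rows in a Python list and convert once at the end:
import numpy as np

huge_list_of_lists = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # stand-in data
rows = [np.asarray(ele) for ele in huge_list_of_lists]
arr = np.array(rows)  # one conversion at the end; shape (3, 2)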
import numpy as np

ss = np.ndarray(shape=(3, 3), dtype=int)  # contents are uninitialized memory
array([[              0, 139911262763080, 139911320845424],
       [       10771584,        10771584, 139911271110728],
       [139911320994680, 139911206874808,              80]])  # random
The numpy.ndarray constructor achieves this, though np.empty(shape, dtype) is the more usual way to create an uninitialized array.
