Numpy rewriting operation using einsum - python

I am trying to implement PCA in python.
Currently I am using this code to represent the data back into the initial dimensions from the low dimensional data and the principal components:
sameDimRepresentation = lowDimRepresentation[:, np.newaxis] * principalComponents.T
sameDimRepresentation = sameDimRepresentation.sum(axis=2)
What the code does:
for each row of lowDimRepresentation it computes the product of each element of the row (treated as a scalar) with the corresponding row vector of principalComponents (column vector of principalComponents.T), and then sums all these product vectors up (the second line)
lowDimRepresentation: an array of x by 100
principalComponents: an array of 100 by 784
resulting array: x by 784
This method works fine when using x = 10000 but after that I get a memory error.
I know einsum is more memory efficient, so I was trying to rewrite the same code with it, but I did not manage to.
Can someone help me with that?
Worst case, I just split the 60000 cases into batches of 10000 and run those, but I was hoping for something more elegant.
Thanks a lot!

So there's good news and there's bad news. The good news is that the einsum version is very simple:
np.einsum('ij,jl->il', lowDimRepresentation, principalComponents)
For example:
>>> import numpy as np
>>> x = 1000
>>> lowDimRepresentation = np.random.random((x, 100))
>>> principalComponents = np.random.random((100, 784))
>>> sameDimRepresentation = (lowDimRepresentation[:, np.newaxis] * principalComponents.T).sum(axis=2)
>>> esum_same = np.einsum('ij,jl->il', lowDimRepresentation, principalComponents)
>>> np.allclose(sameDimRepresentation, esum_same)
True
This should also be a little faster:
>>> %timeit sameDimRepresentation = (lowDimRepresentation[:, np.newaxis] * principalComponents.T).sum(axis=2)
1 loops, best of 3: 1.12 s per loop
>>> %timeit esum_same = np.einsum('ij,jl->il', lowDimRepresentation, principalComponents)
10 loops, best of 3: 88.7 ms per loop
The bad news is that when I try applying it to the x=60000 case:
>>> esum_same = np.einsum('ij,jl->il', lowDimRepresentation, principalComponents)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: iterator is too large
So I'm not sure whether it'll actually help with your real problem.
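If memory stays a problem, here is a rough sketch (untested on your data) of the batching fallback you mention: since the 'ij,jl->il' contraction is an ordinary matrix product, each batch can go through np.dot (or einsum) without ever building the (x, 784, 100) intermediate:
import numpy as np

def project_back(lowDimRepresentation, principalComponents, batch_size=10000):
    # Name and batch size are just for illustration.
    # Allocate the (x, 784) output once and fill it batch by batch.
    out = np.empty((lowDimRepresentation.shape[0], principalComponents.shape[1]),
                   dtype=principalComponents.dtype)
    for start in range(0, lowDimRepresentation.shape[0], batch_size):
        stop = start + batch_size
        # 'ij,jl->il' is a plain matrix product, so dot() does the same job per batch.
        out[start:stop] = lowDimRepresentation[start:stop].dot(principalComponents)
    return out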

Related

Dask Distributed: Reducing Multiple Dimensions into a Distance Matrix

I want to calculate a large distance matrix, based on a higher dimensional vector. For instance, I have 1000 instances, each represented by 20 vectors of length 10. The distance between two instances is given by the mean distance between the 20 vectors associated with each instance. So I want to go from a 1000 by 20 by 10 matrix to a 1000 by 1000 (lower-triangular) matrix. Because these calculations can get slow, I want to use Dask distributed to block the algorithm and spread it over several CPUs. Below is how far I've gotten:
Preamble
import itertools
import random
import numpy as np
import dask.array
from dask.distributed import Client
The distance function is defined by
def distance(u, v):
    result = np.empty([int((len(u)*(len(u)+1))/2)], dtype=float)
    for i, j in itertools.product(range(len(u)), range(len(v))):
        if j <= i:
            differences = []
            k = int(((i*(i+1))/2 + j - 1) + 1)
            for x, y in itertools.product(u[i], v[j]):
                difference = np.abs(np.array(x) - np.array(y)).sum(axis=1)
                differences.append(difference)
            result[k] = np.mean(differences)
    return result
and returns an array of length n*(n+1)/2 to describe the lower triangular matrix for this block of the distance matrix.
def distance_matrix(X):
    X = np.asarray(X, dtype=object)
    X = dask.array.from_array(X, (100, 20, 10)).astype(float)
    print("chunksize: ", X.chunksize)
    resulting_length = [int((X.chunksize[0]*(X.chunksize[0])+1)/2)]
    result = dask.array.map_blocks(distance, X, X, chunks=(resulting_length), drop_axis=[1,2], dtype=float)
    return result.compute()
I split up the input array in chunks and use dask.array.map_blocks to apply the distance calculation to all the blocks.
if __name__ == '__main__':
    workers = 6
    X = np.array([[[random.random() for _ in range(10)] for _ in range(20)] for _ in range(1000)])
    client = Client(n_workers=workers)
    results = distance_matrix(X)
    client.close()
    print(results)
Unfortunately, this approach returns the wrong length of array at the end of the process. Would somebody help me out here? I don't have much experience in distributed computing.
I'm a big fan of dask, but this problem is way too small to need it. The runtime issue you're seeing is because you are looping through each element in python rather than using vectorized operations in numpy.
As with many packages in Python, numpy relies on highly efficient compiled code written in other, faster languages such as C to carry out array operations. When you do an array operation like A + B, numpy calls these fast routines once, and the work is carried out within a highly optimized C routine. There is overhead involved in making calls to other languages, but this is overwhelmed by the performance gain from the single call to a very fast routine. If instead you loop over every element, adding cell-wise, you have a (slow) Python process that calls into the C code once per element, so you pay that overhead for every element of the array. Because of this, you would actually be better off not using numpy at all if you're going to operate one element at a time.
To implement this in a vectorized manner, you can exploit numpy's broadcasting rules: insert new axes so that the instance dimension of each array is broadcast against the other. I don't totally understand what's going on in your distance function, but you could extend this simple version to do whatever you want (a sketch of one possible extension follows the timings below):
In [1]: import numpy as np
In [2]: A = np.random.random((1000, 20))
...: B = np.random.random((1000, 20))
In [3]: distance = np.abs(A[:, np.newaxis, :] - B[np.newaxis, :, :]).sum(axis=-1)
In [4]: distance
Out[4]:
array([[7.22985776, 7.76185666, 5.61824886, ..., 7.62092039, 6.35189562,
7.06365986],
[5.73359499, 5.8422105 , 7.2644021 , ..., 5.72230353, 6.79390303,
5.03074007],
[7.27871151, 8.6856818 , 5.97489449, ..., 8.86620029, 7.49875638,
6.57389575],
...,
[7.67783107, 7.24419076, 4.17941596, ..., 8.68674754, 6.65078093,
5.67279811],
[7.1550136 , 6.10590227, 5.75417987, ..., 7.05953998, 5.8306628 ,
6.55112672],
[5.81748615, 6.79246838, 6.95053088, ..., 7.63994705, 6.77720511,
7.5663236 ]])
In [5]: distance.shape
Out[5]: (1000, 1000)
The performance difference can be seen clearly against a looped implementation:
In [6]: %%timeit
...: np.abs(A[:, np.newaxis, :] - B[np.newaxis, :, :]).sum(axis=-1)
...:
...:
45 ms ± 326 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [7]: %%timeit
...: distances = np.empty((1000, 1000))
...: for i in range(1000):
...:     for j in range(1000):
...:         distances[i, j] = np.abs(A[i, :] - B[j, :]).sum()
...:
2.42 s ± 7.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
The looped version takes more than 50x as long!
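For what it's worth, here is a hedged sketch of how the broadcast above might extend to your (1000, 20, 10) data, assuming the distance you want is the mean, over all 20 x 20 vector pairs, of the summed absolute differences (which is what your nested itertools.product appears to compute). Broadcasting every pair at once would create a (1000, 1000, 20, 20, 10) intermediate (tens of GB), so this loops over one axis and vectorizes the rest:
import numpy as np

X = np.random.random((1000, 20, 10))
D = np.empty((X.shape[0], X.shape[0]))
for i in range(X.shape[0]):
    # |x - y| summed over the length-10 axis, for every (a, b) pair of the 20 vectors
    diff = np.abs(X[i][None, :, None, :] - X[:, None, :, :]).sum(axis=-1)  # shape (1000, 20, 20)
    D[i] = diff.mean(axis=(1, 2))  # mean over the 20 x 20 vector pairs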

complex() with arrays [duplicate]

I want to combine 2 parts of the same array to make a complex array:
Data[:,:,:,0] , Data[:,:,:,1]
These don't work:
x = np.complex(Data[:,:,:,0], Data[:,:,:,1])
x = complex(Data[:,:,:,0], Data[:,:,:,1])
Am I missing something? Does numpy not like performing array functions on complex numbers? Here's the error:
TypeError: only length-1 arrays can be converted to Python scalars
This seems to do what you want:
numpy.apply_along_axis(lambda args: [complex(*args)], 3, Data)
Here is another solution:
# The ellipsis is equivalent here to ":,:,:"...
numpy.vectorize(complex)(Data[...,0], Data[...,1])
And yet another simpler solution:
Data[...,0] + 1j * Data[...,1]
PS: If you want to save memory (no intermediate array):
result = 1j*Data[...,1]; result += Data[...,0]
devS' solution below is also fast.
There's of course the rather obvious:
Data[...,0] + 1j * Data[...,1]
If your real and imaginary parts are the slices along the last dimension and your array is contiguous along the last dimension, you can just do
A.view(dtype=np.complex128)
If you are using single precision floats, this would be
A.view(dtype=np.complex64)
Here is a fuller example
import numpy as np
from numpy.random import rand
# Randomly choose real and imaginary parts.
# Treat last axis as the real and imaginary parts.
A = rand(100, 2)
# Cast the array as a complex array
# Note that this will now be a 100x1 array
A_comp = A.view(dtype=np.complex128)
# To get the original array A back from the complex version
A = A.view(dtype=np.float64)
If you want to get rid of the extra dimension that stays around from the casting, you could do something like
A_comp = A.view(dtype=np.complex128)[...,0]
This works because, in memory, a complex number is really just two floating point numbers. The first represents the real part, and the second represents the imaginary part.
The view method of the array changes the dtype of the array to reflect that you want to treat two adjacent floating point values as a single complex number and updates the dimension accordingly.
This method does not copy any values in the array or perform any new computations, all it does is create a new array object that views the same block of memory differently.
That makes it so that this operation can be performed much faster than anything that involves copying values.
It also means that any changes made in the complex-valued array will be reflected in the array with the real and imaginary parts.
It may also be a little trickier to recover the original array if you remove the extra axis that is there immediately after the type cast.
Things like A_comp[...,np.newaxis].view(np.float64) do not currently work because, as of this writing, NumPy doesn't detect that the array is still C-contiguous when the new axis is added.
See this issue.
A_comp.view(np.float64).reshape(A.shape) seems to work in most cases though.
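As a quick sanity check of the shared-memory behaviour described above (a minimal sketch), writing to the complex view changes the underlying float array:
import numpy as np

A = np.zeros((4, 2))
A_comp = A.view(np.complex128)[..., 0]  # shape (4,), same memory as A
A_comp[0] = 3 + 4j
print(A[0])  # [3. 4.] -- the real/imag pair was written straight into A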
This is what you are looking for:
from numpy import array
a=array([1,2,3])
b=array([4,5,6])
a + 1j*b
->array([ 1.+4.j, 2.+5.j, 3.+6.j])
I am a Python novice, so this may not be the most efficient method, but if I understand the intent of the question correctly, the steps listed below worked for me.
>>> import numpy as np
>>> Data = np.random.random((100, 100, 1000, 2))
>>> result = np.empty(Data.shape[:-1], dtype=complex)
>>> result.real = Data[...,0]; result.imag = Data[...,1]
>>> print Data[0,0,0,0], Data[0,0,0,1], result[0,0,0]
0.0782889873474 0.156087854837 (0.0782889873474+0.156087854837j)
import numpy as np
n = 51 #number of data points
# Suppose the real and imaginary parts are created independently
real_part = np.random.normal(size=n)
imag_part = np.random.normal(size=n)
# Create a complex array - the imaginary part will be equal to zero
z = np.array(real_part, dtype=complex)
# Now define the imaginary part:
z.imag = imag_part
print(z)
I use the following method:
import numpy as np
real = np.ones((2, 3))
imag = 2*np.ones((2, 3))
complex = np.vectorize(complex)(real, imag)
# OR
complex = real + 1j*imag
If you really want to eke out performance (with big arrays), numexpr can be used, which takes advantage of multiple cores.
Setup:
>>> import numpy as np
>>> Data = np.random.randn(64, 64, 64, 2)
>>> x, y = Data[...,0], Data[...,1]
With numexpr:
>>> import numexpr as ne
>>> %timeit result = ne.evaluate("complex(x, y)")
573 µs ± 21.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Compared to fast numpy method:
>>> %timeit result = np.empty(x.shape, dtype=complex); result.real = x; result.imag = y
1.39 ms ± 5.74 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
That worked for me:
input:
[complex(a,b) for a,b in zip([1,2,3],[4,5,6])]
output:
[(1+4j), (2+5j), (3+6j)]

When to use `numpy.append()`?

I have been reading in multiple places (e.g. here) that numpy.append() should never be used.
For example, if one wants to stack multiple arrays together, it is much better to do so via an intermediate Python list:
import numpy as np
def stacker(arrs):
    result = arrs[0][None, ...]
    for arr in arrs[1:]:
        result = np.append(result, arr[None, ...], 0)
    return result
n = 1000
shape = (100, 100)
x = [np.random.randint(0, n, shape) for _ in range(n)]
%timeit np.array(x)
# 100 loops, best of 3: 17.6 ms per loop
%timeit np.concatenate([arr[None, ...] for arr in x])
# 100 loops, best of 3: 17.7 ms per loop
%timeit np.stack(x)
# 100 loops, best of 3: 18.3 ms per loop
%timeit stacker(x)
# 1 loop, best of 3: 12.5 s per loop
I understand that np.append() creates a copy of both its NumPy array inputs and this is much more inefficient than list.append() or list.extend() in this use-case. However, I find it hard to believe that NumPy developers just added a useless function.
So, what is the use-case for numpy.append()?
Look at its code:
arr = asanyarray(arr)
if axis is None:
    if arr.ndim != 1:
        arr = arr.ravel()
    values = ravel(values)
    axis = arr.ndim-1
return concatenate((arr, values), axis=axis)
It's just a simple interface to concatenate. With axis it's a direct call to concatenate. Without it, it ravels the inputs, which often causes problems. And it converts scalars to arrays.
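For illustration, a minimal sketch of the two cases:
import numpy as np

a = np.arange(6).reshape(2, 3)
print(np.append(a, [10, 11]))                 # no axis: both inputs are ravelled
# [ 0  1  2  3  4  5 10 11]
print(np.append(a, [[10, 11, 12]], axis=0))   # with axis: a direct concatenate
# [[ 0  1  2]
#  [ 3  4  5]
#  [10 11 12]]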
If you have a 1d array, then it is an easy way to add one value:
In [8]: np.append(np.arange(3), 10)
Out[8]: array([ 0, 1, 2, 10])
but hstack is just as nice:
In [10]: np.hstack([np.arange(3), 10])
Out[10]: array([ 0, 1, 2, 10])
People write functions that seem to be a good idea at the time, usually with a specific use in mind. But the actual use (and misuses) may be different than anticipated.
np.stack is a more recent, and useful addition.
For a while there was a note in the docs urging us to use concatenate and stack and to avoid all the other stack variants, but that's been toned down. Now they just have:
This function makes most sense for arrays with up to 3 dimensions. For
instance, for pixel-data with a height (first axis), width (second axis),
and r/g/b channels (third axis). The functions concatenate, stack and
block provide more general stacking and concatenation operations.
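As a closing aside, a minimal sketch of the idiom the question alludes to: collect pieces in a plain Python list and convert once at the end, rather than growing an array with np.append:
import numpy as np

rows = []
for _ in range(1000):
    rows.append(np.random.randint(0, 10, (100,)))  # list append is cheap (amortized O(1))
result = np.stack(rows)  # one allocation and copy at the end
print(result.shape)      # (1000, 100)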

NumPy PolyFit and PolyVal in Multiple Dimensions?

Assume an n-dimensional array of observations that are reshaped to be a 2d-array with each row being one observation set. Using this reshape approach, np.polyfit can compute 2nd order fit coefficients for the entire ndarray (vectorized):
fit = np.polynomial.polynomial.polyfit(X, Y, 2)
where Y is shape (304000, 21) and X is a vector. This results in a (304000,3) array of coefficients, fit.
Using an iterator it is possible to call np.polyval(fit, X) for each row. This is inefficient when a vectorized approach may exist. Could the fit result be applied to the entire observation array without iterating? If so, how?
This is along the lines of this SO question.
np.polynomial.polynomial.polyval takes multidimensional coefficient arrays:
>>> x = np.random.rand(100)
>>> y = np.random.rand(100, 25)
>>> fit = np.polynomial.polynomial.polyfit(x, y, 2)
>>> fit.shape # 25 columns of 3 polynomial coefficients
(3L, 25L)
>>> xx = np.random.rand(50)
>>> interpol = np.polynomial.polynomial.polyval(xx, fit)
>>> interpol.shape # 25 rows, each with 50 evaluations of the polynomial
(25L, 50L)
And of course:
>>> np.all([np.allclose(np.polynomial.polynomial.polyval(xx, fit[:, j]),
... interpol[j]) for j in range(25)])
True
np.polynomial.polynomial.polyval is a perfectly fine (and convenient) approach to efficient evaluation of polynomial fittings.
However, if 'speediest' is what you are looking for, simply constructing the polynomial inputs and using numpy's basic matrix multiplication results in noticeably faster (roughly 4x) computation.
Setup
Using a setup similar to the one above, we'll create 100 different 2nd-order fits.
>>> num_samples = 100000
>>> num_lines = 100
>>> x = np.random.randint(0,100,num_samples)
>>> y = np.random.randint(0,100,(num_samples, num_lines))
>>> fit = np.polynomial.polynomial.polyfit(x, y, deg=2)
>>> xx = np.random.randint(0,100,num_samples*10)
Numpy's polyval Function
res1 = np.polynomial.polynomial.polyval(xx, fit)
Basic Matrix Multiplication
inputs = np.array([np.power(xx,d) for d in range(len(fit))])
res2 = fit.T.dot(inputs)
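As an aside, the same power matrix can be built with np.vander, assuming the ascending coefficient order used above:
inputs_alt = np.vander(xx, len(fit), increasing=True).T  # rows are xx**0, xx**1, xx**2
res2_alt = fit.T.dot(inputs_alt)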
Timing the functions
Using the same parameters above...
%timeit _ = np.polynomial.polynomial.polyval(xx, fit)
1 loop, best of 3: 247 ms per loop
%timeit inputs = np.array([np.power(xx, d) for d in range(len(fit))]);_ = fit.T.dot(inputs)
10 loops, best of 3: 72.8 ms per loop
To beat a dead horse...
A mean efficiency bump of ~3.61x. Speed fluctuations probably come from random background processes on the machine.

Is there a way to efficiently invert an array of matrices with numpy?

Normally I would invert an array of 3x3 matrices in a for loop like in the example below. Unfortunately for loops are slow. Is there a faster, more efficient way to do this?
import numpy as np
A = np.random.rand(3,3,100)
Ainv = np.zeros_like(A)
for i in range(100):
    Ainv[:,:,i] = np.linalg.inv(A[:,:,i])
It turns out that you're getting burned two levels down in the numpy.linalg code. If you look at numpy.linalg.inv, you can see it's just a call to numpy.linalg.solve(A, identity(A.shape[0])). This has the effect of recreating the identity matrix in each iteration of your for loop. Since all your arrays are the same size, that's a waste of time. Skipping this step by pre-allocating the identity matrix shaves ~20% off the time (fast_inverse). My testing suggests that pre-allocating the array or allocating it from a list of results doesn't make much difference.
Look one level deeper and you find the call to the lapack routine, but it's wrapped in several sanity checks. If you strip all these out and just call lapack in your for loop (since you already know the dimensions of your matrix and maybe know that it's real, not complex), things run MUCH faster (Note that I've made my array larger):
import numpy as np
A = np.random.rand(1000,3,3)

def slow_inverse(A):
    Ainv = np.zeros_like(A)
    for i in range(A.shape[0]):
        Ainv[i] = np.linalg.inv(A[i])
    return Ainv

def fast_inverse(A):
    identity = np.identity(A.shape[2], dtype=A.dtype)
    Ainv = np.zeros_like(A)
    for i in range(A.shape[0]):
        Ainv[i] = np.linalg.solve(A[i], identity)
    return Ainv

def fast_inverse2(A):
    identity = np.identity(A.shape[2], dtype=A.dtype)
    return np.array([np.linalg.solve(x, identity) for x in A])

from numpy.linalg import lapack_lite, LinAlgError
lapack_routine = lapack_lite.dgesv
# Looking one step deeper, we see that solve performs many sanity checks.
# Stripping these, we have:
def faster_inverse(A):
    n_eq = A.shape[1]
    n_rhs = A.shape[2]
    identity = np.eye(n_eq)
    def lapack_inverse(a):
        b = np.copy(identity)
        pivots = np.zeros(n_eq, np.intc)
        results = lapack_lite.dgesv(n_eq, n_rhs, a, n_eq, pivots, b, n_eq, 0)
        if results['info'] > 0:
            raise LinAlgError('Singular matrix')
        return b
    return np.array([lapack_inverse(a) for a in A])
%timeit -n 20 aI11 = slow_inverse(A)
%timeit -n 20 aI12 = fast_inverse(A)
%timeit -n 20 aI13 = fast_inverse2(A)
%timeit -n 20 aI14 = faster_inverse(A)
The results are impressive:
20 loops, best of 3: 45.1 ms per loop
20 loops, best of 3: 38.1 ms per loop
20 loops, best of 3: 38.9 ms per loop
20 loops, best of 3: 13.8 ms per loop
EDIT: I didn't look closely enough at what gets returned in solve. It turns out that the 'b' matrix is overwritten and contains the result in the end. This code now gives consistent results.
A few things have changed since this question was asked and answered, and now numpy.linalg.inv supports multidimensional arrays, handling them as stacks of matrices with the matrix indices being last (in other words, arrays of shape (..., N, N)). This seems to have been introduced in numpy 1.8.0. Unsurprisingly this is by far the best option in terms of performance:
import numpy as np
A = np.random.rand(3,3,1000)
def slow_inverse(A):
    """Looping solution for comparison"""
    Ainv = np.zeros_like(A)
    for i in range(A.shape[-1]):
        Ainv[...,i] = np.linalg.inv(A[...,i])
    return Ainv

def direct_inverse(A):
    """Compute the inverse of matrices in an array of shape (N,N,M)"""
    return np.linalg.inv(A.transpose(2,0,1)).transpose(1,2,0)
Note the two transposes in the latter function: the input of shape (N,N,M) has to be transposed to shape (M,N,N) for np.linalg.inv to work, and then the result has to be permuted back to shape (N,N,M).
A check and timing results using IPython, on python 3.6 and numpy 1.14.0:
In [5]: np.allclose(slow_inverse(A),direct_inverse(A))
Out[5]: True
In [6]: %timeit slow_inverse(A)
19 ms ± 138 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [7]: %timeit direct_inverse(A)
1.3 ms ± 6.39 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
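If you can lay the data out as (M, N, N) from the start (matrix indices last), the transposes are unnecessary and np.linalg.inv broadcasts directly; a minimal sketch:
import numpy as np

A_stack = np.random.rand(1000, 3, 3)                  # matrix indices last
Ainv_stack = np.linalg.inv(A_stack)                   # inverts all 1000 matrices at once
print(np.allclose(A_stack @ Ainv_stack, np.eye(3)))   # True (up to rounding)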
NumPy BLAS calls are not always the fastest possibility
On problems where you have to calculate lots of inverses, eigenvalues, or dot products of small 3x3 matrices or similar cases, numpy-MKL (which I use) can often be outperformed by quite a margin.
These external BLAS routines are usually made for problems with larger matrices; for smaller ones you can write out a standard algorithm or take a look at e.g. Intel IPP.
Please also keep in mind that NumPy uses C-ordered arrays by default (the last dimension changes fastest).
For this example I took the code from Matrix inversion (3,3) python - hard coded vs numpy.linalg.inv and modified it a bit.
import numpy as np
import numba as nb
import time
@nb.njit(fastmath=True)
def inversion(m):
    minv=np.empty(m.shape,dtype=m.dtype)
    for i in range(m.shape[0]):
        determinant_inv = 1./(m[i,0]*m[i,4]*m[i,8] + m[i,3]*m[i,7]*m[i,2] + m[i,6]*m[i,1]*m[i,5] - m[i,0]*m[i,5]*m[i,7] - m[i,2]*m[i,4]*m[i,6] - m[i,1]*m[i,3]*m[i,8])
        minv[i,0]=(m[i,4]*m[i,8]-m[i,5]*m[i,7])*determinant_inv
        minv[i,1]=(m[i,2]*m[i,7]-m[i,1]*m[i,8])*determinant_inv
        minv[i,2]=(m[i,1]*m[i,5]-m[i,2]*m[i,4])*determinant_inv
        minv[i,3]=(m[i,5]*m[i,6]-m[i,3]*m[i,8])*determinant_inv
        minv[i,4]=(m[i,0]*m[i,8]-m[i,2]*m[i,6])*determinant_inv
        minv[i,5]=(m[i,2]*m[i,3]-m[i,0]*m[i,5])*determinant_inv
        minv[i,6]=(m[i,3]*m[i,7]-m[i,4]*m[i,6])*determinant_inv
        minv[i,7]=(m[i,1]*m[i,6]-m[i,0]*m[i,7])*determinant_inv
        minv[i,8]=(m[i,0]*m[i,4]-m[i,1]*m[i,3])*determinant_inv
    return minv

# I was too lazy to modify the code from the link above more thoroughly
def inversion_3x3(m):
    m_TMP = m.reshape(m.shape[0], 9)
    minv = inversion(m_TMP)
    return minv.reshape(minv.shape[0], 3, 3)

# Testing
A = np.random.rand(1000000,3,3)

# Warmup, to avoid measuring compilation overhead on the first call.
# You may also use @nb.njit(fastmath=True,cache=True), but this still has about 0.2 s
# overhead on the first call.
Ainv = inversion_3x3(A)

t1 = time.time()
Ainv = inversion_3x3(A)
print(time.time()-t1)

t1 = time.time()
Ainv2 = np.linalg.inv(A)
print(time.time()-t1)

print(np.allclose(Ainv2, Ainv))
Performance
np.linalg.inv: 0.36 s
inversion_3x3: 0.031 s
For loops are indeed not necessarily much slower than the alternatives, and in this case avoiding them will not help you much either. But here is a suggestion:
import numpy as np
A = np.random.rand(100,3,3)  # this makes it possible to index the matrices as A[i]
Ainv = np.array(list(map(np.linalg.inv, A)))
Timing this solution vs. your solution yields a small but noticeable difference:
# The for loop:
100 loops, best of 3: 6.38 ms per loop
# The map:
100 loops, best of 3: 5.81 ms per loop
I tried to use the numpy routine 'vectorize' in the hope of creating an even cleaner solution, but I'll have to take a second look at that. The change of ordering in the array A is probably the most significant change, since it makes each 3x3 matrix contiguous in memory (numpy arrays are C-ordered, i.e. row-major, by default), and therefore a linear readout of the data is ever so slightly faster this way.
