Difference in performance between numpy and matlab - python

I am computing the backpropagation algorithm for a sparse autoencoder. I have implemented it in Python using NumPy and in MATLAB. The code is almost the same, but the performance is very different. MATLAB takes 0.252454 seconds to complete the task, while NumPy takes 0.973672151566 seconds, almost four times as long. I will call this code several times later in a minimization problem, so this difference leads to several minutes of delay between the implementations. Is this normal behaviour? How could I improve the performance in NumPy?
Numpy implementation:
sparse.rho is a tuning parameter; sparse.nodes is the number of nodes in the hidden layer (25); sparse.input (64) is the number of nodes in the input layer; theta1 and theta2 are the weight matrices for the first and second layers, with dimensions 25x64 and 64x25 respectively; m equals 10000; rhoest has shape (25,); x has shape 10000x64; a3 is 10000x64 and a2 is 10000x25.
UPDATE: I have introduced changes in the code following some of the ideas of the responses. The performance is now numpy: 0.65 vs matlab: 0.25.
partial_j1 = np.zeros(sparse.theta1.shape)
partial_j2 = np.zeros(sparse.theta2.shape)
partial_b1 = np.zeros(sparse.b1.shape)
partial_b2 = np.zeros(sparse.b2.shape)
# sum2 is constant across the loop (see the answers below):
sum2 = sparse.beta*(-float(sparse.rho)/rhoest + float(1.0 - sparse.rho) / (1.0 - rhoest))
sum2 = sum2[:, np.newaxis]
t = time.time()
delta3t = (-(x-a3)*a3*(1-a3)).T
for i in range(m):
    delta3 = delta3t[:,i:(i+1)]
    sum1 = np.dot(sparse.theta2.T,delta3)
    delta2 = ( sum1 + sum2 ) * a2[i:(i+1),:].T * (1 - a2[i:(i+1),:].T)
    partial_j1 += np.dot(delta2, a1[i:(i+1),:])
    partial_j2 += np.dot(delta3, a2[i:(i+1),:])
    partial_b1 += delta2
    partial_b2 += delta3
print "Backprop time:", time.time() - t
Matlab implementation:
tic
for i = 1:m
    delta3 = -(data(i,:)-a3(i,:)).*a3(i,:).*(1 - a3(i,:));
    delta3 = delta3.';
    sum1 = W2.'*delta3;
    sum2 = beta*(-sparsityParam./rhoest + (1 - sparsityParam) ./ (1.0 - rhoest) );
    delta2 = ( sum1 + sum2 ) .* a2(i,:).' .* (1 - a2(i,:).');
    W1grad = W1grad + delta2*a1(i,:);
    W2grad = W2grad + delta3*a2(i,:);
    b1grad = b1grad + delta2;
    b2grad = b2grad + delta3;
end
toc

It would be wrong to say "Matlab is always faster than NumPy" or vice
versa. Often their performance is comparable. When using NumPy, to get good
performance you have to keep in mind that NumPy's speed comes from calling
underlying functions written in C/C++/Fortran. It performs well when you apply
those functions to whole arrays. In general, you get poorer performance when you call those NumPy functions on small arrays or scalars in a Python loop.
What's wrong with a Python loop you ask? Every iteration through the Python loop is
a call to a next method. Every use of [] indexing is a call to a
__getitem__ method. Every += is a call to __iadd__. Every dotted attribute
lookup (such as in np.dot) involves function calls. Those function calls
add up to a significant hindrance to speed. These hooks give Python its
expressive power -- indexing means something different for strings than for
dicts, for example. Same syntax, different meanings. The magic is accomplished by giving the objects different __getitem__ methods.
But that expressive power comes at a cost in speed. So when you don't need all
that dynamic expressivity, to get better performance, try to limit yourself to
NumPy function calls on whole arrays.
So, remove the for-loop; use "vectorized" equations when possible. For example, instead of
for i in range(m):
delta3 = -(x[i,:]-a3[i,:])*a3[i,:]* (1 - a3[i,:])
you can compute delta3 for each i all at once:
delta3 = -(x-a3)*a3*(1-a3)
Whereas in the for-loop delta3 is a vector, with the vectorized equation delta3 is a matrix.
Some of the computations in the for-loop do not depend on i and therefore should be lifted outside the loop. For example, sum2 looks like a constant:
sum2 = sparse.beta*(-float(sparse.rho)/rhoest + float(1.0 - sparse.rho) / (1.0 - rhoest) )
Here is a runnable example with an alternative implementation (alt) of your code (orig).
My timeit benchmark shows a 6.8x improvement in speed:
In [52]: %timeit orig()
1 loops, best of 3: 495 ms per loop
In [53]: %timeit alt()
10 loops, best of 3: 72.6 ms per loop
import numpy as np

class Bunch(object):
    """ http://code.activestate.com/recipes/52308 """
    def __init__(self, **kwds):
        self.__dict__.update(kwds)

m, n, p = 10 ** 4, 64, 25
sparse = Bunch(
    theta1=np.random.random((p, n)),
    theta2=np.random.random((n, p)),
    b1=np.random.random((p, 1)),
    b2=np.random.random((n, 1)),
)
x = np.random.random((m, n))
a3 = np.random.random((m, n))
a2 = np.random.random((m, p))
a1 = np.random.random((m, n))
sum2 = np.random.random((p, ))
sum2 = sum2[:, np.newaxis]

def orig():
    partial_j1 = np.zeros(sparse.theta1.shape)
    partial_j2 = np.zeros(sparse.theta2.shape)
    partial_b1 = np.zeros(sparse.b1.shape)
    partial_b2 = np.zeros(sparse.b2.shape)
    delta3t = (-(x - a3) * a3 * (1 - a3)).T
    for i in range(m):
        delta3 = delta3t[:, i:(i + 1)]
        sum1 = np.dot(sparse.theta2.T, delta3)
        delta2 = (sum1 + sum2) * a2[i:(i + 1), :].T * (1 - a2[i:(i + 1), :].T)
        partial_j1 += np.dot(delta2, a1[i:(i + 1), :])
        partial_j2 += np.dot(delta3, a2[i:(i + 1), :])
        partial_b1 += delta2
        partial_b2 += delta3
        # delta3: (64, 1)
        # sum1: (25, 1)
        # delta2: (25, 1)
        # a1[i:(i+1),:]: (1, 64)
        # partial_j1: (25, 64)
        # partial_j2: (64, 25)
        # partial_b1: (25, 1)
        # partial_b2: (64, 1)
        # a2[i:(i+1),:]: (1, 25)
    return partial_j1, partial_j2, partial_b1, partial_b2

def alt():
    delta3 = (-(x - a3) * a3 * (1 - a3)).T
    sum1 = np.dot(sparse.theta2.T, delta3)
    delta2 = (sum1 + sum2) * a2.T * (1 - a2.T)
    # delta3: (64, 10000)
    # sum1: (25, 10000)
    # delta2: (25, 10000)
    # a1: (10000, 64)
    # a2: (10000, 25)
    partial_j1 = np.dot(delta2, a1)
    partial_j2 = np.dot(delta3, a2)
    partial_b1 = delta2.sum(axis=1)
    partial_b2 = delta3.sum(axis=1)
    return partial_j1, partial_j2, partial_b1, partial_b2

answer = orig()
result = alt()
for a, r in zip(answer, result):
    try:
        assert np.allclose(np.squeeze(a), r)
    except AssertionError:
        print(a.shape)
        print(r.shape)
        raise
Tip: Notice that I left in the comments the shape of all the intermediate arrays. Knowing the shape of the arrays helped me understand what your code was doing. The shape of the arrays can help guide you toward the right NumPy functions to use. Or at least, paying attention to the shapes can help you know if an operation is sensible. For example, when you compute
np.dot(A, B)
and A.shape = (n, m) and B.shape = (m, p), then np.dot(A, B) will be an array of shape (n, p).
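For instance, a quick sanity check of that rule with made-up shapes:
import numpy as np
A = np.ones((3, 4))        # (n, m) with n=3, m=4
B = np.ones((4, 2))        # (m, p) with p=2
print(np.dot(A, B).shape)  # (3, 2), i.e. (n, p)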
It can help to build the arrays in C_CONTIGUOUS order (at least, if using np.dot). There might be as much as a 3x speed-up by doing so:
Below, x is the same as xf except that x is C_CONTIGUOUS and
xf is F_CONTIGUOUS -- and the same relationship for y and yf.
import numpy as np
m, n, p = 10 ** 4, 64, 25
x = np.random.random((n, m))
xf = np.asarray(x, order='F')
y = np.random.random((m, n))
yf = np.asarray(y, order='F')
assert np.allclose(x, xf)
assert np.allclose(y, yf)
assert np.allclose(np.dot(x, y), np.dot(xf, y))
assert np.allclose(np.dot(x, y), np.dot(xf, yf))
%timeit benchmarks show the difference in speed:
In [50]: %timeit np.dot(x, y)
100 loops, best of 3: 12.9 ms per loop
In [51]: %timeit np.dot(xf, y)
10 loops, best of 3: 27.7 ms per loop
In [56]: %timeit np.dot(x, yf)
10 loops, best of 3: 21.8 ms per loop
In [53]: %timeit np.dot(xf, yf)
10 loops, best of 3: 33.3 ms per loop
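If you want to check or fix an array's memory order yourself, the flags attribute and np.ascontiguousarray are the standard tools; a small illustration:
import numpy as np
x = np.random.random((64, 10000))
xf = np.asarray(x, order='F')
print(x.flags['C_CONTIGUOUS'], xf.flags['C_CONTIGUOUS'])  # True False
xc = np.ascontiguousarray(xf)    # copies back to C order
print(xc.flags['C_CONTIGUOUS'])  # True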
Regarding benchmarking in Python:
It can be misleading to use the difference in pairs of time.time() calls to benchmark the speed of code in Python.
You need to repeat the measurement many times. It's better to disable the automatic garbage collector. It is also important to measure large spans of time (such as at least 10 seconds' worth of repetitions) to avoid errors due to poor clock resolution and to reduce the significance of time.time() call overhead. Instead of writing all that code yourself, Python provides you with the timeit module. I'm essentially using that to time the pieces of code, except that I'm calling it through an IPython terminal for convenience.
I'm not sure if this is affecting your benchmarks, but be aware it could make a difference. In the question I linked to, according to time.time two pieces of code differed by a factor of 1.7x while benchmarks using timeit showed the pieces of code ran in essentially identical amounts of time.
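For completeness, here is a minimal sketch of using the timeit module directly, outside IPython (the statement and array sizes are illustrative, not your exact code):
import timeit
t = timeit.repeat(
    'np.dot(x, y)',
    setup='import numpy as np; '
          'x = np.random.random((64, 10000)); '
          'y = np.random.random((10000, 64))',
    repeat=3, number=100)
print(min(t) / 100, 'seconds per call')  # best of 3, averaged over 100 calls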

I would start with in-place operations to avoid allocating new arrays every time:
partial_j1 += np.dot(delta2, a1[i,:].reshape(1,a1.shape[1]))
partial_j2 += np.dot(delta3, a2[i,:].reshape(1,a2.shape[1]))
partial_b1 += delta2
partial_b2 += delta3
You can replace this expression:
a1[i,:].reshape(1,a1.shape[1])
with a simpler and faster one (thanks to Bi Rico):
a1[i:i+1]
Also, this line:
sum2 = sparse.beta*(-float(sparse.rho)/rhoest + float(1.0 - sparse.rho) / (1.0 - rhoest))
appears to be the same on every iteration of the loop; you don't need to recompute it.
And, a probably minor optimization, you can replace all the occurrences of
x[i,:] with x[i].
Finally, if you can afford to allocate m times more memory, you can follow unutbu's suggestion and vectorize the loop, replacing:
for i in range(m):
    delta3 = -(x[i]-a3[i])*a3[i]*(1 - a3[i])
with:
delta3 = -(x-a3)*a3*(1-a3)
And you can always use Numba to gain significant speed without vectorizing (and without using more memory).
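For what it's worth, here is a minimal sketch (not tested against your data) of what the Numba route could look like for the original loop. The shapes follow the question; theta2_T and the flattened sum2 are assumed to be prepared by the caller, since Numba's np.dot wants contiguous arrays.
import numpy as np
import numba

@numba.njit
def backprop_numba(x, a1, a2, a3, theta2_T, sum2):
    m, n = x.shape              # (10000, 64)
    p = a2.shape[1]             # 25
    partial_j1 = np.zeros((p, n))
    partial_j2 = np.zeros((n, p))
    partial_b1 = np.zeros(p)
    partial_b2 = np.zeros(n)
    for i in range(m):
        delta3 = -(x[i] - a3[i]) * a3[i] * (1.0 - a3[i])  # (n,)
        sum1 = np.dot(theta2_T, delta3)                   # (p,)
        delta2 = (sum1 + sum2) * a2[i] * (1.0 - a2[i])    # (p,)
        for j in range(p):
            partial_b1[j] += delta2[j]
            for k in range(n):
                partial_j1[j, k] += delta2[j] * a1[i, k]
        for k in range(n):
            partial_b2[k] += delta3[k]
            for j in range(p):
                partial_j2[k, j] += delta3[k] * a2[i, j]
    return partial_j1, partial_j2, partial_b1, partial_b2

# Usage sketch (theta2_T must be contiguous for Numba's np.dot):
# theta2_T = np.ascontiguousarray(sparse.theta2.T)
# backprop_numba(x, a1, a2, a3, theta2_T, sum2.ravel())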

Differences in performance between NumPy and MATLAB have always frustrated me. They often boil down in the end to the underlying LAPACK libraries. As far as I know, MATLAB ships with the full ATLAS LAPACK by default, while NumPy uses a LAPACK light. MATLAB reckons people don't care about space and bulk, while NumPy reckons people do. Similar question with a good answer.
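You can check which BLAS/LAPACK your own NumPy build is linked against:
import numpy as np
np.show_config()  # prints the BLAS/LAPACK libraries NumPy was built with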

Related

Optimize 2D array multiplication numpy

I am trying to do a quite memory-intensive multiplication and it seems that I am always filling up my RAM. The idea is that I have a 2D Gaussian centered at (0,0), and another 2D Gaussian whose distance from the (0,0) point changes in time. For each time step I need to compute the product of the two Gaussians on a specific grid and sum over all the indices: at each timestep I should compute sum_ij(g1[i,j] * g2[i,j]), ending up with a 1D array of the same length as time.
The code here is just pseudo-code. The problem is the creation of the 1001x1001x25000 array via xx[:,:,np.newaxis], which is huge.
import numpy as np
import numexpr as ne

def gaussian2d(x, y, x0, y0, x_std, y_std):
    return np.exp(-(x-x0)**2/x_std**2 - (y-y0)**2/y_std**2)

x = np.linspace(-5, 5, 1001)
y = np.linspace(-5, 5, 1001)
xx, yy = np.meshgrid(x, y)
g1 = gaussian2d(xx, yy, 0, 0, 0.25, 0.25)
x0 = np.random.rand(25000)
y0 = np.random.rand(25000)
X = np.subtract(xx[:,:,np.newaxis], x0)
Y = np.subtract(yy[:,:,np.newaxis], y0)
X_std = 0.75
Y_std = 0.75
temp = ne.evaluate('exp(-(X)**2/(2*X_std**2)-(Y)**2/(2*Y_std**2))')
final = np.sum(np.multiply(temp.T, g1), axis=(1,2))
A very slow alternative would be to just loop along the x0 length, but in the future x0 may be as long as 100000 points. The other solution would be to reduce the grid, but in that case I would lose resolution, and if the fixed function is not a Gaussian but something different, it may affect the calculations.
Any suggestion?
There is no need for such HUGE arrays. xx and yy, as well as X and Y, contain the same line/column repeated 1001 times, which is a huge waste of memory! RAM is a very scarce resource (both throughput and space), so you should avoid operating on very large arrays (so as to use the CPU caches, which are far faster, or even CPU registers). You can rewrite the code using loops and a JIT compiler like Numba (or a static compiler like Cython) so it runs efficiently without any of the big arrays. In fact, thinking in terms of loops can help to optimize the operation further even in pure NumPy (see later), so Numba/Cython is not even strictly needed. Here is a naive implementation:
import numpy as np
import numba as nb

@nb.njit('(f8[:], f8[:], i8, i8, f8[:,::1])', parallel=True)
def compute(x0, y0, N, T, g1):
    # x and y (the 1D grids from the question) are captured as globals
    X_std = 0.75
    Y_std = 0.75
    final = np.empty(T)
    for i in nb.prange(T):
        s = 0.0
        for k in range(N):
            for j in range(N):
                X = x[k] - x0[i]
                Y = y[j] - y0[i]
                temp = np.exp(-(X)**2/(2*X_std**2) - (Y)**2/(2*Y_std**2))
                s += g1[k, j] * temp
        final[i] = s
    return final

N = 1001
T = 25000
# [...] (same code as in the question without the big temporary arrays)
final = compute(x0, y0, N, T, g1)
This code is much faster and uses only a tiny amount of RAM compared to the initial code, which could not even run on a regular PC. The same strategy can be used to compute g1, so as not to create xx and yy.
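For example, a sketch of building g1 directly with broadcasting (assuming g1 is indexed as g1[k, j] with k over x and j over y, matching the loops above):
import numpy as np
x = np.linspace(-5, 5, 1001)
y = np.linspace(-5, 5, 1001)
x_std = y_std = 0.25
# an outer sum of two 1D squared terms replaces the two (1001, 1001) grids
g1 = np.exp(-x[:, None]**2 / x_std**2 - y[None, :]**2 / y_std**2)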
Actually, the compute() function above is not even optimized. One can split the exponential expression into two parts, so as to pre-compute partial results using only basic math. The computation can then be factorized to reduce the number of mathematical operations even more. Here is a better implementation:
@nb.njit('(f8[:], f8[:], i8, i8, f8[:,::1])', parallel=True)
def compute_clever(x0, y0, N, T, g1):
    X_std = 0.75
    Y_std = 0.75
    final = np.empty(T)
    exp1 = np.empty((T, N))
    exp2 = np.empty((T, N))
    for i in nb.prange(T):
        for k in range(N):
            X = x[k] - x0[i]
            exp1[i, k] = np.exp(-(X)**2/(2*X_std**2))
    for i in nb.prange(T):
        for j in range(N):
            Y = y[j] - y0[i]
            exp2[i, j] = np.exp(-(Y)**2/(2*Y_std**2))
    for i in nb.prange(T):
        s = 0.0
        for k in range(N):
            s2 = 0.0
            for j in range(N):
                s2 += g1[k, j] * exp2[i, j]
            s += s2 * exp1[i, k]
        final[i] = s
    return final
Here are results with N=1001 and T=250 on my 6-core machine:
Naive Numpy: 2380 ms (use about 4 GiB of RAM)
compute: 374 ms (use few MiB of RAM)
compute_clever: 55 ms (use few MiB of RAM)
Note that the code can be further optimized using register blocking, though it will make the code more complex. Also note that the last kernel can certainly be computed efficiently using np.einsum. exp1 and exp2 can also be computed using basic NumPy operations (though it will be a bit less efficient). Thus, you could even solve this with pure NumPy code.

python matrix optimization

My input has a shape: (9, 480, 1)
Here is my current function:
def f(X):
    ordered_max_scorers = np.sort(np.max(X, axis=2), axis=1)
    cd_mat2 = np.array([[0] * len(X)] * len(X), dtype="float64")
    for ind1, score1 in enumerate(ordered_max_scorers):
        for ind2, score2 in enumerate(ordered_max_scorers[:ind1]):
            cognitive_diversity = np.sum((score1 - score2)**2)**(1/2)
            cd_mat2[ind1][ind2] = cognitive_diversity
            cd_mat2[ind2][ind1] = cognitive_diversity
    return cd_mat2
And I am trying to optimize this function so that it runs faster. It is called once by an object B within an object A, and B is called a few hundred times every time A runs (about two dozen times), if that makes a difference.
I have tried to use a bit of numpy by replacing:
cognitive_diversity = np.sum((score1 - score2) ** 2) ** (1/2)
with:
cognitive_diversity = np.sqrt(np.sum(np.square(score1 - score2)))
But that seems to slow the code down.
I also tried to optimize using numba:
@autojit
def temp(x, y):
    return np.sum((x - y) ** 2) ** (1/2)
which is used for cognitive_diversity again:
cognitive_diversity = temp(score1, score2)
But it also slowed the code down a lot.
If anyone has any suggestions for the speed up or any suggestions how to properly re-write the loop to speed it up that would be amazing!
EDIT
Thank you for the assistance. I used it to come up with a slightly different solution.
# Std jit optimization
@jit()
def cd_matrix_jit(oms, l_oms, l_oms2, cd_mat2):
    for ind1 in range(l_oms):
        for ind2 in range(l_oms):
            d = 0.0
            for i in range(l_oms2):
                d = d + (oms[ind1, i] - oms[ind2, i])**2
            cd_mat2[ind1, ind2] = d**(1/2)
    return cd_mat2
First, a few improvements in style:
np.array([[0] * len(X)] * len(X), dtype="float64")
len(X) is 9, i.e. X.shape[0]. So this expression is better written as
np.zeros((X.shape[0], X.shape[0]))
i.e. a 9x9 array of 0s (float is the default dtype).
cd_mat2[ind1][ind2]
can be written as
cd_mat2[ind1, ind2]
In
for ind1, score1 in enumerate(ordered_max_scorers):
    for ind2, score2 in enumerate(ordered_max_scorers[:ind1]):
        cognitive_diversity = np.sum((score1 - score2)**2)**(1/2)
you are trying to avoid duplicate calculations by using an upper (or lower) triangle. That probably isn't worth it with numpy. If you can perform the whole calculation with one expression, rather than loops, it'll be faster even if some values are duplicated.
Something like:
cd_mat2 = np.sqrt(np.sum((ordered_max_scorers[:, None] - ordered_max_scorers)**2, axis=2))
A sample X, preferably smaller than (9, 480, 1), would let us duplicate your calculations in alternatives.

Numpy ":" operator broadcasting issues

In the following code I have written two methods that theoretically (in my mind) should do the same thing. Unfortunately they don't, and I am unable to find out why from the numpy documentation.
import numpy as np
dW = np.zeros((20, 10))
y = [1 for _ in range(100)]
X = np.ones((100, 20))
# ===================
# Method 1 (works!)
# ===================
for i in range(len(y)):
    dW[:, y[i]] -= X[i]
# ===================
# Method 2 (does not work)
# ===================
dW[:, y] -= X.T
As indicated, in principle you cannot operate multiple times on the same element in a single operation, due to how buffering works in NumPy. For that purpose there is the at method, which can be used with just about any standard NumPy ufunc (add, subtract, etc.). For your case, you can do:
import numpy as np
dW = np.zeros((20, 10))
y = [1 for _ in range(100)]
X = np.ones((100, 20))
# at modifies in place dW, does not return a new array
np.subtract.at(dW, (slice(None), y), X.T)
This is a column-wise version of this question.
The answer there can be adapted to work column-wise as follows:
Approach 1: np.<ufunc>.at
>>> np.subtract.at(dW, (slice(None), y), X.T)
Approach 2: np.bincount
>>> m, n = dW.shape
>>> dW -= np.bincount(np.add.outer(np.arange(m) * n, y).ravel(), (X.T).ravel(), dW.size).reshape(m, n)
Please note that the bincount based solution - even though it involves more steps - is faster by a factor of ~6.
>>> from timeit import repeat
>>> kwds = dict(globals=globals(), number=5000)
>>>
>>> repeat('np.subtract.at(dW, (slice(None), y), X.T); np.add.at(dW, (slice(None), y), X.T)', **kwds)
[1.590626839082688, 1.5769231889862567, 1.5802007300080732]
>>> repeat('_= dW; _ -= np.bincount(np.add.outer(np.arange(m) * n, y).ravel(), (X.T).ravel(), dW.size).reshape(m, n); _ += np.bincount(np.add.outer(np.arange(m) * n, y).ravel(), (X.T).ravel(), dW.size).reshape(m, n)', **kwds)
[0.2582490430213511, 0.25572817400097847, 0.25478115503210574]
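To see why the bincount trick works, here is a tiny self-contained illustration (hypothetical 2x3 example): the (row, col) pairs are flattened to linear indices and the weights are accumulated per index.
import numpy as np
m, n = 2, 3
rows = np.array([0, 0, 1])
cols = np.array([1, 1, 2])
w = np.array([1.0, 1.0, 5.0])
# duplicate (0, 1) entries accumulate instead of overwriting
acc = np.bincount(rows * n + cols, weights=w, minlength=m * n).reshape(m, n)
print(acc)  # [[0. 2. 0.]
            #  [0. 0. 5.]]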
I found a third solution for this problem: normal matrix multiplication.
ind = np.zeros((X.shape[0],dW.shape[1]))
ind[range(X.shape[0]),y] = -1
dW = X.T.dot(ind)
I did some experiment using the proposed methods above on some neural network data. In my example X.shape = (500,3073), W.shape = (3073,10) and ind.shape = (500,10).
The subtract version takes about 0.2 seconds (slowest). The matrix multiplication method takes 0.01 s (fastest), the normal loop 0.015 s, and the bincount method 0.04 s. Note that in the question y is a vector of ones; this is not my case. The case with only ones could be solved with a simple sum.
Option 1:
for i in range(len(y)):
    dW[:, y[i]] -= X[i]
This works because you are looping through, and each update reads the value written by the previous iteration.
Option 2:
dW[:, [1,1,1,1,....1,1,1]] -= [[1,1,1,1...1],
                               [1,1,1,1...1],
                               .
                               .
                               [1,1,1,1...1]]
It does not work because the updates to index 1 all happen at the same time, in parallel, not serially: each subtraction reads the original buffered values. Initially all entries are 0, so the subtraction results in -1.
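A small self-contained demonstration of that buffering effect (with made-up shapes):
import numpy as np
dW = np.zeros((1, 3))
dW[:, [1, 1]] -= 1                            # buffered: the two updates collapse into one
print(dW)                                     # [[ 0. -1.  0.]]
dW = np.zeros((1, 3))
np.subtract.at(dW, (slice(None), [1, 1]), 1)  # unbuffered: both updates apply
print(dW)                                     # [[ 0. -2.  0.]]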

python multiprocessing for euclidean distance loop [duplicate]

I have two points in 3D space:
a = (ax, ay, az)
b = (bx, by, bz)
I want to calculate the distance between them:
dist = sqrt((ax-bx)^2 + (ay-by)^2 + (az-bz)^2)
How do I do this with NumPy? I have:
import numpy
a = numpy.array((ax, ay, az))
b = numpy.array((bx, by, bz))
Use numpy.linalg.norm:
dist = numpy.linalg.norm(a-b)
This works because the Euclidean distance is the l2 norm, and the default value of the ord parameter in numpy.linalg.norm is 2.
For more theory, see Introduction to Data Mining.
Use scipy.spatial.distance.euclidean:
from scipy.spatial import distance
a = (1, 2, 3)
b = (4, 5, 6)
dst = distance.euclidean(a, b)
For anyone interested in computing multiple distances at once, I've done a little comparison using perfplot (a small project of mine).
The first advice is to organize your data such that the arrays have dimension (3, n) (and are C-contiguous obviously). If adding happens in the contiguous first dimension, things are faster, and it doesn't matter too much if you use sqrt-sum with axis=0, linalg.norm with axis=0, or
a_min_b = a - b
numpy.sqrt(numpy.einsum('ij,ij->j', a_min_b, a_min_b))
which is, by a slight margin, the fastest variant. (That actually holds true for just one row as well.)
The variants where you sum up over the second axis, axis=1, are all substantially slower.
Code to reproduce the plot:
import numpy
import perfplot
from scipy.spatial import distance

def linalg_norm(data):
    a, b = data[0]
    return numpy.linalg.norm(a - b, axis=1)

def linalg_norm_T(data):
    a, b = data[1]
    return numpy.linalg.norm(a - b, axis=0)

def sqrt_sum(data):
    a, b = data[0]
    return numpy.sqrt(numpy.sum((a - b) ** 2, axis=1))

def sqrt_sum_T(data):
    a, b = data[1]
    return numpy.sqrt(numpy.sum((a - b) ** 2, axis=0))

def scipy_distance(data):
    a, b = data[0]
    return list(map(distance.euclidean, a, b))

def sqrt_einsum(data):
    a, b = data[0]
    a_min_b = a - b
    return numpy.sqrt(numpy.einsum("ij,ij->i", a_min_b, a_min_b))

def sqrt_einsum_T(data):
    a, b = data[1]
    a_min_b = a - b
    return numpy.sqrt(numpy.einsum("ij,ij->j", a_min_b, a_min_b))

def setup(n):
    a = numpy.random.rand(n, 3)
    b = numpy.random.rand(n, 3)
    out0 = numpy.array([a, b])
    out1 = numpy.array([a.T, b.T])
    return out0, out1

b = perfplot.bench(
    setup=setup,
    n_range=[2 ** k for k in range(22)],
    kernels=[
        linalg_norm,
        linalg_norm_T,
        scipy_distance,
        sqrt_sum,
        sqrt_sum_T,
        sqrt_einsum,
        sqrt_einsum_T,
    ],
    xlabel="len(x), len(y)",
)
b.save("norm.png")
I want to expound on the simple answer with various performance notes. np.linalg.norm will perhaps do more than you need:
dist = numpy.linalg.norm(a-b)
Firstly - this function is designed to work over a list and return all of the values, e.g. to compare the distance from pA to the set of points sP:
sP = np.array(points)
pA = point
distances = np.linalg.norm(sP - pA, ord=2, axis=1)  # an array of distances
Remember several things:
Python function calls are expensive.
[Regular] Python doesn't cache name lookups.
So
def distance(pointA, pointB):
    dist = np.linalg.norm(pointA - pointB)
    return dist
isn't as innocent as it looks.
>>> dis.dis(distance)
2 0 LOAD_GLOBAL 0 (np)
2 LOAD_ATTR 1 (linalg)
4 LOAD_ATTR 2 (norm)
6 LOAD_FAST 0 (pointA)
8 LOAD_FAST 1 (pointB)
10 BINARY_SUBTRACT
12 CALL_FUNCTION 1
14 STORE_FAST 2 (dist)
3 16 LOAD_FAST 2 (dist)
18 RETURN_VALUE
Firstly - every time we call it, we have to do a global lookup for "np", a scoped lookup for "linalg" and a scoped lookup for "norm", and the overhead of merely calling the function can equate to dozens of python instructions.
Lastly, we wasted two operations storing the result and reloading it for the return...
First pass at improvement: make the lookup faster, skip the store
def distance(pointA, pointB, _norm=np.linalg.norm):
    return _norm(pointA - pointB)
We get the far more streamlined:
>>> dis.dis(distance)
2 0 LOAD_FAST 2 (_norm)
2 LOAD_FAST 0 (pointA)
4 LOAD_FAST 1 (pointB)
6 BINARY_SUBTRACT
8 CALL_FUNCTION 1
10 RETURN_VALUE
The function call overhead still amounts to some work, though. And you'll want to do benchmarks to determine whether you might be better off doing the math yourself:
def distance(pointA, pointB):
    return (
        ((pointA.x - pointB.x) ** 2) +
        ((pointA.y - pointB.y) ** 2) +
        ((pointA.z - pointB.z) ** 2)
    ) ** 0.5  # fast sqrt
On some platforms, **0.5 is faster than math.sqrt. Your mileage may vary.
Advanced performance notes.
Why are you calculating distance? If the sole purpose is to display it,
print("The target is %.2fm away" % (distance(a, b)))
move along. But if you're comparing distances, doing range checks, etc., I'd like to add some useful performance observations.
Let’s take two cases: sorting by distance or culling a list to items that meet a range constraint.
# Ultra naive implementations. Hold onto your hat.
def sort_things_by_distance(origin, things):
    return things.sort(key=lambda thing: distance(origin, thing))

def in_range(origin, range, things):
    things_in_range = []
    for thing in things:
        if distance(origin, thing) <= range:
            things_in_range.append(thing)
    return things_in_range
The first thing we need to remember is that we are using Pythagoras to calculate the distance (dist = sqrt(x^2 + y^2 + z^2)) so we're making a lot of sqrt calls. Math 101:
dist = root ( x^2 + y^2 + z^2 )
:.
dist^2 = x^2 + y^2 + z^2
and
sq(N) < sq(M) iff M > N
and
sq(N) > sq(M) iff N > M
and
sq(N) = sq(M) iff N == M
In short: until we actually require the distance in a unit of X rather than X^2, we can eliminate the hardest part of the calculations.
# Still naive, but much faster.
def distance_sq(left, right):
    """ Returns the square of the distance between left and right. """
    return (
        ((left.x - right.x) ** 2) +
        ((left.y - right.y) ** 2) +
        ((left.z - right.z) ** 2)
    )

def sort_things_by_distance(origin, things):
    return things.sort(key=lambda thing: distance_sq(origin, thing))

def in_range(origin, range, things):
    things_in_range = []
    # Remember that sqrt(N)**2 == N, so if we square
    # range, we don't need to root the distances.
    range_sq = range**2
    for thing in things:
        if distance_sq(origin, thing) <= range_sq:
            things_in_range.append(thing)
    return things_in_range
Great, both functions no longer do any expensive square roots. That'll be much faster, but before you go further, check yourself: why did sort_things_by_distance need a "naive" disclaimer both times above? Answer at the very bottom (*a1).
We can improve in_range by converting it to a generator:
def in_range(origin, range, things):
    range_sq = range**2
    yield from (thing for thing in things
                if distance_sq(origin, thing) <= range_sq)
This especially has benefits if you are doing something like:
if any(in_range(origin, max_dist, things)):
    ...
But if the very next thing you are going to do requires a distance,
for nearby in in_range(origin, walking_distance, hotdog_stands):
    print("%s %.2fm" % (nearby.name, distance(origin, nearby)))
consider yielding tuples:
def in_range_with_dist_sq(origin, range, things):
    range_sq = range**2
    for thing in things:
        dist_sq = distance_sq(origin, thing)
        if dist_sq <= range_sq:
            yield (thing, dist_sq)
This can be especially useful if you might chain range checks ('find things that are near X and within Nm of Y'), since you don't have to calculate the distance again.
But what about if we're searching a really large list of things and we anticipate a lot of them not being worth consideration?
There is actually a very simple optimization:
def in_range_all_the_things(origin, range, things):
    range_sq = range**2
    for thing in things:
        dist_sq = (origin.x - thing.x) ** 2
        if dist_sq <= range_sq:
            dist_sq += (origin.y - thing.y) ** 2
            if dist_sq <= range_sq:
                dist_sq += (origin.z - thing.z) ** 2
                if dist_sq <= range_sq:
                    yield thing
Whether this is useful will depend on the size of 'things'.
def in_range_all_the_things(origin, range, things):
    range_sq = range**2
    if len(things) >= 4096:
        for thing in things:
            dist_sq = (origin.x - thing.x) ** 2
            if dist_sq <= range_sq:
                dist_sq += (origin.y - thing.y) ** 2
                if dist_sq <= range_sq:
                    dist_sq += (origin.z - thing.z) ** 2
                    if dist_sq <= range_sq:
                        yield thing
    elif len(things) > 32:
        for thing in things:
            dist_sq = (origin.x - thing.x) ** 2
            if dist_sq <= range_sq:
                dist_sq += (origin.y - thing.y) ** 2 + (origin.z - thing.z) ** 2
                if dist_sq <= range_sq:
                    yield thing
    else:
        ...  # just calculate distance and range-check it
And again, consider yielding the dist_sq. Our hotdog example then becomes:
# Chaining generators
info = in_range_with_dist_sq(origin, walking_distance, hotdog_stands)
info = ((stand, dist_sq**0.5) for stand, dist_sq in info)
for stand, dist in info:
    print("%s %.2fm" % (stand, dist))
(*a1: sort_things_by_distance's sort key calls distance_sq for every single item, and that innocent looking key is a lambda, which is a second function that has to be invoked...)
Another instance of this problem solving method:
def dist(x, y):
    return numpy.sqrt(numpy.sum((x-y)**2))

a = numpy.array((xa, ya, za))
b = numpy.array((xb, yb, zb))
dist_a_b = dist(a, b)
Starting with Python 3.8, the math module directly provides the dist function, which returns the Euclidean distance between two points (given as tuples or lists of coordinates):
from math import dist
dist((1, 2, 6), (-2, 3, 2)) # 5.0990195135927845
And if you're working with lists:
dist([1, 2, 6], [-2, 3, 2]) # 5.0990195135927845
It can be done like the following. I don't know how fast it is, but it's not using NumPy.
from math import sqrt
a = (1, 2, 3) # Data point 1
b = (4, 5, 6) # Data point 2
print sqrt(sum( (a - b)**2 for a, b in zip(a, b)))
A nice one-liner:
dist = numpy.linalg.norm(a-b)
However, if speed is a concern, I would recommend experimenting on your machine. I've found that using the math library's sqrt with the ** operator for the square is much faster on my machine than the one-liner NumPy solution.
I ran my tests using this simple program:
#!/usr/bin/python
import math
import numpy
from random import uniform

def fastest_calc_dist(p1, p2):
    return math.sqrt((p2[0] - p1[0]) ** 2 +
                     (p2[1] - p1[1]) ** 2 +
                     (p2[2] - p1[2]) ** 2)

def math_calc_dist(p1, p2):
    return math.sqrt(math.pow((p2[0] - p1[0]), 2) +
                     math.pow((p2[1] - p1[1]), 2) +
                     math.pow((p2[2] - p1[2]), 2))

def numpy_calc_dist(p1, p2):
    return numpy.linalg.norm(numpy.array(p1) - numpy.array(p2))

TOTAL_LOCATIONS = 1000
p1 = dict()
p2 = dict()
for i in range(0, TOTAL_LOCATIONS):
    p1[i] = (uniform(0, 1000), uniform(0, 1000), uniform(0, 1000))
    p2[i] = (uniform(0, 1000), uniform(0, 1000), uniform(0, 1000))

total_dist = 0
for i in range(0, TOTAL_LOCATIONS):
    for j in range(0, TOTAL_LOCATIONS):
        dist = fastest_calc_dist(p1[i], p2[j])  # change this line for testing
        total_dist += dist

print total_dist
On my machine, math_calc_dist runs much faster than numpy_calc_dist: 1.5 seconds versus 23.5 seconds.
To get a measurable difference between fastest_calc_dist and math_calc_dist I had to up TOTAL_LOCATIONS to 6000. Then fastest_calc_dist takes ~50 seconds while math_calc_dist takes ~60 seconds.
You can also experiment with numpy.sqrt and numpy.square though both were slower than the math alternatives on my machine.
My tests were run with Python 2.6.6.
I found a 'dist' function in matplotlib.mlab, but I don't think it's handy enough.
I'm posting it here just for reference.
import numpy as np
import matplotlib.mlab as mlab

a = np.array([1, 2, 3])
b = np.array([2, 3, 4])

# Distance between a and b
dis = mlab.dist(a, b)
You can just subtract the vectors and then take the inner product.
Following your example,
a = numpy.array((xa, ya, za))
b = numpy.array((xb, yb, zb))
tmp = a - b
sum_squared = numpy.dot(tmp.T, tmp)
result = numpy.sqrt(sum_squared)
I like np.dot (dot product):
import numpy as np
a = np.array((xa, ya, za))
b = np.array((xb, yb, zb))
distance = (np.dot(a-b, a-b))**.5
With Python 3.8, it's very easy.
https://docs.python.org/3/library/math.html#math.dist
math.dist(p, q)
Return the Euclidean distance between two points p and q, each given
as a sequence (or iterable) of coordinates. The two points must have
the same dimension.
Roughly equivalent to:
sqrt(sum((px - qx) ** 2.0 for px, qx in zip(p, q)))
Having a and b as you defined them, you can use also:
distance = np.sqrt(np.sum((a-b)**2))
Since Python 3.8
Since Python 3.8 the math module includes the function math.dist().
See here https://docs.python.org/3.8/library/math.html#math.dist.
math.dist(p1, p2)
Return the Euclidean distance between two points p1 and p2,
each given as a sequence (or iterable) of coordinates.
import math
print( math.dist( (0,0), (1,1) )) # sqrt(2) -> 1.4142
print( math.dist( (0,0,0), (1,1,1) )) # sqrt(3) -> 1.7321
Here's some concise code for Euclidean distance in Python given two points represented as lists in Python.
def distance(v1, v2):
    return sum([(x - y)**2 for (x, y) in zip(v1, v2)])**(0.5)
import math
dist = math.hypot(math.hypot(xa-xb, ya-yb), za-zb)
Calculate the Euclidean distance for multidimensional space:
import math
x = [1, 2, 6]
y = [-2, 3, 2]
dist = math.sqrt(sum([(xi-yi)**2 for xi, yi in zip(x, y)]))
print(dist)  # 5.0990195135927845
import numpy as np
from scipy.spatial import distance

input_arr = np.array([[0,3,0],[2,0,0],[0,1,3],[0,1,2],[-1,0,1],[1,1,1]])
test_case = np.array([0,0,0])
dst = []
for i in range(0, 6):
    temp = distance.euclidean(test_case, input_arr[i])
    dst.append(temp)
print(dst)
You can easily use the formula
distance = np.sqrt(np.sum(np.square(a-b)))
which actually does nothing more than use Pythagoras' theorem to calculate the distance, by adding the squares of Δx, Δy and Δz and taking the square root of the result.
import numpy as np
# any two python arrays as two points
a = [0, 0]
b = [3, 4]
First change the lists to numpy arrays and do: print(np.linalg.norm(np.array(a) - np.array(b))). The second method works directly from the python lists: print(np.linalg.norm(np.subtract(a, b)))
The other answers work for floating point numbers, but do not correctly compute the distance for integer dtypes, which are subject to overflow and underflow. Note that even scipy.spatial.distance.euclidean has this issue:
>>> a1 = np.array([1], dtype='uint8')
>>> a2 = np.array([2], dtype='uint8')
>>> a1 - a2
array([255], dtype=uint8)
>>> np.linalg.norm(a1 - a2)
255.0
>>> from scipy.spatial import distance
>>> distance.euclidean(a1, a2)
255.0
This is common, since many image libraries represent an image as an ndarray with dtype="uint8". This means that if you have a greyscale image which consists of very dark grey pixels (say, all the pixels have color #000001) and you're diffing it against a black image (#000000), you can end up with x-y consisting of 255 in all cells, which registers as the two images being very far apart from each other. For unsigned integer types (e.g. uint8), you can safely compute the distance in numpy as:
np.linalg.norm(np.maximum(x, y) - np.minimum(x, y))
For signed integer types, you can cast to a float first:
np.linalg.norm(x.astype("float") - y.astype("float"))
For image data specifically, you can use opencv's norm method:
import cv2
cv2.norm(x, y, cv2.NORM_L2)
Find the difference of the two matrices first. Then, apply element-wise multiplication with numpy's multiply command. Then, find the summation of the element-wise multiplied new matrix. Finally, find the square root of the summation.
import numpy as np

def findEuclideanDistance(a, b):
    euclidean_distance = a - b
    euclidean_distance = np.sum(np.multiply(euclidean_distance, euclidean_distance))
    euclidean_distance = np.sqrt(euclidean_distance)
    return euclidean_distance
What's the best way to do this with NumPy, or with Python in general? I have:
Well, the best way would be the safest and also the fastest.
I would suggest using hypot for reliable results, since the chances of underflow and overflow are very small compared to writing your own square-root calculation.
Let's compare math.hypot and np.hypot vs vanilla np.sqrt(np.sum(np.array([i, j, k]) ** 2)):
i, j, k = 1e+200, 1e+200, 1e+200
math.hypot(i, j, k)
# 1.7320508075688773e+200
np.sqrt(np.sum((np.array([i, j, k])) ** 2))
# RuntimeWarning: overflow encountered in square
Speed-wise, math.hypot looks better:
%%timeit
math.hypot(i, j, k)
# 100 ns ± 1.05 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
%%timeit
np.sqrt(np.sum((np.array([i, j, k])) ** 2))
# 6.41 µs ± 33.3 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Underflow
i, j = 1e-200, 1e-200
np.sqrt(i**2+j**2)
# 0.0
Overflow
i, j = 1e+200, 1e+200
np.sqrt(i**2+j**2)
# inf
No Underflow
i, j = 1e-200, 1e-200
np.hypot(i, j)
# 1.414213562373095e-200
No Overflow
i, j = 1e+200, 1e+200
np.hypot(i, j)
# 1.414213562373095e+200
The fastest solution I could come up with for a large number of distances is using numexpr. On my machine it is faster than using numpy einsum:
import numexpr as ne
import numpy as np

a_min_b = a - b  # a and b as defined in the question, with shape (n, 3)
np.sqrt(ne.evaluate("sum((a_min_b)**2, axis=1)"))
If you want something more explicit you can easily write the formula like this:
np.sqrt(np.sum((a-b)**2))
Even with arrays of 10_000_000 elements this still runs at 0.1s on my machine.

Tips on how to optimize the inner function with a large for loop

def testing(min_quadReq, stepsize, max_quadReq, S):
    y = np.arange(min_quadReq, max_quadReq, stepsize)
    print("Y", y)
    I_avg = np.zeros(len(y))
    Q_avg = np.zeros(len(y))
    x = np.arange(0, len(S))
    debugger = 0
    for i in range(0, len(y)):
        I = np.array(S * (np.cos(2 * np.pi * y[i] * x)))
        Q = np.array(S * (np.sin(2 * np.pi * y[i] * x)))
        I_avg[i] = np.sum(I, 0)
        Q_avg[i] = np.sum(Q, 0)
        debugger += 1
    D = [I_avg**2 + Q_avg**2]
    maxIndex = np.argmax(D)
    #maxValue = D.max()
    # in python it is arctan2(b,a), compared to matlab's atan2(a,b)
    phaseOut = np.arctan2(Q_avg[maxIndex], I_avg[maxIndex])
    # returns the out value and the phase
    out = min_quadReq + ((maxIndex + 1) - 1) * stepsize
    return out, phaseOut
I'm working on a project that uses DSP to process a signal and get out the relevant data. The code above is the inner function of a quadrature modulation. From what I have seen, this is the part of the code with the biggest potential for optimization. For example, the two sum functions are called about 92k times each, and the quadrature function itself 2696 times. I'm not that familiar with Python, so if anyone has a suggestion for a more efficient way of writing it, or some good documentation, it would be lovely.
The signal S is the input source and it's an array of [481][251]. The outer shell of the quadrature is called by quadReq(cavSig[j, :]); just some extra information to show how it's called and how many times.
def randomnumber():
    s = np.random.random_sample((1, 251))
    print(s)
    return s

randomnumber()
Edit: Added some more information
Your loop produces one I_avg value for each element of y. For compactness I could write it as a list comprehension:
In [61]: x=np.arange(4)
In [62]: y=np.arange(0,1,.2)
In [63]: [np.cos(2*np.pi*y[i]*x).sum() for i in range(y.shape[0])]
Out[63]:
[4.0,
-0.30901699437494745,
0.80901699437494767,
0.80901699437494834,
-0.30901699437494756]
But that y[i]*x part is just an outer product of y and x, which can be written with np.outer, or just as easily with broadcasting, y[:,None]*x.
In [64]: np.cos(2*np.pi*y[:,None]*x).sum(axis=1)
Out[64]: array([ 4. , -0.30901699, 0.80901699, 0.80901699, -0.30901699])
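Putting that together, here is a sketch of a fully vectorized version of your testing() (untested against your real data; it assumes S is the 1D row you pass in via quadReq(cavSig[j, :])):
import numpy as np

def testing_vectorized(min_quadReq, stepsize, max_quadReq, S):
    y = np.arange(min_quadReq, max_quadReq, stepsize)
    x = np.arange(len(S))
    phase = 2 * np.pi * y[:, None] * x        # shape (len(y), len(S))
    I_avg = (S * np.cos(phase)).sum(axis=1)   # one pass, no Python loop
    Q_avg = (S * np.sin(phase)).sum(axis=1)
    D = I_avg**2 + Q_avg**2
    maxIndex = np.argmax(D)
    phaseOut = np.arctan2(Q_avg[maxIndex], I_avg[maxIndex])
    out = min_quadReq + maxIndex * stepsize
    return out, phaseOut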
Get on to an interactive Python session, and play around with expressions like this. The best way to learn is by doing and seeing immediate results. Keep the test expressions and arrays small so you can see immediately what is happening.
