I am generating a series of Gaussian arrays given an x vector of length 1400, and arrays for the sigma, center, and amplitude (amp), all of length 100. I thought the best way to speed this up would be to use numpy and a list comprehension:
g = np.sum([amp[i] * np.exp(-0.5 * (x - center[i])**2 / sigma[i]**2) for i in range(len(center))], axis=0)
Each row is a Gaussian evaluated along the vector x, and I then sum down the columns into a single array of length len(x).
But this doesn't seem to speed things up at all. I think there is a faster way to do this while avoiding the for loop but I can't quite figure out how.
You should use vectorized computation instead of a list comprehension, so the loops are all performed at C speed.
In order to do so you have to reshape x to be a column vector. For example, you could do x = x.reshape((1400, 1)).
Then you can operate directly on the arrays, like this:
v = amp * np.exp(-0.5 * (x - center)**2 / sigma**2)
Then you obtain an array of shape (1400, 100), which you can sum up to a vector with np.sum(v, axis=1).
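Putting the pieces together, here is a minimal self-contained sketch of the broadcasting approach (the parameter values below are made up to match the sizes in the question):

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 1400)      # length-1400 grid
center = rng.uniform(-5, 5, 100)    # 100 Gaussian centers
sigma = rng.uniform(0.5, 2.0, 100)  # 100 widths
amp = rng.uniform(0.1, 1.0, 100)    # 100 amplitudes

# Broadcasting a (1400, 1) column against the (100,) parameter arrays
# gives a (1400, 100) array of Gaussians; summing axis 1 adds the 100 curves.
xc = x.reshape(-1, 1)
v = amp * np.exp(-0.5 * (xc - center)**2 / sigma**2)
g_vec = v.sum(axis=1)

# Original list-comprehension version, for comparison
g_loop = np.sum([amp[i] * np.exp(-0.5 * (x - center[i])**2 / sigma[i]**2)
                 for i in range(len(center))], axis=0)

print(np.allclose(g_vec, g_loop))   # True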
You should try to vectorize all the operations. IMHO the most efficient approach is to first convert your input data to numpy arrays (if they were plain Python lists) and then let numpy do the computations:
np_amp = np.array(amp)
np_center = np.array(center)
np_sigma = np.array(sigma)
g = np.sum(np_amp * np.exp(-0.5 * (x[:, None] - np_center)**2 / np_sigma**2), axis=1)  # x[:, None] broadcasts x against the length-100 parameter arrays
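As a rough sanity check of the speed difference, a small timing sketch (sizes taken from the question; the actual numbers depend on the machine):

import timeit
import numpy as np

x = np.linspace(-10, 10, 1400)
np_amp, np_center, np_sigma = (np.random.rand(100) + 0.1 for _ in range(3))

loop = lambda: np.sum([np_amp[i] * np.exp(-0.5 * (x - np_center[i])**2 / np_sigma[i]**2)
                       for i in range(100)], axis=0)
vec = lambda: np.sum(np_amp * np.exp(-0.5 * (x[:, None] - np_center)**2 / np_sigma**2), axis=1)

print(timeit.timeit(loop, number=100))  # 100 Python-level iterations per call
print(timeit.timeit(vec, number=100))   # one broadcast expression per call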
From my understanding, when we want to define a numpy array, we have to define its size.
However, in my case, I want to define a numpy array and then extend it based on the values produced in a for loop. The shape of the values might differ in each run, so I cannot define the numpy array's shape in advance.
Is there any way to overcome this?
I would like to avoid using lists.
Thanks
import numpy as np
myArrayShape = 2
myArray = np.empty(shape=myArrayShape)
Note that np.empty does not initialize the memory, so each element starts out with an arbitrary (garbage) value.
I think a numpy array is just like an array in C or C++: when you make a numpy array, you allocate memory depending on your request (size and dtype).
So it is better to make the array after its size is determined.
Or you can try numpy.append
https://numpy.org/doc/stable/reference/generated/numpy.append.html
But I don't think it is the preferable way, because it keeps generating new arrays.
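A small sketch of the trade-off (the chunk data here is made up): repeatedly calling np.append copies the whole array every time, while collecting the pieces in a Python list and joining once at the end does a single allocation.

import numpy as np

chunks = [np.random.rand(np.random.randint(1, 5)) for _ in range(1000)]

# Growing with np.append: each call allocates and copies a new array
grown = np.array([])
for c in chunks:
    grown = np.append(grown, c)

# Collecting in a list and joining once at the end
joined = np.concatenate(chunks)

print(np.allclose(grown, joined))   # True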
From the Octave (free-MATLAB) docs, https://octave.org/doc/v6.3.0/Advanced-Indexing.html
In cases where a loop cannot be avoided, or a number of values must be combined to form a larger matrix, it is generally faster to set the size of the matrix first (pre-allocate storage), and then insert elements using indexing commands. For example, given a matrix a,
[nr, nc] = size (a);
x = zeros (nr, n * nc);
for i = 1:n
  x(:,(i-1)*nc+1:i*nc) = a;
endfor
is considerably faster than
x = a;
for i = 1:n-1
  x = [x, a];
endfor
because Octave does not have to repeatedly resize the intermediate result.
The same idea applies in numpy. While you can start with a (0,n) shaped array, and grow by concatenating (1,n) arrays, that is a lot slower than starting with a (m,n) array, and assigning values.
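A numpy translation of that Octave example, as a sketch with a small made-up array a:

import numpy as np

a = np.arange(6).reshape(2, 3)
n = 4
nr, nc = a.shape

# Pre-allocate the full result and fill it by slice assignment (fast)
x = np.zeros((nr, n * nc), dtype=a.dtype)
for i in range(n):
    x[:, i * nc:(i + 1) * nc] = a

# Grow by repeated concatenation (slow: every step copies everything so far)
y = a.copy()
for i in range(n - 1):
    y = np.concatenate([y, a], axis=1)

print(np.array_equal(x, y))   # True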
There's a deleted answer that illustrates how to build the array by appending to a Python list; that approach is highly recommended.
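A sketch of that list-append pattern: build the rows in a plain Python list (cheap to append to) and convert to an array once at the end.

import numpy as np

rows = []
for i in range(5):
    # Build each row however the loop requires
    rows.append(np.arange(3) * i)

result = np.array(rows)   # single conversion at the end
print(result.shape)       # (5, 3)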
I have a matrix thing that looks like this:
thing.shape
(8070829, 2)
and I want to scale all elements by some scalingfactor = np.iinfo(np.int16).max/thing.max() to normalize the values. Right now I am iterating over all elements which works, but is really slow:
for j, sample in enumerate(thing):
    thing[j] = [int(sample[0] * scalingfactor), int(sample[1] * scalingfactor)]
I thought I could do the following, but the results are not the same:
np.multiply(thing, scalingfactor)
Is there a more efficient way to normalize a matrix?
Use vectorized elementwise multiplication and then change the dtype (which truncates the fractional part, the same as flooring for non-negative values) -
(thing*scalingfactor).astype(int) # for thing as array type
Or use np.floor on the scaled version -
np.floor(thing*scalingfactor)
Using the posted code from the question: np.multiply(thing, scalingfactor) would work too, it just needs the additional floor-ing step, as suggested earlier.
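A minimal sketch tying it together, with a small made-up array standing in for the real (8070829, 2) data:

import numpy as np

thing = np.random.rand(1000, 2) * 5.0
scalingfactor = np.iinfo(np.int16).max / thing.max()

# Original element-wise loop
looped = thing.copy()
for j, sample in enumerate(looped):
    looped[j] = [int(sample[0] * scalingfactor), int(sample[1] * scalingfactor)]

# Vectorized: scale the whole array at once, then truncate via the dtype change
vectorized = (thing * scalingfactor).astype(int)

print(np.array_equal(looped.astype(int), vectorized))   # True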
In Python I have two three-dimensional arrays:
T with size (n,n,n)
U with size (k,n,n)
T and U can be seen as many 2-D arrays stacked one next to the other. I need to multiply all those matrices, i.e. I have to perform the following operation:
for i in range(n):
    H[:,:,i] = U[:,:,i].dot(T[:,:,i]).dot(U[:,:,i].T)
As n might be very big, I am wondering if this operation could somehow be sped up with numpy.
Carefully looking into the iterators and how they are involved in those dot product reductions, we could translate all of those into one np.einsum implementation like so -
H = np.einsum('ijk,jlk,mlk->imk',U,T,U)
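A quick sketch verifying the einsum against the original loop, using small made-up values of n and k:

import numpy as np

n, k = 6, 4
T = np.random.rand(n, n, n)
U = np.random.rand(k, n, n)

# Original loop over the last axis
H_loop = np.empty((k, k, n))
for i in range(n):
    H_loop[:, :, i] = U[:, :, i].dot(T[:, :, i]).dot(U[:, :, i].T)

# Single einsum over all slices at once
H_einsum = np.einsum('ijk,jlk,mlk->imk', U, T, U)

print(np.allclose(H_loop, H_einsum))   # True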
I have a weighted moving average function which smooths a curve by averaging 3*width values to the left and to the right of each point using a gaussian weighting mechanism. I am only worried about smoothing a region bounded by [start, end]. The following code works, but the problem is runtime with large arrays.
import numpy as np
def weighted_moving_average(x, y, start, end, width=3):
    def gaussian(x, a, m, s):
        return a * np.exp(-(x - m)**2 / (2 * s**2))

    cut = (x >= start - 3*width) * (x <= end + 3*width)
    x, y = x[cut], y[cut]
    x_avg = x[(x >= start) * (x <= end)]
    y_avg = np.zeros(len(x_avg))
    bin_vals = np.arange(-3*width, 3*width + 1)
    weights = gaussian(bin_vals, 1, 0, width)
    for i in range(len(x_avg)):
        y_vals = y[i:i + 6*width + 1]
        y_avg[i] = np.average(y_vals, weights=weights)
    return x_avg, y_avg
From my understanding, it is generally inefficient to loop through a NumPy array. I was wondering if anyone had an idea to replace the for loop with something more runtime efficient.
Thanks
That slicing and summing/averaging on a weighted window basically corresponds to 1D convolution with the kernel being flipped. Now, for 1D convolution, NumPy has a very efficient implementation in np.convolve and that could be used to get rid of the loop and give us y_avg. Thus, we would have a vectorized implementation like so -
y_sums = np.convolve(y,weights[::-1],'valid')
y_avg = np.true_divide(y_sums,weights.sum())
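Dropped into the original function, a vectorized version could look like the sketch below (the name weighted_moving_average_vec is mine; like the original loop, it assumes x is sampled on a uniform unit grid so the cut adds exactly 3*width points on each side):

import numpy as np

def weighted_moving_average_vec(x, y, start, end, width=3):
    def gaussian(x, a, m, s):
        return a * np.exp(-(x - m)**2 / (2 * s**2))

    cut = (x >= start - 3*width) & (x <= end + 3*width)
    x, y = x[cut], y[cut]
    x_avg = x[(x >= start) & (x <= end)]
    weights = gaussian(np.arange(-3*width, 3*width + 1), 1, 0, width)

    # 'valid' convolution with the flipped kernel reproduces the windowed
    # weighted sums from the loop in a single call
    y_sums = np.convolve(y, weights[::-1], 'valid')
    y_avg = y_sums[:len(x_avg)] / weights.sum()
    return x_avg, y_avg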
The main concern with looping over a large array is that the memory allocation for the large array can be expensive, and the whole thing has to be initialized before the loop can start.
In this particular case I'd go with what Divakar is saying.
In general, if you find yourself in a circumstance where you really need to iterate over a large collection, use iterators instead of arrays. For a relatively simple case like this in Python 2, just replace range with xrange (in Python 3, range is already a lazy sequence); see https://docs.python.org/2/library/functions.html#xrange.
Is there a way to use numpy.linalg.det or numpy.linalg.inv on an nx3x3 array (a line in a multiband image), for example? Right now I am doing something like:
det = numpy.array([numpy.linalg.det(i) for i in X])
but surely there is a more efficient way. Of course, I could use map:
det = numpy.array(list(map(numpy.linalg.det, X)))  # list() is needed on Python 3, where map returns an iterator
Any other more direct way?
I'm pretty sure there is no substantially more efficient way than what you have. You can save some memory by first creating an empty array for the results and writing all results directly to that array:
res = numpy.empty_like(X)
for i, A in enumerate(X):
    res[i] = numpy.linalg.inv(A)
This won't be any faster, though -- it will only use less memory.
a "normal" determinant is only defined for a matrix (dimension=2), so if that's what you want i don't see another way.
if you really want to compute the determinant of a cube then you could try to implement one of the ways described here:
http://en.wikipedia.org/wiki/Hyperdeterminant
notice that it is not necessarily the same value as the one you're currently computing.
New answer to an old question: Since version 1.8.0, numpy supports evaluating a batch of 2D matrices. For a batch of MxM matrices, the input and output now look like:
linalg.det(a)
    Compute the determinant of an array.
    Parameters:
        a : (…, M, M) array_like
            Input array to compute determinants for.
    Returns:
        det : (…) array_like
            Determinant of a.
Note the ellipsis. There can be multiple "batch dimensions", where for example you can evaluate determinants on a meshgrid.
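Applied to the original nx3x3 case, both functions accept the stacked array directly; a sketch with a small made-up n:

import numpy as np

n = 5
X = np.random.rand(n, 3, 3)

# One call handles the whole batch (numpy >= 1.8.0)
dets = np.linalg.det(X)   # shape (n,)
invs = np.linalg.inv(X)   # shape (n, 3, 3)

# Same values as looping over the 2D slices
print(np.allclose(dets, [np.linalg.det(A) for A in X]))   # True
print(np.allclose(invs, [np.linalg.inv(A) for A in X]))   # True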
https://numpy.org/doc/stable/reference/generated/numpy.linalg.det.html
https://numpy.org/doc/stable/reference/generated/numpy.linalg.inv.html