I have a 2-d array in which I want to detect all locally maximal indices. Given an index (i, j), its maximum gradient is the largest absolute change from any of its 8 neighboring values:
Index: (i, j)
Neighbors:
(i-1,j+1) (i,j+1) (i+1,j+1)
(i-1,j) [index] (i+1,j)
(i-1,j-1) (i,j-1) (i+1,j-1)
Neighbor angles:
315 0 45
270 [index] 90
225 180 135
MaxGradient(i,j) = Max over all 8 neighbors of |Val(i,j) - Val(neighbor)|
The index is said to be locally maximal if its MaxGradient is at least as large as any of its neighbors' own MaxGradients.
The output of the algorithm should be a 2-d array of tuples, or a 3-d array, where for each index in the original array, the output array contains a value indicating if that index was locally maximal and, if so, the angle of the gradient.
My initial implementation simply passed over the array twice, once to calculate the max gradients (stored in a temporary array) and then once over the temp array to determine the locally maximal indices. Each time, I did this via for loops, looking at each index individually.
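For reference, a sketch of that two-pass approach (my reconstruction of what is described above; angle tracking omitted):

import numpy as np

def naive_local_maxima(a):
    n, m = a.shape
    offsets = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if di or dj]
    # Pass 1: MaxGradient of every index
    g = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for di, dj in offsets:
                if 0 <= i + di < n and 0 <= j + dj < m:
                    g[i, j] = max(g[i, j], abs(a[i, j] - a[i + di, j + dj]))
    # Pass 2: locally maximal if MaxGradient >= every neighbour's MaxGradient
    result = np.zeros((n, m), dtype=bool)
    for i in range(n):
        for j in range(m):
            result[i, j] = all(g[i, j] >= g[i + di, j + dj]
                               for di, dj in offsets
                               if 0 <= i + di < n and 0 <= j + dj < m)
    return g, result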
Is there some more efficient way to do this in numpy?
Consider these 8 relative indexes:
X1 X2 X3
X4 X X5
X6 X7 X8
You can compute for every pixel X the differences D1=Val(X)-Val(X1), D2=Val(X)-Val(X2), D3=Val(X)-Val(X3), D4=Val(X)-Val(X4). You don't need to compute the other differences because they are mirrors of the first four.
To compute the differences, you can pad the image with a row and a column of zeros and subtract.
As Cyborg pointed out, there are only four differences which need to be computed to complete your calculation (note that there really should be a factor of 1/sqrt(2) for the diagonal and antidiagonal calculations if this really is a spatial gradient calculation on a uniform grid). If I have understood your question, the implementation with numpy could be something like this:
import numpy as np

A=np.random.random(100).reshape(10,10)
# Padded copy of A (edges and corners replicated)
B=np.empty((12,12))
B[1:-1,1:-1]=A
B[0,1:-1]=A[0,:]
B[-1,1:-1]=A[-1,:]
B[1:-1,0]=A[:,0]
B[1:-1,-1]=A[:,-1]
B[0,0]=A[0,0]
B[-1,-1]=A[-1,-1]
B[-1,0]=A[-1,0]
B[0,-1]=A[0,-1]
# Compute 4 absolute differences
D1=np.abs(B[1:,1:-1]-B[:-1,1:-1]) # first dimension
D2=np.abs(B[1:-1,1:]-B[1:-1,:-1]) # second dimension
D3=np.abs(B[1:,1:]-B[:-1,:-1]) # Diagonal
D4=np.abs(B[1:,:-1]-B[:-1,1:]) # Antidiagonal
# Compute maxima in each direction
M1=np.maximum(D1[1:,:],D1[:-1,:])
M2=np.maximum(D2[:,1:],D2[:,:-1])
M3=np.maximum(D3[1:,1:],D3[:-1,:-1])
M4=np.maximum(D4[1:,:-1],D4[:-1,1:])
# Compute local maximum for each entry
M=np.max(np.dstack([M1,M2,M3,M4]),axis=2)
That will leave you with M holding, for each entry of the input A, the maximum absolute difference over all 8 of its neighbours (the MaxGradient). A similar shifting idea can be used for labelling the locally maximal values, culminating in something like
T=np.where(M>=np.max(np.dstack([Ma,Mb,Mc,Md,Me,Mf,Mg,Mh]),axis=2))
which would give you an array containing the coordinates of the locally maximal values in M. Here Ma through Mh stand for the eight shifted copies of M, i.e. each neighbour's MaxGradient; >= rather than == is needed because a strict local maximum never equals the plain maximum of its neighbours.
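One way to realise those eight shifted copies (a sketch completing the idea above; padding with -inf is my choice, so that border entries only compete with real neighbours):

P = np.full((M.shape[0]+2, M.shape[1]+2), -np.inf)
P[1:-1,1:-1] = M
neighbours = np.dstack([P[:-2,:-2],  P[:-2,1:-1], P[:-2,2:],
                        P[1:-1,:-2],              P[1:-1,2:],
                        P[2:,:-2],   P[2:,1:-1],  P[2:,2:]])
local_max = M >= neighbours.max(axis=2)  # True where M is locally maximal
T = np.argwhere(local_max)               # coordinates of the local maxima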
I have a chunk of code that runs 1000 times and produces 1000 covariance matrices. How do I calculate the average value for each element in the matrices and then print that average matrix?
import numpy as np
from scipy import optimize

# y2, bands and fluxmeasureMW come from earlier in the script
params_avg1=[]
pcov1avg=[]
i=1000
for n in range(i):
    y3=y2+np.random.normal(loc=0.0,scale=.1*y2)
    popt1,pcov1=optimize.curve_fit(fluxmeasureMW,bands,y3)
    params_avg1.append(popt1)
    pcov1avg.append(pcov1) # builds a list of 1000 3x3 covariance matrices
Since you have already appended all your matrices to a single list, transform it into a 3D numpy array and then average over the first axis:
np.array(pcov1avg).mean(axis=0) # or equivalently np.mean(pcov1avg, 0)
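A quick check with dummy data (three hypothetical 3x3 matrices, just to show the reduction):

import numpy as np

pcov1avg = [np.eye(3) * k for k in range(1, 4)]  # dummy "covariance" matrices
avg = np.array(pcov1avg).mean(axis=0)            # stack to (3, 3, 3), average to (3, 3)
print(avg)                                       # diagonal entries are (1 + 2 + 3) / 3 = 2.0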
And just a bit about naming: i usually denotes the current iteration index rather than the end value, which is usually written n (your code has them swapped).
First of all, I apologize for being an absolute beginner in both python and numpy. Please forgive my ignorance.
I have a 4D cube of pressure measurements where the dimensions are (number of samples, time, y-axis, x-axis); that is, for each sample I have a 3D spatio-temporal profile. For each sample, I need to collect the pressure readings of this 3D cube (time, y-axis, x-axis) into an array, but only where the coordinates satisfy a specific condition. Upon varying the condition, the size of this array varies too, so I have to use append() to build it. However, for, say, 1000 samples, I have to search through more than a million coordinates with for-loops for each sample, so the code I have written is very inefficient and takes several hours to run. Can you please help me write it more efficiently?
Below is the code I've tried. It works and gives the expected result, but it is extremely slow.
import numpy as np
# Number of sample points in x, y and t-axis
Nx = 101
Ny = 101
Nt = 100
n_train = 1000
target_array = []
# Pressure has shape (n_train, Nt, Ny, Nx)
for i_train in range(n_train):
    for k in range(Nt):
        for j in range(Ny):
            for i in range(Nx):
                if np.round(np.sqrt((i-np.round(Nx/2))**2+(j-np.round(Ny/2))**2)) == 2*k:
                    target_array.append(Pressure[i_train,k,j,i])
Since the condition involves the indexes and not the values of your 4D array, you can vectorize it using numpy.meshgrid.
Here pp is your 4D array:
kv, jv, iv = np.meshgrid(np.arange(pp.shape[1]), np.arange(pp.shape[2]), np.arange(pp.shape[3]), indexing='ij')
selecting = np.round(np.sqrt((iv - np.round(pp.shape[3]/2))**2 + (jv - np.round(pp.shape[2]/2))**2)) == 2*kv
target = pp[:,selecting]
Provided that I've understood correctly how your 4D array is organized:
the arrays created by meshgrid hold the t, y, x indexes of every element of pp along its last 3 dimensions; indexing='ij' keeps them in the same order as pp's axes, so kv, jv and iv all have shape (Nt, Ny, Nx).
selecting is a boolean array created by replicating your equation, to check which coordinates satisfy the condition.
target is a selection of pp, taking all elements on axis 0 which satisfy the condition (i.e. where selecting is True) on the other 3 axes.
Note that target is a 2D array; to get a 1D array, use target.flatten().
I have a numpy array of shape n x d. Each row represents a point in R^d. I want to filter this array to keep only the rows within a given distance of a single point along each axis (a d-dimensional hypercube, as it were).
In 1 dimension, this could be:
array[np.where((array < lmax) & (array > lmin))]
where lmax and lmin are the bounds given by the point +/- the distance. But I want to do this in d dimensions. d is not fixed, so hard-coding it doesn't work. I checked whether the above works when lmax and lmin are d-length vectors, but it just flattens the array.
I know I could plug the matrix and the point into a distance calculator like scipy.spatial.distance and get some sort of distance metric, but that's likely slower than some simple filtering (if it exists) would be.
The fact that I have to do this calculation potentially millions of times means that, ideally, I'd like a fast solution.
You can try this.
def test(array):
    large = array > lmin
    small = array < lmax
    return array[[i for i in range(array.shape[0])
                  if np.all(large[i]) and np.all(small[i])]]
For every i, array[i] is a row vector, and it should be kept only when all of its elements lie between lmin and lmax. This row-wise check can itself be vectorized, as sketched below.
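A sketch of that vectorized version (the function and argument names are mine):

import numpy as np

def filter_hypercube(array, point, distance):
    # lmin and lmax are d-length vectors, broadcast against every row
    lmin = point - distance
    lmax = point + distance
    mask = np.all((array > lmin) & (array < lmax), axis=1)  # one bool per row
    return array[mask]

# e.g. filter_hypercube(pts, np.zeros(3), 0.5) keeps the rows of pts
# inside the open cube (-0.5, 0.5)^3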
I would like to apply a radial average at the end of a keras pipeline.
At the second to last step, I have an image of size n x n. I then want to map this n x n image to a 1 x n/2 vector, where vector[x] = mean(image(radialPosition = x)). That is, I want to average all points at distance x from the center of the image and set this as output[x]. We can assume that n is odd, so the center is a single point.
I have considered looping over all radii and selecting the desired indices, as well as a dot product between the image and multiple "averaging" matrices, but neither of these seems computationally efficient.
Is there a better way of doing this?
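Not a keras-specific answer, but here is a sketch of the binning idea in plain numpy (np.bincount does the per-radius averaging; porting it into a model would require the equivalent tensor segment ops):

import numpy as np

def radial_average(image):
    n = image.shape[0]  # n is odd, so the centre is a single pixel
    c = n // 2
    y, x = np.ogrid[:n, :n]
    r = np.round(np.sqrt((x - c)**2 + (y - c)**2)).astype(int)
    # Sum the pixel values in each integer-radius bin, divide by bin sizes
    sums = np.bincount(r.ravel(), weights=image.ravel())
    counts = np.bincount(r.ravel())
    return (sums / counts)[:n // 2]  # radii 0 .. n/2 - 1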
I have a 2D array of values and I'm trying to analyze spatial correlations. To calculate a 2D autocorrelation like Moran's I in python, pysal provides an implementation.
1) How do I transform my 2D data into a 1D array expected by pysal?
2) How do I construct a weight array w that is based on distance (what does the input array of points mean in the Kernel distance function?)?
1) The weights array should be flattened in the same way as you flatten the data array. The order doesn't matter, as long as the indices agree.
2) The input array can be spatial coordinates (e.g. x and y, or lat and long). By far the easiest choice is the indices of your original matrix (e.g. 1 to n by 1 to m).
In the end, your data will be records with 3 fields each: x, y and value. Your weights will be records with 5 fields each: x_from, y_from, x_to, y_to and weight.
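Putting both parts together, a sketch assuming the current split of pysal into libpysal (weights) and esda (Moran's I), with a made-up 2D array called data:

import numpy as np
from libpysal.weights import Kernel
from esda.moran import Moran

data = np.random.random((20, 30))            # stand-in for your 2D array
n, m = data.shape
# One (row, col) coordinate per cell, in the same C order as data.flatten()
coords = np.indices((n, m)).reshape(2, -1).T
w = Kernel(coords)                           # distance-based kernel weights
mi = Moran(data.flatten(), w)
print(mi.I, mi.p_sim)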