First of all, I apologize for being an absolute beginner in both Python and NumPy. Please forgive my ignorance.
I have a 4D cube of pressure measurements with dimensions (number of samples, time, y-axis, x-axis); that is, for each sample I have a 3D spatio-temporal profile. For each sample, I need to collect the pressure readings of this 3D cube (time, y-axis, x-axis) into an array, but only at coordinates that satisfy a specific condition. The size of this array varies with the condition, so I have to build it with append(). However, with, say, 1000 samples, I have to search through more than a million coordinates per sample with for-loops, so the code I have written is very inefficient and takes a long time to run (more than several hours). Can you please help me write it more efficiently?
Below is the code I've tried to solve the problem. It works nicely and gives the expected result, but it is extremely slow.
import numpy as np

# Number of sample points along the x, y and t axes
Nx = 101
Ny = 101
Nt = 100
n_train = 1000

target_array = []
for i_train in range(n_train):
    for k in range(Nt):
        for j in range(Ny):
            for i in range(Nx):
                if np.round(np.sqrt((i - np.round(Nx/2))**2 + (j - np.round(Ny/2))**2)) == 2*k:
                    target_array.append(Pressure[i_train, k, j, i])
Since the condition involves the indices and not the values of your 4D array, you can vectorize it using numpy.meshgrid.
Here pp is your 4D array:
kv, jv, iv = np.meshgrid(np.arange(pp.shape[1]), np.arange(pp.shape[2]),
                         np.arange(pp.shape[3]), indexing='ij')
selecting = np.round(np.sqrt((iv - np.round(pp.shape[3]/2))**2 + (jv - np.round(pp.shape[2]/2))**2)) == 2*kv
target = pp[:, selecting]
Provided that I've understood correctly how your 4D array is organized:
the arrays created by meshgrid hold the indices used to select pp elements along its three trailing dimensions t, y, x; indexing='ij' keeps them in the same axis order as pp, so kv varies along the time axis and jv, iv along the spatial ones.
selecting is a boolean array created by replicating your equation, to check which coordinates satisfy the condition.
target is a selection of pp, taking, for each element along axis 0, all the elements that satisfy the condition (i.e. where selecting is True) on the other three axes.
Note that target is a 2D array; to get a 1D array, use target.flatten().
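As a sanity check, here is a small self-contained sketch (with made-up sizes and random data) that compares the vectorized selection against the original loop:

import numpy as np

rng = np.random.default_rng(0)
n_train, Nt, Ny, Nx = 4, 10, 11, 11
pp = rng.random((n_train, Nt, Ny, Nx))

# Vectorized selection; indexing='ij' keeps the (t, y, x) axis order of pp.
kv, jv, iv = np.meshgrid(np.arange(Nt), np.arange(Ny), np.arange(Nx), indexing='ij')
selecting = np.round(np.sqrt((iv - np.round(Nx/2))**2 + (jv - np.round(Ny/2))**2)) == 2*kv
target = pp[:, selecting].flatten()

# Reference loop, as in the question.
expected = []
for i_train in range(n_train):
    for k in range(Nt):
        for j in range(Ny):
            for i in range(Nx):
                if np.round(np.sqrt((i - np.round(Nx/2))**2 + (j - np.round(Ny/2))**2)) == 2*k:
                    expected.append(pp[i_train, k, j, i])

assert np.array_equal(target, np.array(expected))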
interp1d works excellently for the individual datasets that I have, however I have in excess of 5 million datasets that I need to have interpolated.
I need the interpolation to be cubic and there should be one interpolation per subset.
Right now I am able to do this with a for loop; however, for 5 million sets to be interpolated, this takes quite some time (15 minutes):
from scipy.interpolate import interp1d

interpolants = []
for i in range(5000000):
    interpolants.append(interp1d(xArray[i], interpData[i], kind='cubic'))
What I'd like to do would maybe look something like this:
interpolants = interp1d(xArray, interpData, kind='cubic')
This however fails, with the error:
ValueError: x and y arrays must be equal in length along interpolation axis.
Both my x array (xArray) and my y array (interpData) have identical dimensions...
I could parallelize the for loop, but that would only give me a small increase in speed, I'd greatly prefer to vectorize the operation.
I have also been trying to do something similar over the past few days. I finally managed to do it with np.vectorize, using function signatures. Try the code snippet below:
import numpy as np
from scipy import interpolate

fn_vectorized = np.vectorize(interpolate.interp1d,
                             signature='(n),(n)->()')
interp_fn_array = fn_vectorized(x[np.newaxis, :, :], y)
x and y are arrays of shape (m, n). The objective was to generate an array of interpolation functions, one for row i of x and row i of y. The array interp_fn_array contains the interpolation functions (its shape is (1, m)).
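For illustration, a minimal runnable sketch with made-up data; the lambda wrapper (my addition, not in the snippet above) is one way to pass kind='cubic' through np.vectorize:

import numpy as np
from scipy import interpolate

rng = np.random.default_rng(0)
m, n = 3, 10
x = np.sort(rng.random((m, n)), axis=1)  # each row sorted along the interpolation axis
y = rng.random((m, n))

fn_vectorized = np.vectorize(lambda xi, yi: interpolate.interp1d(xi, yi, kind='cubic'),
                             signature='(n),(n)->()')
interp_fn_array = fn_vectorized(x[np.newaxis, :, :], y)  # object array of shape (1, m)

# Evaluate the interpolant for row 0 at a point inside its x range.
print(interp_fn_array[0, 0](x[0, n // 2]))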
I have a numpy array of shape n x d. Each row represents a point in R^d. I want to filter this array to only rows within a given distance on each axis of a single point--a d-dimensional hypercube, as it were.
In 1 dimension, this could be:
array[(array > lmin) & (array < lmax)]
where lmax and lmin are the max and min relevant to the point ± the distance. But I want to do this in d dimensions. d is not fixed, so hard-coding it out doesn't work. I checked whether the above works when lmax and lmin are d-length vectors, but it just flattens the array.
I know I could plug the matrix and the point into a distance calculator like scipy.spatial.distance and get some sort of distance metric, but that's likely slower than some simple filtering (if it exists) would be.
The fact that I have to do this calculation potentially millions of times means that, ideally, I'd like a fast solution.
You can try this.
def test(array):
    large = array > lmin
    small = array < lmax
    return array[[i for i in range(array.shape[0])
                  if np.all(large[i]) and np.all(small[i])]]
For every i, array[i] is a vector whose elements should all lie in the range [lmin, lmax], and this check can be vectorized, as sketched below.
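For completeness, a sketch of the fully vectorized version using a boolean mask (assuming lmin and lmax are scalars or d-length vectors, which broadcast per column):

import numpy as np

def filter_hypercube(array, lmin, lmax):
    # Keep rows whose coordinates all lie strictly between lmin and lmax.
    mask = np.all((array > lmin) & (array < lmax), axis=1)
    return array[mask]

# Example: keep points of R^3 within 0.5 of the origin along every axis.
pts = np.random.uniform(-1.0, 1.0, size=(1000, 3))
inside = filter_hypercube(pts, -0.5, 0.5)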
I have an np.ndarray with ~3000 trajectories. Each trajectory has x, y and z coordinates and a different length, between 150 and 250 (points in time). Now I want to remove the z coordinate from all of these trajectories.
So arr.shape gives me (3000,),(3000 trajectories) and (for example) arr[0].shape yields (3,178) (three axis of coordinates and 178 values).
I have found multiple explanations for removing lines in 2D-arrays and I found np.delete(arr[0], 2, axis=0) working for me. However, I don't just want to delete the z coordinates for the first trajectory; I want to do this for every trajectory.
If I want to do this with a loop for arr[i] I would need to know the exact length of every trajectory (It doesn't suit my purpose to just create the array with the length of the longest and fill it up with zeroes).
TL;DR: So how do I get from a ndarray with [amountOfTrajectories][3][value] to [amountOfTrajectories][2][value]?
The purpose is to use these trajectories as labels for a neural net that creates trajectories. So I guess it's an entirely new question, but is the shape I'm asking for suitable for use as labels in TensorFlow?
Also: what would have been a better title, and what terms could I have used to find results for this on Google? I just started with Python and I'm afraid I'm missing some keywords here...
If this comes from loadmat, the source is probably a MATLAB workspace with a cell, which contains these matrices.
loadmat has evidently created a 1d array of object dtype (the equivalent of a cell, with squeeze_me on).
A 1d object array is similar to a Python list: it contains pointers to arrays elsewhere in memory. Most operations on such an array use Python-level iteration. Iterating on the equivalent list (arr.tolist()) is usually faster.
alist = [a[:2,:] for a in arr]
should give you a list of arrays, each of shape (2, n) (with n varying). This makes new arrays, but then so does np.delete.
You can't operate on all arrays in the 1d array with one operation. It has to be iterative.
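A minimal sketch of that iteration on a mock object array (the trajectory lengths here are made up):

import numpy as np

# Build a 1d object array of (3, n) trajectories with varying lengths n.
arr = np.empty(3, dtype=object)
for idx, n in enumerate([178, 201, 150]):
    arr[idx] = np.random.rand(3, n)

# Drop the z row of every trajectory.
alist = [a[:2, :] for a in arr]

# Repack into a 1d object array if the ndarray container is still needed.
arr2 = np.empty(len(alist), dtype=object)
arr2[:] = alist
print(arr2[0].shape)  # (2, 178)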
I'm currently working on a 3d array called X of size (100, 5, 1). I want to assign randomly created 2d arrays called s, each of dimension (5, 1), to X. My code is below.
for i in range(100):
    s = np.random.uniform(-1, 2, 5)
    for j in range(5):
        X[:, j, :] = s[j]
I got 100 (5,1) arrays and they're all the same. I can see why I have this result, but I can't find the solution for this.
I need to have 100 unique (5,1) arrays in X.
You are indexing the entire first dimension, and thus broadcasting a single 5 x 1 array across it. This is why you are seeing copies: X only retains the last randomly generated 5 x 1 array from the loop, repeated over the entire first dimension. To fix this, simply change the indexing from : to i:
X[i,j,:] = s[j]
However, this seems like a code smell. I would recommend allocating the exact size you need in one go by using the size parameter of numpy.random.uniform:
s = np.random.uniform(low=-1, high=2, size=(100, 5, 1))
Therefore, do not loop; just use the above statement once. This makes sense because each 5 x 1 array you are creating is sampled from the same probability distribution, and from an efficiency standpoint it is better to allocate the desired size once.
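Putting it together, a quick sketch showing that the one-shot allocation gives 100 distinct blocks:

import numpy as np

# One-shot allocation: every X[i] is an independent (5, 1) draw.
X = np.random.uniform(low=-1, high=2, size=(100, 5, 1))
print(X.shape)                      # (100, 5, 1)
print(np.array_equal(X[0], X[1]))   # False: the blocks are no longer copies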
I have to do an assignment called "The 2D Grid"; perhaps you have heard of it?
The first task is to make a 2D lattice, and the question goes:
Come up with a way to define a square 2D list (or Numpy array) containing randomly distributed values of -1 or 1.
I have used this:
np.random.randint(1, size = (6,6))
but how can I get values of -1 or 1?
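For what it's worth, here is one common way to get a lattice of -1 and 1 values (np.random.choice is my suggestion, not from the snippet above):

import numpy as np

# Each lattice site independently takes the value -1 or 1.
grid = np.random.choice([-1, 1], size=(6, 6))

# Equivalent trick with randint: map 0/1 draws onto -1/1.
grid2 = 2 * np.random.randint(0, 2, size=(6, 6)) - 1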