Can I vectorize scipy.interpolate.interp1d - python

interp1d works excellently for the individual datasets that I have; however, I have in excess of 5 million datasets that need to be interpolated.
I need the interpolation to be cubic and there should be one interpolation per subset.
Right now I am able to do this with a for loop; however, for 5 million sets this takes quite some time (15 minutes):
interpolants = []
for i in range(5000000):
    interpolants.append(interp1d(xArray[i], interpData[i], kind='cubic'))
What I'd like to do would maybe look something like this:
interpolants = interp1d(xArray, interpData, kind='cubic')
This however fails, with the error:
ValueError: x and y arrays must be equal in length along interpolation axis.
Both my x array (xArray) and my y array (interpData) have identical dimensions...
I could parallelize the for loop, but that would only give me a small increase in speed; I'd greatly prefer to vectorize the operation.

I have also been trying to do something similar over the past few days. I finally managed to do it with np.vectorize, using function signatures. Try with the code snippet below:
fn_vectorized = np.vectorize(interpolate.interp1d,
                             signature='(n),(n)->()')
interp_fn_array = fn_vectorized(x[np.newaxis, :, :], y)
x and y are arrays of shape (m x n). The objective was to generate an array of interpolation functions, one for row i of x and row i of y. The array interp_fn_array contains the interpolation functions (its shape is (1 x m)).
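For reference, a runnable sketch of this approach (the array shapes and the cubic wrapper are illustrative assumptions, not part of the original answer):
import numpy as np
from scipy import interpolate

m, n = 1000, 10  # hypothetical number of datasets and points per dataset
x = np.sort(np.random.rand(m, n), axis=1)  # each row sorted, as interp1d expects
y = np.random.rand(m, n)

# wrap interp1d in a lambda so the cubic kind is forwarded through np.vectorize
make_cubic = lambda xi, yi: interpolate.interp1d(xi, yi, kind='cubic')
fn_vectorized = np.vectorize(make_cubic, signature='(n),(n)->()')
interp_fn_array = fn_vectorized(x, y)  # shape (m,), one interpolant per row
y0 = interp_fn_array[0](x[0, :5])      # evaluate the first interpolant
Keep in mind that np.vectorize still constructs one interp1d object per row internally, so this is mainly a notational convenience rather than a true speed-up.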

Related

Interpolate rows simultaneously in Python

I am trying to vectorize my code and have reached a roadblock. I have :
nxd array of x values [[x1],[...],[xn]] (where each row [x1] has many points [x11, ..., x1d])
nxd array of y values [[y1],[y2],[y3]] (where each row [y1] has many points [y11, ..., y1d])
nx1 array of x' values [[x'1],[...],[x'n]] that I would like to interpolate a y value for based on the corresponding row of x and y
The only thing I can think to use is a list comprehension like [np.interp(x'[i,:], x[i,:], y[i,:]) for i in range(n)]. I'd like a faster vectorized option if one exists. Thanks for the help!
This is hardly an answer, but I guess it may still be useful for someone (if not, feel free to delete it).
I think I misunderstood your question at first. What you have is a collection of n different one-dimensional datasets or functions y(x) that you want to interpolate (correct me if I'm wrong).
As such, it turns out that doing this with multidimensional interpolation is a terrible approach.
The idea is to add a new dimension to the data so your datasets are mapped into one single dataset in which this new dimension is what distinguishes between the different xi, where i=1,2,...,n. In other words, you assign a value in this new dimension, let's say z, to every row of x; this way, the different functions are correctly mapped to this higher-dimensional space.
However, this approach is slower than the np.interp list comprehension solution, by at least one order of magnitude on my computer. I guess it has to do with two-dimensional interpolation algorithms being at best of order O(n log n) (this is a guess); in this sense, it would seem more efficient to perform multiple interpolations on separate datasets rather than one big interpolation.
Anyways, the approach is shown in the following snippet:
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def vectorized_interpolation(x, y, xq):
    """
    Vectorized option using LinearNDInterpolator
    """
    # Dummy new data points in added dimension
    z = np.arange(x.shape[0])
    # We must repeat every z value for every row of x
    interpolant = LinearNDInterpolator(list(zip(x.ravel(), np.repeat(z, x.shape[1]))), y.ravel())
    return interpolant(xq, z)

def non_vectorized_interpolation(x, y, xq):
    """
    Your non-vectorized solution
    """
    return np.array([np.interp(xq[i], x[i], y[i]) for i in range(x.shape[0])])

if __name__ == "__main__":
    n, d = 100, 500
    x = np.linspace(0, 2*np.pi, n*d).reshape((n, d))
    y = np.sin(x)
    xq = np.linspace(0, 2*np.pi, n)
    yq1 = vectorized_interpolation(x, y, xq)
    yq2 = non_vectorized_interpolation(x, y, xq)
The only advantage of the vectorized solution is that LinearNDInterpolator (and some of the other scipy.interpolate functions) explicitly calculates the interpolant, so you can reuse it if you plan on interpolating the same datasets several times and avoid repetitive calculations. Another thing you could try is using multiprocessing if you have several cores in your machine, but that is not vectorizing, which is what you asked for. Sorry I can't be of more help.
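As a rough illustration of the multiprocessing route mentioned above (a sketch, not vectorization; the data sizes here are arbitrary):
import numpy as np
from multiprocessing import Pool

def interp_row(args):
    # unpack one (query point, x row, y row) triple and interpolate it independently
    xq_i, x_i, y_i = args
    return np.interp(xq_i, x_i, y_i)

if __name__ == "__main__":
    n, d = 100, 500
    x = np.linspace(0, 2*np.pi, n*d).reshape((n, d))
    y = np.sin(x)
    xq = np.linspace(0, 2*np.pi, n)
    with Pool() as pool:
        yq = np.array(pool.map(interp_row, zip(xq, x, y)))
For work units this small, the interprocess overhead can easily outweigh the gain, so it is worth profiling before committing to it.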

Fastest way to generate and sum arrays

I am generating a series of Gaussian arrays given a x vector of length (1400), and arrays for the sigma, center, amplitude (amp), all with length (100). I thought the best way to speed this up would be to use numpy and list comprehension:
g = np.sum([(amp[i]*np.exp(-0.5*(x - (center[i]))**2/(sigma[i])**2)) for i in range(len(center))],axis=0)
Each row is a Gaussian along the vector x, and I then sum the columns into a single array with the same length as x.
But this doesn't seem to speed things up at all. I think there is a faster way to do this while avoiding the for loop but I can't quite figure out how.
You should use vectorized computation instead of a comprehension so the loops are all performed at C speed.
In order to do so you have to reshape x to be a column vector. For example, you could do x = x.reshape((1400, 1)).
Then you can operate directly on the arrays, like this:
v = amp*np.exp(-0.5*(x - center)**2/sigma**2)
You then obtain an array of shape (1400, 100), which you can sum down to a vector with np.sum(v, axis=1).
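Put together, a small self-contained sketch of that broadcasting approach (the sizes and random parameters are just for illustration):
import numpy as np

x = np.linspace(-10, 10, 1400).reshape((1400, 1))  # column vector
amp = np.random.rand(100)
center = np.random.uniform(-10, 10, 100)
sigma = np.random.uniform(0.5, 2.0, 100)

v = amp * np.exp(-0.5 * (x - center)**2 / sigma**2)  # shape (1400, 100)
g = np.sum(v, axis=1)                                # one summed value per x point, shape (1400,)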
You should try to vectorize all the operations. IMHO the most efficient approach is to first convert your input data to numpy arrays (if they were plain Python lists) and then let numpy process the computations:
np_amp = np.array(amp)
np_center = np.array(center)
np_sigma = np.array(sigma)
# reshape x to a column so it broadcasts against the length-100 parameter arrays
g = np.sum(np_amp*np.exp(-0.5*(x[:, np.newaxis] - np_center)**2/np_sigma**2), axis=1)

apply_along_axis with 2 arguments varying for each 1d slice

I'm trying to optimize code that currently uses nested for loops & calls scipy's functions.
Basically, I have a first function that calls scipy's find_peaks() method, and then I want to interpolate those data points (the peaks) to find a function that describes them. For example, I first find the peaks. The data is basically a 2D array with 25*30 rows (axis 0) of 1000 elements each (axis 1).
arr = np.random.rand(25,30,1000)
arr = arr.reshape((arr.shape[0]*arr.shape[1], arr.shape[2]))
# we have a 25*30 set of 1000 pts each. find peaks for that
peaks = np.apply_along_axis(find_peaks, 1, arr, height=0,)
Find peaks returns something of the form:
peak_indices = peaks[:,0]
peak_values = peaks[:,1]["peak_heights"]
So far so good. That's essentially the (x,y) coordinates of the points I want to interpolate.
Now, I want to interpolate those couples of indices-heights values to obtain some function, using scipy.interpolate.interpolate.interp1d(...). Interp1d's signature is of the form:
interp1d(x, y, kind='linear', axis=-1, copy=True, bounds_error=None, fill_value=nan, assume_sorted=False)
Where x would be my peak_indices, and y my peak_values.
The question:
How can I pass this function 2 arguments that vary with each slice? In other words, my first use of apply_along_axis only used a single slice-dependent argument (the 1000 points for each of my 25*30 elements of axis 0). Here, however, I need to pass the function TWO arguments - the peak_indices & the peak_values. Can any pythonista think of a clever way to unpack those arguments AFTER I pass them to apply_along_axis as tuples or something? Kind of:
arr=*[peak_indices, peak_values]
I cannot really edit the interp1d function itself, which would be my solution if I was going to call my own function...
EDIT: part of the benefit of using apply_along_axis is that I should get performance improvements compared to nested loops, since numpy should be able to bulk-process those calculations. Ideally any solution should use a notation that still allows those optimisations.
Where do you get the idea that apply_along_axis is a performance tool? Does it actually work faster in this case?
arr = np.random.rand(25,30,1000)
arr = arr.reshape((arr.shape[0]*arr.shape[1], arr.shape[2]))
# we have a 25*30 set of 1000 pts each. find peaks for that
peaks = np.apply_along_axis(find_peaks, 1, arr, height=0,)
compared to:
peaks = np.array([find_peaks(x, height=0) for x in arr])
That is a simple iteration over the 25*30 set of 1d arrays.
apply_along_axis does a test calculation to determine the return shape and dtype. It constructs a result array, and then iterates on all axes except 1, and calls the function with each 1d array. There's no compiling, or "bulk processing" (whatever that is). It just hides a loop in a function call.
It does make iteration over 2 axes of a 3d array prettier, but not faster:
You could have used it on the original 3d array (arr_3d below, i.e. arr before reshaping) to get a (25,30,2) result:
peaks = np.apply_along_axis(find_peaks, 2, arr_3d, height=0,)
I'm guessing find_peaks returns a 2 element tuple of values, and peaks will then be an object dtype array.
Since apply_along_axis does not have any performance advantages, I don't see the point in trying to use it with a more complex array. It's handy when you have a 3d array and a function that takes a 1d input, but beyond that...?
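For the original goal of passing two per-slice arguments, a plain loop over the paired results works just as well as apply_along_axis would; a sketch (assuming each slice yields at least two peaks, and using interp1d's default linear kind):
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import interp1d

arr = np.random.rand(25*30, 1000)
interpolants = []
for row in arr:
    idx, props = find_peaks(row, height=0)
    # one interpolant per slice, built from that slice's peak positions and heights
    interpolants.append(interp1d(idx, props["peak_heights"]))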

How to avoid for-loop while using append()

First of all, I apologize for being an absolute beginner in both python and numpy. Please forgive my ignorance.
I have a 4D cube of pressure measurements where the dimensions are (number of samples, time, y-axis, x-axis), which means, for each sample, I have a 3D cube of spatio-temporal profile. I need to collect the pressure readings of this 3D cube (time, y-axis, x-axis) and store them into an array for each sample, only where the coordinates satisfy a specific condition. Upon varying the specific condition, the size of this array will vary too. So, I have to use append() to build this array. However, since for, say, 1000 samples I have to search through more than a million coordinates with a for-loop for each sample, the code I have written is pretty inefficient and takes a lot of time to run (more than several hours). Can you please help me to write it more efficiently?
Below is the code I've tried to solve the problem. It works nicely and gives expected result but it is extremely slow.
import numpy as np

# Number of sample points in x, y and t-axis
Nx = 101
Ny = 101
Nt = 100
n_train = 1000

target_array = []
for i_train in range(n_train):
    for k in range(Nt):
        for j in range(Ny):
            for i in range(Nx):
                if np.round(np.sqrt((i-np.round(Nx/2))**2+(j-np.round(Ny/2))**2)) == 2*k:
                    target_array.append(Pressure[i_train,k,j,i])
Since the condition involves the indexes and not the values of your 4D array, you can vectorize it using numpy.meshgrid.
Here pp is your 4D array:
kv, jv, iv = np.meshgrid(np.arange(pp.shape[1]), np.arange(pp.shape[2]), np.arange(pp.shape[3]), indexing='ij')
# indexing='ij' keeps the grids in (t, y, x) order, so selecting matches pp.shape[1:]
selecting = np.round(np.sqrt((iv - np.round(pp.shape[3]/2))**2 + (jv - np.round(pp.shape[2]/2))**2)) == 2*kv
target = pp[:,selecting]
Provided that I've understood correctly how your 4D array is organized:
the arrays created by meshgrid hold the indexes used to select pp elements along the 3 dimensions t, y, x.
selecting is a boolean array created by replicating your equation, to check which coordinates satisfy the condition.
target is a selection of pp, taking all elements on axis 0 which satisfy the condition (i.e. where selecting is True) on the other 3 axes.
Note that target is a 2D array; to get a 1D array, use target.flatten().
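A quick way to sanity-check the vectorized selection against the original loop, assuming Pressure has shape (n_train, Nt, Ny, Nx) (smaller sizes are used here so the loop finishes quickly):
import numpy as np

n_train, Nt, Ny, Nx = 5, 20, 21, 21
Pressure = np.random.rand(n_train, Nt, Ny, Nx)

kv, jv, iv = np.meshgrid(np.arange(Nt), np.arange(Ny), np.arange(Nx), indexing='ij')
selecting = np.round(np.sqrt((iv - np.round(Nx/2))**2 + (jv - np.round(Ny/2))**2)) == 2*kv
target = Pressure[:, selecting]

# loop version for the first sample, for comparison
loop_result = [Pressure[0, k, j, i]
               for k in range(Nt) for j in range(Ny) for i in range(Nx)
               if np.round(np.sqrt((i - np.round(Nx/2))**2 + (j - np.round(Ny/2))**2)) == 2*k]
assert np.allclose(target[0], loop_result)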

2d array for 2d function from 2 1d arrays (Python)

I am trying to make a 2D 5850x5850 array from two 1D arrays by putting them into this equation for a 2D Gaussian:
psf = 1/(2*np.pi*sigma_x*sigma_y) * np.exp(-(x**2/(2*sigma_x**2) + y**2/(2*sigma_y**2)))
However, it gives back a 1D array. What am I doing wrong?
If I understand your question correctly:
All you need to do is to alter the shape of your arrays.
E.g.
x.shape=(5850,1) # now it is column array
y.shape=(1,5850) # now it is row array
Then you can proceed as in your original post. The result will be 5850 by 5850 array. Each row will correspond to different x and each column will correspond to different y.
However, I would change a few things in your code to make it look like this:
psf = 1/(2*np.pi*sigma_x*sigma_y) * np.exp(-(x*x/(2*sigma_x*sigma_x) + y*y/(2*sigma_y*sigma_y)))
Squaring values is usually inefficient (unless your compiler translates it to multiplication, but in Python there is no compiler to rely on). Squaring is much slower than multiplication: when you raise a value to a power, the interpreter has to allow for the exponent being negative or non-integer, whereas there is no such overhead when you simply multiply values.
Try:
for i in xrange(0, 1000000):
    z = i**2

for i in xrange(0, 1000000):
    z = i*i
The former ran in 0.975 s on my machine, whereas the latter took only 0.267 s.
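If you want to reproduce that comparison yourself, timeit gives a less noisy measurement (the numbers will of course vary by machine and Python version; range is used here for Python 3):
import timeit

t_pow = timeit.timeit("for i in range(1000000): z = i**2", number=10)
t_mul = timeit.timeit("for i in range(1000000): z = i*i", number=10)
print(t_pow, t_mul)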
It doesn't understand that x and y are meant to be combined pairwise, i.e. that for every x you want the expression evaluated for each y. If you can't find a library to create 2D functions/Gaussians more conveniently, try:
z = np.empty((len(x), len(y)))
for idx, yval in enumerate(y):
    z[:,idx] = f(x, yval)
where f(x, yval) is your 2D function, but wherever you had y, use yval. There's got to be more support for 2D function creation somewhere; maybe try a search for scipy 2D Gaussian functions?
The proper expression to make a 2D Gaussian would be:
x = np.arange(0, size, 1, float)
y = x[:,np.newaxis]
x0 = y0 = 0 # your center
np.exp(-4*np.log(2) * ((x-x0)**2 + (y-y0)**2) / radius**2)
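For example, to turn that expression into a concrete array (size, radius and the centering below are placeholder choices, not values from the question):
import numpy as np

size = 512       # use 5850 for the original problem
radius = 100.0   # hypothetical width parameter
x = np.arange(0, size, 1, float)
y = x[:, np.newaxis]
x0 = y0 = size // 2  # center the Gaussian in the grid for this example
psf = np.exp(-4*np.log(2) * ((x-x0)**2 + (y-y0)**2) / radius**2)
print(psf.shape)  # (512, 512)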
