2D interpolation, one dimension at a time

I am trying to implement some interpolation techniques - specifically using the scipy pchip routine.
What I am trying to determine is whether I can perform interpolation of regularly spaced 2D data by interpolating each coordinate separately.
For example, if I have:
(1 x m) vector of X coordinates
(1 x n) vector of Y coordinates
(m x n) matrix of Z coordinates //Z value corresponding to (x,y) pair
Is it possible to perform pchip interpolation over each dimension in succession, therefore creating an interpolated surface?
Pchip expects data in the form of pchip(X,Z) - where both X and Z are 1D arrays. What then is the best way to interpolate each dimension? Should I do, for example, pchip(X,Z) for each column of my Z matrix? Then pchip(Y,Z*) over each row of the matrix resulting from the first interpolation?
Thank you for the help. I have seen pv's post about performing tensor product interpolation with pchip, but it results in a pesky divide-by-zero error I can't get rid of, even with his updates on GitHub.
EDIT:
I found this ticket posted regarding the warning I have using pchip:
http://projects.scipy.org/scipy/ticket/1838
Could anyone please tell me what it means when it says
"The infs/nans so generated are filtered out by applying a boolean condition mask, but the mask could be applied before division to avoid the warnings altogether. "
How do I go about applying this to avoid the warning?

Take a look at the top picture in Bilinear interpolation.
Find the rows y1, y2 nearest y, pchip x in those to get R1, R2 (blue), then linearly interpolate those to get P (green).
(You could also do that in the other order, and average the x-then-y and y-then-x values.)
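That recipe can be sketched in code. The grid `X`, `Y`, `Z` below is made-up sample data, and `interp_xy` is a hypothetical helper name; `pchip_interpolate` does the per-row pchip step:

```python
import numpy as np
from scipy.interpolate import pchip_interpolate

# Made-up regular grid: Z[i, j] is the value at (X[j], Y[i])
X = np.linspace(0.0, 4.0, 5)
Y = np.linspace(0.0, 3.0, 4)
Z = np.sin(Y[:, None]) * np.cos(X[None, :]) + 2.0

def interp_xy(xq, yq):
    """pchip along x in the two rows bracketing yq, then linear in y."""
    i2 = min(np.searchsorted(Y, yq), len(Y) - 1)  # first row with Y >= yq
    i1 = max(i2 - 1, 0)
    R1 = pchip_interpolate(X, Z[i1], xq)          # the two "blue" values
    R2 = pchip_interpolate(X, Z[i2], xq)
    if i1 == i2:
        return R1
    t = (yq - Y[i1]) / (Y[i2] - Y[i1])
    return (1.0 - t) * R1 + t * R2                # the "green" value P
```

At a grid point this reproduces the tabulated value exactly, since pchip interpolates its knots.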
However, if pchip is nearly linear between knots (is it for your data?), then it would be simpler to do bilinear interpolation directly, either with scipy BivariateSpline, or with scipy.ndimage.interpolation.map_coordinates( ... order=1 ) and (ahem) the wrapper Intergrid.
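As a quick sketch of the map_coordinates route (the modern import path is scipy.ndimage.map_coordinates; note it works in array-index units, so real-world coordinates must be rescaled to fractional indices first — the toy grid here is made up):

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Toy 2x2 grid; positions are (row, col) indices
Z = np.array([[0.0, 2.0],
              [4.0, 6.0]])

rows = np.array([0.5, 0.0])   # fractional row indices of the query points
cols = np.array([0.5, 1.0])   # fractional column indices
vals = map_coordinates(Z, [rows, cols], order=1)  # order=1 -> bilinear
print(vals)   # [3.0, 2.0]: the centre of the cell, then the (0, 1) corner
```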

Related

Can I vectorize scipy.interpolate.interp1d

interp1d works excellently for the individual datasets that I have, however I have in excess of 5 million datasets that I need to have interpolated.
I need the interpolation to be cubic and there should be one interpolation per subset.
Right now I am able to do this with a for loop, however, for 5 million sets to be interpolated, this takes quite some time (15 minutes):
interpolants = []
for i in range(5000000):
    interpolants.append(interp1d(xArray[i], interpData[i], kind='cubic'))
What I'd like to do would maybe look something like this:
interpolants = interp1d(xArray, interpData, kind='cubic')
This however fails, with the error:
ValueError: x and y arrays must be equal in length along interpolation axis.
Both my x array (xArray) and my y array (interpData) have identical dimensions...
I could parallelize the for loop, but that would only give me a small increase in speed, I'd greatly prefer to vectorize the operation.
I have also been trying to do something similar over the past few days. I finally managed to do it with np.vectorize, using function signatures. Try the code snippet below:
fn_vectorized = np.vectorize(interpolate.interp1d,
signature='(n),(n)->()')
interp_fn_array = fn_vectorized(x[np.newaxis, :, :], y)
x and y are arrays of shape (m x n). The objective was to generate an array of interpolation functions, for row i of x and row i of y. The array interp_fn_array contains the interpolation functions (its shape is (1 x m)).
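A self-contained sketch of that construction, with made-up data (note that np.vectorize is documented as essentially a Python-level loop, so this is mainly a notational convenience rather than a true speedup):

```python
import numpy as np
from scipy import interpolate

rng = np.random.default_rng(0)
x = np.sort(rng.random((3, 6)), axis=1)   # 3 made-up datasets of 6 samples
y = rng.random((3, 6))

fn_vectorized = np.vectorize(interpolate.interp1d,
                             signature='(n),(n)->()')
interp_fn_array = fn_vectorized(x, y)     # object array of interp1d instances

f0 = interp_fn_array[0]                   # interpolant for row 0
print(f0(x[0, 2]))                        # reproduces y[0, 2] at a knot
```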

Weighted 1D interpolation of cloud data point

I have a cloud of data points (x,y) that I would like to interpolate and smooth.
Currently, I am using scipy:
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import savgol_filter
spl = interp1d(Cloud[:,1], Cloud[:,0])  # interpolation
x = np.linspace(Cloud[:,1].min(), Cloud[:,1].max(), 1000)
smoothed = savgol_filter(spl(x), 21, 1)  # smoothing
This is working pretty well, except that I would like to give some weights to the data points passed to interp1d. Any suggestion for another function that handles this?
Basically, I thought that I could just duplicate each point of the cloud according to its weight, but that is not very efficient: it greatly increases the number of points to interpolate and slows down the algorithm.
The default interp1d uses linear interpolation, i.e., it simply computes a line between two points. A weighted interpolation does not make much sense mathematically in that scenario - there is only one way in Euclidean space to draw a straight line between two points.
Depending on your goal, you can look into other methods of interpolation, e.g., B-splines. Then you can use scipy's scipy.interpolate.splrep and set the w argument:
w - Strictly positive rank-1 array of weights the same length as x and y. The weights are used in computing the weighted least-squares spline fit. If the errors in the y values have standard-deviation given by the vector d, then w should be 1/d. Default is ones(len(x)).
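A minimal sketch of a weighted smoothing fit with splrep (the data and weights here are made up; w marks a stretch of points we pretend to trust ten times more, and s > 0 turns the fit into a smoothing spline rather than an exact interpolant):

```python
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = np.sin(x) + 0.1 * rng.standard_normal(50)

w = np.ones_like(x)
w[20:30] = 10.0                     # made-up weights: trust these points more

tck = splrep(x, y, w=w, s=1.0)      # weighted least-squares smoothing spline
xs = np.linspace(0.0, 10.0, 1000)
ys = splev(xs, tck)                 # evaluate the fitted spline
```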

Linear interpolation of the 4D array in Python/NumPy

I have a question about linear interpolation in Python/NumPy.
I have a 4D array with data (all data in binary files) arranged this way:
t - time (let's say each hour for a month = 720)
Z - levels (let's say Z'=7)
Y - data1 (one for each t and Z)
X - data2 (one for each t and Z)
So, I want to obtain new Y and X data for Z'=25 with the same t.
First, I am having a little trouble with the right way to read my data from the binary file. Second, I have to interpolate the first 3 levels to Z'=15 and the others to the other values.
If anyone has an idea how to do it and can help, that would be great.
Thank you for your attention!
You can create different interpolation formulas for different combinations of z' and t.
For example, for z=7 and a specific value of t, you can create an interpolation formula:
formula = scipy.interpolate.interp1d(x, y)
Another one for, say, z=25, and so on.
Then, given any combination of z and t, you can refer to the specific interpolation formula and do the interpolation.
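Concretely, that could look like the following sketch, where the array names, shapes and data are made up for illustration:

```python
import numpy as np
from scipy.interpolate import interp1d

nt, nz, npts = 4, 7, 10                    # made-up sizes: times, levels, samples
rng = np.random.default_rng(1)
X = np.sort(rng.random((nt, nz, npts)), axis=-1)
Y = rng.random((nt, nz, npts))

# One interpolation formula per (t, z) combination
formulas = {(t, z): interp1d(X[t, z], Y[t, z])
            for t in range(nt) for z in range(nz)}

f = formulas[(2, 5)]                       # look up the formula for t=2, z=5
print(f(X[2, 5, 3]))                       # reproduces Y[2, 5, 3] at a knot
```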
In 2D, for instance, there is bilinear interpolation - for example on the unit square with corner z-values 0, 1, 1 and 0.5 (the original answer showed the interpolated values in between as a colour map).
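With those corner values, the bilinear formula on the unit square is short enough to write out directly:

```python
def bilinear(x, y, z00, z10, z01, z11):
    """Bilinear interpolation on the unit square.
    Corners: z00 at (0,0), z10 at (1,0), z01 at (0,1), z11 at (1,1)."""
    return (z00 * (1 - x) * (1 - y) + z10 * x * (1 - y)
            + z01 * (1 - x) * y + z11 * x * y)

# Corner values 0, 1, 1, 0.5 as in the example above
print(bilinear(0.5, 0.5, 0.0, 1.0, 1.0, 0.5))   # 0.625 at the centre
```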
Then trilinear, and so on...
Follow the pattern and you'll see that you can nest interpolations to any dimension you require...
:)

How to extract the frequencies associated with fft2 values in numpy?

I know that for fft.fft, I can use fft.fftfreq. But there seems to be no such thing as fft.fftfreq2. Can I somehow use fft.fftfreq to calculate the frequencies in 2 dimensions, possibly with meshgrid? Or is there some other way?
Yes. Apply fftfreq to each spatial vector (x and y) separately, then create a meshgrid from those frequency vectors.
Note that if you want the typical representation (zero frequency at the centre of the spectrum), you need to apply fftshift both to the fft2 output and to your new spatial frequencies (shift the frequency vectors before using meshgrid, or shift the grids afterwards).
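A sketch of that, with made-up grid sizes and sample spacings:

```python
import numpy as np

ny, nx = 64, 128            # made-up grid size
dy, dx = 0.5, 0.25          # made-up sample spacings along y and x

fy = np.fft.fftfreq(ny, d=dy)    # frequencies for axis 0 of the fft2 output
fx = np.fft.fftfreq(nx, d=dx)    # frequencies for axis 1
FX, FY = np.meshgrid(fx, fy)     # shape (ny, nx), matching fft2's output

# For a centred spectrum, shift the transform and the frequency grids alike
img = np.random.rand(ny, nx)
spec = np.fft.fftshift(np.fft.fft2(img))
FXs, FYs = np.fft.fftshift(FX), np.fft.fftshift(FY)
```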

Circular Dimensionality Reduction?

I want dimensionality reduction such that dimensions it returns are circular.
ex) If I reduce 12d data to 2d, normalized between 0 and 1, then I want (0,0) to be as close to (.9,.9) as it is to (.1,.1).
What is my algorithm? (bonus points for python implementation)
PCA gives me 2d plane of data, whereas I want spherical surface of data.
Make sense? Simple? Inherent problems? Thanks.
I think what you are asking is all about transformation.
Circular
I want (0,0) to be as equally close to (.1,.1) as (.9,.9).
PCA
Taking your approach of normalization, what you could do is map the values in the interval [0.5, 1] back down to [0.5, 0].
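That folding is equivalent to a wrap-around distance on [0, 1); a minimal sketch (the function name is made up):

```python
import numpy as np

def circular_distance(a, b):
    """Distance on [0, 1) with wrap-around: differences above 0.5
    are folded back, so 0.0 and 0.9 end up 0.1 apart."""
    d = np.abs(np.asarray(a, dtype=float) - np.asarray(b, dtype=float))
    return np.minimum(d, 1.0 - d)

print(circular_distance(0.0, 0.9))   # 0.1 -- same as the distance to 0.1
print(circular_distance(0.0, 0.1))   # 0.1
```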
MDS
If you want to use a distance metric, you could first compute the distances and then do the same. For instance, taking the correlation, you could use 1 - abs(corr). Since the correlation lies in [-1, 1], strong positive and negative correlations will give values close to zero, while uncorrelated data will give values close to one. Then, having computed the distances, you use MDS to get your projection.
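As a sketch of that pipeline, assuming scikit-learn's MDS is acceptable (the data here are made up):

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
data = rng.standard_normal((12, 100))   # made up: 12 dimensions, 100 samples

corr = np.corrcoef(data)                # correlation between dimensions
dist = 1.0 - np.abs(corr)               # strong +/- correlation -> small distance
np.fill_diagonal(dist, 0.0)

mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
embedding = mds.fit_transform(dist)     # one 2-d point per original dimension
```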
Space
PCA gives me 2d plane of data, whereas I want spherical surface of data.
Since you want a spherical surface, you could, I think, directly map the 2-d plane onto a sphere. A spherical coordinate system with a constant radius would do that, wouldn't it?
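A sketch of that mapping, treating the normalized coordinates as angles (note that on a sphere only the longitude truly wraps around; if both coordinates must wrap, a torus is the natural surface instead — the function name here is made up):

```python
import numpy as np

def to_sphere(u, v):
    """Map normalized (u, v) in [0, 1] to the unit sphere (constant radius).
    u becomes longitude (wraps around), v becomes colatitude."""
    theta = 2.0 * np.pi * u
    phi = np.pi * v
    return np.array([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)])

p0 = to_sphere(0.0, 0.5)
p1 = to_sphere(1.0, 0.5)   # u wraps: lands on the same point as u = 0
```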
Another question is then: Is all this a reasonable thing to do?
