I have a question about linear interpolation in Python/NumPy.
I have a 4D array of data (all in binary files), arranged this way:
t: time (let's say each hour for a month = 720)
Z: levels (let's say Z = 7)
Y: data1 (one for each t and Z)
X: data2 (one for each t and Z)
So, I want to obtain new Y and X data for Z' = 25 levels, with the same t.
First, I am having a little trouble finding the right way to read my data from the binary file. Second, I have to interpolate the first 3 levels onto 15 of the new levels, and the remaining levels onto the others.
If anyone has an idea how to do this, any help would be great.
Thank you for your attention!
You can create different interpolation formulas for different combinations of z' and t.
For example, for z=7, and a specific value of t, you can create an interpolation formula:
formula = scipy.interpolate.interp1d(x, y)
Another one for, say, z = 25, and so on.
Then, given any combination of z and t, you can refer to the specific interpolation formula and do the interpolation.
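For instance, a minimal sketch (the file name, dtype, and grid sizes are placeholders; note that interp1d's axis argument lets one formula cover every t, y, x at once):

import numpy as np
from scipy.interpolate import interp1d

# Assumed layout (t, z, y, x) = (720, 7, 50, 60); adjust dtype/shape to your file
data = np.fromfile("data.bin", dtype=np.float32).reshape(720, 7, 50, 60)

z_old = np.arange(7)            # the 7 original levels (placeholder coordinates)
z_new = np.linspace(0, 6, 25)   # the 25 target levels

f = interp1d(z_old, data, axis=1)   # one formula along z for the whole array
data_new = f(z_new)                 # shape (720, 25, 50, 60)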
In 2D, for instance, there is bilinear interpolation; the standard example interpolates over the unit square from the corner z-values 0, 1, 1 and 0.5, with the values in between usually shown as a colour gradient.
Then trilinear, and so on...
Follow the pattern and you'll see that you can nest interpolations to any dimension you require...
:)
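To make the nesting concrete, here is a minimal sketch of bilinear interpolation on the unit square with the corner values from the example above (trilinear just adds one more linear step along the third axis):

def lerp(a, b, t):
    # 1D linear interpolation between a and b
    return a + (b - a) * t

def bilinear(z00, z10, z01, z11, x, y):
    # two lerps along x (one per edge), then one lerp along y
    top = lerp(z00, z10, x)
    bottom = lerp(z01, z11, x)
    return lerp(top, bottom, y)

# centre of the unit square with corner values 0, 1, 1, 0.5
print(bilinear(0.0, 1.0, 1.0, 0.5, 0.5, 0.5))   # 0.625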
I have the following text file:
82 83.2 92.5
89 90 100
These numbers represent experimental data having the shape z=f(x,y), where x are the numbers from the leftmost column, y are the numbers from the middle column and z are the numbers from the rightmost column. The file has more experimental data points, but those two are enough for this example.
I am trying to interpolate the data above, but I cannot seem to find an appropriate way. Most scipy interpolation examples use data that lie on grids or meshgrids, and interp2d, if I understood correctly, needs a little preprocessing before the data can be fed to it.
Is there any way to do the interpolation without such preprocessing? Ideally I would simply read the data with loadtxt, do some slicing to separate the inputs from the outputs, and then pass them to the interpolation function.
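One way that needs no preprocessing (a sketch, assuming the file is called data.txt; griddata needs enough points to triangulate, so the two rows above alone would give nan):

import numpy as np
from scipy.interpolate import griddata

data = np.loadtxt("data.txt")          # columns: x, y, z
points, z = data[:, :2], data[:, 2]    # slice inputs and output

# griddata accepts scattered (x, y) samples directly -- no grid or meshgrid
zi = griddata(points, z, np.array([[85.0, 90.0]]), method="linear")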
I have two sets of exposure-bracketed images of a color chart from two different camera systems, A and B. Each data set, at a given exposure, gives me 24 RGB tuples from the patches on the color chart.
I want to match camera B to camera A through a 3-dimensional transform via an interpolation of these two data sets. This is basically the process of creating a look-up table. The methods for parsing and applying LUTs to existing images are well-documented, but I cannot find good resources on how to analytically create a LUT given two different data sets. I know that the answer involves interpolation through a sparse data set, and could involve something like trilinear interpolation, but I'm not sure about the actual execution.
For example, taking the case of trilinear interpolation, it expects 8 corners, but in the case of matching image A to image B, what do those 8 corners consist of? The closest hits to the given pixel in all dimensions? Searching through an unordered data set for close values seems expensive and not correct.
Overall, I'm looking for some advice on how to proceed to match two images with the data set I've acquired, specifically with a 3d transformation. Preferred tool is Python but it can be anything non-proprietary.
In the first place, you need to establish the correspondences, i.e. associate the patches of the same color in the two images (there is much to say about this, but it is beyond the scope of this answer). Then get the RGB color values (preferably by averaging over each patch to reduce random fluctuations).
Now you have a set of N pairs of RGB triples, to which you want to fit a mathematical model,
RGB' = f(RGB)
(f is a vector function of a vector argument).
To begin with, you should try an affine model,
RGB' = A RGB + RGB0
where A is a 3x3 matrix and RGB0 a constant vector. Notice that in the affine case the three equations are independent:
R' = A_r RGB + R0
G' = A_g RGB + G0
B' = A_b RGB + B0
where A_r, A_g, A_b are the rows of A.
There are twelve unknown coefficients, so you need N ≥ 4. If N > 4, you can resort to least-squares fitting, which is also easy in the linear case.
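A sketch of that least-squares fit in NumPy (the synthetic src/dst arrays stand in for your 24 measured patch pairs):

import numpy as np

rng = np.random.default_rng(0)
src = rng.random((24, 3))               # camera B patches (N = 24, synthetic)
dst = src @ rng.random((3, 3)) + 0.1    # stand-in for the camera A patches

# Solve dst = src @ A^T + RGB0 in one shot by appending a column of ones
src_aug = np.hstack([src, np.ones((len(src), 1))])      # (N, 4)
coeffs, *_ = np.linalg.lstsq(src_aug, dst, rcond=None)  # (4, 3)
A, rgb0 = coeffs[:3].T, coeffs[3]       # 3x3 matrix and constant vector

mapped = src @ A.T + rgb0               # camera B mapped towards camera A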
In case the affine model is insufficient, you can try a polynomial model such as a quadratic one (which requires N ≥ 10).
I know that for fft.fft I can use fft.fftfreq, but there seems to be no such thing as fft.fftfreq2 for fft.fft2. Can I somehow use fft.fftfreq to calculate the frequencies in 2 dimensions, possibly with meshgrid? Or is there some other way?
Yes. Apply fftfreq to each spatial vector (x and y) separately. Then create a meshgrid from those frequency vectors.
Note that if you want the typical representation (zero frequency at the centre of the spectrum), you need to apply fftshift both to the FFT output and to your new spatial frequency vectors (before using meshgrid).
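A sketch (array shape and sample spacings are placeholders):

import numpy as np

ny, nx = 128, 256
dy, dx = 1.0, 1.0   # sample spacings in y and x (placeholders)

# 1D frequency vectors, shifted so zero frequency sits in the centre
ky = np.fft.fftshift(np.fft.fftfreq(ny, d=dy))
kx = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))
KX, KY = np.meshgrid(kx, ky)    # 2D frequency grids matching the shifted output

img = np.random.rand(ny, nx)
spectrum = np.fft.fftshift(np.fft.fft2(img))   # shift the output the same way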
Given an array of values, say 300x80, where 300 is the number of samples and 80 the number of features to keep.
I know that in MATLAB and Python you can do interp1d and such, but I don't think that works for me in this situation; all I could find are 1D examples.
Is there a way to interpolate this array to, say, 500x80 in Python?
A simple question of 300x80 -> 500x80.
http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp2d.html
x, y are your matrix indices (row/column index), and z is the value at that position. It returns a function that you can call on all points of a new 500x80 grid.
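For example (a sketch with placeholder data; note that recent SciPy versions deprecate interp2d in favour of RegularGridInterpolator, but the idea is the same):

import numpy as np
from scipy.interpolate import interp2d

z = np.random.rand(300, 80)        # 300 samples x 80 features (placeholder)
y = np.arange(300)                 # row index
x = np.arange(80)                  # column index

f = interp2d(x, y, z)              # note the (x, y, z) argument order
y_new = np.linspace(0, 299, 500)   # 500 rows spanning the old index range
z_new = f(x, y_new)                # shape (500, 80)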
Of course, whether this makes much sense is another matter: the indices here are sample/feature numbers, so interpolating between them means inventing more samples and extrapolating what their values should look like. Interpolation is only meaningful along an axis that represents several measurements of the same variable (unlike a sample number).
I am trying to implement some interpolation techniques - specifically using the scipy pchip routine.
What I am trying to determine is whether I can perform interpolation of regularly spaced 2D data by interpolating along each coordinate separately.
For example, if I have:
(1 x m) vector of X coordinates
(1 x n) vector of Y coordinates
(m x n) matrix of Z coordinates //Z value corresponding to (x,y) pair
Is it possible to perform pchip interpolation over each dimension in succession, therefore creating an interpolated surface?
Pchip expects data in the form pchip(X, Z), where both X and Z are 1D arrays. What then is the best way to interpolate each dimension? Should I, for example, do pchip(X, Z) for each column of my Z matrix, and then pchip(Y, Z*) over each row of the matrix resulting from the first interpolation?
Thank you for the help. I have seen pv's post about performing tensor product interpolation with pchip, but it results in a pesky divide-by-zero warning I can't get rid of, even with his updates on GitHub.
EDIT:
I found this ticket regarding the warning I get when using pchip:
http://projects.scipy.org/scipy/ticket/1838
Could anyone please tell me what it means when it says
"The infs/nans so generated are filtered out by applying a boolean condition mask, but the mask could be applied before division to avoid the warnings altogether. "
How do I go about applying this to avoid the warning?
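(For reference, the general NumPy pattern the ticket describes looks something like this toy example, which is not the actual pchip internals:)

import numpy as np

num = np.array([1.0, 2.0, 3.0])
den = np.array([2.0, 0.0, 4.0])

# dividing first and masking the result afterwards still emits the warning
bad = np.where(den != 0, num / den, 0.0)   # RuntimeWarning: divide by zero

# applying the mask *before* the division avoids the warning altogether
ok = den != 0
out = np.zeros_like(num)
out[ok] = num[ok] / den[ok]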
Take a look at the top picture in Bilinear interpolation. Find the rows y1, y2 nearest y; pchip along x in those rows to get R1, R2 (blue); then linearly interpolate those to get P (green). (You could also do it in the other order and average the x-then-y and y-then-x values.)
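A sketch of that x-then-y scheme with scipy's pchip_interpolate (the grid and data are placeholders, and yq is assumed to lie strictly inside the Y range):

import numpy as np
from scipy.interpolate import pchip_interpolate

X = np.linspace(0.0, 1.0, 6)       # placeholder regular grid
Y = np.linspace(0.0, 1.0, 5)
Z = np.sin(3.0 * np.outer(Y, X))   # Z[i, j] is the value at (X[j], Y[i])

def interp_xy(xq, yq):
    # pchip along x in the two rows bracketing yq, then linear in y
    i2 = np.searchsorted(Y, yq)            # first row at or above yq
    i1 = i2 - 1
    r1 = pchip_interpolate(X, Z[i1], xq)   # R1 in the lower row
    r2 = pchip_interpolate(X, Z[i2], xq)   # R2 in the upper row
    t = (yq - Y[i1]) / (Y[i2] - Y[i1])
    return r1 + (r2 - r1) * t              # P: linear interpolation in y

print(interp_xy(0.37, 0.42))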
However, if pchip is nearly linear between knots (is it for your data?), it would be simpler to do bilinear interpolation directly, either with scipy BivariateSpline or with scipy.ndimage.interpolation.map_coordinates( ... order=1 ) and (ahem) the wrapper Intergrid.