I am plotting the result of an interpolation on a periodic domain, namely an Earth Mercator projection map, where longitude lives in [0, 2*pi] (or [0, 360] degrees). As you can see in the picture below, I'm plotting a ground track.
I first compute r, i.e. the position, and then project it onto the Earth. Since the coordinate transformation involves trigonometric functions, the results I obtain are restricted to the range where the inverse is bijective. To obtain this plot I've used atan2 to recover angles over the full range, and manipulated arccos to extend the range of the inverse as well.
All good up to now. The problem is that when I interpolate the resulting points, the interpolant naturally knows nothing about the periodic wrap-around of the domain.
I just wanted to know if there is any way around this, apart from re-expressing my data in a non-periodic domain, interpolating it, and then applying % (2*np.pi). That option, even if doable, means touching those inverse functions even more. The other option I thought of was interpolating in chunks of strictly increasing values and concatenating them.
I found nothing in the scipy documentation.
I solved the issue by implementing something like the following. Notice that I am using the astropy units module.
adder = 2 * np.pi * u.rad   # u is astropy.units, np is numpy
for i in range(1, len(lons)):
    # a jump of more than ~1 rad between samples means the longitude wrapped
    if abs(lons[i].value - lons[i-1].value) > 1:
        sgn = np.sign(lons[i].value - lons[i-1].value)
        lons[i:] -= sgn * adder
After doing this, interpolate and apply the modulo:
f_lons = interp1d(t, lons)            # scipy.interpolate.interp1d
lons = f_lons(new_t) % (2 * np.pi)    # fold back into [0, 2*pi)
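For plain numpy arrays of angles there is also np.unwrap, which removes the 2*pi jumps in one call. A minimal sketch of the same idea, assuming the astropy units have been stripped; the t, lons and new_t values here are made up:

import numpy as np
from scipy.interpolate import interp1d

t = np.linspace(0.0, 10.0, 50)           # hypothetical sample times
lons = (0.7 * t) % (2 * np.pi)           # hypothetical wrapped longitudes [rad]

lons_unwrapped = np.unwrap(lons)         # undo the 2*pi folds
f_lons = interp1d(t, lons_unwrapped)
new_t = np.linspace(0.0, 10.0, 500)
new_lons = f_lons(new_t) % (2 * np.pi)   # fold back into [0, 2*pi)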
I have two sets of data points; effectively, one is from a preimage and the other from its image, but I do not know the rule between the two. This rule/function is nonlinear.
I've collected many data points of corresponding locations on both images, and I was wondering if anyone knew of a way to find a more complete mapping. That is, does anyone know the best way to find a mapping from R^2 to R^2 given an extensive set of sample points? This mapping is one-to-one and onto.
My goal is to use the data I've found to find a polynomial function that takes in some x,y coordinate from the preimage, and outputs the shifted coordinates.
edit: I have sample points along the domain and their corresponding points in the image, but not for every point in the domain. I want to be able to input any point (only integer values) in the domain and output the shifted point.
I don't think a polynomial is easy (or easy to guarantee to be a bijection). The obvious thing to do is to:
Construct the Delaunay triangulation of the known points in the domain.
For each Delaunay triangle, the mapping is just the linear map that interpolates the known values at the vertices.
Then, when you have a random point, look up its Delaunay triangle and apply the corresponding map.
I believe that all of the above can be done via scipy.spatial.Delaunay.
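For reference, scipy's LinearNDInterpolator does essentially this: it builds a Delaunay triangulation of the source points and applies the containing triangle's linear map to each query point. A minimal sketch with made-up sample points and a made-up nonlinear rule:

import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(200, 2))       # known preimage points (made up)
dst = src + 5.0 * np.sin(src / 20.0)           # their images under some nonlinear rule

# Triangulates src (Delaunay) and applies the per-triangle linear map
mapping = LinearNDInterpolator(src, dst)

query = np.array([[50.0, 50.0], [10.0, 80.0]])
print(mapping(query))                          # NaN for points outside the convex hull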
The transformation you're trying to find sounds a lot like what's accomplished in Geographic Information Systems using a technique called rubber-sheeting https://en.wikipedia.org/wiki/Rubbersheeting
Igor Rivin's description of a process using a Delaunay triangulation is pretty much the solution that's used in such systems. Some systems will use a Barycentric coordinate system rather than a linear mapping to try to reduce the appearance of triangle-related artifacts in the transformed image.
What you are describing also sounds a bit like the "morphing" special effect used in video. Maybe a web search on that topic would turn up some leads for you.
I've been tasked to develop an algorithm that, given a set of sparse points representing measurements of an existing surface, would allow us to compute the z coordinate of any point on the surface. The challenge is to find a suitable interpolation method that can recreate the 3D surface given only a few points and extrapolate values also outside of the range containing the initial measurements (a notorious problem for many interpolation methods).
After trying to fit many analytic curves to the points, I've decided to use RBF interpolation, as I thought this would better reproduce the surface given that the points should all lie on it (I'm assuming the measurements have negligible error).
The first results are quite impressive considering the few points that I'm using.
Interpolation results
In the picture that I'm showing the blue points are the ones used for the RBF interpolation which produces the shape represented in gray scale. The red points are instead additional measurements of the same shape that I'm trying to reproduce with my interpolation algorithm.
Unfortunately there are some outliers, especially when I'm trying to extrapolate points outside of the area where the initial measurements were taken (you can see this in the upper right and lower center insets in the picture). This is to be expected, especially in RBF methods, as I'm trying to extract information from an area that initially does not have any.
Apparently the RBF interpolation tries to flatten out the surface, while I need it to continue following the curvature of the shape. Of course the method knows nothing about that, given how it is defined, but this causes a large discrepancy from the measurements that I'm trying to fit.
That's why I'm asking whether there is any way to constrain the interpolation method to keep the curvature, or a different radial basis function that doesn't flatten out so quickly at the border of the interpolation range. I've tried different combinations of the epsilon parameter and distance functions without luck. This is what I'm using right now:
from scipy import interpolate
import numpy as np

# df holds the measurements; X, Y, Z are the point coordinates
spline = interpolate.Rbf(df.X.values, df.Y.values, df.Z.values,
                         function='thin_plate')

# evaluate on a regular grid covering the measurement range
X, Y = np.meshgrid(np.linspace(xmin.round(), xmax.round(), precision),
                   np.linspace(ymin.round(), ymax.round(), precision))
Z = spline(X, Y)
I was also thinking of creating some additional dummy points outside of the interpolation range to constrain the model even more, but that would be quite complicated.
I'm also attaching an animation to give a better idea of the surface.
Animation
I just wanted to post my solution in case someone has the same problem. The issue was indeed with scipy's implementation of RBF interpolation. I adopted a more flexible library instead: https://rbf.readthedocs.io/en/latest/index.html#.
The results are pretty cool! Using the following options:
from rbf.interpolate import RBFInterpolant
spline = RBFInterpolant(X_obs, U_obs, phi='phs5', order=1, sigma=0.0, eps=1.)
I was able to get the right shape even at the edge.
Surface interpolation
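For completeness, evaluating the interpolant on a grid looks roughly like this. X_obs, U_obs and the grid limits below are made-up placeholders, with X_obs taken to be the (N, 2) array of (x, y) observation points and U_obs the measured z values:

import numpy as np
from rbf.interpolate import RBFInterpolant

rng = np.random.default_rng(1)
X_obs = rng.uniform(-50, 50, size=(40, 2))               # (x, y) measurement positions (made up)
U_obs = 0.01 * X_obs[:, 0]**2 - 0.02 * X_obs[:, 1]**2    # measured z values (made up)

spline = RBFInterpolant(X_obs, U_obs, phi='phs5', order=1, sigma=0.0, eps=1.)

# the interpolant is evaluated at an (M, 2) array of query points
X, Y = np.meshgrid(np.linspace(-60, 60, 100), np.linspace(-60, 60, 100))
Z = spline(np.column_stack([X.ravel(), Y.ravel()])).reshape(X.shape)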
I've played around with the different phi functions and here is the boxplot of the spread between the interpolated surface and the points that I'm testing the interpolation against (the red points in the picture).
Boxplot
With phs5 I get the best result, with an average spread of about 0.5 mm on the upper surface and 0.8 mm on the lower surface. Before, I was getting a similar average but with many outliers > 15 mm. Definitely a success :)
I have two coordinate systems for each record in my dataset: lat-lon coordinates and what I suppose are UTM x-y coordinates.
50% of my dataset has only x-y data without lat-lon; the reverse (lat-lon only) is 6%.
A good portion of the dataset (33%) has both for each single record.
I wanted to know if there is a way to take advantage of the intersection (and maybe of the x-y-only part, since it's the biggest) to obtain a full dataset with a single coordinate system that makes sense. The problem is that after a bit of preprocessing they look "relaxed" in different ways, and the intersection doesn't really match. The scatter plot shows what I believe to be a nonlinear, warped relationship between the two coordinate systems. By this I mean that normalizing both to [0, 1] and centering them at (0, 0) (by subtracting the mean) gives two slightly different point distributions, and scaling one by a coefficient to match the other is not enough to get them to line up completely. It looks like there is some more complicated relationship between the two.
I also tried using an external library called utm to convert the lat-lon coordinates to x-y, to get a third pair of attributes (let's call it my_xy), only to find that it does not match either of the first two systems; instead it shows yet another slight warp.
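For reference, the conversion looks roughly like this. The lat/lon values below are made up, and recent versions of the utm package accept numpy arrays directly (otherwise convert row by row):

import numpy as np
import utm

lats = np.array([45.07, 45.08, 45.10])    # made-up latitudes
lons = np.array([7.66, 7.68, 7.70])       # made-up longitudes

# from_latlon returns (easting, northing, zone_number, zone_letter)
easting, northing, zone, letter = utm.from_latlon(lats, lons)
my_xy = np.column_stack([easting, northing])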
Note: when I say I do not have data in one coordinate system, assume NaN.
Furthermore, I know the warping could be a result of the fundamental geometric differences between lat-lon and x-y systems, but I still do not know what else I could try, given that the utm conversion and the scaling did not work.
Blue: latlon, Red: original xy, Green: my_xy calculated from latlon
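One way to make the "scale one down to look like the other" idea concrete is a least-squares affine fit on the 33% intersection; if the residuals stay large, the relationship really is nonlinear. A sketch with made-up arrays, where my_xy and orig_xy stand for the (N, 2) coordinates of the overlapping records:

import numpy as np

rng = np.random.default_rng(2)
my_xy = rng.uniform(0, 1000, size=(300, 2))                                  # made-up utm-derived coords
orig_xy = my_xy @ np.array([[0.98, 0.05], [-0.05, 0.98]]) + [120.0, -40.0]   # made-up original x-y

# Fit orig_xy ~= [my_xy, 1] @ M by least squares on the intersection
A = np.hstack([my_xy, np.ones((len(my_xy), 1))])
M, *_ = np.linalg.lstsq(A, orig_xy, rcond=None)

# The fitted M can then map the lat-lon-only records (after utm conversion) into x-y
predicted = A @ M
rms = np.sqrt(np.mean(np.sum((predicted - orig_xy) ** 2, axis=1)))
print("RMS residual:", rms)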
I have two images, one from a simulation and one from real data, both containing bright spots.
Simulation:
Reality:
I can detect the spots just fine and get the coordinates. Now I need to compute the transformation matrix (scale, rotation, translation, maybe shear) between the two coordinate systems. If needed, I can pick some (5-10) corresponding points by hand to give to the algorithm.
I tried a lot of approaches already, including:
2 implementations of ICP:
https://engineering.purdue.edu/kak/distICP/ICP-2.0.html#ICP
https://github.com/KojiKobayashi/iterative_closest_point_2d
Implementing affine transformations:
https://math.stackexchange.com/questions/222113/given-3-points-of-a-rigid-body-in-space-how-do-i-find-the-corresponding-orienta/222170#222170
Implementations of affine transformations:
Determining a homogeneous affine transformation matrix from six points in 3D using Python
how to perform coordinates affine transformation using python? part 2
Most of them simply fail somehow like this:
The red points are the spots from the simulation transformed into the reality - coordinate system.
The best approach so far is this one, "how to perform coordinates affine transformation using python? part 2", yielding this:
As you can see, the scaling and translation mostly work, but the image still needs to be rotated / mirrored.
Any ideas on how to get a working algorithm? If necessary, I can provide my current non-working implementations, but they are basically as linked.
I found the error.
I used plt.imshow to display both the simulated and the real image and, from there, picked the reference points from which to calculate the transformation.
It turns out that, due to the usual array-to-image index-flipping voodoo (or a bad misunderstanding of the transformation on my side), I need to switch the x and y indices of the reference points from the simulated image.
With this, everything works fine using "how to perform coordinates affine transformation using python? part 2".
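For anyone with the same setup, here is a minimal sketch of the fit itself using scikit-image's estimate_transform with hand-picked correspondences. The point values below are made up; note the (row, col) -> (x, y) swap for the simulated points:

import numpy as np
from skimage.transform import estimate_transform

# made-up hand-picked spots: simulated as (row, col), real as (x, y)
sim_rc  = np.array([[10, 12], [40, 15], [22, 60], [55, 70], [30, 35]], dtype=float)
real_xy = np.array([[120, 110], [135, 260], [360, 170], [420, 430], [230, 220]], dtype=float)

sim_xy = sim_rc[:, ::-1]                       # swap (row, col) -> (x, y)

# least-squares affine: scale, rotation, translation, shear (and mirroring)
tform = estimate_transform('affine', sim_xy, real_xy)
print(tform.params)                            # 3x3 homogeneous transformation matrix
print(tform(sim_xy))                           # simulated spots in real-image coordinates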
I'm referencing this question and this documentation in trying to turn a set of points (the purple dots in the image below) into an interpolated grid.
As you can see, the image has missing spots where dots should be. I'd like to figure out where those are.
import numpy as np
from scipy import interpolate

CIRCLES_X = 25  # There should be 25 circles going across
CIRCLES_Y = 10  # There should be 10 circles going down

points = []
values = []

# Points range from 0-800 ish X, 0-300 ish Y
for point in detected_points:  # placeholder for however the dots were detected
    points.append([point.x, point.y])
    values.append(1)  # Not sure what this should be

grid_x, grid_y = np.mgrid[0:CIRCLES_Y, 0:CIRCLES_X]
grid = interpolate.griddata(points, values, (grid_x, grid_y), method='linear')
print(grid)
Whenever I print out the result of the grid, I get nan for all of my values.
Where am I going wrong? Is my problem even the correct use case for interpolate.griddata?
First, your missing points are mainly at an edge, so it's actually extrapolation. Second, the interpolation methods built into scipy deal with continuous functions defined on the entire plane and approximate them polynomially, while yours is discrete (1 or 0), somewhat periodic rather than polynomial, and only defined on a discrete "grid" of points.
So you have to invent some algorithm to inter/extrapolate your specific kind of function. Whether you'll be able to reuse an existing one - from scipy or elsewhere - is up to you.
One possible way is to replace it with some function (continuous or not) defined everywhere, then evaluate that approximation at the missing points - either in one step, as the non-class functions in scipy.interpolate do, or as two separate steps.
E.g. you can use a 3-D paraboloid with peaks at your dots and troughs exactly between them, or just put 1's at the dots and 0's in the blanks and hope the resulting approximation at the grid points is good enough to give a meaningful result (random overshoots are likely). Then you can use scipy.interpolate.RegularGridInterpolator for both inter- and extrapolation.
Or treat it as a harmonic function - in which case what you're seeking is a Fourier transform.
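A minimal sketch of the RegularGridInterpolator route, assuming the 1/0 values have already been arranged on the regular lattice (the values below are made up); fill_value=None makes it extrapolate instead of returning nan outside the lattice:

import numpy as np
from scipy.interpolate import RegularGridInterpolator

x = np.arange(25)                # 25 columns of circles
y = np.arange(10)                # 10 rows of circles
vals = np.ones((25, 10))         # 1 = dot present (made-up data)
vals[5, 3] = 0.0                 # 0 = blank

interp = RegularGridInterpolator((x, y), vals, method='linear',
                                 bounds_error=False, fill_value=None)

print(interp([[5.5, 3.5], [26.0, 4.0]]))   # the second point is extrapolated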
Another possible way is to go straight for a discrete solution rather than shoehorn continuous-math methods into your case: design a (probably entirely custom) algorithm that figures out the "shape" and "dimensions" of your grid of dots and then simply fills in the blanks, as sketched below. I'm not sure whether it's possible to plug that into scipy.interpolate's harness as a selectable algorithm alongside the built-in ones.
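A sketch of that discrete approach: estimate the lattice origin and spacing from the detected dot centres, snap each centre to integer (column, row) indices, and report the cells that have no dot. The detected points below are synthetic:

import numpy as np

# synthetic detected dot centres on a 25 x 10 lattice, with a few dots removed
rng = np.random.default_rng(3)
ii, jj = np.meshgrid(np.arange(25), np.arange(10))
full = np.column_stack([ii.ravel() * 33.0 + 5.0, jj.ravel() * 32.0 + 8.0])
detected = np.delete(full, [17, 80, 143], axis=0) + rng.normal(0, 1.5, size=(247, 2))

# estimate origin and spacing, snap to lattice indices, list the empty cells
origin = detected.min(axis=0)
spacing = (detected.max(axis=0) - origin) / np.array([24.0, 9.0])
idx = np.rint((detected - origin) / spacing).astype(int)
present = set(map(tuple, idx))
missing = [(c, r) for c in range(25) for r in range(10) if (c, r) not in present]
print(missing)   # lattice cells with no detected dot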
And last but not least: you didn't specify whether the "missing" points are points where the value is unknown or are an actual part of the data, i.e. incorrect data. If it's the latter, simple interpolation is not applicable at all, as it assumes that all the data are strictly correct. That would be a related but different problem: you can approximate the data, but you then have to somehow throw away the irregularities (treat them as higher-order terms beyond some point).