I have a cloud of data points (x,y) that I would like to interpolate and smooth.
Currently, I am using SciPy:
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import savgol_filter
spl = interp1d(Cloud[:,1], Cloud[:,0])  # interpolation
x = np.linspace(Cloud[:,1].min(), Cloud[:,1].max(), 1000)
smoothed = savgol_filter(spl(x), 21, 1)  # smoothing
This is working pretty well, except that I would like to give some weight to the data points passed to interp1d. Any suggestion for another function that handles this?
Basically, I thought I could just duplicate each point of the cloud according to its weight, but that is not very efficient, as it greatly increases the number of points to interpolate and slows down the algorithm.
The default interp1d uses linear interpolation, i.e., it simply computes a line between two points. A weighted interpolation does not make much sense mathematically in such a scenario: there is only one way in Euclidean space to draw a straight line between two points.
Depending on your goal, you can look into other methods of interpolation, e.g., B-splines. Then you can use scipy's scipy.interpolate.splrep and set the w argument:
w - Strictly positive rank-1 array of weights the same length as x and y. The weights are used in computing the weighted least-squares spline fit. If the errors in the y values have standard-deviation given by the vector d, then w should be 1/d. Default is ones(len(x)).
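A rough sketch of how that could look in place of the interp1d call, using made-up data in place of the original Cloud array; the weight values and the smoothing factor s below are arbitrary choices to play with:
import numpy as np
from scipy.interpolate import splrep, splev

# stand-in data for the original Cloud array
xdata = np.linspace(0.0, 10.0, 50)
ydata = np.sin(xdata) + 0.1 * np.random.randn(50)

# give some points more influence on the weighted least-squares fit
weights = np.ones_like(xdata)
weights[10:20] = 5.0

# s trades off closeness to the data against smoothness; tune as needed
tck = splrep(xdata, ydata, w=weights, s=len(xdata))
xnew = np.linspace(xdata.min(), xdata.max(), 1000)
ynew = splev(xnew, tck)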
Related
I am trying to interpolate a set of ordered pairs using SciPy's Lagrange interpolation (scipy.interpolate.lagrange); I have done this before without incident.
This time, however, I keep getting a "Division by zero" error and the interpolating polynomial comes out with infinite coefficients.
I am aware that data points must not be repeated due to the internal workings of Lagrange's method, and they are not repeated.
Here is my code and the offending ordered pair, in numpy vector format.
Code:
x = out["x"].round(decimals=3)
x = np.array(x)
y = out["y"].round(decimals=3)
y = np.array(y)
print(x)
print(y)
pol = lagrange(x,y)
print(pol)
Ordered pairs (x values, then y values):
[273.324 285.579 309.292 279.573 297.427 290.681 276.621 293.586 283.463
284.674 273.904 288.064 280.125 294.269 288.51 285.898 273.419 273.023
281.754 281.546 283.21 303.399 297.392 293.359 306.404 356.285 302.487
280.586 299.487 302.487]
[ 0. 5.414 6.202 0. 9.331 11.52 0. 10.495 5.439 4.709
0. 4.916 0. 10.508 6.736 5.25 0. 0. 6.53 4.305
5.124 6.753 10.175 10.545 5.98 9.147 11.137 0. 8.764 9.57 ]
Lots of thanks in advance.
Why Lagrange Interpolation did not work for you.
You have the value 302.487 twice in your array x, i.e. you did in fact repeat a data point.
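A quick way to verify this with the x values from the question:
import numpy as np

x = np.array([273.324, 285.579, 309.292, 279.573, 297.427, 290.681, 276.621,
              293.586, 283.463, 284.674, 273.904, 288.064, 280.125, 294.269,
              288.51, 285.898, 273.419, 273.023, 281.754, 281.546, 283.21,
              303.399, 297.392, 293.359, 306.404, 356.285, 302.487, 280.586,
              299.487, 302.487])
vals, counts = np.unique(x, return_counts=True)
print(vals[counts > 1])   # prints [302.487]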
Why Lagrange Interpolation is not what you want.
As Tim Roberts pointed out, Lagrange interpolation is really not made for this many points. The problem is that polynomials of high degree tend to overfit. Check out the following example from the Wikipedia article on overfitting.
Figure 2. Noisy (roughly linear) data is fitted to a linear function and a polynomial function. Although the polynomial function is a perfect fit, the linear function can be expected to generalize better: if the two functions were used to extrapolate beyond the fitted data, the linear function should make better predictions.
Alternative Regression
There are at least two valid alternatives. One of them is what is recommended in the Wikipedia article: if you know what type of function your data is roughly coming from, use regression to fit a function of that type to the data. In the case of the example above, that's a linear function. If you want to do that, check out scipy.optimize.curve_fit; a sketch follows.
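As a minimal sketch of that approach, assuming (purely for illustration) a linear model; the model function and data below are made up:
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    # assumed model: a straight line, chosen only for illustration
    return a * x + b

xdata = np.linspace(0, 10, 30)
ydata = 2.0 * xdata + 1.0 + np.random.normal(scale=0.5, size=xdata.size)

params, cov = curve_fit(model, xdata, ydata)
print(params)   # fitted a and b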
Alternative Spline Interpolation
Another alternative is spline interpolation. Again, from the Wikipedia article on spline interpolation:
Instead of fitting a single, high-degree polynomial to all of the values at once, spline interpolation fits low-degree polynomials to small subsets of the values, for example, fitting nine cubic polynomials between each of the pairs of ten points, instead of fitting a single degree-ten polynomial to all of them. Spline interpolation is often preferred over polynomial interpolation because the interpolation error can be made small even when using low-degree polynomials for the spline. Spline interpolation also avoids the problem of Runge's phenomenon, in which oscillation can occur between points when interpolating using high-degree polynomials.
There are just two little technical details that I want to point out. One: your points need to be ordered, so I did that for you. Two: scipy's UnivariateSpline has a smoothing parameter s that you need to choose. If you pick it small, the spline sticks to the data the way you're used to with Lagrange interpolation, but if you make it bigger, it becomes smoother and hopefully generalizes better. Below I picked two different values for you to look at, but you should probably play around with it yourself. I included a very small one so you can see it can do what you're used to from Lagrange interpolation, but I wouldn't recommend it. Also, you probably should use more data, preprocess it, etc., but that's not what the question was about.
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import UnivariateSpline

idx = np.argsort(x)
x = x[idx]
y = y[idx]

for s in [10, 60]:
    t = np.linspace(np.min(x), np.max(x), 10**4)
    f = UnivariateSpline(x, y, s=s)
    plt.scatter(x, y)
    plt.plot(t, f(t))
    plt.title(f'{s=}')
    plt.show()
I am having issues implementing a less-than-usual interpolation problem. I have some (x,y) data points scattered along some curve which a priori I don't know, and I want to reconstruct this curve as best I can, interpolating my points with minimum squared error. I thought of using scipy.interpolate.splrep for this purpose (but maybe there are better options you would advise to use). The additional difficulty in my case is that I want to constrain the spline curve to pass through some specific points of my original data. I assume that playing with knots and weights could do the trick, but I don't know how (I have been putting off learning spline interpolation theory beyond basic fitting procedures). Also, for some undisclosed reason, when I try to set up knots in my splrep I get the same error as in this post, which keeps complicating things.
The following is my sample code:
from __future__ import division
import numpy as np
import scipy.interpolate as spi
import matplotlib.pylab as plt
# Some surrogate sample data
f = lambda x : x**2 - x/2.
x = np.arange(0.,20.,0.1)
y = f(4*(x + np.random.normal(size=np.size(x))))
# I want to use spline interpolation with least-square fitting criterion, making sure though that the spline starts
# from the origin (or in general passes through a precise point of my dataset).
# In my case for example I would like the spline to originate from the point in x=0. So I attempted to include as first knot x=0...
# but it won't work, nor I am sure this is the right procedure...
fy = spi.splrep(x,y)
fy = spi.splrep(x,y,t=fy[0])
yy = spi.splev(x,fy)
plt.plot(x,y,'-',x,yy,'--')
plt.show()
Despite the fact that I am even passing knots computed from a first call to splrep, this gives me:
File "/usr/lib64/python2.7/site-packages/scipy/interpolate/fitpack.py", line 289, in splrep
res = _impl.splrep(x, y, w, xb, xe, k, task, s, t, full_output, per, quiet)
File "/usr/lib64/python2.7/site-packages/scipy/interpolate/_fitpack_impl.py", line 515, in splrep
raise _iermess[ier][1](_iermess[ier][0])
ValueError: Error on input data
You can use the weights argument of splrep: give the points you need fixed very large weights. This is a workaround for sure, so keep an eye on the fit quality and stability.
Setting high weights for specific points is indeed a working solution, as suggested by @ev-br. In addition, because there is no direct way to match derivatives at the extrema of the curve, the same rationale can be applied there as well. Say you want the derivatives at y[0] and y[-1] to match the derivatives of your data points; then you also add large weights for y[1] and y[-2], i.e.
weights = np.ones(len(x))
weights[[0,-1]] = 100 # Promote spline interpolant through first and last point
weights[[1,-2]] = 50 # Make spline interpolant derivative tend to derivatives at first/last point
fy = spi.splrep(x,y,w=weights,s=0.1)
yy = spi.splev(x,fy)
I have created a bspline using splprep as below from a set of points:
tck,uout = splprep([x,y],s=0.,k=2,per=False)
Now, I am trying to evaluate the derivative of a spline using:
dx,dy = splev(uout,tck,der=1)
I find that splev returns two lists for the derivative.
Given that the Spline is parametrized (say in u), does it return dx/du and dy/du ?
If not how to evaluate the derivative (dy/dx) properly ?
Yes, if der=1 the lists are the values of dx/du and dy/du at each point. The gradient is then dy/dx = (dy/du) / (dx/du).
I'm slightly concerned about the splprep call: s is optional, but if given it should be roughly of the same order as the number of points (larger means smoother). per is an integer flag, not a boolean. And cubic splines (k=3) are better behaved than quadratic ones. http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.splprep.html
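A small sketch of how the gradient could be computed from the parametric derivatives; the parabola below is made-up test data, chosen so there is a known slope (dy/dx = 2x) to compare against:
import numpy as np
from scipy.interpolate import splprep, splev

t = np.linspace(0, 1, 20)
x = t
y = t**2

tck, u = splprep([x, y], s=0., k=3, per=0)
dx_du, dy_du = splev(u, tck, der=1)
dydx = dy_du / dx_du        # chain rule: dy/dx = (dy/du) / (dx/du)
print(dydx[:5])             # should be close to 2*x[:5]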
I have done some work in Python, but I'm new to scipy. I'm trying to use the methods from the interpolate library to come up with a function that will approximate a set of data.
I've looked up some examples to get started, and could get the sample code below working in Python(x,y):
import numpy as np
from scipy.interpolate import interp1d, Rbf
import pylab as P
# show the plot (empty for now)
P.clf()
P.show()
# generate evenly spaced input data
original_data = np.linspace(0, 1, 10)
# random noise to be added to the data
noise = (np.random.random(10)*2 - 1) * 1e-1
# calculate f(x)=sin(2*PI*x)+noise
f_original_data = np.sin(2 * np.pi * original_data) + noise
# create interpolator
rbf_interp = Rbf(original_data, f_original_data, function='gaussian')
# Create new sample data (for input), calculate f(x)
#using different interpolation methods
new_sample_data = np.linspace(0, 1, 50)
rbf_new_sample_data = rbf_interp(new_sample_data)
# draw all results to compare
P.plot(original_data, f_original_data, 'o', ms=6, label='f_original_data')
P.plot(new_sample_data, rbf_new_sample_data, label='Rbf interp')
P.legend()
The plot is displayed as follows:
Now, is there any way to get a polynomial expression representing the interpolated function created by Rbf (i.e. the method created as rbf_interp)?
Or, if this is not possible with Rbf, any suggestions using a different interpolation method, another library, or even a different tool are also welcome.
The RBF uses whatever function you ask for; it is of course a global model, so yes, there is a function result, but it's true that you will probably not like it, since it is a sum over many Gaussians. You get:
rbf.nodes # the factors for each of the RBF (probably gaussians)
rbf.xi # the centers.
rbf.epsilon # the width of the gaussian, but remember that the Norm plays a role too
So with these things you can calculate the distances of your evaluation points from the centres rbf.xi, then plug those distances, together with the factors in rbf.nodes and rbf.epsilon, into the Gaussian (or whatever function you asked it to use). You can check the Python code of __call__ and _call_norm.
So you get something like sum(rbf.nodes[i] * gaussian(rbf.epsilon, abs(x - center)) for i, center in enumerate(rbf.xi[0])) as a rough half-code/half-formula; the RBF's function is written in the documentation, but you can also check the Python code.
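A rough sketch of that reconstruction, based on my reading of the Rbf source (attribute names such as rbf.nodes, rbf.xi, rbf.epsilon and the exact Gaussian formula may differ between SciPy versions), checked against the interpolator itself:
import numpy as np
from scipy.interpolate import Rbf

x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x)
rbf = Rbf(x, y, function='gaussian')

def manual_eval(xnew):
    # distances from the new points to the RBF centres stored in rbf.xi
    r = np.abs(xnew[:, None] - rbf.xi[0][None, :])
    # scipy's 'gaussian' basis is exp(-(r/epsilon)**2)
    phi = np.exp(-(r / rbf.epsilon)**2)
    # weighted sum of the basis functions, the weights being rbf.nodes
    return phi.dot(rbf.nodes)

xnew = np.linspace(0, 1, 50)
print(np.allclose(manual_eval(xnew), rbf(xnew)))   # should print True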
The answer is no, there is no "nice" way to write down the formula, or at least not in a short way. Some types of interpolations, like RBF and Loess, do not directly search for a parametric mathematical function to fit to the data and instead they calculate the value of each new data point separately as a function of the other points.
These interpolations are guaranteed to always give a good fit for your data (such as in your case), and the reason for this is that to describe them you need a very large number of parameters (basically all your data points). Think of it this way: you could interpolate linearly by connecting consecutive data points with straight lines. You could fit any data this way and then describe the function in a mathematical form, but it would take a large number of parameters (at least as many as the number of points). Actually what you are doing right now is pretty much a smoothed version of that.
If you want the formula to be short, this means you want to describe the data with a mathematical function that does not have many parameters (specifically the number of parameters should be much lower than the number of data points). Such examples are logistic functions, polynomial functions and even the sine function (that you used to generate the data). Obviously, if you know which function generated the data that will be the function you want to fit.
RBF likely stands for Radial Basis Function. I wouldn't be surprised if scipy.interpolate.Rbf was the function you're looking for.
However, I doubt you'll be able to find a polynomial expression to represent your result.
If you want to try different interpolation methods, check the corresponding SciPy documentation, which gives links to RBF, splines, and more.
I don't think SciPy's RBF will give you the actual function. But one thing that you could do is sample the function that SciPy's RBF gave you (i.e. 100 points), then use Lagrange interpolation with those points. This will generate a polynomial function for you. Here is an example of how this would look. If you do not want to use Lagrange interpolation, you can also use Newton's divided difference method to generate a polynomial function.
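A rough sketch of that idea; note that only a handful of points are sampled here, because high-degree Lagrange polynomials are numerically unstable (the data and the number of sample points are made up):
import numpy as np
from scipy.interpolate import Rbf, lagrange

# made-up data in the spirit of the question
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + (np.random.random(10) * 2 - 1) * 1e-1
rbf_interp = Rbf(x, y, function='gaussian')

# sample the fitted RBF at a few points and fit a Lagrange polynomial to them
xs = np.linspace(0, 1, 7)
poly = lagrange(xs, rbf_interp(xs))
print(poly)    # an explicit degree-6 polynomial expression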
My answer is based on NumPy only:
import matplotlib.pyplot as plt
import numpy as np
x_data = [324, 531, 806, 1152, 1576, 2081, 2672, 3285, 3979, 4736]
y_data = [20, 25, 30, 35, 40, 45, 50, 55, 60, 65]
x = np.array(x_data)
y = np.array(y_data)
model = np.poly1d(np.polyfit(x, y, 2))
ynew = model(x)
plt.plot(x, y, 'o', x, ynew, '-')
plt.ylabel(str(model).strip())
plt.show()
I want to numerically compute the FFT of a numpy array Y. For testing, I'm using the Gaussian function Y = exp(-x^2). Its (symbolic) Fourier transform is Y' = constant * exp(-k^2/4).
import numpy
X = numpy.arange(-100,100)
Y = numpy.exp(-(X/5.0)**2)
The naive approach fails:
from numpy.fft import *
from matplotlib import pyplot
def plotReIm(x, y):
    f = pyplot.figure()
    ax = f.add_subplot(111)
    ax.plot(x, numpy.real(y), 'b', label='R()')
    ax.plot(x, numpy.imag(y), 'r:', label='I()')
    ax.plot(x, numpy.abs(y), 'k--', label='abs()')
    ax.legend()
Y_k = fftshift(fft(Y))
k = fftshift(fftfreq(len(Y)))
plotReIm(k,Y_k)
real(Y_k) jumps between positive and negative values, which corresponds to a jumping phase that is not present in the symbolic result. This is certainly not desirable. (The result is technically correct in the sense that abs(Y_k) gives the amplitudes as expected and ifft(Y_k) recovers Y.)
Here, the function fftshift() renders the array k monotonically increasing and changes Y_k accordingly. The pairs zip(k, Y_k) are not changed by applying this operation to both vectors.
This change appears to fix the issue:
Y_k = fftshift(fft(ifftshift(Y)))
k = fftshift(fftfreq(len(Y)))
plotReIm(k,Y_k)
Is this the correct way to employ the fft() function if monotonic Y and Y_k are required?
The reverse operation of the above is:
Yx = fftshift(ifft(ifftshift(Y_k)))
x = fftshift(fftfreq(len(Y_k), k[1] - k[0]))
plotReIm(x,Yx)
For this case, the documentation clearly states that Y_k must be sorted compatible with the output of fft() and fftfreq(), which we can achieve by applying ifftshift().
Those questions have been bothering me for a long time: Are the output and input arrays of both fft() and ifft() always such that a[0] should contain the zero frequency term, a[1:n/2+1] should contain the positive-frequency terms, and a[n/2+1:] should contain the negative-frequency terms, in order of decreasingly negative frequency [numpy reference], where 'frequency' is the independent variable?
The answer on Fourier Transform of a Gaussian is not a Gaussian does not answer my question.
The FFT can be thought of as producing a set of vectors, each with an amplitude and phase. The fft_shift operation changes the reference point for a phase angle of zero from the edge of the FFT aperture to the center of the original input data vector.
The phase (and thus the real component of the complex vector) of the result is sometimes less "jumpy" when this is done, especially if some input function is windowed such that it is discontinuous around the edges of the FFT aperture. Or if the input is symmetric around the center of the FFT aperture, the phase of the FFT result will always be zero after an fft_shift.
An fft_shift can be done by a vector rotate of N/2, or by simply flipping alternating sign bits in the FFT result, which may be more CPU dcache friendly.
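A quick sketch illustrating that equivalence for an even-length input (for odd lengths the rotation is not exactly N/2, so the sign-flip trick does not apply as cleanly); the test signal is arbitrary:
import numpy as np

N = 200                            # even length, so the rotation is exactly N/2
x = np.random.randn(N)             # arbitrary test signal

# rotating the input by N/2 (ifftshift) is equivalent to flipping the sign
# of every other bin of the FFT result
lhs = np.fft.fft(np.fft.ifftshift(x))
rhs = np.fft.fft(x) * (-1.0) ** np.arange(N)
print(np.allclose(lhs, rhs))       # True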
The definition for the output of fft (and ifft) is here: http://docs.scipy.org/doc/numpy/reference/routines.fft.html#background-information
This is what the routines compute, no more and no less. Observe that the discrete Fourier transform is rather different from the continuous Fourier transform. For a densely sampled function there is a relation between the two, but the relation also involves phase factors and scaling in addition to fftshift. This is the cause of the oscillations you see in your plot. You can work out the necessary phase factor yourself from the mathematical formula for the DFT given at that link.
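A small sketch of that relation for the Gaussian from the question, under the convention F(k) = integral f(x) exp(-2*pi*i*k*x) dx, which matches the frequency units of fftfreq; the scaling factor dx approximates the integral and ifftshift handles the phase reference (the tolerance is an arbitrary choice):
import numpy as np

x = np.arange(-100, 100)
dx = x[1] - x[0]
Y = np.exp(-(x / 5.0)**2)

k = np.fft.fftshift(np.fft.fftfreq(len(x), d=dx))
# ifftshift removes the linear phase ramp due to the grid not starting at x=0;
# multiplying by dx turns the DFT sum into an approximation of the integral
Y_k = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(Y))) * dx

# analytic transform of exp(-(x/a)^2) under this convention: a*sqrt(pi)*exp(-(pi*a*k)^2)
a = 5.0
Y_exact = a * np.sqrt(np.pi) * np.exp(-(np.pi * a * k)**2)
print(np.allclose(Y_k.real, Y_exact, atol=1e-6))   # should print True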