How to derive the equation from NumPy's polyfit? (Python)

Given arrays of x and y values, the following code calculates a polynomial regression curve for the data points.
# calculate polynomial
z = np.polyfit(x, y, 5)
f = np.poly1d(z)
# calculate new x's and y's
x_new = np.linspace(x[0], x[-1], 50)
y_new = f(x_new)
plt.plot(x,y,'o', x_new, y_new)
plt.xlim([x[0]-1, x[-1] + 1 ])
plt.show()
How can I use the above to derive the actual equation for this curve?

If you want to show the equation, you can use sympy to output latex:
from sympy import S, symbols, printing
from matplotlib import pyplot as plt
import numpy as np
x=np.linspace(0,1,100)
y=np.sin(2 * np.pi * x)
p = np.polyfit(x, y, 5)
f = np.poly1d(p)
# calculate new x's and y's
x_new = np.linspace(x[0], x[-1], 50)
y_new = f(x_new)
x = symbols("x")
poly = sum(S("{:6.2f}".format(v))*x**i for i, v in enumerate(p[::-1]))
eq_latex = printing.latex(poly)
plt.plot(x_new, y_new, label="${}$".format(eq_latex))
plt.legend(fontsize="small")
plt.show()
The result is a plot of the fitted curve with its LaTeX equation shown in the legend.
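If you prefer a plain-text label instead of LaTeX, a similar string can be assembled directly from the coefficients; a small sketch reusing p, x_new and y_new from above:
# np.polyfit returns coefficients with the highest power first
label = " + ".join("{:.2f}*x**{}".format(c, len(p) - 1 - i) for i, c in enumerate(p))
plt.plot(x_new, y_new, label=label)
plt.legend(fontsize="small")
plt.show()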

Construct a simple example:
In [94]: x=np.linspace(0,1,100)
In [95]: y=2*x**3-3*x**2+x-1
In [96]: z=np.polyfit(x,y,3)
In [97]: z
Out[97]: array([ 2., -3., 1., -1.])
The z coefficients correspond to the [2,-3,1,-1] I used to construct y.
In [98]: f=np.poly1d(z)
In [99]: f
Out[99]: poly1d([ 2., -3., 1., -1.])
The str (or print) representation of f displays the polynomial equation, but it is the z coefficients that actually define it.
In [100]: print(f)
3 2
2 x - 3 x + 1 x - 1
In [101]: str(f)
Out[101]: ' 3 2\n2 x - 3 x + 1 x - 1'
What else do you mean by 'actual equation'?
polyval will evaluate f at a specific set of x. Thus to recreate y, use polyval(f,x):
In [107]: np.allclose(np.polyval(f,x),y)
Out[107]: True
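In other words, the equation is simply y = z[0]*x**3 + z[1]*x**2 + z[2]*x + z[3]. A quick check, as a sketch using the arrays defined above:
# evaluate the expanded equation by hand and compare with the original y
y_manual = z[0]*x**3 + z[1]*x**2 + z[2]*x + z[3]
print(np.allclose(y_manual, y))   # expected: True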

If you only want to see the equation on screen to get an impression of it, just add the line below:
print(f)
Here are more details:
polyfit returns a vector of coefficients of the polynomial fit. poly1d takes this vector and makes a polynomial function out of it.
For example (from the NumPy documentation for poly1d):
>>> p = np.poly1d([1, 2, 3])
>>> print(np.poly1d(p))
2
1 x + 2 x + 3

Related

How to fit a rotated and translated hyperbola to a set of x,y points in Python

I want to fit a set of data points in the xy plane to the general case of a rotated and translated hyperbola to back out the coefficients of the general equation of a conic.
I've tried the methodology proposed here, but so far I cannot make it work.
When fitting to a set of points known to be a hyperbola, I get quite different outputs.
What am I doing wrong in the code below?
Or is there any other way to solve this problem?
import numpy as np
from sympy import plot_implicit, Eq
from sympy.abc import x, y

def fit_hyperbola(x, y):
    # convert inputs to arrays so the element-wise powers and products work
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    D1 = np.vstack([x**2, x*y, y**2]).T
    D2 = np.vstack([x, y, np.ones(len(x))]).T
    S1 = D1.T @ D1
    S2 = D1.T @ D2
    S3 = D2.T @ D2
    # define the constraint matrix and its inverse
    C = np.array(((0, 0, -2), (0, 1, 0), (-2, 0, 0)), dtype=float)
    Ci = np.linalg.inv(C)
    # set up and solve the generalized eigenvector problem
    T = np.linalg.inv(S3) @ S2.T
    S = Ci @ (S1 - S2 @ T)
    eigval, eigvec = np.linalg.eig(S)
    # evaluate and sort the resulting constraint values
    cond = eigvec[1]**2 - 4*eigvec[0]*eigvec[2]
    # [condVals index] = sort(cond)
    idx = np.argsort(cond)
    condVals = cond[idx]
    possibleHs = condVals[1:] + condVals[0]
    minDiffAt = np.argmin(abs(possibleHs))
    # minDiffVal = possibleHs[minDiffAt]
    alpha1 = eigvec[:, idx[minDiffAt + 1]]
    alpha2 = T @ alpha1
    return np.concatenate((alpha1, alpha2)).ravel()
if __name__ == '__main__':
    # known hyperbola coefficients
    coeffs = [1., 6., -2., 3., 0., 0.]
    # hyperbola points
    x_ = [1.56011303e+00, 1.38439984e+00, 1.22595618e+00, 1.08313085e+00,
9.54435408e-01, 8.38528681e-01, 7.34202759e-01, 6.40370424e-01,
5.56053814e-01, 4.80374235e-01, 4.12543002e-01, 3.51853222e-01,
2.97672424e-01, 2.49435970e-01, 2.06641170e-01, 1.68842044e-01,
1.35644673e-01, 1.06703097e-01, 8.17157025e-02, 6.04220884e-02,
4.26003457e-02, 2.80647476e-02, 1.66638132e-02, 8.27872926e-03,
2.82211172e-03, 2.37095181e-04, 4.96740239e-04, 3.60375275e-03,
9.59051203e-03, 1.85194083e-02, 3.04834928e-02, 4.56074477e-02,
6.40488853e-02, 8.59999904e-02, 1.11689524e-01, 1.41385205e-01,
1.75396504e-01, 2.14077865e-01, 2.57832401e-01, 3.07116093e-01,
3.62442545e-01, 4.24388335e-01, 4.93599021e-01, 5.70795874e-01,
6.56783391e-01, 7.52457678e-01, 8.58815793e-01, 9.76966133e-01,
1.10813998e+00, 1.25370436e+00]
    y_ = [-0.66541515, -0.6339625 , -0.60485332, -0.57778425, -0.5524732 ,
-0.52865638, -0.50608561, -0.48452564, -0.46375182, -0.44354763,
-0.42370253, -0.4040097 , -0.38426392, -0.3642594 , -0.34378769,
-0.32263542, -0.30058217, -0.27739811, -0.25284163, -0.22665682,
-0.19857079, -0.16829086, -0.13550147, -0.0998609 , -0.06099773,
-0.01850695, 0.02805425, 0.07917109, 0.13537629, 0.19725559,
0.26545384, 0.34068177, 0.42372336, 0.51544401, 0.61679957,
0.72884632, 0.85275192, 0.98980766, 1.14144182, 1.30923466,
1.49493479, 1.70047747, 1.92800474, 2.17988774, 2.45875143,
2.76750196, 3.10935692, 3.48787892, 3.90701266, 4.3711261 ]
    plot_implicit(Eq(coeffs[0]*x**2 + coeffs[1]*x*y + coeffs[2]*y**2 + coeffs[3]*x + coeffs[4]*y, -coeffs[5]))
    coeffs_fit = fit_hyperbola(x_, y_)
    plot_implicit(Eq(coeffs_fit[0]*x**2 + coeffs_fit[1]*x*y + coeffs_fit[2]*y**2 + coeffs_fit[3]*x + coeffs_fit[4]*y, -coeffs_fit[5]))
The general equation of a hyperbola is defined by 5 independent coefficients (not 6). If the model equation includes dependent coefficients (which is the case with 6 coefficients), trouble may occur in the numerical regression.
That is why the equation A * x * x + B * x * y + C * y * y + D * x + F * y = 1 is used in the calculation below. The fitting is very good.
One can then go back to the standard form a * x * x + 2 * b * x * y + c * y * y + 2 * d * x + 2 * f * y + g = 0 by setting a value for g (for example g = -1).
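A minimal least-squares sketch of that 5-coefficient fit (the function name fit_conic_5 is illustrative and not part of the original answer):
import numpy as np

def fit_conic_5(x, y):
    # solve A*x^2 + B*x*y + C*y^2 + D*x + F*y = 1 in the least-squares sense
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    M = np.column_stack([x**2, x*y, y**2, x, y])
    coeffs, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)
    return coeffs   # [A, B, C, D, F]; the constant term corresponds to g = -1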
Formulas for the coordinates of the center, the equations of the asymptotes, and the equations of the axes can be found in the references below:
https://mathworld.wolfram.com/ConicSection.html
https://en.wikipedia.org/wiki/Conic_section
https://en.wikipedia.org/wiki/Hyperbola

Parabolic fit with fixed peak

I have a set of data and want to put a parabolic fit over it. This already works with the polyfit function from numpy like this:
fit = np.polyfit(X, y, 2)
formula = np.poly1d(fit)
Now I want the parabola to have its peak at a fixed x value, while the fit is still carried out as well as possible under this constraint. Is there a way to accomplish that?
From my data I know that the parabola will always be open downwards.
I think this is quite a difficult problem, since the x coordinate of the peak of a second-order polynomial (ax^2 + bx + c) always lies at x = -b/(2a).
One thing you could do is drop the b term and shift x by the desired peak value when fitting the polynomial, as in the code below. Note that I used scipy.optimize.curve_fit to fit the custom function func.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
# generating a parabola with noise
np.random.seed(42)
x = np.linspace(-10, 10, 100)
y = 10 -(x-2)**2 + np.random.normal(0, 5, x.shape)
# function to fit
def func(x, a, c):
    return a*x**2 + c
# desired x peak value
x_peak = 2
popt, pcov = curve_fit(func, x - x_peak, y)
y_fit = func(x - x_peak, *popt)
# plotting
plt.plot(x, y, 'k.')
plt.plot(x, y_fit)
plt.axvline(x_peak)
plt.show()
The resulting plot shows the noisy data, the constrained fit, and a vertical line at the fixed peak.
Fixing a point on your parabola simplifies the problem, since you can rewrite your equation slightly in terms of a constant now:
y = A(x - B)**2 + C
Given the coefficients a, b, c in your original unconstrained fit, you have the relationships
a = A
b = -2AB
c = AB**2 + C
The only difference is that, since B is a constant rather than a fitted parameter, you need to set up the least-squares problem yourself. Given arrays x, y and the constant B, the problem looks like this:
m = np.stack(((x - B)**2, np.ones_like(x)), axis=-1)
(A, C), *_ = np.linalg.lstsq(m, y, rcond=None)
You can then recover the usual coefficients from the formulas for a, b, c above.
Here is a complete example, just like the one in the other answer:
B = 2
np.random.seed(42)
x = np.linspace(-10, 10, 100)
y = 10 -(x - B)**2 + np.random.normal(0, 5, x.shape)
m = np.stack(((x - B)**2, np.ones_like(x)), axis=-1)
(A, C), *_ = np.linalg.lstsq(m, y, rcond=None)
a = A
b = -2 * A * B
c = A * B**2 + C
y_fit = a * x**2 + b * x + c
You can drop a, b, c entirely and do
y_fit = A * (x - B)**2 + C
The result will be identical.
plt.plot(x, y, 'k.')
plt.plot(x, y_fit)
Without the condition on the location of the peak, the function to be fitted would be:
y = a x^2 + b x + c
With the condition that the peak is located at x = p, with p given:
-b/(2a) = p
b = -2 a p
y = a x^2 - 2 a p x + c
y = a (x^2 - 2 p x) + c
Knowing p, make the change of variable:
X = x^2 - 2 p x
So, from the data (x, y) one first computes the new data (X, y).
Then a and c are obtained by linear regression:
y = a X + c
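A minimal sketch of that change of variable, reusing the noisy parabola from the first answer (peak fixed at p = 2):
import numpy as np

np.random.seed(42)
x = np.linspace(-10, 10, 100)
y = 10 - (x - 2)**2 + np.random.normal(0, 5, x.shape)

p = 2                        # known peak location
X = x**2 - 2 * p * x         # change of variable
a, c = np.polyfit(X, y, 1)   # linear regression y = a*X + c
y_fit = a * X + c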

numpy.polyfit vs numpy.polynomial.polynomial.polyfit

Why do numpy.polyfit and numpy.polynomial.polynomial.polyfit
produce different plots in the test below?
import numpy as np
from numpy.polynomial.polynomial import polyfit
import matplotlib.pyplot as plt
x = np.linspace(0, 10, 50)
y = 5 * x + 10 + (np.random.random(len(x)) - 0.5) * 5
plt.scatter(x, y,marker='.', label='Data for regression')
plt.plot(x, np.poly1d(np.polyfit(x, y, 1))(x), label='numpy.polyfit')
plt.plot(x, np.poly1d(polyfit(x, y, 1))(x), label='polynomial.polyfit')
plt.legend()
plt.show()
At first glance, the documentation seems to indicate they should give the same result -
numpy.polyfit(x, y, deg, rcond=None, full=False, w=None, cov=False)
Least squares polynomial fit.
Fit a polynomial p(x) = p[0] * x**deg + ... + p[deg] of degree deg to points (x, y). Returns a vector of coefficients p that minimises the squared error in the order deg, deg-1, … 0.
and
numpy.polynomial.polynomial.polyfit(x, y, deg, rcond=None, full=False, w=None)
Least-squares fit of a polynomial to data.
Return the coefficients of a polynomial of degree deg that is the least squares fit to the data values y given at points x. If y is 1-D the returned coefficients will also be 1-D. If y is 2-D multiple fits are done, one for each column of y, and the resulting coefficients are stored in the corresponding columns of a 2-D return. The fitted polynomial(s) are in the form
p(x) = c0 + c1 * x + ... + cn * x**n
But the difference is in the order of coefficients returned from the two methods, at least for the use case in question.
numpy.polyfit returns the coefficients in descending order of degree, according to the generation equation
p(x) = cn * x**n + c(n-1) * x**(n-1) + ... + c1 * x + c0
numpy.polynomial.polynomial.polyfit returns the coefficients in ascending order of degree, according to the generation equation
p(x) = c0 + c1 * x + ... + c(n-1) * x**(n-1) + cn * x**n
Though mathematically identical, those two conventions are not the same in ndarray representation. This can be obscured by the use of different notations in the documentation. For demonstration, consider the following:
import numpy as np
x = np.linspace(0, 10, 50)
y = x**2 + 5 * x + 10
print(np.polyfit(x, y, 2))
print(np.polynomial.polynomial.polyfit(x, y, 2))
[ 1. 5. 10.]
[10. 5. 1.]
Both methods get the same result, but in opposite order, the former being what np.poly1d() expects,
print(np.poly1d(np.polyfit(x, y, 2)))
print(np.poly1d(np.polynomial.polynomial.polyfit(x, y, 2)))
2
1 x + 5 x + 10
2
10 x + 5 x + 1
and the latter being what the np.polynomial.polynomial.Polynomial() constructor expects:
print(np.polynomial.polynomial.Polynomial(np.polynomial.polynomial.polyfit(x, y, 2)))
print(np.polynomial.polynomial.Polynomial(np.polyfit(x, y, 2)))
poly([10. 5. 1.]) # 10 + 5 * x + 1 * x**2
poly([ 1. 5. 10.]) # 1 + 5 * x + 10 * x**2
Flipping the result from np.polynomial.polynomial.polyfit before passing it to poly1d(), or using np.polynomial.polynomial.Polynomial instead, will produce the expected result:
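For example, a quick sketch reusing x and y from the demonstration above:
c_asc = np.polynomial.polynomial.polyfit(x, y, 2)   # ascending order [c0, c1, c2]
print(np.poly1d(c_asc[::-1]))                       # reversed for np.poly1d
print(np.polynomial.polynomial.Polynomial(c_asc))   # ascending order, used as-is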

Equivalent of `polyfit` for a 2D polynomial in Python

I'd like to find a least-squares solution for the a coefficients in
z = (a0 + a1*x + a2*y + a3*x**2 + a4*x**2*y + a5*x**2*y**2 + a6*y**2 +
a7*x*y**2 + a8*x*y)
given arrays x, y, and z of length 20. Basically I'm looking for the equivalent of numpy.polyfit but for a 2D polynomial.
This question is similar, but the solution is provided via MATLAB.
Here is an example showing how you can use numpy.linalg.lstsq for this task:
import numpy as np
x = np.linspace(0, 1, 20)
y = np.linspace(0, 1, 20)
X, Y = np.meshgrid(x, y, copy=False)
Z = X**2 + Y**2 + np.random.rand(*X.shape)*0.01
X = X.flatten()
Y = Y.flatten()
A = np.array([X*0+1, X, Y, X**2, X**2*Y, X**2*Y**2, Y**2, X*Y**2, X*Y]).T
B = Z.flatten()
coeff, r, rank, s = np.linalg.lstsq(A, B, rcond=None)
The fitted coefficients coeff are:
array([ 0.00423365, 0.00224748, 0.00193344, 0.9982576 , -0.00594063,
0.00834339, 0.99803901, -0.00536561, 0.00286598])
Note that coeff[3] and coeff[6] correspond to X**2 and Y**2 respectively, and they are close to 1 because the example data was created with Z = X**2 + Y**2 + small_random_component.
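The quality of the fit can also be checked by reconstructing Z from the same design matrix; a quick sketch:
Z_fit = A.dot(coeff)              # fitted values at the sample points
print(np.abs(Z_fit - B).max())    # maximum absolute residual, small for this data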
Based on the answers from #Saullo and #Francisco I have made a function which I have found helpful:
def polyfit2d(x, y, z, kx=3, ky=3, order=None):
    '''
    Two dimensional polynomial fitting by least squares.
    Fits the functional form f(x,y) = z.

    Notes
    -----
    Resultant fit can be plotted with:
    np.polynomial.polynomial.polygrid2d(x, y, soln.reshape((kx+1, ky+1)))

    Parameters
    ----------
    x, y: array-like, 1d
        x and y coordinates.
    z: np.ndarray, 2d
        Surface to fit.
    kx, ky: int, default is 3
        Polynomial order in x and y, respectively.
    order: int or None, default is None
        If None, all coefficients up to maximum kx, ky, i.e. up to and including x^kx*y^ky, are considered.
        If int, coefficients up to a maximum of kx+ky <= order are considered.

    Returns
    -------
    Return parameters from np.linalg.lstsq.

    soln: np.ndarray
        Array of polynomial coefficients.
    residuals: np.ndarray
    rank: int
    s: np.ndarray
    '''
    # grid coords
    x, y = np.meshgrid(x, y)
    # coefficient array, up to x^kx, y^ky
    coeffs = np.ones((kx+1, ky+1))
    # solve array
    a = np.zeros((coeffs.size, x.size))
    # for each coefficient produce array x^i, y^j
    for index, (j, i) in enumerate(np.ndindex(coeffs.shape)):
        # do not include powers greater than order
        if order is not None and i + j > order:
            arr = np.zeros_like(x)
        else:
            arr = coeffs[i, j] * x**i * y**j
        a[index] = arr.ravel()
    # do leastsq fitting and return leastsq result
    return np.linalg.lstsq(a.T, np.ravel(z), rcond=None)
And the resultant fit can be visualised with:
fitted_surf = np.polynomial.polynomial.polyval2d(x, y, soln.reshape((kx+1,ky+1)))
plt.matshow(fitted_surf)
Excellent answer by Saullo Castro. Just to add, here is the code to reconstruct the function from the least-squares solution for the a coefficients:
def poly2Dreco(X, Y, c):
    return (c[0] + X*c[1] + Y*c[2] + X**2*c[3] + X**2*Y*c[4] + X**2*Y**2*c[5] +
            Y**2*c[6] + X*Y**2*c[7] + X*Y*c[8])
You can also use scikit-learn for this.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
x = np.linspace(0, 1, 20)
y = np.linspace(0, 1, 20)
X, Y = np.meshgrid(x, y, copy=False)
X = X.flatten()
Y = Y.flatten()
# Generate noisy data
np.random.seed(0)
Z = X**2 + Y**2 + np.random.randn(*X.shape)*0.01
# Process 2D inputs
poly = PolynomialFeatures(degree=2)
input_pts = np.stack([X, Y]).T
assert(input_pts.shape == (400, 2))
in_features = poly.fit_transform(input_pts)
# Linear regression
model = LinearRegression()
model.fit(in_features, Z)
# Display coefficients
print(dict(zip(poly.get_feature_names_out(), model.coef_.round(4))))
# Check fit
print(f"R-squared: {model.score(poly.transform(input_pts), Z):.3f}")
# Make predictions
Z_predicted = model.predict(poly.transform(input_pts))
Out:
{'1': 0.0, 'x0': 0.003, 'x1': -0.0074, 'x0^2': 0.9974, 'x0 x1': 0.0047, 'x1^2': 1.0014}
R-squared: 1.000
Note that if kx != ky the code will fail because the j and i indices are inverted in the loop.
You get (j,i) from enumerate(np.ndindex(coeffs.shape)), but then you address elements in coeffs as coeffs[i,j]. Since the shape of the coefficient matrix is given by the maximum polynomial order that you are asking to use, the matrix will be rectangular if kx != ky and you will exceed one of its dimensions.
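One possible fix, as a sketch (not part of the original answer), is to unpack the indices in the same order they are used, so that i runs over the powers of x (0..kx) and j over the powers of y (0..ky):
# inside polyfit2d, replacing the original loop
for index, (i, j) in enumerate(np.ndindex(coeffs.shape)):
    # coeffs has shape (kx+1, ky+1): i indexes the x power, j the y power
    if order is not None and i + j > order:
        arr = np.zeros_like(x)
    else:
        arr = coeffs[i, j] * x**i * y**j
    a[index] = arr.ravel()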

Line fitting below points

I have a set of x, y points and I'd like to find the line of best fit such that the line is below all points using SciPy. I'm trying to use leastsq for this, but I'm unsure how to adjust the line to be below all points instead of the line of best fit. The coefficients for the line of best fit can be produced via:
import numpy as np
from scipy.optimize import leastsq

def linreg(x, y):
    fit = lambda params, x: params[0] * x - params[1]
    err = lambda p, x, y: (y - fit(p, x))**2
    # initial slope/intercept
    init_p = np.array((1, 0))
    p, _ = leastsq(err, init_p.copy(), args=(x, y))
    return p

xs = np.array([1, 2, 3, 4, 5])
ys = np.array([10, 20, 30, 40, 50])
print(linreg(xs, ys))
The output is the coefficients for the line of best fit:
array([ 9.99999997e+00, -1.68071668e-15])
How can I get the coefficients of the line of best fit that is below all points?
A possible algorithm is as follows:
Move the axes to have all the data on the positive half of the x axis.
If the fit is of the form y = a * x + b, then for a given b the best fit for a will be the minimum of the slopes joining the point (0, b) with each of the (x, y) points.
You can then calculate a fit error, which is a function of only b, and use scipy.optimize.minimize to find the best value for b.
All that's left is computing a for that b and calculating b for the original position of the axes.
The following does that most of the time, except when the minimization fails with some mysterious error:
from __future__ import division
import numpy as np
import scipy.optimize
import matplotlib.pyplot as plt
def fit_below(x, y):
    idx = np.argsort(x)
    x = x[idx]
    y = y[idx]
    x0, y0 = x[0] - 1, y[0]
    x -= x0
    y -= y0
    def error_function_2(b, x, y):
        a = np.min((y - b) / x)
        return np.sum((y - a * x - b)**2)
    b = scipy.optimize.minimize(error_function_2, [0], args=(x, y)).x[0]
    a = np.min((y - b) / x)
    return a, b - a * x0 + y0
x = np.arange(10).astype(float)
y = x * 2 + 3 + 3 * np.random.rand(len(x))
a, b = fit_below(x, y)
plt.plot(x, y, 'o')
plt.plot(x, a*x + b, '-')
plt.show()
And, as TheodrosZelleke wisely predicted, the fitted line passes through two points that lie on the convex hull of the data.
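As an alternative (not from the original answers), the same "line below all points" requirement can be written as a small linear program and solved with scipy.optimize.linprog; a sketch:
import numpy as np
from scipy.optimize import linprog

def fit_line_below_lp(x, y):
    # minimize the total vertical gap sum(y - (a*x + b))
    # subject to a*x_i + b <= y_i for every point
    c = [-np.sum(x), -len(x)]                      # constant sum(y) dropped from the objective
    A_ub = np.column_stack([x, np.ones_like(x)])
    res = linprog(c, A_ub=A_ub, b_ub=y, bounds=[(None, None), (None, None)])
    return res.x                                   # slope a, intercept b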
