Finding a better scipy curve_fit! -- exponential function? - python
I am trying to fit data using scipy curve_fit. I believe that negative exponential is probably best, as this works well for some of my other (similarly generated) data -- but I am achieving sub-optimal results.
I've normalized the dataset to avoid supplying initial values and am applying an exponential function as follows:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
data = np.array([[0.32,0.38],[0.61,0.32],[0.28,0.50],[0.60,0.32],[0.26,0.45],[0.19,0.57],[0.61,0.32],[0.59,0.29],[0.39,0.42],[0.61,0.32],[0.20,0.46],[0.24,0.45],[0.59,0.29],[0.39,0.42],[0.56,0.39],[0.32,0.43],[0.38,0.44],[0.54,0.34],[0.61,0.32],[0.20,0.46],[0.28,0.51],[0.54,0.34],[0.60,0.32],[0.30,0.42],[0.28,0.43],[0.14,0.57],[0.24,0.54],[0.39,0.42],[0.20,0.56],[0.56,0.39],[0.24,0.54],[0.33,0.37],[0.33,0.51],[0.20,0.46],[0.32,0.39],[0.20,0.56],[0.19,0.57],[0.32,0.39],[0.30,0.42],[0.33,0.50],[0.54,0.34],[0.28,0.50],[0.32,0.39],[0.28,0.43],[0.27,0.42],[0.56,0.39],[0.19,0.57],[0.19,0.57],[0.60,0.32],[0.44,0.41],[0.27,0.42],[0.19,0.57],[0.24,0.38],[0.24,0.54],[0.61,0.32],[0.39,0.40],[0.30,0.41],[0.19,0.57],[0.14,0.57],[0.32,0.43],[0.14,0.57],[0.59,0.29],[0.44,0.41],[0.30,0.41],[0.32,0.38],[0.61,0.32],[0.20,0.46],[0.20,0.56],[0.30,0.41],[0.33,0.36],[0.14,0.57],[0.19,0.57],[0.46,0.38],[0.36,0.44],[0.61,0.32],[0.31,0.48],[0.60,0.32],[0.39,0.40],[0.14,0.57],[0.44,0.41],[0.24,0.49],[0.41,0.40],[0.19,0.57],[0.19,0.57],[0.31,0.49],[0.31,0.43],[0.35,0.35],[0.20,0.46],[0.54,0.34],[0.20,0.56],[0.39,0.44],[0.33,0.36],[0.20,0.56],[0.30,0.41],[0.56,0.39],[0.31,0.48],[0.28,0.51],[0.14,0.57],[0.61,0.32],[0.30,0.50],[0.20,0.56],[0.19,0.57],[0.59,0.31],[0.20,0.56],[0.27,0.42],[0.29,0.48],[0.56,0.39],[0.32,0.39],[0.20,0.56],[0.59,0.29],[0.24,0.49],[0.56,0.39],[0.60,0.32],[0.35,0.35],[0.28,0.50],[0.46,0.38],[0.14,0.57],[0.54,0.34],[0.32,0.38],[0.26,0.45],[0.26,0.45],[0.39,0.42],[0.19,0.57],[0.28,0.51],[0.27,0.42],[0.33,0.50],[0.54,0.34],[0.39,0.40],[0.19,0.57],[0.33,0.36],[0.22,0.44],[0.33,0.51],[0.61,0.32],[0.28,0.51],[0.25,0.50],[0.39,0.40],[0.34,0.35],[0.59,0.31],[0.31,0.49],[0.20,0.46],[0.39,0.46],[0.20,0.50],[0.32,0.39],[0.30,0.41],[0.23,0.44],[0.29,0.53],[0.28,0.50],[0.31,0.48],[0.61,0.32],[0.54,0.34],[0.28,0.53],[0.56,0.39],[0.19,0.57],[0.14,0.57],[0.59,0.29],[0.29,0.48],[0.44,0.41],[0.27,0.51],[0.50,0.29],[0.14,0.57],[0.60,0.32],[0.32,0.39],[0.19,0.57],[0.24,0.38],[0.56,0.39],[0.
14,0.57],[0.54,0.34],[0.61,0.38],[0.27,0.53],[0.20,0.46],[0.61,0.32],[0.27,0.42],[0.27,0.42],[0.20,0.56],[0.30,0.41],[0.31,0.51],[0.32,0.39],[0.31,0.51],[0.29,0.48],[0.20,0.46],[0.33,0.51],[0.31,0.43],[0.30,0.41],[0.27,0.44],[0.31,0.51],[0.29,0.48],[0.35,0.35],[0.46,0.38],[0.28,0.51],[0.61,0.38],[0.31,0.49],[0.33,0.51],[0.59,0.29],[0.14,0.57],[0.31,0.51],[0.39,0.40],[0.32,0.39],[0.20,0.56],[0.55,0.31],[0.56,0.39],[0.24,0.49],[0.56,0.39],[0.27,0.50],[0.60,0.32],[0.54,0.34],[0.19,0.57],[0.28,0.51],[0.54,0.34],[0.56,0.39],[0.19,0.57],[0.59,0.31],[0.37,0.45],[0.19,0.57],[0.44,0.41],[0.32,0.43],[0.35,0.48],[0.24,0.49],[0.26,0.45],[0.14,0.57],[0.59,0.30],[0.26,0.45],[0.26,0.45],[0.14,0.57],[0.20,0.50],[0.31,0.45],[0.27,0.51],[0.30,0.41],[0.19,0.57],[0.30,0.41],[0.27,0.50],[0.34,0.35],[0.30,0.42],[0.27,0.42],[0.27,0.42],[0.34,0.35],[0.35,0.35],[0.14,0.57],[0.45,0.36],[0.26,0.45],[0.56,0.39],[0.34,0.35],[0.19,0.57],[0.30,0.41],[0.19,0.57],[0.26,0.45],[0.26,0.45],[0.59,0.29],[0.19,0.57],[0.26,0.45],[0.32,0.39],[0.30,0.50],[0.28,0.50],[0.32,0.39],[0.59,0.29],[0.32,0.51],[0.56,0.39],[0.59,0.29],[0.61,0.38],[0.33,0.51],[0.22,0.44],[0.33,0.36],[0.27,0.42],[0.20,0.56],[0.28,0.51],[0.31,0.48],[0.20,0.56],[0.61,0.32],[0.24,0.54],[0.59,0.29],[0.32,0.43],[0.61,0.32],[0.19,0.57],[0.61,0.38],[0.55,0.31],[0.19,0.57],[0.31,0.46],[0.32,0.52],[0.30,0.41],[0.28,0.51],[0.28,0.50],[0.60,0.32],[0.61,0.32],[0.27,0.50],[0.59,0.29],[0.41,0.47],[0.39,0.42],[0.20,0.46],[0.19,0.57],[0.14,0.57],[0.23,0.47],[0.54,0.34],[0.28,0.51],[0.19,0.57],[0.33,0.37],[0.46,0.38],[0.27,0.42],[0.20,0.56],[0.39,0.42],[0.30,0.47],[0.26,0.45],[0.61,0.32],[0.61,0.38],[0.35,0.35],[0.14,0.57],[0.35,0.35],[0.28,0.51],[0.61,0.32],[0.24,0.54],[0.54,0.34],[0.28,0.43],[0.24,0.54],[0.30,0.41],[0.56,0.39],[0.23,0.52],[0.14,0.57],[0.26,0.45],[0.30,0.42],[0.32,0.43],[0.19,0.57],[0.45,0.36],[0.27,0.42],[0.29,0.48],[0.28,0.43],[0.27,0.51],[0.39,0.44],[0.32,0.49],[0.24,0.49],[0.56,0.39],[0.20,0.56],[0.30,0.42],[0.24,0.38],[0.46,0.38]
,[0.28,0.50],[0.26,0.45],[0.27,0.50],[0.23,0.47],[0.39,0.42],[0.28,0.51],[0.24,0.49],[0.27,0.42],[0.26,0.45],[0.60,0.32],[0.32,0.43],[0.39,0.42],[0.28,0.50],[0.28,0.52],[0.61,0.32],[0.32,0.39],[0.24,0.50],[0.39,0.40],[0.33,0.36],[0.24,0.38],[0.54,0.33],[0.19,0.57],[0.61,0.32],[0.33,0.36],[0.19,0.57],[0.30,0.41],[0.19,0.57],[0.34,0.35],[0.24,0.42],[0.27,0.42],[0.54,0.34],[0.54,0.34],[0.24,0.49],[0.27,0.42],[0.56,0.39],[0.19,0.57],[0.20,0.50],[0.14,0.57],[0.30,0.41],[0.30,0.41],[0.33,0.36],[0.26,0.45],[0.26,0.45],[0.23,0.47],[0.32,0.39],[0.27,0.53],[0.30,0.41],[0.20,0.46],[0.34,0.35],[0.34,0.35],[0.14,0.57],[0.46,0.38],[0.27,0.42],[0.36,0.44],[0.17,0.51],[0.60,0.32],[0.27,0.42],[0.20,0.56],[0.24,0.49],[0.41,0.40],[0.61,0.38],[0.19,0.57],[0.28,0.50],[0.23,0.52],[0.61,0.32],[0.39,0.46],[0.33,0.51],[0.19,0.57],[0.39,0.44],[0.56,0.39],[0.35,0.35],[0.28,0.43],[0.54,0.34],[0.36,0.44],[0.14,0.57],[0.61,0.38],[0.46,0.38],[0.61,0.32],[0.19,0.57],[0.54,0.34],[0.27,0.53],[0.33,0.51],[0.31,0.51],[0.59,0.29],[0.24,0.42],[0.28,0.43],[0.56,0.39],[0.28,0.50],[0.61,0.32],[0.29,0.48],[0.20,0.46],[0.50,0.29],[0.56,0.39],[0.20,0.50],[0.24,0.38],[0.32,0.39],[0.32,0.43],[0.28,0.50],[0.22,0.44],[0.20,0.56],[0.27,0.42],[0.61,0.38],[0.31,0.49],[0.20,0.46],[0.27,0.42],[0.24,0.38],[0.61,0.32],[0.26,0.45],[0.23,0.44],[0.59,0.30],[0.56,0.39],[0.33,0.44],[0.27,0.42],[0.31,0.51],[0.27,0.53],[0.32,0.39],[0.28,0.51],[0.30,0.42],[0.46,0.38],[0.27,0.42],[0.30,0.47],[0.39,0.40],[0.28,0.43],[0.30,0.42],[0.32,0.39],[0.59,0.31],[0.36,0.44],[0.54,0.34],[0.34,0.35],[0.30,0.41],[0.32,0.49],[0.32,0.43],[0.31,0.51],[0.32,0.52],[0.60,0.32],[0.19,0.57],[0.41,0.47],[0.32,0.39],[0.28,0.43],[0.28,0.51],[0.32,0.51],[0.56,0.39],[0.24,0.45],[0.55,0.31],[0.24,0.43],[0.61,0.38],[0.33,0.51],[0.30,0.41],[0.32,0.47],[0.32,0.38],[0.33,0.51],[0.39,0.40],[0.19,0.57],[0.27,0.42],[0.54,0.33],[0.59,0.29],[0.28,0.51],[0.61,0.38],[0.19,0.57],[0.30,0.41],[0.14,0.57],[0.32,0.39],[0.34,0.35],[0.54,0.34],[0.24,0.54],[0.56,0.39],[0.24,0
.49],[0.61,0.32],[0.61,0.38],[0.61,0.32],[0.19,0.57],[0.14,0.57],[0.54,0.34],[0.59,0.29],[0.28,0.43],[0.19,0.57],[0.61,0.32],[0.32,0.43],[0.29,0.48],[0.56,0.39],[0.19,0.57],[0.56,0.39],[0.59,0.29],[0.59,0.29],[0.59,0.30],[0.14,0.57],[0.23,0.44],[0.28,0.50],[0.29,0.48],[0.31,0.45],[0.27,0.51],[0.24,0.45],[0.61,0.38],[0.24,0.49],[0.14,0.57],[0.61,0.32],[0.39,0.40],[0.33,0.44],[0.54,0.33],[0.33,0.51],[0.20,0.50],[0.19,0.57],[0.25,0.50],[0.28,0.43],[0.17,0.51],[0.19,0.57],[0.27,0.42],[0.20,0.56],[0.24,0.38],[0.19,0.57],[0.28,0.50],[0.28,0.50],[0.27,0.42],[0.26,0.45],[0.39,0.42],[0.23,0.47],[0.28,0.43],[0.32,0.39],[0.32,0.39],[0.24,0.54],[0.33,0.36],[0.29,0.53],[0.27,0.42],[0.44,0.41],[0.27,0.42],[0.33,0.36],[0.24,0.43],[0.61,0.38],[0.20,0.50],[0.55,0.31],[0.31,0.46],[0.60,0.32],[0.30,0.41],[0.41,0.47],[0.39,0.40],[0.27,0.53],[0.61,0.38],[0.46,0.38],[0.28,0.43],[0.44,0.41],[0.35,0.35],[0.24,0.49],[0.31,0.43],[0.27,0.42],[0.61,0.38],[0.29,0.48],[0.54,0.34],[0.61,0.32],[0.20,0.56],[0.24,0.49],[0.39,0.40],[0.27,0.42],[0.59,0.29],[0.59,0.29],[0.19,0.57],[0.24,0.54],[0.59,0.31],[0.24,0.38],[0.33,0.51],[0.23,0.44],[0.20,0.46],[0.24,0.45],[0.29,0.48],[0.28,0.50],[0.61,0.32],[0.19,0.57],[0.22,0.44],[0.19,0.57],[0.39,0.44],[0.19,0.57],[0.28,0.50],[0.30,0.41],[0.44,0.41],[0.28,0.52],[0.28,0.43],[0.54,0.33],[0.28,0.50],[0.19,0.57],[0.14,0.57],[0.30,0.41],[0.26,0.45],[0.56,0.39],[0.27,0.51],[0.20,0.46],[0.24,0.38],[0.32,0.38],[0.26,0.45],[0.61,0.32],[0.59,0.29],[0.19,0.57],[0.43,0.45],[0.14,0.57],[0.35,0.35],[0.56,0.39],[0.34,0.35],[0.19,0.57],[0.56,0.39],[0.27,0.42],[0.19,0.57],[0.60,0.32],[0.24,0.54],[0.54,0.34],[0.61,0.38],[0.33,0.51],[0.27,0.42],[0.32,0.39],[0.34,0.35],[0.20,0.56],[0.26,0.45],[0.32,0.51],[0.33,0.51],[0.35,0.35],[0.31,0.43],[0.56,0.39],[0.59,0.29],[0.28,0.43],[0.30,0.42],[0.27,0.44],[0.28,0.53],[0.29,0.48],[0.33,0.51],[0.60,0.32],[0.54,0.33],[0.19,0.57],[0.33,0.49],[0.30,0.41],[0.54,0.34],[0.27,0.53],[0.19,0.57],[0.19,0.57],[0.32,0.39],[0.20,0.56],[0.35,0.35],[0.
30,0.42],[0.46,0.38],[0.54,0.34],[0.54,0.34],[0.14,0.57],[0.33,0.51],[0.32,0.39],[0.14,0.57],[0.59,0.29],[0.59,0.31],[0.30,0.41],[0.26,0.45],[0.32,0.38],[0.32,0.39],[0.59,0.31],[0.20,0.56],[0.20,0.46],[0.29,0.48],[0.59,0.29],[0.39,0.40],[0.28,0.50],[0.32,0.39],[0.28,0.53],[0.44,0.41],[0.20,0.50],[0.24,0.49],[0.20,0.46],[0.28,0.52],[0.24,0.50],[0.32,0.43],[0.39,0.40],[0.38,0.44],[0.60,0.32],[0.54,0.33],[0.61,0.32],[0.19,0.57],[0.59,0.29],[0.33,0.49],[0.28,0.43],[0.24,0.38],[0.30,0.41],[0.27,0.51],[0.35,0.48],[0.61,0.32],[0.43,0.45],[0.20,0.50],[0.24,0.49],[0.20,0.50],[0.20,0.56],[0.29,0.48],[0.14,0.57],[0.14,0.57],[0.26,0.45],[0.26,0.45],[0.39,0.40],[0.33,0.36],[0.56,0.39],[0.59,0.29],[0.27,0.42],[0.35,0.35],[0.30,0.41],[0.20,0.50],[0.19,0.57],[0.29,0.48],[0.39,0.42],[0.37,0.45],[0.30,0.41],[0.20,0.56],[0.30,0.42],[0.41,0.47],[0.28,0.43],[0.14,0.57],[0.27,0.53],[0.32,0.39],[0.30,0.41],[0.34,0.35],[0.32,0.47],[0.33,0.51],[0.20,0.56],[0.56,0.39],[0.60,0.32],[0.28,0.52],[0.56,0.39],[0.44,0.41],[0.27,0.42],[0.00,1.00],[0.29,0.49],[0.89,0.06],[0.22,0.66],[0.18,0.70],[0.67,0.22],[0.14,0.79],[0.58,0.17],[0.67,0.12],[0.95,0.05],[0.46,0.26],[0.15,0.54],[0.16,0.67],[0.48,0.31],[0.41,0.29],[0.18,0.66],[0.10,0.71],[0.11,0.72],[0.65,0.15],[0.94,0.03],[0.17,0.67],[0.44,0.29],[0.32,0.38],[0.79,0.10],[0.52,0.26],[0.25,0.59],[0.89,0.04],[0.69,0.13],[0.43,0.34],[0.75,0.07],[0.16,0.65],[0.02,0.70],[0.38,0.33],[0.57,0.23],[0.75,0.07],[0.25,0.58],[0.94,0.02],[0.55,0.22],[0.58,0.17],[0.14,0.79],[0.20,0.56],[0.10,0.88],[0.15,0.79],[0.11,0.77],[0.67,0.22],[0.07,0.87],[0.43,0.33],[0.08,0.84],[0.05,0.67],[0.07,0.77],[0.17,0.68],[1.00,0.00],[0.15,0.79],[0.08,0.77],[0.16,0.67],[0.69,0.13],[0.07,0.87],[0.15,0.54],[0.55,0.19],[0.14,0.63],[0.75,0.18],[0.25,0.63],[0.83,0.05],[0.55,0.50],[0.86,0.04],[0.73,0.18],[0.44,0.32],[0.70,0.15],[0.89,0.06],[0.17,0.67],[0.61,0.12],[0.55,0.50],[0.36,0.56],[0.03,0.86],[0.09,0.82],[0.09,0.82],[0.09,0.83],[0.17,0.68],[0.88,0.03],[0.64,0.22],[0.08,0.85],[0.74,0.16]
,[0.47,0.28],[0.05,0.84],[0.14,0.54],[0.01,0.93],[0.77,0.16],[0.17,0.60],[0.64,0.22],[0.84,0.05],[0.85,0.03],[0.23,0.67],[0.20,0.69],[0.00,0.87],[0.14,0.77],[0.11,0.69],[0.17,0.67],[0.56,0.27],[0.14,0.67],[0.37,0.31],[0.11,0.69],[0.35,0.52],[0.53,0.27],[0.50,0.21],[0.25,0.64],[0.36,0.56],[0.39,0.26],[0.02,0.83],[0.41,0.29],[0.07,0.77],[0.16,0.63],[0.92,0.03],[0.10,0.71],[0.83,0.05],[0.42,0.27],[0.62,0.12],[0.23,0.60],[0.19,0.61],[0.69,0.19],[0.21,0.65],[0.67,0.19],[0.18,0.69],[0.44,0.29],[0.14,0.65],[0.73,0.18],[0.15,0.66],[0.44,0.34],[0.74,0.10],[0.18,0.69],[0.25,0.61],[0.52,0.23],[0.06,0.82],[0.52,0.29],[0.22,0.68],[0.46,0.26],[0.14,0.54],[0.78,0.07],[0.80,0.05],[0.15,0.67],[0.10,0.82],[0.56,0.27],[0.64,0.22],[0.87,0.06],[0.14,0.66],[0.10,0.84],[0.88,0.05],[0.02,0.81],[0.62,0.15],[0.13,0.68],[0.50,0.28],[0.11,0.62],[0.46,0.32],[0.56,0.28],[0.43,0.28],[0.12,0.83],[0.11,0.80],[0.10,0.83],[0.90,0.04],[0.17,0.65],[0.15,0.63],[0.72,0.15],[0.64,0.26],[0.84,0.06],[0.09,0.83],[0.16,0.68],[0.09,0.63],[0.43,0.29],[0.88,0.05],[0.20,0.69],[0.73,0.09],[0.61,0.20],[0.67,0.13],[0.08,0.85],[0.73,0.16],[0.89,0.05],[0.41,0.25],[0.61,0.23],[0.58,0.22],[0.03,0.84],[0.58,0.24],[0.48,0.30],[0.25,0.54],[0.23,0.63],[0.41,0.46],[0.84,0.06],[0.45,0.29],[0.09,0.55],[0.54,0.26],[0.11,0.82],[0.69,0.18],[0.43,0.45],[0.43,0.28],[0.45,0.32],[0.07,0.78],[0.26,0.64],[0.92,0.04],[0.12,0.66],[0.32,0.51],[0.28,0.59],[0.70,0.18]])
x = data[:,0]
y = data[:,1]
def func(x, a, b, c):
    return a * np.exp(-b*x) + c
popt, pcov = curve_fit(func, x, y)
a, b, c = popt
x_line = np.arange(min(x), max(x), 0.01)
x_line = np.reshape(x_line, (-1, 1))
y_line = func(x_line, a, b, c)
y_line = np.reshape(y_line,(-1,1))
plt.scatter(x,y)
plt.plot(x_line,y_line)
plt.show()
As you can see, the fit deviates at high x values (example plot). I know there are numerous similar questions out there, and I have read many -- but my math skills are not phenomenal, so I am struggling to come up with a better solution for my particular problem.
I'm not tied to the exponential function - can anyone suggest something better?
I need to do this semi-automatically for hundreds of datasets, so ideally I want something as flexible as possible.
Any help greatly appreciated!
p.s. I am sorry about posting such a large sample dataset - but I figured this kind of question necessitates the actual data, and I didn't want to post links to suspicious looking files.. =)
This is not an optimal solution, but this should work for any kind of density distribution in your data. The idea is to resample the data a given number of times by computing local averages along the x-axis to have evenly distributed points.
#!/usr/bin/python3.6
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def my_floor(a, precision=0):
    return np.round(a - 0.5 * 10**(-precision), precision)
data = np.array(...)  # identical to the `data` array in the question above; the verbatim duplicate is omitted here
x = data[:,0]
y = data[:,1]
#---------------------------ADD THIS---------------------------
# Define how to resample the data
n_bins = 20 # choose how many samples to use
bin_size = (max(x) - min(x))/n_bins
# Prepare empty arrays for resampled x and y
x_res, y_res = [],[]
# Resample the data with consistent density
for i in range(n_bins):
    lower = x >= min(x) + i*bin_size
    # use <= on the last bin so the point(s) at max(x) are not dropped
    if i == n_bins - 1:
        higher = x <= min(x) + (i+1)*bin_size
    else:
        higher = x < min(x) + (i+1)*bin_size
    mask = lower & higher
    if mask.any():  # skip empty bins, whose mean would be NaN
        x_res.append(np.mean(x[mask]))
        y_res.append(np.mean(y[mask]))
#------------------------------------------------------
def func(x, a, b, c):
    return a * np.exp(-b*x) + c
popt, pcov = curve_fit(func, x_res, y_res)
a, b, c = popt
x_line = np.arange(min(x), max(x), 0.01)
x_line = np.reshape(x_line,(-1,1))
y_line = func(x_line, a, b, c)
y_line = np.reshape(y_line,(-1,1))
plt.scatter(x,y,alpha=0.5)
plt.scatter(x_res, y_res)
plt.plot(x_line,y_line, c='red')
plt.show()
Which gives the output:
curve_fit does not give you a great deal of control over the fit; you may want to look at the much more general, but somewhat more complicated to use, least_squares:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.least_squares.html
where you can control a lot of things. curve_fit does, however, give you a sigma parameter, which allows you to weight the 'uncertainty' in your points. The trick here is to assign lower uncertainty to the points around x=1, where the fit is poor: given lower uncertainty there, the fitter will try harder to fit them.
After some experimenting, replacing your curve_fit line with
uncertainty = np.exp(-5*x*x)
popt, pcov = curve_fit(func, x, y, sigma = uncertainty)
I got the following fit:
You can try to improve this further by playing with the uncertainty vector above.
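A couple of alternative weighting vectors worth trying (a sketch; the exact shapes and scale factors below are arbitrary illustrative choices, not values from the answer):

```python
import numpy as np

x = np.linspace(0, 1, 50)  # stand-in for the question's normalized x values

# Steeper Gaussian: trusts the high-x points even more strongly
sigma_steep = np.exp(-10 * x * x)

# Linear ramp: "uncertainty" grows from 0.1 at x=0 to 1.0 at x=1,
# so the x=1 end gets roughly 10x the weight of the x=0 end
sigma_linear = 0.1 + 0.9 * x

# curve_fit treats sigma as per-point standard deviations, so a
# smaller sigma means a larger weight in the least-squares objective
```

Either array can be passed directly as the `sigma` argument to `curve_fit`.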
Related
How to fit any non-linear functions in python?
I have already checked post1, post2, post3 and post4, but they didn't help. I have data about a specific plant with two variables, "Age" and "Height". The correlation between them is non-linear. To fit a model, one solution I assume is as follows: if the non-linear function is, say, Height = b0 + b1*Age + b2*Age^2, then we can bring in a new variable k where k = Age^2, so we have changed the first non-linear function into a multilinear regression one. Based on this, I have the following code:
data['K'] = data["Age"].pow(2)
x = data[["Age", "K"]]
y = data["Height"]
model = LinearRegression().fit(x, y)
print(model.score(x, y))  # = 0.9908571840250205
Am I doing this correctly? How do I do this with cubic and exponential functions? Thanks.
For cubic polynomials:
data['x2'] = data["Age"].pow(2)
data['x3'] = data["Age"].pow(3)
x = data[["Age", "x2", "x3"]]
y = data["Height"]
model = LinearRegression().fit(x, y)
print(model.score(x, y))
You can handle exponential data by fitting log(y), or find a library that can fit polynomials automatically, e.g.:
https://numpy.org/doc/stable/reference/generated/numpy.polyfit.html
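To illustrate the log(y) trick mentioned above (a sketch on synthetic data, not the poster's dataset): if y = a*exp(b*x), then log(y) = log(a) + b*x, so an ordinary straight-line fit on log(y) recovers both exponential parameters.

```python
import numpy as np

# Synthetic exponential data: y = 2.0 * exp(0.5 * x)
x = np.linspace(0, 4, 50)
y = 2.0 * np.exp(0.5 * x)

# Fit a line to log(y); the slope estimates b, the intercept estimates log(a)
b, log_a = np.polyfit(x, np.log(y), 1)
a = np.exp(log_a)

print(a, b)  # recovers ~2.0 and ~0.5
```

Note that fitting in log space weights relative (not absolute) errors, which may or may not be what you want for noisy data.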
Hopefully you don't have a religious fervor for using sklearn here, because the answer I'm going to suggest completely ignores it. If you're interested in doing regression analysis where you have complete autonomy over the fitting function, I'd suggest cutting directly down to the least-squares optimization algorithm that drives a lot of this type of work, which you can do using scipy:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import leastsq

x, y = np.array([0,1,2,3,4,5]), np.array([0,1,4,9,16,25])

# initial_guess[i] maps to p[i] in function_to_fit; must be reasonable
initial_guess = [1, 1, 1]

def function_to_fit(x, p):
    return pow(p[0]*x, 2) + p[1]*x + p[2]

def residuals(p, y, x):
    return y - function_to_fit(x, p)

cnsts = leastsq(residuals, initial_guess, args=(y, x))[0]

fig, ax = plt.subplots()
ax.plot(x, y, 'o')
xi = np.arange(0, 10, 0.1)
ax.plot(xi, [function_to_fit(x, cnsts) for x in xi])
plt.show()
Now, this is a numeric approach to the solution, so I would recommend taking a moment to make sure you understand the limitations of such an approach -- but for problems like these, I've found it more than adequate for functionalizing non-linear data sets without hand-waving the data into a linearizable manifold.
How to prioritise some points over others using curve fit from SciPy
I want to model the following curve: To do this, I'm using curve_fit from SciPy, fitting an exponential function:
def exponenial_func(x, a, b, c):
    return a * b**(c*x)

popt, pcov = curve_fit(exponenial_func, x, y, p0=(1,2,2),
                       bounds=((0, 0, 0), (np.inf, np.inf, np.inf)))
When I first do it, I get this: This minimises the residuals, giving each point the same level of importance. What I want is a curve that gives more importance to the last values (from x = 30 onward, for example) than to the first ones, so it fits better at the end of the curve than at the beginning. I know there are many ways to approach this (first of all, defining what importance I want to give each of the residuals). My question here is how to approach this. One idea I had is to change the sigma value to weight each data point by its inverse:
popt, pcov = curve_fit(exponenial_func, x, y, p0=(1,2,2),
                       bounds=((0, 0, 0), (np.inf, np.inf, np.inf)),
                       sigma=1/y)
In this case, I get something like what I was looking for: It doesn't look bad, but I'm looking for another way of doing this, so that I can "control" each of the data points -- for example, weighting the residuals linearly or exponentially, or even choosing the weights manually (rather than all of them by the inverse, as in the previous case). Thanks in advance
First of all, note that there's no need for three coefficients: since a * b**(c*x) = a * exp(log(b)*c*x), we can define k = log(b)*c. Here's a suggestion for how you could tackle your problem by hand with scipy.optimize.least_squares and a priority vector:
import numpy as np
from scipy.optimize import least_squares

def exponenial_func2(x, a, k):
    return a * np.exp(k*x)

# returns the vector of residuals
def fitwrapper2(coeffs, *args):
    xdata, ydata, prio = args
    return prio*(exponenial_func2(xdata, *coeffs) - ydata)

# Data
n = 31
xdata = np.arange(n)
ydata = np.array([155.0,229,322,453,655,888,1128,1694,
                  2036,2502,3089,3858,4636,5883,7375,
                  9172,10149,12462,12462,17660,21157,
                  24747,27980,31506,35713,41035,47021,
                  53578,59138,63927,69176])

# The priority vector
prio = np.ones(n)
prio[-1] = 5

res = least_squares(fitwrapper2, x0=[1.0,2.0], bounds=(0,np.inf),
                    args=(xdata,ydata,prio))
With prio[-1] = 5 we give the last point a high priority. res.x contains your optimal coefficients; here a, k = res.x. Note that for prio = np.ones(n) this is a normal least-squares fit (like curve_fit does), in which all points have the same priority. You can control the priority of each point by increasing its value in the prio array. Comparing both results gives me:
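For the linear or exponential weighting schemes the question mentions, the priority vector can simply be filled with a ramp instead of ones (a sketch; the end weight of 5 is an arbitrary choice):

```python
import numpy as np

n = 31  # number of data points, as in the answer above

# Weight grows linearly from 1x at the first point to 5x at the last
prio_linear = np.linspace(1.0, 5.0, n)

# Weight grows exponentially (geometrically) from 1x to 5x
prio_exp = np.exp(np.linspace(0.0, np.log(5.0), n))
```

Either array can be passed as `prio` in the `args` tuple of `least_squares` above.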
Fitting a two-layer model to wind profile data in python
I'm trying to fit a model to my dataset of wind profiles, i.e. wind speed values u(z) at different altitudes z. The model consists of two parts, which I have for now simplified to:
u(z) = ust/k * ln(z/z0)   for z < zsl
u(z) = a*z + b            for z > zsl
In the logarithmic model, ust and z0 are free parameters and k is fixed. zsl, the height of the surface layer, is also not known a priori. I want to fit this model to my data, and I have already tried different approaches. The best result I'm getting so far is with:
def two_layer(z, hsl, ust, z0, a, b):
    return ust/0.4*(np.log(z/z0)) if z < hsl else a*z + b

two_layer = np.vectorize(two_layer)

def two_layer_err(p, z, u):
    return two_layer(z, *p) - u

popt, pcov, infodict, mesg, ier = optimize.leastsq(
    two_layer_err, [150., 0.3, 0.002, 0.1, 10.],
    (wspd_hgt, prof), full_output=1)
# wspd_hgt are my measurement heights and
# prof are the corresponding wind speed values
This gives me reasonable estimates for all parameters, except for zsl, which is not changed during the fitting procedure. I guess this has to do with the fact that it is used as a threshold rather than a function parameter. Is there any way I could let zsl vary during the optimization? I tried something with numpy.piecewise, but that didn't work very well, perhaps because I don't really understand it, or I might be completely off here because it's not suitable for my case. For the idea, the wind profile looks like this if the axes are reversed (z plotted versus u):
I think I finally have a solution for this type of problem, which I came across while answering a similar question. The solution is to impose a constraint saying that u1 == u2 at the switch between the two models. Since I cannot try it with your model (no data was posted), I will instead show how it works for the other model, and you can adapt it to your situation. I solved this using a scipy wrapper I wrote to make fitting of such problems more pythonic, called symfit. But you could do the same using the SLSQP algorithm in scipy if you prefer.

    from symfit import parameters, variables, Fit, Piecewise, exp, Eq
    import numpy as np
    import matplotlib.pyplot as plt

    t, y = variables('t, y')
    m, c, d, k, t0 = parameters('m, c, d, k, t0')

    # Help the fit by bounding the switchpoint between the models
    t0.min = 0.6
    t0.max = 0.9

    # Make a piecewise model
    y1 = m * t + c
    y2 = d * exp(- k * t)
    model = {y: Piecewise((y1, t <= t0), (y2, t > t0))}

    # As a constraint, we demand equality between the two models at the point t0
    # Substitutes t in the components by t0
    constraints = [Eq(y1.subs({t: t0}), y2.subs({t: t0}))]

    # Read the data
    tdata, ydata = np.genfromtxt('Experimental Data.csv',
                                 delimiter=',', skip_header=1).T

    fit = Fit(model, t=tdata, y=ydata, constraints=constraints)
    fit_result = fit.execute()
    print(fit_result)

    plt.scatter(tdata, ydata)
    plt.plot(tdata, fit.model(t=tdata, **fit_result.params).y)
    plt.show()

I think you should be able to adapt this example to your situation!

Edit: as requested in the comment, it is possible to demand matching derivatives as well. In order to do this, the following additional code is needed for the above example:

    from symfit import Derivative

    dy1dt = Derivative(y1, t)
    dy2dt = Derivative(y2, t)
    constraints = [
        Eq(y1.subs({t: t0}), y2.subs({t: t0})),
        Eq(dy1dt.subs({t: t0}), dy2dt.subs({t: t0}))
    ]

That should do the trick!
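A plain-scipy alternative (my own sketch, not from the answer, and using hypothetical synthetic data since none was posted): instead of enforcing the continuity constraint during optimization, you can build it into the model by solving for the linear intercept b at the switch point. The model then stays continuous in zsl, so zsl becomes an ordinary fit parameter for curve_fit:

```python
import numpy as np
from scipy.optimize import curve_fit

K = 0.4  # von Karman constant, fixed as in the question

def two_layer(z, zsl, ust, z0, a):
    # Log law below zsl, linear above. Continuity at z = zsl fixes the
    # intercept: b = ust/K * ln(zsl/z0) - a*zsl, so b is no longer free
    # and the model value varies smoothly as zsl moves.
    b = ust / K * np.log(zsl / z0) - a * zsl
    return np.where(z < zsl, ust / K * np.log(z / z0), a * z + b)

# Hypothetical noise-free profile generated from known parameters
z = np.linspace(1.0, 300.0, 120)
u = two_layer(z, 100.0, 0.4, 0.1, 0.01)

popt, pcov = curve_fit(two_layer, z, u,
                       p0=[80.0, 0.3, 0.05, 0.02],
                       bounds=([1.0, 1e-3, 1e-4, 1e-4],
                               [300.0, 10.0, 10.0, 1.0]))
zsl_fit, ust_fit, z0_fit, a_fit = popt
```

The original formulation failed to move zsl because the objective was piecewise constant in it; with continuity built in, finite-difference gradients with respect to zsl become informative.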
So from a programming point of view it is very doable, although depending on the model this might not actually have a solution.
numpy.polyfit versus scipy.odr
I have a data set which in theory is described by a polynomial of the second degree. I would like to fit this data and I have used numpy.polyfit to do this. However, the downside is that the error on the returned coefficients is not available. Therefore I decided to also fit the data using scipy.odr. The weird thing was that the coefficients for the polynomial deviated from each other. I do not understand this and therefore decided to test both fitting routines on a set of data that I produced myself:

    import numpy
    import scipy.odr
    import matplotlib.pyplot as plt

    x = numpy.arange(-20, 20, 0.1)
    y = 1.8 * x**2 - 2.1 * x + 0.6 + numpy.random.normal(scale=100, size=len(x))

    # Define function for scipy.odr
    def fit_func(p, t):
        return p[0] * t**2 + p[1] * t + p[2]

    # Fit the data using numpy.polyfit
    fit_np = numpy.polyfit(x, y, 2)

    # Fit the data using scipy.odr
    Model = scipy.odr.Model(fit_func)
    Data = scipy.odr.RealData(x, y)
    Odr = scipy.odr.ODR(Data, Model, [1.5, -2, 1], maxit=10000)
    output = Odr.run()
    # output.pprint()
    beta = output.beta
    betastd = output.sd_beta

    print("poly", fit_np)
    print("ODR", beta)

    plt.plot(x, y, "bo")
    plt.plot(x, numpy.polyval(fit_np, x), "r--", lw=2)
    plt.plot(x, fit_func(beta, x), "g--", lw=2)
    plt.tight_layout()
    plt.show()

An example of an outcome is as follows:

    poly [   1.77992643   -2.42753714    3.86331152]
    ODR  [   3.8161735   -23.08952492 -146.76214989]

In the included image, the solution of numpy.polyfit (red dashed line) corresponds pretty well. The solution of scipy.odr (green dashed line) is basically completely off. I do have to note that the difference between numpy.polyfit and scipy.odr was smaller in the actual data set I wanted to fit. However, I do not understand where the difference between the two comes from, why in my own testing example the difference is extremely big, and which fitting routine is better?
I hope you can provide answers that help me gain a better understanding of the two fitting routines and, in the process, answer the questions I have.
In the way you are using ODR, it does a full orthogonal distance regression. To have it do a normal nonlinear least squares fit instead, add

    Odr.set_job(fit_type=2)

before starting the optimization, and you will get what you expected. The reason that the full ODR fails so badly is that you did not specify weights/standard deviations. Obviously it is hard to interpret that point cloud, and ODR assumes equal weights for x and y. If you provide estimated standard deviations, odr will yield a good (though different, of course) result, too:

    Data = scipy.odr.RealData(x, y, sx=0.1, sy=10)
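To make the comparison concrete, here is a self-contained sketch (my own, condensed from the question's code) showing that with fit_type=2 the ODR coefficients agree with numpy.polyfit, since both then solve the same ordinary least-squares problem:

```python
import numpy as np
import scipy.odr

# Same synthetic quadratic data as in the question, with a fixed seed
rng = np.random.default_rng(0)
x = np.arange(-20, 20, 0.1)
y = 1.8 * x**2 - 2.1 * x + 0.6 + rng.normal(scale=100, size=len(x))

def fit_func(p, t):
    return p[0] * t**2 + p[1] * t + p[2]

model = scipy.odr.Model(fit_func)
data = scipy.odr.RealData(x, y)
odr = scipy.odr.ODR(data, model, beta0=[1.5, -2.0, 1.0], maxit=10000)
odr.set_job(fit_type=2)   # fit_type=2: ordinary least squares, not full ODR
beta = odr.run().beta

fit_np = np.polyfit(x, y, 2)  # highest-degree coefficient first, like fit_func
```

Both routines now minimize the same sum of squared vertical residuals, so the coefficient vectors should match to within the optimizer's tolerance.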
The actual problem is that the odr output has the beta coefficients in the opposite order than numpy.polyfit has, so the green curve is not calculated correctly. To plot it, use instead

    plt.plot(x, fit_func(beta[::-1], x), "g--", lw=2)
How to calculate error for polynomial fitting (in slope and intercept)
Hi, I want to calculate the errors in the slope and intercept which are calculated by the scipy.polyfit function. I have a (+/-) uncertainty for ydata, so how can I include it when calculating the uncertainty in the slope and intercept? My code is:

    from scipy import polyfit
    import pylab as plt
    from numpy import *

    data = loadtxt("data.txt")
    xdata, ydata = data[:,0], data[:,1]
    x_d, y_d = log10(xdata), log10(ydata)

    polycoef = polyfit(x_d, y_d, 1)
    yfit = 10**( polycoef[0]*x_d + polycoef[1] )

    plt.subplot(111)
    plt.loglog(xdata, ydata, '.k', xdata, yfit, '-r')
    plt.show()

Thanks a lot
You could use scipy.optimize.curve_fit instead of polyfit. It has a parameter sigma for the errors of ydata. If you have your error for every y value in a sequence yerror (so that yerror has the same length as your y_d sequence) you can do:

    polycoef, _ = scipy.optimize.curve_fit(lambda x, a, b: a*x + b,
                                           x_d, y_d, sigma=yerror)

For an alternative, see the paragraph Fitting a power-law to data with errors in the Scipy Cookbook.
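Two details worth adding (my own sketch with hypothetical data, not part of the answer): since the fit is done on log10-transformed values, the y errors should be propagated into log space first (d(log10 y) = dy / (y ln 10)), and the parameter uncertainties then come from the square roots of the diagonal of the covariance matrix that curve_fit returns:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical power-law data y = 3 * x**1.5 with assumed 5% errors
xdata = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
ydata = 3.0 * xdata**1.5
yerror = 0.05 * ydata

x_d, y_d = np.log10(xdata), np.log10(ydata)
# Propagate the y errors into log10 space: sigma_log = sigma_y / (y * ln 10)
y_d_err = yerror / (ydata * np.log(10))

popt, pcov = curve_fit(lambda x, a, b: a*x + b, x_d, y_d,
                       sigma=y_d_err, absolute_sigma=True)
slope, intercept = popt
slope_err, intercept_err = np.sqrt(np.diag(pcov))
```

With absolute_sigma=True the given errors are treated as actual standard deviations rather than relative weights, so the returned uncertainties are in the same units as the parameters.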