How to achieve a line of best fit instead of a logarithmic curve - python

I attempted to fit a best-fit line to my data points using the scipy.optimize.curve_fit function:

import numpy as np
import scipy.optimize
import matplotlib.pyplot as plt

x = np.array([0, 2246, 2600, 3465, 4392])
y = [-0.763, 0.052, 0.081, 0.266, 0.179]
yerror = [0.201, 0.113, 0.139, 0.162, 0.204]
plt.errorbar(x, y, yerr=yerror, xerr=None, fmt='o')

def func(x, a, b, c):  # the best fit function
    return a + (b * x)**c

popt, pcov = scipy.optimize.curve_fit(func, x, y)
x_fit = np.linspace(0, np.max(x), 1000)  # create curve line of best fit
plt.plot(x_fit, func(x_fit, *popt), "b")
My popt value is: array([-7.63283206e-01, 2.23580046e-04, 2.63164486e-01])
where the first value, -7.63283206e-01, is the intercept I want the fit to show on the graph.
The data points and best fit plotted with the code above produce a logarithmic-looking curve, but I want the line of best fit to pass through that y-axis intercept instead, giving a straighter curve.
Thanks in advance!

"Best fit" means nothing until the criteria of fitting be specified (Least mean square error or least mean square relative error or least mean absolute error or etc.). The "best fit" is different for each one.
Sine there is no criteria of fitting specified, why not choosing the simplest method, without iterative process and without need of guessed initial values of parameters.
The method below, from https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales gives :
If you specify a particular criteria of fitting then an iterative method with guessed initial values are required. You can use the above values a,b,c as initial values for a robust and fast convergence.
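A minimal sketch of that last step, assuming rough estimates of a, b, c are already in hand (here I reuse the values from the question's popt) and choosing weighted least squares with the question's error bars as the fitting criterion; the sigma/absolute_sigma choice is my own assumption, not part of the answer above:

import numpy as np
import scipy.optimize
import matplotlib.pyplot as plt

x = np.array([0, 2246, 2600, 3465, 4392], dtype=float)
y = np.array([-0.763, 0.052, 0.081, 0.266, 0.179])
yerror = np.array([0.201, 0.113, 0.139, 0.162, 0.204])

def func(x, a, b, c):
    return a + (b * x)**c

# rough estimates (e.g. from a non-iterative method) used as the starting point
p0 = [-0.76, 2.2e-4, 0.26]
popt, pcov = scipy.optimize.curve_fit(func, x, y, p0=p0,
                                      sigma=yerror, absolute_sigma=True)

x_fit = np.linspace(0, x.max(), 1000)
plt.errorbar(x, y, yerr=yerror, fmt='o')
plt.plot(x_fit, func(x_fit, *popt), 'b')
plt.show()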

Related

Better fitting for a power-law curve

So I had some points in a dataframe that led me to believe I was dealing with a power law curve. After some googling, I used what I found in this post to go about curve fitting.
def func_powerlaw(x, m, c, c0):
    return c0 + x**m * c

target_func = func_powerlaw
X = np.array(selection_to_feed.selection[1:])
y = np.array(selection_to_feed.avg_feed_size[1:])
popt, pcov = curve_fit(func_powerlaw, X, y, p0=np.asarray([-1, 10**5, 0]))

curvex = np.linspace(0, 5000, 1000)
curvey = target_func(curvex, *popt)
plt.figure(figsize=(10, 5))
plt.plot(curvex, curvey, '--')
plt.plot(X, y, 'ro')
plt.legend()
plt.show()
This is the result (plot of the fitted curve):
The problem is that the fitted curve takes negative values for the first few x values (as you can see in the blue line), and in the actual relationship no negative Y values can exist.
A few questions:
What can I do to make sure no negative Y values can be output? Really, an X of 0 should have a Y value of 0 as well.
Is power law curve fitting even the right thing to do? How would you describe this curve?
Thank you!
If you are only looking for a simple approximating equation with a better fit, I extracted data from your plot and added the known data point [0,0] per your note. Since the uncertainty for the [0,0] point is zero - that is, you are 100% certain of that value - I used a weighted regression where that one known point was given an extremely high weight and the weight for all other points was 1. This had the effect of forcing the curve through the [0,0] point, which can be done with any software that allows weighted fitting. I found that a Standard Geometric plus offset equation, "y = a * pow(x, (b * x)) + offset", with parameters:
a = -1.0704001788540748E+02
b = -1.5095055897637395E-03
Offset = 1.0704001788540748E+02
fits as shown in the attached plot and passes through [0,0]. My suggestion is to perform a regression using this equation with the actual data plus the known [0,0] point, using these values as the initial parameter estimates - and if possible using a very large weight for the [0,0] point as I did.
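A minimal sketch of that weighted-fit idea with scipy's curve_fit, which takes weights through sigma (smaller sigma = larger weight), so the known [0, 0] point can be pinned down by giving it a tiny sigma. The data arrays and the geom_offset name below are hypothetical stand-ins, not the actual data from the question:

import numpy as np
from scipy.optimize import curve_fit

def geom_offset(x, a, b, offset):
    # "Standard Geometric plus offset" model: y = a * x**(b*x) + offset
    return a * np.power(x, b * x) + offset

# hypothetical data, with the known [0, 0] point prepended
X = np.array([0.0, 50.0, 100.0, 250.0, 500.0, 1000.0, 2000.0, 4000.0])
y = np.array([0.0, 28.0, 54.0, 93.0, 105.0, 107.0, 107.5, 106.5])

sigma = np.ones_like(y)
sigma[0] = 1e-6   # extremely high weight on the known [0, 0] point

p0 = [-1.07e2, -1.51e-3, 1.07e2]   # initial estimates from the answer above
popt, pcov = curve_fit(geom_offset, X, y, p0=p0, sigma=sigma)
print(popt)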

Getting more refined results from Python SciPy curve_fit

I've got the following bit of Python (v2.7.14) code, which uses curve_fit from SciPy (v1.0.1) to find parameters for an exponential decay function. Most of the time, I get reasonable results. Occasionally though, I'll get some results which are completely out of my expected range, even though the found parameters will look fine when plotted against the original graph.
First, my understanding of the exponential decay formula comes from https://en.wikipedia.org/wiki/Exponential_decay which I've translated to Python as:
y = a * numpy.exp(-b * x) + c
Where by:
a is the initial value of the data
b is the decay rate, which is the inverse of the time it takes the signal to fall to 1/e of its initial value
c is an offset, as I am dealing with non-negative values in my data which never reach zero
x is the current time
The script takes into account that non-negative data is being fitted and offsets the initial guess appropriately. But even without guessing, not offsetting, using max/min (instead of first/last values) and other random things I've tried, I cannot seem to get curve_fit to produce sensible values on the troublesome datasets.
My hypothesis is that the troublesome datasets don't have enough of a curve that can be fit without going way outside the realm of the data. I've looked at the bounds argument for curve_fit, and thought that might be a reasonable option. I'm unsure as to what would make good lower and upper bounds for the calculation, or if it is actually the option I am looking for.
Here is the code. Commented out code are things I've tried.
#!/usr/local/bin/python
import numpy as numpy
from scipy.optimize import curve_fit
import matplotlib.pyplot as pyplot

def exponential_decay(x, a, b, c):
    return a * numpy.exp(-b * x) + c

def fit_exponential(decay_data, time_data, decay_time):
    # The start of the curve is offset by the last point, so subtract
    guess_a = decay_data[0] - decay_data[-1]
    #guess_a = max(decay_data) - min(decay_data)
    # The time that it takes for the signal to reach 1/e becomes guess_b
    guess_b = 1/decay_time
    # Since this is non-negative data, above 0, we use the last data point as the baseline (c)
    guess_c = decay_data[-1]
    #guess_c = min(decay_data)
    guess = [guess_a, guess_b, guess_c]
    print "guess: {0}".format(guess)
    #popt, pcov = curve_fit(exponential_decay, time_data, decay_data, maxfev=20000)
    popt, pcov = curve_fit(exponential_decay, time_data, decay_data, p0=guess, maxfev=20000)
    #bound_lower = [0.05, 0.05, 0.05]
    #bound_upper = [decay_data[0]*2, guess_b * 10, decay_data[-1]]
    #print "bound_lower: {0}".format(bound_lower)
    #print "bound_upper: {0}".format(bound_upper)
    #popt, pcov = curve_fit(exponential_decay, time_data, decay_data, p0=guess, bounds=[bound_lower, bound_upper], maxfev=20000)
    a, b, c = popt
    print "a: {0}".format(a)
    print "b: {0}".format(b)
    print "c: {0}".format(c)
    plot_fit = exponential_decay(time_data, a, b, c)
    pyplot.plot(time_data, decay_data, 'g', label='Data')
    pyplot.plot(time_data, plot_fit, 'r', label='Fit')
    pyplot.legend()
    pyplot.show()
print "Gives reasonable results"
time_data = numpy.array([0.0,0.040000000000000036,0.08100000000000018,0.12200000000000011,0.16200000000000014,0.20300000000000007,0.2430000000000001,0.28400000000000003,0.32400000000000007,0.365,0.405,0.44599999999999995,0.486,0.5269999999999999,0.567,0.6079999999999999,0.6490000000000002,0.6889999999999998,0.7300000000000002,0.7700000000000002,0.8110000000000002,0.8510000000000002,0.8920000000000001,0.9320000000000002,0.9730000000000001])
decay_data = numpy.array([1.342146870531986,1.405586070225509,1.3439802492549762,1.3567811728250267,1.2666276377825874,1.1686375326985337,1.216119360088685,1.2022841507836042,1.1926979408026064,1.1544395213303447,1.1904416926531907,1.1054720201415882,1.112100683833435,1.0811434035632939,1.1221671794680403,1.0673295063196415,1.0036146509494743,0.9984005680821595,1.0134498134883763,0.9996920772051201,0.929782730581616,0.9646581154122312,0.9290690593684447,0.8907360533169936,0.9121560047238627])
fit_exponential(decay_data, time_data, 0.567)
print
print "Gives results that are way outside my expectations"
time_data = numpy.array([0.0,0.040000000000000036,0.08099999999999996,0.121,0.16199999999999992,0.20199999999999996,0.24300000000000033,0.28300000000000036,0.32399999999999984,0.3650000000000002,0.40500000000000025,0.44599999999999973,0.48599999999999977,0.5270000000000001,0.5670000000000002,0.6079999999999997,0.6479999999999997,0.6890000000000001,0.7290000000000001,0.7700000000000005,0.8100000000000005,0.851,0.8920000000000003,0.9320000000000004,0.9729999999999999,1.013,1.0540000000000003])
decay_data = numpy.array([1.4401611921948776,1.3720688158534153,1.3793465463227048,1.2939909686762128,1.3376345321949346,1.3352710161631154,1.3413634841956348,1.248705138603995,1.2914294791901497,1.2581763134585313,1.246975264018646,1.2006447776495062,1.188232179689515,1.1032789127515186,1.163294324147017,1.1686263160765304,1.1434009568472243,1.0511578409946472,1.0814520440570896,1.1035953824496334,1.0626893599266163,1.0645580326776076,0.994855722989818,0.9959891485338087,0.9394584009825916,0.949504060086646,0.9278639431146273])
fit_exponential(decay_data, time_data, 0.6890000000000001)
And here is the text output:
Gives reasonable results
guess: [0.4299908658081232, 1.7636684303350971, 0.9121560047238627]
a: 1.10498934435
b: 0.583046565885
c: 0.274503681044
Gives results that are way outside my expectations
guess: [0.5122972490802503, 1.4513788098693758, 0.9278639431146273]
a: 742.824622191
b: 0.000606308344957
c: -741.41398516
Most notably, with the second set of results, the value for a is very high, with the value for c being equally low on the negative scale, and b being a very small decimal number.
Here is the graph of the first dataset, which gives reasonable results.
Here is the graph of the second dataset, which does not give good results.
Note that the graph itself plots correctly, though the line does not really have a good curve to it.
My questions:
Is my implementation of the exponential decay algorithm with curve_fit correct?
Are my initial guess parameters good enough?
Is the bounds parameter the correct solution for this problem? If so, what is a good way to determine lower and upper bounds?
Have I missed something here?
Again, thank you!
When you say that the second fit gives results that are "way outside" of your expectations and that although the second graph "plots correctly" the line does not really "have a good curve fit" you are on the right track to understanding what is going on. I think you are just missing a piece of the puzzle.
The second graph is fit pretty well by a curve that does look linear. That probably means that you don't really have enough change in your data (well, perhaps below the noise level) to detect that it is an exponential decay.
I would bet that if you printed out not only the best-fit values but also the uncertainties and correlations for the variables that you would see that the uncertainties are huge and some of the correlations are very close to 1. That may mean that taking into account the uncertainties (and measurements always have uncertainties) the results might actually fit with your expectation. And that may also tell you that the data you have does not support an exponential decay very well.
You might also try other models for this data ("linear" comes to mind ;)) and compare goodness-of-fit statistics such as chi-square and Akaike information criterion.
scipy.curve_fit does return the covariance matrix -- the pcov that you did not use in your example. Unfortunately, scipy.curve_fit does not convert these values into uncertainties and correlation values, and it does not attempt to return any goodness-of-fit statistics at all.
To fully explain any fit to data, you need not only the best-fit values but also an estimate of the uncertainties for the variable parameters. And you need the goodness-of-fit statistics in order to determine if a fit is good, or at least whether one fit is better than another.
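As a rough sketch, assuming it is placed inside fit_exponential() right after the curve_fit call so that popt, pcov, time_data and decay_data are in scope, the uncertainties and correlations could be pulled out of pcov like this; the AIC-style comparison at the end uses the common least-squares form n*log(RSS/n) + 2*k and is my own addition, not something curve_fit provides:

perr = numpy.sqrt(numpy.diag(pcov))              # 1-sigma uncertainties for a, b, c
correlations = pcov / numpy.outer(perr, perr)    # correlation matrix of the parameters
print "uncertainties: {0}".format(perr)
print "correlations:\n{0}".format(correlations)

# goodness of fit: residual sum of squares and an AIC-style score
residuals = decay_data - exponential_decay(time_data, *popt)
rss = numpy.sum(residuals**2)
n = len(decay_data)
aic_exponential = n * numpy.log(rss / n) + 2 * len(popt)
print "exponential fit AIC: {0}".format(aic_exponential)

# compare against a plain linear model, as suggested above
slope, intercept = numpy.polyfit(time_data, decay_data, 1)
rss_linear = numpy.sum((decay_data - (slope * time_data + intercept))**2)
aic_linear = n * numpy.log(rss_linear / n) + 2 * 2
print "linear fit AIC: {0}".format(aic_linear)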

Extrapolating data from a curve using Python

I am trying to extrapolate future data points from a data set that contains one continuous value per day for almost 600 days. I am currently fitting a 1st-order function to the data using numpy.polyfit and numpy.poly1d. In the graph below you can see the curve (blue) and the 1st-order function (green). The x-axis is days since the beginning. I am looking for an effective way to model this curve in Python in order to extrapolate future data points as accurately as possible. A linear regression isn't accurate enough, and I'm unaware of any methods of nonlinear regression that could work in this instance.
This solution isn't accurate enough; here is what I currently feed it:
x = dfnew["days_since"]
y = dfnew["nonbrand"]
z = numpy.polyfit(x,y,1)
f = numpy.poly1d(z)
x_new = future_days
y_new = f(x_new)
plt.plot(x,y, '.', x_new, y_new, '-')
EDIT:
I have now tried curve_fit with a logarithmic function, which the curve and data behaviour seem to conform to:
def func(x, a, b):
    return a*numpy.log(x)+b
x = dfnew["days_since"]
y = dfnew["nonbrand"]
popt, pcov = curve_fit(func, x, y)
plt.plot( future_days, func(future_days, *popt), '-')
However when I plot it, my Y-values are way off:
The very general rule of thumb is that if your fitting function is not fitting your actual data well enough, then either:
You are using the function wrongly, e.g. you are using 1st-order polynomials - so if you are convinced that it is a polynomial, then try higher-order polynomials.
You are using the wrong function; it is always worth taking a look at:
your data curve &
what you know about the process that is generating the data
to come up with some speculations/theories/guesses about what sort of model might fit better.
Might your process be a logarithmic one, a saturating one, etc.? Try them (a quick sketch comparing two such candidates follows below)!
Finally, if you are not getting a consistent long-term trend then you might be able to justify using cubic splines.
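A minimal sketch of trying a couple of candidate models side by side, as suggested above. The data arrays are hypothetical stand-ins for the question's dfnew["days_since"] / dfnew["nonbrand"] columns, and the saturating-exponential model is just one plausible guess among many:

import numpy as np
from scipy.optimize import curve_fit

# hypothetical stand-ins for dfnew["days_since"] and dfnew["nonbrand"]
x = np.arange(1, 600, dtype=float)
y = 50.0 * np.log(x) + np.random.normal(0.0, 5.0, x.size)

def log_model(x, a, b):
    return a * np.log(x) + b

def saturating_model(x, a, r, c):
    # rises quickly, then levels off near a + c
    return a * (1.0 - np.exp(-r * x)) + c

popt_log, _ = curve_fit(log_model, x, y)
popt_sat, _ = curve_fit(saturating_model, x, y, p0=[300.0, 0.01, 1.0], bounds=(0, np.inf))

for name, model, popt in [("logarithmic", log_model, popt_log),
                          ("saturating", saturating_model, popt_sat)]:
    rss = np.sum((y - model(x, *popt)) ** 2)
    print("{0}: parameters {1}, residual sum of squares {2:.1f}".format(name, popt, rss))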

Fitting a straight line to a log-log curve in matplotlib

I have a plot which is logarithmic on both axes. I used pyplot's loglog function to do this, and it gives me the logarithmic scale on both axes.
Now, using numpy I fit a straight line to the set of points that I have. However, when I plot this line, I cannot get a straight line: I get a curved line.
The blue line is the supposed "straight line". It is not getting plotted straight. I want to fit this straight line to the curve plotted by the red dots.
Here is the code I am using to plot the points:
import numpy
from matplotlib import pyplot as plt
import math

fp = open("word-rank.txt", "r")
a = []
b = []
for line in fp:
    string = line.strip().split()
    a.append(float(string[0]))
    b.append(float(string[1]))

coefficients = numpy.polyfit(b, a, 1)
polynomial = numpy.poly1d(coefficients)
ys = polynomial(b)
print polynomial
plt.loglog(b, a, 'ro')
plt.plot(b, ys)
plt.xlabel("Log (Rank of frequency)")
plt.ylabel("Log (Frequency)")
plt.title("Frequency vs frequency rank for words")
plt.show()
To better understand this problem, let's first talk about plain ol' linear regression (the polyfit function, in this case, is your linear regression algorithm).
Suppose you have a set of data points (x,y), shown below:
You want to create a model that predicts y as a function of x, so you use linear regression. That uses the model:
y = mx + b
and computes the values of m and b that best predict your data, using some linear algebra.
Next, you use your model to predict values of y as a function of x. You do this by picking a set of values for x (think linspace) and computing the corresponding values of y. Plotting these (x,y) pairs gives you your regression line.
Now, let's talk about logarithmic regression. In this case, we still have two variables, y versus x, and we are still interested in their relationship, i.e., being able to predict y given x. The only difference is, now y and x happen to be logarithms of two other variables, which I'll call log(F) and log(R). Thus far, this is nothing more than a simple change of name.
The linear regression also works the same way. You're still regressing y versus x. The linear regression algorithm doesn't care that y and x are actually log(F) and log(R) - it makes no difference to the algorithm.
The last step is a little bit different - and this is where you're getting tripped up in your plot above. What you're doing is computing
F = m R + b
but this is incorrect, because the relationship between F and R is not linear. (That's why you're using a log-log plot.)
Instead, you should compute
log(F) = m log(R) + b
If you transform this (raise 10 to the power of both sides and rearrange), you get
F = c R^m
where c = 10^b. This is the relationship between F and R: it is a power law relationship. (Power law relationships are what log-log plots are best at.)
In your code, you're using A and B when calling polyfit, but you should be using log(A) and log(B).
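A minimal sketch of that fix, assuming a and b are the frequency and rank lists already read from word-rank.txt in the question's code: fit in log space, then map the line back to a power law before drawing it on the log-log axes.

logA = numpy.log10(numpy.asarray(a, dtype=float))
logB = numpy.log10(numpy.asarray(b, dtype=float))

m, intercept = numpy.polyfit(logB, logA, 1)   # log(F) = m*log(R) + b
c = 10 ** intercept

plt.loglog(b, a, 'ro')
plt.loglog(b, c * numpy.asarray(b, dtype=float) ** m)   # F = c * R**m, straight on log-log axes
plt.show()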
Your linear fit is not performed on the same data as shown in the loglog-plot.
Make a and b numpy arrays like this
a = numpy.asarray(a, dtype=float)
b = numpy.asarray(b, dtype=float)
Now you can perform operations on them. What the loglog-plot does is take the logarithm to base 10 of both a and b. You can do the same by
logA = numpy.log10(a)
logB = numpy.log10(b)
This is what the loglog plot visualizes. Check this by plotting both logA and logB as a regular plot. Repeat the linear fit on the log data and plot your line in the same plot as the logA, logB data.
coefficients = numpy.polyfit(logB, logA, 1)
polynomial = numpy.poly1d(coefficients)
ys = polynomial(logB)
plt.plot(logB, logA)
plt.plot(logB, ys)
The other answers offer great explanations and a solution. However, I would like to propose a solution that helped me a lot and maybe will help you as well.
Another simple way of writing a line fit for log-log scale is the function powerfit in the code below. It takes in the original x and y data and by using a number of new x-points you can get a straight line on log-log scale. In the current case the values xnew are the same as x (which are both b).
The advantage of defining new x-coordinates is that you can get as few or as many points of the powerfitted line for whatever purpose you might need them.
import numpy as np
from matplotlib import pyplot as plt
import math

def powerfit(x, y, xnew):
    """line fitting on log-log scale"""
    k, m = np.polyfit(np.log(x), np.log(y), 1)
    # xnew may be a plain list, so convert it before exponentiating
    return np.exp(m) * np.asarray(xnew, dtype=float)**k

fp = open("word-rank.txt", "r")
a = []
b = []
for line in fp:
    string = line.strip().split()
    a.append(float(string[0]))
    b.append(float(string[1]))

ys = powerfit(b, a, b)

plt.loglog(b, a, 'ro')
plt.plot(b, ys)
plt.xlabel("Log (Rank of frequency)")
plt.ylabel("Log (Frequency)")
plt.title("Frequency vs frequency rank for words")
plt.show()

SciPy + Numpy: Finding the slope of a sigmoid curve

I have some data that follow a sigmoid distribution as you can see in the following image:
After normalizing and scaling my data, I have adjusted the curve at the bottom using scipy.optimize.curve_fit and some initial parameters:
popt, pcov = curve_fit(sigmoid_function, xdata, ydata, p0 = [0.05, 0.05, 0.05])
>>> print popt
[ 2.82019932e+02 -1.90996563e-01 5.00000000e-02]
So popt, according to the documentation, returns "Optimal values for the parameters so that the sum of the squared error of f(xdata, *popt) - ydata is minimized". I understand from this that there is no calculation of the slope with curve_fit, because I do not think the slope of this gentle curve is 282, nor is it negative.
Then I tried with scipy.optimize.leastsq, because the documentation says it returns "The solution (or the result of the last iteration for an unsuccessful call).", so I thought the slope would be returned. Like this:
p, cov, infodict, mesg, ier = leastsq(residuals, p_guess, args = (nxdata, nydata), full_output=True)
>>> print p
Param(x0=281.73193626250207, y0=-0.012731420027056234, c=1.0069006606656596, k=0.18836680131910222)
But again, I did not get what I expected. curve_fit and leastsq returned almost the same values, which is not surprising I guess, as curve_fit uses an implementation of the least squares method internally to find the curve. But no slope came back... unless I overlooked something.
So, how to calculate the slope in a point, say, where X = 285 and Y = 0.5?
I am trying to avoid manual methods, like calculating the derivative by taking, say, (285.5, 0.55) and (284.5, 0.45), subtracting and dividing the results, and so on. I would like to know if there is a more automatic method for this.
Thank you all!
EDIT #1
This is my "sigmoid_function", used by curve_fit and leastsq methods:
def sigmoid_function(xdata, x0, k, p0):  # p0 not used anymore, only its components (x0, k)
    # This function is called by two different methods: curve_fit and leastsq,
    # this last one through function "residuals". I don't know if it makes sense
    # to use a single function for two (somewhat similar) methods, but there
    # it goes.
    # p0:
    #   + Is the initial parameter for scipy.optimize.curve_fit.
    #   + For residuals calculation is left empty.
    #   + It is initialized to [0.05, 0.05, 0.05].
    # x0:
    #   + Is the convergence parameter in X-axis and also the shift.
    #   + It starts with 0.05 and ends up being around ~282 (days in a year).
    # k:
    #   + Set up either by curve_fit or leastsq.
    #   + In least squares it is initially fixed at 0.5 and in curve_fit
    #     to 0.05. Why? Just did this approach in two different ways and
    #     it seems it is working.
    #   + But honestly, I have no clue on what it represents.
    # xdata:
    #   + Positions in X-axis. In this case from 240 to 365.
    # Finally I changed those parameters as suggested in the answer.
    # Sigmoid curve has 2 degrees of freedom, therefore, the initial
    # guess only needs to be this size. In this case, p0 = [282, 0.5]
    y = np.exp(-k*(xdata-x0)) / (1 + np.exp(-k*(xdata-x0)))
    return y

def residuals(p_guess, xdata, ydata):
    # For the residuals calculation, there is no need of setting up the initial parameters
    # After fixing the initial guess and sigmoid_function header, remove []
    # return ydata - sigmoid_function(xdata, p_guess[0], p_guess[1], [])
    return ydata - sigmoid_function(xdata, p_guess[0], p_guess[1], [])
I am sorry if I made mistakes while describing the parameters or confused technical terms. I am very new with numpy and I have not studied maths for years, so I am catching up again.
So, again, what is your advice to calculate the slope of X = 285, Y = 0.5 (more or less the midpoint) for this dataset? Thanks!!
EDIT #2
Thanks to Oliver W., I updated my code as he suggested and understood a bit better the problem.
There is a final detail I do not fully get. Apparently, curve_fit returns a popt array (x0, k) with the optimum parameters for the fitting:
x0 seems to be how far the curve is shifted, indicating the central point of the curve
the k parameter is the slope when y = 0.5, i.e. at the center of the curve (I think!)
Why, if the sigmoid function is a growing one, is the derivative/slope in popt negative? Does that make sense?
I used sigmoid_derivative to calculate the slope and, yes, I obtained the same results as popt but with a positive sign.
# Year 2003, 2005, 2007. Slope in midpoint.
k = [-0.1910, -0.2545, -0.2259] # Values coming from popt
slope = [0.1910, 0.2545, 0.2259] # Values coming from sigmoid_derivative function
I know this is a bit picky because I could use either. The relevant data is there, just with a negative sign, but I was wondering why this is happening.
So, the calculation of the derivative function as you suggested, is only required if I need to know the slope in other points than y = 0.5. Only for midpoint, I can use popt.
Thanks for your help, it saved me a lot of time. :-)
You're never using the parameter p0 you're passing to your sigmoid function. Hence, curve fitting will not have any good measure to find convergence, because it can take any value for this parameter. You should first rewrite your sigmoid function like this:
def sigmoid_function(xdata, x0, k):
    y = np.exp(-k*(xdata-x0)) / (1 + np.exp(-k*(xdata-x0)))
    return y
This means your model (the sigmoid) has only two degrees of freedom. This will be returned in popt:
initial_guess = [282, 1] # (x0, k): at x0, the sigmoid reaches 50%, k is slope related
popt, pcov = curve_fit(sigmoid_function, xdata, ydata, p0=initial_guess)
Now popt will be a tuple (or array of 2 values), being the best possible x0 and k.
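As a quick visual check of this two-parameter fit, a sketch along the following lines could be used; the xdata/ydata arrays are hypothetical stand-ins for the question's normalized data (x roughly between 240 and 365), and the initial guess is adapted to them:

import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt

def sigmoid_function(xdata, x0, k):
    # same two-parameter form as above
    return np.exp(-k * (xdata - x0)) / (1 + np.exp(-k * (xdata - x0)))

# hypothetical stand-ins for the question's normalized data
xdata = np.linspace(240, 365, 50)
ydata = sigmoid_function(xdata, 282.0, -0.19) + np.random.normal(0.0, 0.02, xdata.size)

popt, pcov = curve_fit(sigmoid_function, xdata, ydata, p0=[282.0, -0.1])
print("x0 = {0:.2f}, k = {1:.4f}".format(*popt))

plt.plot(xdata, ydata, 'o', label='data')
plt.plot(xdata, sigmoid_function(xdata, *popt), '-', label='fit')
plt.legend()
plt.show()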
To get the slope of this function at any point, to be honest, I would just calculate the derivative symbolically as the sigmoid is not such a hard function. You will end up with:
def sigmoid_derivative(x, x0, k):
    f = np.exp(-k*(x-x0))
    return -k / f
If you have the results from your curve fitting stored in popt, you could pass this easily to this function:
print(sigmoid_derivative(285, *popt))
which will return for you the derivative at x = 285. But because you ask specifically for the midpoint, where x == x0 and y == 0.5, you'll see (from sigmoid_derivative) that the derivative there is just -k, which can be observed immediately from the curve_fit output you've already obtained. In the output you've shown, that's about 0.19.
