I'm troubled by warnings from odeint and curve_fit. Here is what I am running into:
My first problem is that curve_fit and odeint repeatedly emit the warnings below for the 3 data sets (6 warnings in total), yet curve_fit still returns results that look correct.
828: OptimizeWarning: Covariance of the parameters could not be estimated
247: ODEintWarning: Excess work done on this call (perhaps wrong Dfun type). Run with full_output = 1 to get quantitative information.
My second problem concerns the integrated curves: with the exact same code, just executing it multiple times, I get different results. There even seems to be a cycle of repetition; after a few executions the curves come out correct, but on the next execution they are wrong again, and so on. Maybe a problem of instability?
import math
import numpy as np
import pathlib as pl
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.integrate import odeint

def loadData(path):
    with open(path, "r") as fid:
        res = np.loadtxt(fid, comments="#")
    return res

def modeleeq(a, N, p, we, wp):
    res = (we*a/p[0])**p[1] + (wp*a/p[2])**p[3]
    return res

# get path
chemin = pl.Path(input("Paste the path to the directory containing data files\n"))
# get we and wp in a 3x2 array
wewp = loadData(chemin/'wewp.dat').transpose()
# plotting
plt.style.use('seaborn')
fig, (ax1, ax2) = plt.subplots(1, 2)
colors = ['#79ccff', '#f78db4', '#a07ffb']
files = pl.Path(chemin).glob("a_dadN*")
para = []
for i, f in enumerate(files):
    res = loadData(f)
    a = res[:, 0]
    dadN = res[:, 1]
    we = wewp[i, 0]
    wp = wewp[i, 1]
    # exp data
    ax1.scatter(a, dadN, c=colors[i], marker="x", label=f"test {i+1}")
    # evaluation of parameters
    modele = lambda a, *p: (we*a/p[0])**p[1] + (wp*a/p[2])**p[3]  # p = [ge, me, gp, mp]
    p, pcov = curve_fit(modele, a, dadN, p0=[2e5, 0, 1e5, 0])
    para.append(p)
    ax1.plot(a, modele(a, *p), c=colors[i], label=f"test {i+1} identification")
    # integration
    a0 = a.min()  # initial condition
    N = np.linspace(0, 4000)
    aitgr = odeint(modeleeq, a0, N, args=(p, we, wp))
    ax2.plot(N, aitgr, c=colors[i], label=f"test {i+1} integration")
# the following code just adds titles and prints the identified parameters,
# so I won't put it here
Big thanks to you all!
I think there are a few issues here. First, it is clear that the two addends are very similar, so it can, and actually does, happen that the data cannot distinguish between them. Scaling is a second issue: having fit parameters that span orders of magnitude is usually not a good idea. In fact, the gammas just rescale the Ws, so one can easily rewrite each term as ( f1 * a )**e1, fit f1, and calculate gamma afterwards from f1 = W/gamma. Another issue is the possibility of a negative power of a negative number coming up, so one should use either abs( f1 ) or f1**2. With this in mind I modified the code and got the result below. From the fit results in the second and third case, one can see that either f1 = 0 or the exponents are almost equal. In such a case it is normal that the covariance matrix cannot be determined.
Finally, when it comes to integrating / plotting a(N): the differential equation is roughly of the type da/dN = a**k, so da / a**k = dN. Integrating gives a**(1-k) proportional to N + N0, i.e. a(N) ~ 1 / (N + N0)**(1/(k-1)). As the initial slope is positive, N0 < 0 and the function diverges where N + N0 = 0. Numerical integration beyond this value does not make sense.
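Spelling out that integration step for a single power-law term (a sketch; the actual model has two terms, but near the divergence the dominant term behaves this way):

\[
\frac{da}{dN}=a^{k}
\quad\Longrightarrow\quad
\int a^{-k}\,da=\int dN
\quad\Longrightarrow\quad
\frac{a^{1-k}}{1-k}=N+N_0
\quad\Longrightarrow\quad
a(N)=\bigl[(1-k)\,(N+N_0)\bigr]^{1/(1-k)} .
\]

For k > 1 the exponent 1/(1-k) is negative, so a(N) blows up as the bracket goes to zero; up to constants this is the 1/(N+N0)**(1/(k-1)) form above, and the ODE solver has nothing meaningful to compute beyond that divergence.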
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.integrate import odeint

def loadData(path):
    with open(path, "r") as fid:
        res = np.loadtxt(fid, comments="#")
    return res

def simplified_model(a, N, f1, e1, f2, e2):
    dadN = (np.fabs(f1) * a)**e1 + (np.fabs(f2) * a)**e2
    return dadN

plt.style.use('seaborn')
fig = plt.figure()
ax1 = fig.add_subplot(1, 2, 1)
ax2 = fig.add_subplot(1, 2, 2)
colors = ['#79ccff', '#f78db4', '#a07ffb']
para = list()

for i in range(3):
    res = loadData("a_dadN_{}.dat".format(i + 1))
    a = res[::, 0]
    dadN = res[::, 1]
    # ~we = wewp[i, 0]
    # ~wp = wewp[i, 1]

    def model_wrapper(a, ge, me, gp, mp):
        return simplified_model(a, 0, ge, me, gp, mp)

    # exp data
    ax1.scatter(a, dadN, c=colors[i], marker="x", label=f"test {i+1}")
    print(i)
    al = np.linspace(min(a), max(a), 50)
    guess = [.2 + i, 2.0, 2, 2.2]
    dal = np.fromiter((model_wrapper(av, *guess) for av in al), float)
    ax1.plot(al, dal, color=colors[i], ls=':')
    p, pcov = curve_fit(model_wrapper, a, dadN, p0=guess, maxfev=100000)
    print(p)
    dalf = np.fromiter((model_wrapper(av, *p) for av in al), float)
    ax1.plot(al, dalf, color=colors[i])
    # integration
    a0 = a.min()  # initial condition
    N = np.linspace(0, 4000, 100000)
    aitgr = odeint(simplified_model, a0, N, args=tuple(p))
    ax2.plot(N, aitgr, c=colors[i], label=f"test {i+1} integration")

ax2.set_ylim([1e-4, 1e-2])
ax2.set_yscale("log")
plt.show()
providing
I want to fit my data using two linear functions (a broken power law) with one breaking point which is user-given. Currently I'm using the curve_fit function from the scipy.optimize module. Here are my data sets: frequencies, binned data, errors.
Here is my code:
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
freqs=np.loadtxt('binf11.dat')
binys=np.loadtxt('binp11.dat')
errs=np.loadtxt('bine11.dat')
def brkPowLaw(xArray, breakp, slopeA, offsetA, slopeB):
    returnArray = []
    for x in xArray:
        if x <= breakp:
            returnArray.append(slopeA * x + offsetA)
        elif x > breakp:
            returnArray.append(slopeB * x + offsetA)
    return returnArray
#define initial guesses, breakpoint=-3.2
a_fit,cov=curve_fit(brkPowLaw,freqs,binys,sigma=errs,p0=(-3.2,-2.0,-2.0,-2.0))
modelPredictions = brkPowLaw(freqs, *a_fit)
plt.errorbar(freqs, binys, yerr=errs, fmt='kp',fillstyle='none',elinewidth=1)
plt.xlim(-5,-2)
plt.plot(freqs,modelPredictions,'r')
The offset of the second linear function is set to be equal to the offset of the first one.
It looks like this works but I get this fit:
Now I thought that the condition in my brkPowLaw function should suffice, but it does not. What I want is for the first linear equation to fit the data up to a chosen breaking point, and from that breaking point on for a second linear fit to take over, without the hump that shows up in the plot; as it is, it looks like there are two breaking points instead of one, and three linear functions doing the fitting, which is not what I expected or wanted.
What I want is that when the first linear fit ends, the second one starts from the point where the first one ended.
I have tried using the numpy.piecewise function with no plausible result, and looked into some topics like this or this, but I did not manage to make my script work.
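Conceptually, what I am after is something like the sketch below, where the breakpoint is user-given and the second offset follows from requiring the two lines to meet there (the parameter names and the synthetic data are only illustrative, not my real data):

import numpy as np
from scipy.optimize import curve_fit

BREAKP = -3.2  # user-chosen breaking point (illustrative value)

def brokenLine(x, slopeA, offsetA, slopeB):
    # continuity at BREAKP fixes the second offset:
    # slopeA*BREAKP + offsetA == slopeB*BREAKP + offsetB
    offsetB = offsetA + (slopeA - slopeB) * BREAKP
    return np.where(x <= BREAKP, slopeA * x + offsetA, slopeB * x + offsetB)

# synthetic stand-in for freqs / binys
x = np.linspace(-5, -2, 60)
y = brokenLine(x, -2.5, -11.0, -1.0) + np.random.normal(0, 0.05, x.size)
popt, pcov = curve_fit(brokenLine, x, y, p0=(-2.0, -10.0, -2.0))
print(popt)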
Thank you for your time
This would be my approach, not with linear but quadratic functions.
import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import curve_fit

def soft_step(x, s):  ### not my usual np.tanh() ...OMG
    return 1 + 0.5 * s * x / np.sqrt(1 + (s * x)**2)

### for the looks of the data I decided to go for two parabolas with
### one discontinuity
def fit_func(x, a0, b0, c0, a1, b1, c1, x0, s):
    out = (a0 * x**2 + b0 * x + c0) * (1 - soft_step(x - x0, s))
    out += (a1 * x**2 + b1 * x + c1) * soft_step(x - x0, s)
    return out

### with global parameter for iterative fit
### if using least_squares one could avoid globals
def fit_short(x, a0, b0, c0, a1, b1, c1, x0):
    global stepwidth
    return fit_func(x, a0, b0, c0, a1, b1, c1, x0, stepwidth)

### getting data
xl = np.loadtxt("binf11.dat")
yl = np.loadtxt("binp11.dat")
el = np.loadtxt("bine11.dat")

### check for initial values
p0 = [0, -2, -11, 0, -2, -9, -3, 10]
xth = np.linspace(-5.5, -1.5, 250)
yth = np.fromiter((fit_func(x, *p0) for x in xth), float)

### initial fit
sol, pcov = curve_fit(fit_func, xl, yl, sigma=el, p0=p0, absolute_sigma=True)
yft = np.fromiter((fit_func(x, *sol) for x in xth), float)
sol = sol[:-1]

### iterating with fixed and decreasing softness in the step
for stepwidth in range(10, 55, 5):
    sol, pcov = curve_fit(fit_short, xl, yl, sigma=el, p0=sol, absolute_sigma=True)
    ### printing the step position
    print(sol[-1])

yiter = np.fromiter((fit_short(x, *sol) for x in xth), float)
print(sol)

### plotting
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
# ~ax.plot( xth, yth ) ### no need to show start parameters
ax.plot(xth, yft)    ### first fit with variable softness
ax.plot(xth, yiter)  ### last fit with fixed softness of 50
ax.errorbar(xl, yl, el, marker='o', ls='')  ### data
plt.show()
This gives:
-3.1762721614559712
-3.1804393481217477
-3.1822672190583603
-3.183493292415725
-3.1846976088390333
-3.185974760198917
-3.1872472903175266
-3.188427041827035
-3.1894705102541843
[ -0.78797351 -5.33255174 -12.48258537 0.53024954 1.14252783 -4.44589397 -3.18947051]
and the final fit puts the jump at -3.189.
I am trying to fit data with an admittance equation for an RLC circuit having 6 components. I am following an example given here and inserted my equation. The equation is the real part of the admittance for the 6-component circuit, simplified using Mathcad. In the attached figure the x axis is omega (w = 2*pi*f) and y is the admittance in millisiemens.
The program runs but it doesn't do the fitting despite a good trial function. I would appreciate any help with why the fit is a straight line. I have also attached a Gaussian fitting example.
This is what I get when I try to fit with the equation: the data is the curve with the smaller peak on the left, the trial function is the dotted line, and the fit is a straight line.
from numpy import sqrt, pi, exp, linspace, loadtxt
from lmfit import Model
import matplotlib.pyplot as plt
data = loadtxt("C:/Users/susu/circuit_eq_real5.dat")
x = data[:, 0]
y = data[:, 1]
def circuit(x, C0, Cm, Lm, Rm, R0, Rs):
    return ((C0**2*Cm**2*Lm**2*R0*x**4)+(Rs*C0**2*Cm**2*Lm**2*x**4)+(C0**2*Cm**2*R0**2*Rm*x**2)+(Rs*C0**2*Cm**2*R0**2*x**2)+(C0**2*Cm**2*R0*Rm**2*x**2)+(2*Rs*C0**2*Cm**2*R0*Rm*x**2)+(Rs*C0**2*Cm**2*Rm**2*x**2)-(2*C0**2*Cm*Lm*R0*x**2)-(2*Rs*C0**2*Cm*Lm*x**2)+(C0**2*R0)+(Rs*C0**2)-(2*Rs*C0*Cm**2*Lm*x**2)+(2*Rs*C0*Cm)+(Cm**2*Rm)+(Rs*Cm**2))/((C0**2*Cm**2*Lm**2*x**4)+(C0**2*Cm**2*R0**2*x**2)+(2*C0**2*Cm**2*R0*Rm*x**2)+(C0**2*Cm**2*Rm**2*x**2)-(2*C0**2*Cm*Lm*x**2)+(C0**2)-(2*C0*Cm**2*Lm*x**2)+(2*C0*Cm)+(Cm**2))
gmodel = Model(circuit)
result = gmodel.fit(y, x=x, C0=1.0408*10**(-12), Cm=5.953*10**(-14),
Lm=1.475*10**(-7), Rm=1.571, R0=2.44088, Rs=0.42)
print(result.fit_report())
plt.plot(x, y, 'bo')
plt.plot(x, result.init_fit, 'k--')
plt.plot(x, result.best_fit, 'r-')
plt.show()
Below is the Fit Report
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 14005
# data points = 237
# variables = 6
chi-square = 32134074.5
reduced chi-square = 139108.548
Akaike info crit = 2812.71607
Bayesian info crit = 2833.52443
[[Variables]]
C0: -7.5344e-15 +/- 6.3081e-09 (83723736.65%) (init = 1.0408e-12)
Cm: -8.9529e-13 +/- 1.4518e-06 (162164237.47%) (init = 5.953e-14)
Lm: 2.4263e-06 +/- 1.94051104 (79978205.20%) (init = 1.475e-07)
Rm: -557.974399 +/- 1.3689e+09 (245334051.75%) (init = 1.571)
R0: -5178.53517 +/- 6.7885e+08 (13108904.45%) (init = 2.44088)
Rs: 2697.67659 +/- 7.3197e+08 (27133477.70%) (init = 0.42)
[[Correlations]] (unreported correlations are < 0.100)
C(R0, Rs) = -1.003
C(Rm, Rs) = -0.987
C(Rm, R0) = 0.973
C(C0, Lm) = 0.952
C(C0, Cm) = -0.502
C(Cm, R0) = -0.483
C(Cm, Rs) = 0.453
C(Cm, Rm) = -0.388
C(Cm, Lm) = -0.349
C(C0, R0) = 0.310
C(C0, Rs) = -0.248
C(C0, Rm) = 0.148
Thank you so much, M Newville and Mikuszefski and others, for your insights and feedback. I agree that what I put there is perhaps a mess to put in a program. It is apparent from the Python code that I am not versed in Python or programming.
Mikuszefski, thanks for posting the RLC example code. Your approach is neat and interesting. I didn't know Python does direct complex fitting. I will try your approach and see if I can do the fit. I want to fit both the real and imaginary part of Y (admittance). I will definitely get stuck somewhere and will post my progress here.
Best,
Susu
Here is a way to clean up RLC circuits with parallel and series connections. It avoids the super long, hard-to-check function, and it avoids Matlab or similar programs, as it computes the circuit directly. It can surely be extended easily to the OP's circuit. As pointed out by M Newville, the naive fit fails. If, on the other hand, units are scaled to natural units, it works even without initial parameters. Note that the results are only correct up to a scaling factor; one needs to know at least one component's value.
import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import curve_fit

def r_l(w, l):
    return 1j * w * l

def r_c(w, c):
    return 1. / (1j * w * c)

def parallel(a, b):
    return 1. / (1. / a + 1. / b)

def series(a, b):
    return a + b

# simple rlc band pass filter (to be extended)
def rlc_band(w, r, l, c):
    lc = parallel(r_c(w, c), r_l(w, l))
    return lc / series(r, lc)

def rlc_band_real(w, r, l, c):
    return rlc_band(w, r, l, c).real

def rlc_band_real_milli_nano(w, r, l, c):
    return rlc_band_real(w, r, 1e-6 * l, 1e-9 * c).real

wList = np.logspace(5, 7, 25)
wFullList = np.logspace(5, 7, 500)

rComplexList = np.fromiter((rlc_band(w, 12, 1.3e-5, 1e-7) for w in wList), complex)
rList = np.fromiter((r.real for r in rComplexList), float)
pList = np.fromiter((np.angle(r) for r in rComplexList), float)

fit1, pcov = curve_fit(rlc_band_real, wList, rList)
print(fit1)
print("does not work")

fit2, pcov = curve_fit(rlc_band_real_milli_nano, wList, rList)
print(fit2)
print("works, but is not unique (scaling is possible)")
print(12, fit2[1] * 12 / fit2[0], fit2[2] * fit2[0] / 12.)

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(wList, rList, ls='', marker='o', label='data')
#~ ax.plot( wList, pList )
ax.plot(wFullList, [rlc_band_real(w, *fit1) for w in wFullList], label='naive fit')
ax.plot(wFullList, [rlc_band_real_milli_nano(w, *fit2) for w in wFullList], label='scaled units')
ax.set_xscale('log')
ax.legend(loc=0)
plt.show()
Providing:
>> /...minpack.py:785: OptimizeWarning: Covariance of the parameters could not be estimated category=OptimizeWarning)
>> [1. 1. 1.]
>> does not work
>> [ 98.869924 107.10908434 12.13715912]
>> works, but is not unique (scaling is possible)
>> 12 13.0 100.0
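Since the OP also wants to fit both the real and imaginary part of Y, the same machinery extends naturally: stack the two parts into one real-valued data vector for curve_fit. A sketch reusing rlc_band, wList and rComplexList from the code above, with C held at its (assumed known) value so the scaling ambiguity disappears:

# fit real and imaginary parts at once; C is fixed at 100 nF,
# r is fitted in ohms and l in microhenrys
def rlc_band_both(w, r, l):
    y = rlc_band(w, r, 1e-6 * l, 1e-9 * 100.0)
    return np.concatenate([y.real, y.imag])

data_both = np.concatenate([rComplexList.real, rComplexList.imag])
fit3, pcov = curve_fit(rlc_band_both, wList, data_both, p0=[10., 10.])
print(fit3)  # should come out near (12 ohm, 13 microhenry)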
Providing a real link to a text file of the actual data you are using and/or a real plot of what you are actually seeing would be most helpful. Also, please provide an accurate and complete description of the results including the text of what is actually printed out by the print(result.fit_report()). Basically, ask yourself how you might try to help someone who asked such a question, and provide as much information as you can.
No one (including you) is ever going to be able to spell-check the implementation of your function. You will need thorough and robust testing of this function in order to convince anyone (including you, I hope) that it is doing what you think it should do. You should provide the results of those tests before worrying about why it is not working as a fitting function. You should definitely consider refactoring that mess of an equation into more manageable and readable pieces.
That said, I also strongly recommend that you do not work in units of Farads and Henrys but picoFarads or nanoFarads and microHenrys. That will make the values much closer to 1 (say, order 1e-6 to 1e+6), which will make it much easier for the fit to do its job.
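A minimal sketch of what that unit change looks like with the model above (the wrapper name and the chosen scale factors are only illustrative; circuit, x and y are the ones from the original script):

from lmfit import Model

# same model, but with parameters expressed in pF / uH / ohm so that all
# fitted values are of order 1; the wrapper rescales to SI units internally
def circuit_scaled(x, C0_pF, Cm_pF, Lm_uH, Rm, R0, Rs):
    return circuit(x,
                   C0=C0_pF * 1e-12,
                   Cm=Cm_pF * 1e-12,
                   Lm=Lm_uH * 1e-6,
                   Rm=Rm, R0=R0, Rs=Rs)

gmodel = Model(circuit_scaled)
result = gmodel.fit(y, x=x,
                    C0_pF=1.0408, Cm_pF=0.05953, Lm_uH=0.1475,
                    Rm=1.571, R0=2.44088, Rs=0.42)
print(result.fit_report())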
I was trying to implement a Radial Basis Function in Python and NumPy as described in the Caltech lecture here. The mathematics seems clear to me, so I find it strange that it's not working (or seems not to work). The idea is simple: one chooses a subsampled number of centers for the Gaussians, forms a kernel (Gram) matrix, and finds the best coefficients, i.e. solves Kc = y with least squares. For that I did:
beta = 0.5*np.power(1.0/stddev,2)
Kern = np.exp(-beta*euclidean_distances(X=X,Y=subsampled_data_points,squared=True))
#(C,_,_,_) = np.linalg.lstsq(K,Y_train)
C = np.dot( np.linalg.pinv(Kern), Y )
but when I try to plot my interpolation with the original data they don't look at all alike:
This is with 100 random centers (from the data set). I also tried 10 centers, which produces essentially the same graph, as does using every data point in the training set. I assumed that using every data point should more or less perfectly copy the curve (overfit), but it didn't. It produces:
which doesn't seem correct. I will provide the full code (that runs without error):
import numpy as np
from sklearn.metrics.pairwise import euclidean_distances
from scipy.interpolate import Rbf
import matplotlib.pyplot as plt

## Data sets
def get_labels_improved(X, f):
    N_train = X.shape[0]
    Y = np.zeros((N_train, 1))
    for i in range(N_train):
        Y[i] = f(X[i])
    return Y

def get_kernel_matrix(x, W, S):
    beta = get_beta_np(S)
    #beta = 0.5*tf.pow(tf.div( tf.constant(1.0,dtype=tf.float64),S), 2)
    Z = -beta*euclidean_distances(X=x, Y=W, squared=True)
    K = np.exp(Z)
    return K

N = 5000
low_x = -2*np.pi
high_x = 2*np.pi
X = low_x + (high_x - low_x) * np.random.rand(N, 1)
# f(x) = 2*(2*cos(x)^2 - 1)^2 - 1
f = lambda x: 2*np.power(2*np.power(np.cos(x), 2) - 1, 2) - 1
Y = get_labels_improved(X, f)

K = 2  # number of centers for RBF
indices = np.random.choice(a=N, size=K)  # choose numbers from 0 to D^(1)
subsampled_data_points = X[indices, :]  # M_sub x D
stddev = 100
beta = 0.5*np.power(1.0/stddev, 2)
Kern = np.exp(-beta*euclidean_distances(X=X, Y=subsampled_data_points, squared=True))
#(C,_,_,_) = np.linalg.lstsq(K,Y_train)
C = np.dot(np.linalg.pinv(Kern), Y)
Y_pred = np.dot(Kern, C)

plt.plot(X, Y, 'o', label='Original data', markersize=1)
plt.plot(X, Y_pred, 'r', label='Fitted line', markersize=1)
plt.legend()
plt.show()
Since the plots look strange I decided to read the docs for the ploting functions but I couldn't find anything obvious that was wrong.
Scaling of interpolating functions
The main problem is unfortunate choice of standard deviation of the functions used for interpolation:
stddev = 100
The features of your function (its humps) are of size about 1. So, use
stddev = 1
Order of X values
The mess of red lines is there because plt from matplotlib connects consecutive data points, in the order given. Since your X values are in random order, this results in chaotic left-right movements. Use sorted X:
X = np.sort(low_x + (high_x - low_x) * np.random.rand(N,1), axis=0)
Efficiency issues
Your get_labels_improved method is inefficient, looping over the elements of X. Use Y = f(X), leaving the looping to low-level NumPy internals.
Also, the least-squares solution of an overdetermined system should be computed with lstsq instead of forming the pseudoinverse (computationally expensive) and multiplying by it.
Here is the cleaned-up code; using 30 centers gives a good fit.
import numpy as np
from sklearn.metrics.pairwise import euclidean_distances
import matplotlib.pyplot as plt
N = 5000
low_x =-2*np.pi
high_x=2*np.pi
X = np.sort(low_x + (high_x - low_x) * np.random.rand(N,1), axis=0)
f = lambda x: 2*np.power( 2*np.power( np.cos(x) ,2) - 1, 2) - 1
Y = f(X)
K = 30 # number of centers for RBF
indices=np.random.choice(a=N,size=K) # choose numbers from 0 to D^(1)
subsampled_data_points=X[indices,:] # M_sub x D
stddev = 1
beta = 0.5*np.power(1.0/stddev,2)
Kern = np.exp(-beta*euclidean_distances(X=X, Y=subsampled_data_points,squared=True))
C = np.linalg.lstsq(Kern, Y)[0]
Y_pred = np.dot(Kern, C)
plt.plot(X, Y, 'o', label='Original data', markersize=1)
plt.plot(X, Y_pred, 'r', label='Fitted line', markersize=1)
plt.legend()
plt.show()
I've been trying to fit the amplitude, frequency and phase of a sine curve given some generated two dimensional toy data. (Code at the end)
To get estimates for the three parameters, I first perform an FFT. I use the values from the FFT as initial guesses for the actual frequency and phase and then fit for them (row by row). I wrote my code such that I input which bin of the FFT I want the frequency to be in, so I can check if the fitting is working well. But there's some pretty strange behaviour. If my input bin is say 3.1 (a non integral bin, so the FFT won't give me the right frequency) then the fit works wonderfully. But if the input bin is 3 (so the FFT outputs the exact frequency) then my fit fails, and I'm trying to understand why.
Here's the output when I give the input bins (in the X and Y direction) as 3.0 and 2.1 respectively:
(The plot on the right is data - fit)
Here's the output when I give the input bins as 3.0 and 2.0:
Question: Why does the non linear fit fail when I input the exact frequency of the curve?
Code:
#! /usr/bin/python
# For the purposes of this code, it's easier to think of the X-Y axes as transposed,
# so the X axis is vertical and the Y axis is horizontal

import numpy as np
import matplotlib.pyplot as plt
import scipy.optimize as optimize
import itertools
import sys

PI = np.pi

# Function which accepts parameters to define a sine curve
# Used for the non linear fit
def sineFit(t, a, f, p):
    return a * np.sin(2.0 * PI * f*t + p)

xSize = 18
ySize = 60
npt = xSize * ySize

# Get frequency bin from user input
xFreq = float(sys.argv[1])
yFreq = float(sys.argv[2])
xPeriod = xSize/xFreq
yPeriod = ySize/yFreq

# arrays should be defined here

# Generate the 2D sine curve
for jj in range(0, xSize):
    for ii in range(0, ySize):
        sineGen[jj, ii] = np.cos(2.0*PI*(ii/xPeriod + jj/yPeriod))

# Compute 2dim FFT as well as freq bins along each axis
fftData = np.fft.fft2(sineGen)
fftMean = np.mean(fftData)
fftRMS = np.std(fftData)
xFreqArr = np.fft.fftfreq(fftData.shape[1])  # Frequency bins along x
yFreqArr = np.fft.fftfreq(fftData.shape[0])  # Frequency bins along y

# Find peak of FFT, and position of peak
maxVal = np.amax(np.abs(fftData))
maxPos = np.where(np.abs(fftData) == maxVal)

# Iterate through peaks in the FFT
# For this example, number of loops will always be only one
prevPhase = -1000
for col, row in itertools.izip(maxPos[0], maxPos[1]):
    # Initial guesses for fit parameters from FFT
    init_phase = np.angle(fftData[col, row])
    init_amp = 2.0 * maxVal/npt
    init_freqY = yFreqArr[col]
    init_freqX = xFreqArr[row]
    cntr = 0

    if prevPhase == -1000:
        prevPhase = init_phase
    guess = [init_amp, init_freqX, prevPhase]

    # Fit each row of the 2D sine curve independently
    for rr in sineGen:
        (amp, freq, phs), pcov = optimize.curve_fit(sineFit, xDat, rr, guess)
        # xDat is a linspace array, containing a list of numbers from 0 to xSize-1
        # Subtract fit from original data and plot
        fitData = sineFit(xDat, amp, freq, phs)
        sub1 = rr - fitData

        # Plot
        fig1 = plt.figure()
        ax1 = fig1.add_subplot(121)
        p1, = ax1.plot(rr, 'g')
        p2, = ax1.plot(fitData, 'b')
        plt.legend([p1, p2], ["data", "fit"])
        ax2 = fig1.add_subplot(122)
        p3, = ax2.plot(sub1)
        plt.legend([p3], ['residual1'])
        fig1.tight_layout()
        plt.show()

        cntr += 1
        prevPhase = phs  # Update guess for phase of sine curve
I've tried to distill the important parts of your question into this answer.
First of all, try fitting a single block of data, not an array. Once you are confident that your model is sufficient you can move on.
Your fit is only going to be as good as your model; if you move on to something not "sine"-like you'll need to adjust accordingly.
Fitting is an "art", in that the initial conditions can greatly change the convergence of the error function. In addition, there may be more than one minimum in your fits, so you often have to worry about the uniqueness of your proposed solution.
While you were on the right track with your FFT idea, I think your implementation wasn't quite correct. The code below should be a great toy system. It generates random data of the type f(x) = a0*sin(a1*x+a2). Sometimes a random initial guess will work, sometimes it will fail spectacularly. However, using the FFT guess for the frequency the convergence should always work for this system. An example output:
import numpy as np
import pylab as plt
import scipy.optimize as optimize

# This is your target function
def sineFit(t, args):
    a, f, p = args
    return a * np.sin(2.0*np.pi*f*t + p)

# This is our "error" function
def err_func(p0, X, Y, target_function):
    err = ((Y - target_function(X, p0))**2).sum()
    return err

# Try out different parameters, sometimes the random guess works
# sometimes it fails. The FFT solution should always work for this problem
inital_args = np.random.random(3)

X = np.linspace(0, 10, 1000)
Y = sineFit(X, inital_args)

# Use a random inital guess
inital_guess = np.random.random(3)

# Fit
sol = optimize.fmin(err_func, inital_guess, args=(X, Y, sineFit))

# Plot the fit
Y2 = sineFit(X, sol)
plt.figure(figsize=(15, 10))
plt.subplot(211)
plt.title("Random Initial Guess: Final Parameters: %s" % sol)
plt.plot(X, Y)
plt.plot(X, Y2, 'r', alpha=.5, lw=10)

# Use an improved "fft" guess for the frequency
# this will be the max in k-space
timestep = X[1] - X[0]
guess_k = np.argmax(np.fft.rfft(Y))
guess_f = np.fft.fftfreq(X.size, timestep)[guess_k]
inital_guess[1] = guess_f

# Guess the amplitude by taking the max of the absolute values
inital_guess[0] = np.abs(Y).max()

sol = optimize.fmin(err_func, inital_guess, args=(X, Y, sineFit))
Y2 = sineFit(X, sol)

plt.subplot(212)
plt.title("FFT Guess : Final Parameters: %s" % sol)
plt.plot(X, Y)
plt.plot(X, Y2, 'r', alpha=.5, lw=10)
plt.show()
The problem is due to a bad initial guess for the phase, not the frequency. While cycling through the rows of sineGen (the inner loop) you use the fit result of the previous row as the initial guess for the next one, which does not always work. If you determine the phase from an FFT of the current row and use that as the initial guess, the fit will succeed.
You could change the inner loop as follows:
for n, rr in enumerate(sineGen):
    fftx = np.fft.fft(rr)
    fftx = fftx[:len(fftx)//2]
    idx = np.argmax(np.abs(fftx))
    init_phase = np.angle(fftx[idx])
    print(fftx[idx], init_phase)
...
Also you need to change
def sineFit(t, a, f, p):
return a * np.sin(2.0 * np.pi * f*t + p)
to
def sineFit(t, a, f, p):
return a * np.cos(2.0 * np.pi * f*t + p)
since phase = 0 means that the imaginary part of the FFT is zero and thus the function is cosine-like.
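A quick standalone check of that convention (a toy example, not tied to the code above):

import numpy as np

# FFT of a pure cosine at an exact integer bin: the peak is (almost) purely real,
# so its phase is ~0. For a pure sine the same peak has phase ~ -pi/2.
t = np.arange(64)
k = 5  # exact integer bin, so there is no spectral leakage
cos_peak = np.fft.fft(np.cos(2*np.pi*k*t/t.size))[k]
sin_peak = np.fft.fft(np.sin(2*np.pi*k*t/t.size))[k]
print(np.angle(cos_peak), np.angle(sin_peak))  # ~0.0 and ~-1.5708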
Btw. your sample above is still lacking definitions of sineGen and xDat.
Without understanding much of your code, according to http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html:
(amp2, freq2, phs2), pcov = optimize.curve_fit(sineFit, tDat, sub1, guess2)
should become:
(amp2, freq2, phs2), pcov = optimize.curve_fit(sineFit, tDat, sub1, p0=guess2)
Assuming that tDat and sub1 are x and y, that should do the trick. But, once again, it is quite difficult to understand such complex code with so many interlinked variables and no comments at all. Code should always be built from the bottom up, meaning that you don't do a loop of fits when a single one is not working, and you don't add noise until the code fits the non-noisy examples... Good luck!
By "nothing fancy" I meant something like removing EVERYTHING that is not related to the fit, and doing a simplified mock example such as:
import numpy as np
import scipy.optimize as optimize

def sineFit(t, a, f, p):
    return a * np.sin(2.0 * np.pi * f*t + p)

# Create array of x and y with given parameters
x = np.asarray(range(100))
y = sineFit(x, 1, 0.05, 0)

# Give a guess and fit, printing result of the fitted values
guess = [1., 0.05, 0.]
print(optimize.curve_fit(sineFit, x, y, guess)[0])
The result of this is exactly the answer:
[1. 0.05 0.]
But if you change guess not too much, just enough:
# Give a guess and fit, printing result of the fitted values
guess = [1., 0.06, 0.]
print(optimize.curve_fit(sineFit, x, y, guess)[0])
the result gives absurdly wrong numbers:
[ 0.00823701 0.06391323 -1.20382787]
Can you explain this behavior?
You can use curve_fit with a series of trigonometric functions, which is usually very robust and adjustable to the precision that you need just by increasing the number of terms... here is an example:
from numpy import sin, cos, linspace, pi

def f(x, a0, s1, s2, s3, s4, s5, s6, s7, s8, s9, s10, s11, s12,
         c1, c2, c3, c4, c5, c6, c7, c8, c9, c10, c11, c12):
    return a0 + s1*sin(1*x) + c1*cos(1*x) \
              + s2*sin(2*x) + c2*cos(2*x) \
              + s3*sin(3*x) + c3*cos(3*x) \
              + s4*sin(4*x) + c4*cos(4*x) \
              + s5*sin(5*x) + c5*cos(5*x) \
              + s6*sin(6*x) + c6*cos(6*x) \
              + s7*sin(7*x) + c7*cos(7*x) \
              + s8*sin(8*x) + c8*cos(8*x) \
              + s9*sin(9*x) + c9*cos(9*x) \
              + s10*sin(10*x) + c10*cos(10*x) \
              + s11*sin(11*x) + c11*cos(11*x) \
              + s12*sin(12*x) + c12*cos(12*x)
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt

# x, y are the data to fit; rescale x so that the whole data range spans only a
# fraction of one period of the lowest-frequency term
norm_factor = pi/2. / (x.max() - x.min())
x_norm = x * norm_factor
popt, pcov = curve_fit(f, x_norm, y)
x_fit = linspace(x_norm.min(), x_norm.max(), 1000)
y_fit = f(x_fit, *popt)
plt.plot(x_fit/norm_factor, y_fit)
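As a self-contained illustration of the same idea (synthetic data, only three harmonics; all names and values are illustrative):

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

# truncated Fourier series with three harmonics
def fourier3(x, a0, s1, c1, s2, c2, s3, c3):
    return (a0 + s1*np.sin(x) + c1*np.cos(x)
               + s2*np.sin(2*x) + c2*np.cos(2*x)
               + s3*np.sin(3*x) + c3*np.cos(3*x))

# synthetic data on an arbitrary x range
x = np.linspace(3.0, 9.0, 200)
y = np.exp(-0.2*(x - 6)**2) + 0.02*np.random.randn(x.size)

# rescale x so the data occupy only part of one period of sin(x)/cos(x)
norm_factor = np.pi/2. / (x.max() - x.min())
x_norm = (x - x.min()) * norm_factor

popt, pcov = curve_fit(fourier3, x_norm, y)
plt.plot(x, y, 'o', ms=3, label='data')
plt.plot(x, fourier3(x_norm, *popt), label='3-term Fourier fit')
plt.legend()
plt.show()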
My knowledge of maths is limited, which is why I am probably stuck. I have a spectrum to which I am trying to fit two Gaussian peaks. I can fit the larger peak, but I cannot fit the smaller one. I understand that I need to sum the Gaussian functions for the two peaks, but I do not know where I have gone wrong. An image of my current output is shown:
The blue line is my data and the green line is my current fit. There is a shoulder to the left of the main peak in my data which I am currently trying to fit, using the following code:
import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import leastsq
from pylab import *

time = []
counts = []
for i in open('/some/folder/to/file.txt', 'r'):
    segs = i.split()
    time.append(float(segs[0]))
    counts.append(float(segs[1]))
time_array = arange(len(time), dtype=float)
counts_array = arange(len(counts))
time_array[0:] = time
counts_array[0:] = counts

def model(time_array0, coeffs0):
    a = coeffs0[0] + coeffs0[1] * np.exp( - ((time_array0-coeffs0[2])/coeffs0[3])**2 )
    b = coeffs0[4] + coeffs0[5] * np.exp( - ((time_array0-coeffs0[6])/coeffs0[7])**2 )
    c = a + b
    return c

def residuals(coeffs, counts_array, time_array):
    return counts_array - model(time_array, coeffs)

# 0 = baseline, 1 = amplitude, 2 = centre, 3 = width
peak1 = np.array([0, 6337, 16.2, 4.47, 0, 2300, 13.5, 2], dtype=float)
#peak2 = np.array([0,2300,13.5,2], dtype=float)

x, flag = leastsq(residuals, peak1, args=(counts_array, time_array))
#z, flag = leastsq(residuals, peak2, args=(counts_array, time_array))

plt.plot(time_array, counts_array)
plt.plot(time_array, model(time_array, x), color='g')
#plt.plot(time_array, model(time_array, z), color = 'r')
plt.show()
This code worked for me, provided that you are only fitting a function that is a combination of two Gaussian distributions.
I just made a residuals function that adds two Gaussian functions and then subtracts them from the real data.
The parameters (p) that I passed to Numpy's least squares function include: the mean of the first Gaussian function (m), the difference in the mean from the first and second Gaussian functions (dm, i.e. the horizontal shift), the standard deviation of the first (sd1), and the standard deviation of the second (sd2).
import numpy as np
from scipy.optimize import leastsq
import matplotlib.pyplot as plt

######################################
# Setting up test data
def norm(x, mean, sd):
    norm = []
    for i in range(x.size):
        norm += [1.0/(sd*np.sqrt(2*np.pi))*np.exp(-(x[i] - mean)**2/(2*sd**2))]
    return np.array(norm)

mean1, mean2 = 0, -2
std1, std2 = 0.5, 1

x = np.linspace(-20, 20, 500)
y_real = norm(x, mean1, std1) + norm(x, mean2, std2)

######################################
# Solving
m, dm, sd1, sd2 = [5, 10, 1, 1]
p = [m, dm, sd1, sd2]  # Initial guesses for leastsq
y_init = norm(x, m, sd1) + norm(x, m + dm, sd2)  # For final comparison plot

def res(p, y, x):
    m, dm, sd1, sd2 = p
    m1 = m
    m2 = m1 + dm
    y_fit = norm(x, m1, sd1) + norm(x, m2, sd2)
    err = y - y_fit
    return err

plsq = leastsq(res, p, args=(y_real, x))

y_est = norm(x, plsq[0][0], plsq[0][2]) + norm(x, plsq[0][0] + plsq[0][1], plsq[0][3])

plt.plot(x, y_real, label='Real Data')
plt.plot(x, y_init, 'r.', label='Starting Guess')
plt.plot(x, y_est, 'g.', label='Fitted')
plt.legend()
plt.show()
You can use Gaussian mixture models from scikit-learn:
from sklearn import mixture
import matplotlib.pyplot
import matplotlib.mlab
import numpy as np
clf = mixture.GMM(n_components=2, covariance_type='full')
clf.fit(yourdata)
m1, m2 = clf.means_
w1, w2 = clf.weights_
c1, c2 = clf.covars_
histdist = matplotlib.pyplot.hist(yourdata, 100, normed=True)
plotgauss1 = lambda x: plot(x,w1*matplotlib.mlab.normpdf(x,m1,np.sqrt(c1))[0], linewidth=3)
plotgauss2 = lambda x: plot(x,w2*matplotlib.mlab.normpdf(x,m2,np.sqrt(c2))[0], linewidth=3)
plotgauss1(histdist[1])
plotgauss2(histdist[1])
You can also use the function below to fit however many Gaussians you want via the ncomp parameter:
from sklearn import mixture
%pylab

def fit_mixture(data, ncomp=2, doplot=False):
    clf = mixture.GMM(n_components=ncomp, covariance_type='full')
    clf.fit(data)
    ml = clf.means_
    wl = clf.weights_
    cl = clf.covars_
    ms = [m[0] for m in ml]
    cs = [numpy.sqrt(c[0][0]) for c in cl]
    ws = [w for w in wl]
    if doplot == True:
        histo = hist(data, 200, normed=True)
        for w, m, c in zip(ws, ms, cs):
            plot(histo[1], w*matplotlib.mlab.normpdf(histo[1], m, np.sqrt(c)), linewidth=3)
    return ms, cs, ws
Coeffs 0 and 4 are degenerate - there is absolutely nothing in the data that can decide between them. You should use a single zero-level parameter instead of two (i.e. remove one of them from your code). This is probably what is stopping your fit (ignore the comments here saying this is not possible - there are clearly at least two peaks in that data and you should certainly be able to fit to that).
(It may not be clear why I am suggesting this, but what is happening is that coeffs 0 and 4 can cancel each other out. They can both be zero, or one could be 100 and the other -100 - either way, the fit is just as good. This "confuses" the fitting routine, which spends its time trying to work out what they should be, when there is no single right answer, because whatever value one takes, the other can just be the negative of that, and the fit will be the same.)
In fact, from the plot, it looks like there may be no need for a zero level at all. I would try dropping both of those and seeing how the fit looks.
Also, there is no need to fit coeffs 1 and 5 (or the zero point) in the least squares. Instead, because the model is linear in those, you could calculate their values in each loop. This will make things faster, but it is not critical. I just noticed you say your maths is not so good, so feel free to ignore this one.
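A minimal sketch of the single-baseline version: one shared offset and one model summing both Gaussians (the test data here is synthetic, with values loosely based on the question's initial guesses):

import numpy as np
from scipy.optimize import leastsq

# Two Gaussians sharing ONE baseline, so the degenerate pair of offsets is gone.
# coeffs = [baseline, amp1, centre1, width1, amp2, centre2, width2]
def model(t, coeffs):
    base, a1, c1, w1, a2, c2, w2 = coeffs
    return (base
            + a1 * np.exp(-((t - c1) / w1)**2)
            + a2 * np.exp(-((t - c2) / w2)**2))

def residuals(coeffs, counts, t):
    return counts - model(t, coeffs)

# synthetic stand-in for the spectrum in the question
t = np.linspace(0, 30, 300)
true = [50, 6337, 16.2, 4.47, 2300, 13.5, 2]
counts = model(t, true) + np.random.normal(0, 50, t.size)

guess = [0, 6000, 16, 4, 2000, 13, 2]
fit, flag = leastsq(residuals, guess, args=(counts, t))
print(fit)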