I am trying to implement Complex Exponential Fourier Series for f(x) defined on [-L,L] using these formulas,
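presumably the standard pair (up to the sign convention in the exponent),

c_n = \frac{1}{2L}\int_{-L}^{L} f(x)\, e^{-i n \pi x / L}\, dx, \qquad f(x) = \sum_{n=-\infty}^{\infty} c_n\, e^{i n \pi x / L}.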
I want to be able to implement these without calling the Fourier functions in other libraries since I want to also understand what's going on. Here is my attempt,
import numpy as np
from matplotlib import pyplot as plt
steps = 100
dt = 1/steps
L = np.pi
t = np.linspace(-L, L, steps)
def constant(X, Y, n):
    return (1/(2*L))*sum([y*np.exp((1j*n*np.pi*t)/L)*dt for t, y in zip(X, Y)])

def complex_fourier(X, Y, N):
    _X, _Y = [], []
    for t in X:
        f = 0
        for n in range(-N//2, N//2 + 1):
            c = constant(X, Y, n)
            f += c*np.exp((-1j*n*np.pi*t)/L)
        _X += [f.real]
        _Y += [f.imag]
    return _X, _Y
X, Y = complex_fourier(t, np.sin(t), 50)
plt.plot(X, Y, 'k.')
# plt.plot(t, np.sin(t))
plt.show()
The plot seems to be almost random and does not improve with more c terms. Could someone point out exactly what I am doing wrong?
Just to answer the "random plot" part of the question for now - note the Y-scale of your plot!
>>> np.min(Y), np.max(Y)
(-6.1937063114043705e-18, 6.43002899658067e-18)
>>> np.min(X), np.max(X)
(-0.15754356079010426, 0.15754356079010395)
In other words, all of your coefficients are basically real valued. You probably wouldn't be interested in a plot of the imaginary part vs the real part, but rather the sum of squares vs the frequency or mode number.
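For example, a minimal sketch of that kind of plot, reusing t and the constant() helper from your question (the mode range here is an arbitrary choice):

ns = range(-25, 26)
cs = [constant(t, np.sin(t), n) for n in ns]
plt.stem(ns, [abs(c)**2 for c in cs])
plt.xlabel('mode number n')
plt.ylabel('|c_n|**2')
plt.show()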
I have written code to plot the average squared error of a linear function over a given dataset, to visualise progress during a gradient descent training for the optimum regression line.
The relevant bits are these:
def compute_error(f, X, Y):
    e = lambda x, y: (y - f(x))**2
    return sum(e(x, y) for (x, y) in zip(X, Y))/len(X)

mn, bn, density = abs(target_slope)*1.5, abs(target_intercept)*1.5, 20
M, B = map(list, zip(*[(m, b) for m in np.linspace(-mn, +mn, density)
                              for b in np.linspace(-bn, +bn, density)]))
E = [compute_error(lambda x: m*x+b, X, Y) for m, b in zip(M, B)]
This works, but is very messy. I suspect there might be a very succinct way to pull off the same thing with numpy. So far I have gotten this:
M, B = map(np.ndarray.flatten, np.mgrid[-mn:+mn:1/density, -bn:+bn:1/density])
I still don't know how to improve the instantiation of E, and for some reason right now it is a lot slower than the messy version.
So, what would be a good way to map over a plane like M×B with numpy?
If you want to run above code you can build X and Y like so:
import numpy as np
from numpy.random import normal
target_slope = 3
target_intercept = 15
def generate_random_data(slope=1, minx=0, maxx=100, n=200, intercept=0):
    f = lambda x: normal(slope*x, maxx/5) + intercept
    X = np.linspace(minx, maxx, n)
    Y = [f(x) for x in X]
    return X, Y
X, Y = generate_random_data(slope=target_slope, intercept=target_intercept)
def compute_error(f, X, Y):
    return np.mean( (Y - f(X))**2 )
MB = np.mgrid[-mn:+mn:2*mn/density, -bn:+bn:2*bn/density]
MB = MB.reshape((2, -1)).T
E = [compute_error(lambda x : m*x+b, X, Y) for m, b in MB]
It is possible to write a full numpy solution:
Y = np.array(Y)
M, B = np.mgrid[-mn:+mn:2*mn/density, -bn:+bn:2*bn/density]
mx = M.reshape((-1,1))*X
b = B.reshape((-1,1))*np.ones_like(X)
E = np.mean( (mx+b - Y)**2, axis=1 )
It may also be possible to write a solution that avoids flattening the arrays altogether and returns the error as a 2D array.
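For instance, a broadcasting sketch (same X, Y, mn, bn and density as above) that keeps the error as a (density, density) array:

Y = np.array(Y)
M, B = np.mgrid[-mn:+mn:2*mn/density, -bn:+bn:2*bn/density]
# add a trailing axis so the (density, density) grids broadcast against the 1D X
E2d = np.mean((M[..., None]*X + B[..., None] - Y)**2, axis=-1)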
I don't fully follow what you're trying to achieve here. However, this may help get you started with a numpy solution:
X, Y = generate_random_data(slope=target_slope, intercept=target_intercept, n=180)
M, B = np.mgrid[-mn:+mn:1/density, -bn:+bn:1/density]
f = M.T*X + B.T
error = np.sum((f-Y)**2)
Note that I've had to alter the default number of X, Y values.
I'm trying to perform what are many iterations of Scipy's curve_fit at once in order to avoid loops and therefore increase speed.
This is very similar to this problem, which was solved. However, the functions here are piecewise (discontinuous), so that solution isn't applicable.
Consider this example:
import numpy as np
from numpy import random as rng
from scipy.optimize import curve_fit
rng.seed(0)
N=20
X=np.logspace(-1,1,N)
Y = np.zeros((4, N))
for i in range(0, 4):
    b = i + 1
    a = b
    print(a, b)
    Y[i] = (X/b)**(-a)  #+ 0.01 * rng.randn(6)
    Y[i, X > b] = 1
This yields these arrays, which as you can see are discontinuous at X == b. I can retrieve the original values of a and b by using curve_fit iteratively:
def plaw(r, a, b):
    """ Theoretical power law for the shape of the normalized conditional density """
    return np.piecewise(r, [r < b, r >= b], [lambda x: (x/b)**-a, lambda x: 1])

coeffs = []
for ix in range(Y.shape[0]):
    print(ix)
    c0, pcov = curve_fit(plaw, X, Y[ix])
    coeffs.append(c0)
But this process can be very slow depending on the size of X and Y and the number of iterations, so I'm trying to speed things up by getting coeffs without a loop. So far I haven't had any luck.
Things that might be important:
X and Y only contain positive values
a and b are always positive
Although the data to fit in this example is smooth (for the sake of simplicity), the real data has noise
EDIT
This is as far as I've gotten:
y=np.ma.masked_where(Y<1.01, Y)
lX = np.log(X)
lY = np.log(y)
A = np.vstack([lX, np.ones(len(lX))]).T
m,c=np.linalg.lstsq(A, lY.T)[0]
print('a=',-m)
print('b=',np.exp(-c/m))
But even without any noise the output is:
a= [0.18978965578339158 1.1353633705997466 2.220234483915197 3.3324502660995714]
b= [339.4090881838179 7.95073481873057 6.296592007396107 6.402567167503574]
Which is way worse than I was hoping to get.
Here are three approaches to speeding this up. You gave no desired speed up or accuracies, or even vector sizes, so buyer beware.
TL;DR
Timings:
    len   method 1   method 2   method 3   method 4
   1000      0.045      0.033      0.025      0.022
  10000      0.290      0.097      0.029      0.023
 100000      3.429      0.767      0.083      0.030
1000000          -          -      0.546      0.046
1) Original Method
2) Pre-estimate with Subset
3) M Newville [linear log-log estimate](https://stackoverflow.com/a/44975066/7311767)
4) Subset Estimate (Use Less Data)
Pre-estimate with Subset (Method 2):
A decent speedup can be achieved by simply running the curve_fit twice, where the first time uses a short subset of the data to get a quick estimate. That estimate is then used to seed a curve_fit with the entire dataset.
x, y = current_data
stride = int(max(1, len(x) / 200))
c0 = curve_fit(power_law, x[0:len(x):stride], y[0:len(y):stride])[0]
return curve_fit(power_law, x, y, p0=c0)[0]
M Newville linear log-log estimate (Method 3):
Using the log estimate proposed by M Newville, is also considerably faster. As the OP was concerned about the initial estimate method proposed by Newville, this method uses curve_fit with a subset to provide the estimate of the break point in the curve.
x, y = current_data
stride = int(max(1, len(x) / 200))
c0 = curve_fit(power_law, x[0:len(x):stride], y[0:len(y):stride])[0]
index_max = np.where(x > c0[1])[0][0]
log_x = np.log(x[:index_max])
log_y = np.log(y[:index_max])
result = linregress(log_x, log_y)
m, c = -result[0], np.exp(-result[1] / result[0])
return (m, c), result
Use Less Data (Method 4):
Finally the seed mechanism used for the previous two methods provides pretty good estimates on the sample data. Of course it is sample data so your mileage may vary.
stride = int(max(1, len(x) / 200))
c0 = curve_fit(power_law, x[0:len(x):stride], y[0:len(y):stride])[0]
Test Code:
import numpy as np
from numpy import random as rng
from scipy.optimize import curve_fit
from scipy.stats import linregress

fit_data = {}
current_data = None

def data_for_fit(a, b, n):
    key = a, b, n
    if key not in fit_data:
        rng.seed(0)
        x = np.logspace(-1, 1, n)
        y = np.clip((x / b) ** (-a) + 0.01 * rng.randn(n), 0.001, None)
        y[x > b] = 1
        fit_data[key] = x, y
    return fit_data[key]

def power_law(r, a, b):
    """ Power law for the shape of the normalized conditional density """
    return np.piecewise(
        r, [r < b, r >= b], [lambda x: (x/b)**-a, lambda x: 1])

def method1():
    x, y = current_data
    return curve_fit(power_law, x, y)[0]

def method2():
    x, y = current_data
    return curve_fit(power_law, x, y, p0=method4()[0])

def method3():
    x, y = current_data
    c0, pcov = method4()
    index_max = np.where(x > c0[1])[0][0]
    log_x = np.log(x[:index_max])
    log_y = np.log(y[:index_max])
    result = linregress(log_x, log_y)
    m, c = -result[0], np.exp(-result[1] / result[0])
    return (m, c), result

def method4():
    x, y = current_data
    stride = int(max(1, len(x) / 200))
    return curve_fit(power_law, x[0:len(x):stride], y[0:len(y):stride])

from timeit import timeit

def runit(stmt):
    print("%s: %.3f %s" % (
        stmt, timeit(stmt + '()', number=10,
                     setup='from __main__ import ' + stmt),
        eval(stmt + '()')[0]
    ))

def runit_size(size):
    print('Length: %d' % size)
    if size <= 100000:
        runit('method1')
        runit('method2')
    runit('method3')
    runit('method4')

for i in (1000, 10000, 100000, 1000000):
    current_data = data_for_fit(3, 3, i)
    runit_size(i)
Two suggestions:
Use numpy.where (and possibly argmin) to find the X value at which the Y data becomes 1, or perhaps just slightly larger than 1, and truncate the data to that point -- effectively ignoring the data where Y=1.
That might be something like:
index_max = numpy.where(y < 1.2)[0][0]
x = x[:index_max]
y = y[:index_max]
Use the hint shown in your log-log plot that the power law is linear in log-log space. You don't need curve_fit, but can use scipy.stats.linregress on log(Y) vs log(X). For your real work, that will at the very least give good starting values for a subsequent fit.
Following up on this and trying to follow your question, you might try something like:
import numpy as np
from scipy.stats import linregress
np.random.seed(0)
npts = 51
x = np.logspace(-2, 2, npts)
YTHRESH = 1.02
for i in range(5):
    b = i + 1.0 + np.random.normal(scale=0.1)
    a = b + np.random.random()
    y = (x/b)**(-a) + np.random.normal(scale=0.0030, size=npts)
    y[x > b] = 1.0

    # to model the power-law decay, first remove the values
    # where y ~= 1, where the data is known not to decay...
    imax = np.where(y < YTHRESH)[0][0]

    # take log of this truncated x and y
    _x = np.log(x[:imax])
    _y = np.log(y[:imax])

    # use linear regression on the log-log data:
    out = linregress(_x, _y)

    # map slope/intercept to scale, exponent
    afit = -out.slope
    bfit = np.exp(out.intercept/afit)

    print(""" === Fit Example {i:3d}
    a expected {a:4f}, got {afit:4f}
    b expected {b:4f}, got {bfit:4f}
    """.format(i=i+1, a=a, b=b, afit=afit, bfit=bfit))
Hopefully that's enough to get you going.
I'm looking for a way to plot a curve through some experimental data. The data shows a small linear regime with a shallow gradient, followed by a steep linear regime after a threshold value.
My data is here: http://pastebin.com/H4NSbxqr
I could fit the data with two lines relatively easily, but I'd like to fit with a continuous line ideally - which should look like two lines with a smooth curve joining them around the threshold (~5000 in the data, shown above).
I attempted this using scipy.optimize curve_fit and trying a function which included the sum of a straight line and an exponential:
y = a*x + b + c*np.exp((x-d)/e)
but despite numerous attempts, it didn't find a solution.
If anyone has any suggestions please, either on the choice of fitting distribution / method or the curve_fit implementation, they would be greatly appreciated.
If you don't have a particular reason to believe that linear + exponential is the true underlying cause of your data, then I think a fit to two lines makes the most sense. You can do this by making your fitting function the maximum of two lines, for example:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def two_lines(x, a, b, c, d):
    one = a*x + b
    two = c*x + d
    return np.maximum(one, two)
Then,
x, y = np.genfromtxt('tmp.txt', unpack=True, delimiter=',')
pw0 = (.02, 30, .2, -2000) # a guess for slope, intercept, slope, intercept
pw, cov = curve_fit(two_lines, x, y, pw0)
crossover = (pw[3] - pw[1]) / (pw[0] - pw[2])
plt.plot(x, y, 'o', x, two_lines(x, *pw), '-')
If you really want a continuous and differentiable solution, it occurred to me that a hyperbola has a sharp bend to it, but it has to be rotated. It was a bit difficult to implement (maybe there's an easier way), but here's a go:
def hyperbola(x, a, b, c, d, e):
    """ hyperbola(x) with parameters
        a/b = asymptotic slope
         c  = curvature at vertex
         d  = offset to vertex
         e  = vertical offset
    """
    return a*np.sqrt((b*c)**2 + (x-d)**2)/b + e

def rot_hyperbola(x, a, b, c, d, e, th):
    pars = a, b, c, 0, 0  # do the shifting after rotation
    xd = x - d
    hsin = hyperbola(xd, *pars)*np.sin(th)
    xcos = xd*np.cos(th)
    return e + hyperbola(xcos - hsin, *pars)*np.cos(th) + xcos - hsin
Run it as
h0 = 1.1, 1, 0, 5000, 100, .5
h, hcov = curve_fit(rot_hyperbola, x, y, h0)
plt.plot(x, y, 'o', x, two_lines(x, *pw), '-', x, rot_hyperbola(x, *h), '-')
plt.legend(['data', 'piecewise linear', 'rotated hyperbola'], loc='upper left')
plt.show()
I was also able to get the line + exponential to converge, but it looks terrible. This is because it's not a good descriptor of your data, which is (piecewise) linear, and an exponential is very far from linear!
def line_exp(x, a, b, c, d, e):
    return a*x + b + c*np.exp((x-d)/e)
e0 = .1, 20., .01, 1000., 2000.
e, ecov = curve_fit(line_exp, x, y, e0)
If you want to keep it simple, there's always a polynomial or spline (piecewise polynomials)
from scipy.interpolate import UnivariateSpline
s = UnivariateSpline(x, y, s=x.size) #larger s-value has fewer "knots"
plt.plot(x, s(x))
I researched this a little; Applied Linear Regression by Sanford and the Correlation and Regression lecture by Steiger had some good info on it. However, they all lack the right model: the piecewise function should be y = th0 + th1*x + th2*max(0, x - gamma), as implemented below.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import lmfit

dfseg = pd.read_csv('segreg.csv')

def err(w):
    th0 = w['th0'].value
    th1 = w['th1'].value
    th2 = w['th2'].value
    gamma = w['gamma'].value
    fit = th0 + th1*dfseg.Temp + th2*np.maximum(0, dfseg.Temp - gamma)
    return fit - dfseg.C

p = lmfit.Parameters()
p.add_many(('th0', 0.), ('th1', 0.0), ('th2', 0.0), ('gamma', 40.))
mi = lmfit.minimize(err, p)
lmfit.printfuncs.report_fit(mi.params)

b0 = mi.params['th0']; b1 = mi.params['th1']; b2 = mi.params['th2']
gamma = int(mi.params['gamma'].value)

import statsmodels.formula.api as smf
reslin = smf.ols('C ~ 1 + Temp + I((Temp-%d)*(Temp>%d))' % (gamma, gamma), data=dfseg).fit()
print(reslin.summary())

x0 = np.array(range(0, gamma, 1))
x1 = np.array(range(0, 80 - gamma, 1))
y0 = b0 + b1*x0
y1 = b0 + b1*float(gamma) + (b1 + b2)*x1

plt.scatter(dfseg.Temp, dfseg.C)
plt.plot(x0, y0)
plt.plot(x1 + gamma, y1)
plt.show()
Result
[[Variables]]
th0: 78.6554456 +/- 3.966238 (5.04%) (init= 0)
th1: -0.15728297 +/- 0.148250 (94.26%) (init= 0)
th2: 0.72471237 +/- 0.179052 (24.71%) (init= 0)
gamma: 38.3110177 +/- 4.845767 (12.65%) (init= 40)
The data
"","Temp","C"
"1",8.5536,86.2143
"2",10.6613,72.3871
"3",12.4516,74.0968
"4",16.9032,68.2258
"5",20.5161,72.3548
"6",21.1613,76.4839
"7",24.3929,83.6429
"8",26.4839,74.1935
"9",26.5645,71.2581
"10",27.9828,78.2069
"11",32.6833,79.0667
"12",33.0806,71.0968
"13",33.7097,76.6452
"14",34.2903,74.4516
"15",36,56.9677
"16",37.4167,79.8333
"17",43.9516,79.7097
"18",45.2667,76.9667
"19",47,76
"20",47.1129,78.0323
"21",47.3833,79.8333
"22",48.0968,73.9032
"23",49.05,78.1667
"24",57.5,81.7097
"25",59.2,80.3
"26",61.3226,75
"27",61.9194,87.0323
"28",62.3833,89.8
"29",64.3667,96.4
"30",65.371,88.9677
"31",68.35,91.3333
"32",70.7581,91.8387
"33",71.129,90.9355
"34",72.2419,93.4516
"35",72.85,97.8333
"36",73.9194,92.4839
"37",74.4167,96.1333
"38",76.3871,89.8387
"39",78.0484,89.4516
Graph
I used #user423805's answer (found via this Google Groups thread: https://groups.google.com/forum/#!topic/lmfit-py/7I2zv2WwFLU ) but noticed it had some limitations when trying to use three or more segments.
Instead of applying np.maximum in the minimizer error function or adding (b1 + b2) as in #user423805's answer, I used the same linear spline calculation for both the minimizer and end-usage:
# least_splines_calc works like this for an example with three segments
# (four threshold params, two gamma params):
#
# for 0 < x < gamma0      : y = th0 + (th1 * x)
# for gamma0 < x < gamma1 : y = th0 + (th1 * x) + (th2 * (x - gamma0))
# for gamma1 < x          : y = th0 + (th1 * x) + (th2 * (x - gamma0)) + (th3 * (x - gamma1))
#
def least_splines_calc(x, thresholds, gammas):
    if len(thresholds) < 2:
        print("Error: expected at least two thresholds")
        return None

    applicable_gammas = list(filter(lambda gamma: x > gamma, gammas))

    # base result
    y = thresholds[0] + (thresholds[1] * x)

    # additional terms, depending on which breakpoints x has passed
    for i in range(0, len(applicable_gammas)):
        y = y + (thresholds[i + 2] * (x - applicable_gammas[i]))

    return y

def least_splines_calc_array(x_array, thresholds, gammas):
    y_array = list(map(lambda x: least_splines_calc(x, thresholds, gammas), x_array))
    return y_array
def err(params, x, data):
    th0 = params['th0'].value
    th1 = params['th1'].value
    th2 = params['th2'].value
    th3 = params['th3'].value
    gamma1 = params['gamma1'].value
    gamma2 = params['gamma2'].value

    thresholds = np.array([th0, th1, th2, th3])
    gammas = np.array([gamma1, gamma2])

    fit = least_splines_calc_array(x, thresholds, gammas)
    return np.array(fit) - np.array(data)

p = lmfit.Parameters()
# NOTE: the 9. / 9.3 initial guesses were specific to my data, you will need to change these
p.add_many(('th0', 0.), ('th1', 0.0), ('th2', 0.0), ('th3', 0.0), ('gamma1', 9.), ('gamma2', 9.3))

mi = lmfit.minimize(err, p, args=(np.array(dfseg.Temp), np.array(dfseg.C)))
After minimization, convert the params found by the minimizer into arrays of thresholds and gammas, and re-use least_splines_calc_array to plot the linear spline regression.
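A minimal sketch of that step (assuming the mi result, dfseg and least_splines_calc_array defined above):

import numpy as np
import matplotlib.pyplot as plt

thresholds = np.array([mi.params[k].value for k in ('th0', 'th1', 'th2', 'th3')])
gammas = np.array([mi.params[k].value for k in ('gamma1', 'gamma2')])

# evaluate the fitted spline on a dense grid and overlay it on the data
x_plot = np.linspace(dfseg.Temp.min(), dfseg.Temp.max(), 200)
y_plot = least_splines_calc_array(x_plot, thresholds, gammas)

plt.scatter(dfseg.Temp, dfseg.C)
plt.plot(x_plot, y_plot)
plt.show()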
Reference: while there are various places that explain least splines (I think #user423805 used http://www.statpower.net/Content/313/Lecture%20Notes/Splines.pdf , which has the (b1 + b2) addition I disagree with in its sample code, despite similar equations), the one that made the most sense to me was this one (by Rob Schapire / Zia Khan at Princeton): https://www.cs.princeton.edu/courses/archive/spring07/cos424/scribe_notes/0403.pdf - section 2.2 covers linear splines. Excerpt below:
If you're looking to join what appears to be two straight lines with a hyperbola having a variable radius at/near the intersection of the two lines (which are its asymptotes), I urge you to look hard at Using an Hyperbola as a Transition Model to Fit Two-Regime Straight-Line Data, by Donald G. Watts and David W. Bacon, Technometrics, Vol. 16, No. 3 (Aug., 1974), pp. 369-373.
The formula is drop dead simple, nicely adjustable, and works like a charm. From their paper (in case you can't access it):
As a more useful alternative form we consider an hyperbola for which:
(i) the dependent variable y is a single valued function of the independent variable x,
(ii) the left asymptote has slope theta_1,
(iii) the right asymptote has slope theta_2,
(iv) the asymptotes intersect at the point (x_o, beta_o),
(v) the radius of curvature at x = x_o is proportional to a quantity delta.
Such an hyperbola can be written
y = beta_o + beta_1*(x - x_o) + beta_2*sqrt[(x - x_o)^2 + delta^2/4],
where beta_1 = (theta_1 + theta_2)/2 and beta_2 = (theta_2 - theta_1)/2.
delta is the adjustable parameter that allows you to either closely follow the lines right to the intersection point or smoothly merge from one line to the other.
Just solve for the intersection point (x_o, beta_o), and plug into the formula above.
BTW, in general, if line 1 is y_1 = b_1 + m_1 *x and line 2 is y_2 = b_2 + m_2 * x, then they intersect at x* = (b_2 - b_1) / (m_1 - m_2) and y* = b_1 + m_1 * x*. So, to connect with the formalism above, x_o = x*, beta_o = y* and the two m_*'s are the two thetas.
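A minimal curve_fit sketch of that model (parameter names follow the excerpt above; x, y and the initial guess p0 are placeholders you would supply from your own data):

import numpy as np
from scipy.optimize import curve_fit

def watts_bacon(x, x0, beta0, theta1, theta2, delta):
    # asymptote slopes theta1, theta2; asymptote intersection (x0, beta0); transition width delta
    beta1 = (theta1 + theta2) / 2
    beta2 = (theta2 - theta1) / 2
    return beta0 + beta1*(x - x0) + beta2*np.sqrt((x - x0)**2 + delta**2/4)

# p0 = (x0_guess, beta0_guess, slope1_guess, slope2_guess, delta_guess)
# popt, pcov = curve_fit(watts_bacon, x, y, p0=p0)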
There is a straightforward method (not iterative, no initial guess required) on pp. 12-13 of https://fr.scribd.com/document/380941024/Regression-par-morceaux-Piecewise-Regression-pdf
The data comes from scanning the figure published by IanRoberts in his question. Scanning the pixel coordinates is not accurate, so don't be surprised by some additional deviation.
Note that the abscissa and ordinate scales have been divided by 1000.
The equations of the two segments are
The approximate values of the five parameters are written on the above figure.