Using a Python for loop to solve an equation with a numerical method

I am learning Python and Probabilities.
I have as a given that the expression:
c = n!/((n-1)!*1!) + 2*n!/((n-2)!*2!) + 3*n!/((n-3)!*3!) + ... + n*n!/((n-n)!*n!)
where 0! = 1 and ! signifies 'factorial', i.e. n! = 1*2*3*...*(n-1)*n.
is equal to:
(a + n^b)*2^(c*n + d). (^ signifies exponent)
My goal is to determine the parameters a, b, c, d using 'brute force'.
Using the formula above, I calculated c for n=3 (12), n=4 (38), n=5 (80), n=6 (192), n=7 (448).
Then I expressed the parameters as ratios of two integers: i.e.
a = a1/a2, b=b1/b2, c=c1/c2, d=d1/d2.
Finally I defined the following function:
def com():
    parms = []
    for a1 in range(-10, 10):
        for a2 in range(1, 11):
            for b1 in range(-10, 10):
                for b2 in range(1, 11):
                    for c1 in range(-10, 10):
                        for c2 in range(1, 11):
                            for d1 in range(-10, 10):
                                for d2 in range(1, 11):
                                    a = a1/a2
                                    b = b1/b2
                                    c = c1/c2
                                    d = d1/d2
                                    cr1 = ( 12 == (a + 3**b)*2**(c*3+d) )
                                    cr2 = ( 38 == (a + 4**b)*2**(c*4+d) )
                                    cr3 = ( 80 == (a + 5**b)*2**(c*5+d) )
                                    cr4 = ( 192 == (a + 6**b)*2**(c*6+d) )
                                    cr5 = ( 448 == (a + 7**b)*2**(c*7+d) )
                                    criterion = cr1 & cr2 & cr3 & cr4 & cr5
                                    if criterion == 1:
                                        parms = [a, b, c, d]
                                        break
    return parms
However, my function returns an empty list.
Could you explain that? Do you have any suggestions on how to achieve my objective?
Your advice will be appreciated.

Firstly, let's look at some bugs in your code:
criterion = cr1 & cr2 & cr3 & cr4 & cr5
This performs bitwise AND on the values rather than logical and. You probably wanted:
criterion = cr1 and cr2 and cr3 and cr4 and cr5
Python also provides an all function, which you can use to check whether everything is True:
criterion = all([cr1,cr2,cr3,cr4,cr5])
Now, let's look at your if statement:
if criterion == 1 :
Since you want to know if criterion is True or False, you can simply use if criterion:.
Now lastly, this approach is unlikely to ever be True. Take this line for example:
cr1 = ( 12 == (a + 3**b)*2**(c*3+d) )
The right-hand side would have to come out exactly equal to 12, otherwise cr1 is False, and then you get an empty list.
Also, floating-point arithmetic is not exact, so equality comparisons on computed floats will almost never succeed.
To solve for the actual values you need substitution. That's not a programming question, but maths and programming go together well, so here's a starter:
12 = (a + 3**b)*2**(c*3+d)
12/(a+3**b) = 2**(c*3+d)
... and so on. Get a in terms of b, c, and d, then use your cr2 to substitute in a for the numbers you have, and get b in terms of c and d.
Repeat a few more times, and you have numbers for the four values.
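A tolerance-based variant of the brute force also works: compare with math.isclose instead of ==, and return from the function as soon as a candidate fits, which also sidesteps the fact that break only exits the innermost loop. Here is a minimal sketch, with the search ranges shrunk so it runs quickly; note that the sum for n=4 is 32 (that is, 4 + 12 + 12 + 4), matching the sequence 1, 4, 12, 32, ... given further down this page, rather than the 38 quoted in the question:

import math
from itertools import product

def com(data=((3, 12), (4, 32), (5, 80), (6, 192), (7, 448))):
    # Candidate rationals p/q with small numerators and denominators.
    rationals = sorted({p / q for p in range(-3, 4) for q in range(1, 5)})
    for a, b, c, d in product(rationals, repeat=4):
        # A relative tolerance replaces the fragile exact == comparison.
        if all(math.isclose(y, (a + n**b) * 2**(c*n + d), rel_tol=1e-9)
               for n, y in data):
            return [a, b, c, d]  # return exits all the loops at once
    return []

print(com())  # expected: [0.0, 1.0, 1.0, -1.0], i.e. (0 + n**1) * 2**(n - 1)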

You can use scipy's curve_fit to fit a function f to the data by non-linear least squares and recover the parameter values (a, b, c, d). The example data that you provided actually fit the function (a + n**b) + 2**(c*n + d) better. Notice that the best-fit values of a, b, c, d are floating-point numbers, not integers.
import numpy as np
from scipy.optimize import curve_fit

xdata = np.array(range(3, 8))
ydata = np.array([12, 38, 80, 192, 448])

def func(n, a, b, c, d):
    return (a + n**b) + 2**(c*n + d)

popt, pcov = curve_fit(func, xdata, ydata, p0=(1, 1, 1, 1))
a, b, c, d = popt
print(a, b, c, d)  # learnt parameters
# -5.62374782967 1.79345876905 1.29232902743 -0.328778229316

import matplotlib.pyplot as plt
plt.scatter(xdata, ydata)
plt.plot(xdata, func(xdata, a, b, c, d), '-r', label='np.poly')
plt.show()
In the fitted curve produced above, the blue points are the data and the red line is the fitted function with the learnt parameters; notice that the function is continuous and defined for all values of n.

With brains instead of Python:
You recognize a modified binomial sum, Sum k*C(n,k) instead of Sum C(n,k).
Then notice that
k*n!/(k!(n-k)!) = n!/((k-1)!(n-k)!) = n*(n-1)!/((k-1)!*((n-1)-(k-1))!) = n*C(n-1,k-1).
Summing n*C(n-1,k-1) over k = 1..n gives n times the full binomial sum for n-1, so your sum is n*2^(n-1).
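A quick sanity check of the identity with Python's math.comb (not in the original answer):

from math import comb

# sum of k*C(n,k) over k = 1..n should equal n*2**(n-1) for every n
assert all(sum(k * comb(n, k) for k in range(1, n + 1)) == n * 2**(n - 1)
           for n in range(1, 20))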

Before rushing into brute force, it may be wiser to attempt to estimate a tight range for the parameters.
The first values of the sum are
1, 4, 12, 32, 80, 192, 448, 1024, 2304, 5120, 11264
Taking the pairwise ratios, you observe that they get closer and closer to 2.
As the ratio (a + (n+1)^b)/(a + n^b) will tend to 1 and the ratio 2^(c(n+1)+d)/2^(cn+d) will tend to 2^c, you are hinted that c is probably close to 1. You can check this estimate by trying larger and larger values of n (For instance, n=1000 yields the ratio 2.002002... = 2^1.00144...).
Then, to reduce the growth rate of the expression and make it more tractable, it is interesting to look at the values of Sum/2^(n-1), and we get
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
Hum, hum.
Unfortunately, this case is too easy for a proper demonstration of the method. The general idea is to observe the asymptotic behavior to get a rough estimate of some of the parameters. And when you have such a rough estimate (in a small range), then you can somehow cancel its effect to better see the effect of the other ones.
For exhaustive searches, understanding the ranges is important.
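A short sketch of that estimation process, assuming a helper S(n) that computes the sum from the question:

from math import comb, log2

def S(n):
    # the sum from the question: 1*C(n,1) + 2*C(n,2) + ... + n*C(n,n)
    return sum(k * comb(n, k) for k in range(1, n + 1))

vals = [S(n) for n in range(1, 12)]
print(vals)                                        # 1, 4, 12, 32, 80, ...
ratios = [vals[i + 1] / vals[i] for i in range(len(vals) - 1)]
print([log2(r) for r in ratios])                   # tends to 1, hinting c ~ 1
print([S(n) / 2**(n - 1) for n in range(1, 12)])   # 1, 2, 3, ...: linear in n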

Related

Inverse Fast Fourier Transform Implementation

I'm trying to program an FFT algorithm in Python and I've got the initial transform working. It takes in a polynomial in vector form and spits out the evaluation of that polynomial at the complex roots of unity. The issue arises in my inverse FFT. This is supposed to take in the evaluation of a function at the complex roots and return the polynomial that passes through those points. This second half is nearly identical to the first, but it uses the inverse of the complex roots. The output of this is scaled by the number of input points, so each coefficient is divided by that number.
My code looks like:
import math
from cmath import sqrt

def getNthRoots(n):
    # Determines how many roots are needed. Only works for powers of two.
    targetLength = math.pow(2, math.ceil(math.log(n, 2)))
    newOutput = [1.0 + 0j]
    # Starting with just 1, it takes the sqrt of each term and appends it.
    # Then it appends a negated copy of the list.
    while len(newOutput) < targetLength:
        previous = newOutput
        newOutput = []
        for item in range(len(previous)):
            newOutput.append(sqrt(previous[item]))
        for item in range(len(newOutput)):
            newOutput.append(newOutput[item] * -1)
    return newOutput
This works excellently.
def FFT(inputList, rootsOfUnity):
    # Base case for the recursion
    if len(inputList) == 1:
        return [inputList[0]]
    # Splits the input list into two parts, one even-indexed and one odd-indexed
    evenList = [inputList[2 * x] for x in range(len(inputList) // 2)]
    oddList = [inputList[(2 * x) + 1] for x in range(len(inputList) // 2)]
    # Takes the "square" of the roots of unity. This is the same as just
    # taking every other entry.
    newRootsOfU = [rootsOfUnity[2 * x] for x in range(len(rootsOfUnity) // 2)]
    # Calls itself with the even and odd halves and the shortened roots of unity
    evenTransform = FFT(evenList, newRootsOfU)
    oddTransform = FFT(oddList, newRootsOfU)
    outputs = []
    # Calculates the output for each root of unity
    half = len(rootsOfUnity) // 2
    for x in range(len(rootsOfUnity)):
        outputs.append(evenTransform[x % half] + rootsOfUnity[x] * oddTransform[x % half])
    return outputs
def addZeros(vec):
    # addZeros just attaches zeros until the length of the list is a power of
    # two (minimal definition, assumed from the description in the post)
    while len(vec) & (len(vec) - 1):
        vec = vec + [0]
    return vec

PolyVec1 = [1, 5, 3, 2]
PolyVec1 = addZeros(PolyVec1)
# Converts PolyVec1 into point form
rootsOfUnity = getNthRoots(len(PolyVec1))
PolyPoint1 = FFT(PolyVec1, rootsOfUnity)
toBeInverted = getNthRoots(len(PolyVec1))
InvertedRoots = [1 / i for i in toBeInverted]
reversedFFT1 = FFT(PolyPoint1, InvertedRoots)
print(reversedFFT1)
reversedFFT1Final = [abs(i) / len(reversedFFT1) for i in reversedFFT1]
print(reversedFFT1Final)
This code works fine for converting a polynomial into a series of points. However, when I try to use it to find the inverse, it doesn't work for polynomials of degree greater than 3.
Any idea why?
Edit:
I've had some new insights. The output for any polynomial
[a, b, c, d, e, f, g, h]
is
[a, y, c, z, e, y, g, z]
a, c, e, and g are all consistently interpolated, but b, d, f, and h are not. b and f are replaced by the same value and d and h are replaced by a different value. As long as b and f are identical and d and h are identical, the output will be correct.
Edit:
More insight. It has something to do with complex numbers. No matter the input, a, c, e, g always have no complex component and when b and f or d and h are identical, the output has no complex component. When there is a complex component, the output is incorrect.
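One way to localize a bug like this is to check each stage against numpy's reference FFT. A minimal sketch, bearing in mind that numpy's sign convention for the roots may make its output the conjugate or a reordering of this implementation's:

import numpy as np

poly = [1, 5, 3, 2, 4, 1, 7, 6]          # length is already a power of two
pts_ref = np.fft.fft(poly)                # reference forward transform
back_ref = np.fft.ifft(pts_ref)           # reference inverse transform
print(np.allclose(back_ref.real, poly))   # True: the round trip recovers the input

# If FFT(poly, getNthRoots(8)) already disagrees with pts_ref (allowing for
# conjugation and root ordering), the bug is in the forward pass; otherwise
# it is in the inversion with the reciprocal roots.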

What does r() function mean in the return value of SymPy's dsolve?

I want to evaluate the value of phi(+oo)
where phi(xi) is the solution of the ODE
Eq(Derivative(phi(xi), (xi, 2)), (-K + xi**2)*phi(xi))
and K is a known real variable.
By dsolve, I got the solution:
Eq(phi(xi), -K*xi**5*r(3)/20 + C2*(K**2*xi**4/24 - K*xi**2/2 + xi**4/12 + 1) + C1*xi*(xi**4/20 + 1) + O(xi**6))
with an unknown function r() in the first term on the right-hand side.
Here is my code:
import numpy as np
import matplotlib.pyplot as plt
import sympy
from sympy import I, pi, oo
sympy.init_printing()

def apply_ics(sol, ics, x, known_params):
    """
    Apply the initial conditions (ics), given as a dictionary on
    the form ics = {y(0): y0, y(x).diff(x).subs(x, 0): yp0, ...},
    to the solution of the ODE with independent variable x.
    The undetermined integration constants C1, C2, ... are extracted
    from the free symbols of the ODE solution, excluding symbols in
    the known_params list.
    """
    free_params = sol.free_symbols - set(known_params)
    eqs = [(sol.lhs.diff(x, n) - sol.rhs.diff(x, n)).subs(x, 0).subs(ics)
           for n in range(len(ics))]
    sol_params = sympy.solve(eqs, free_params)
    return sol.subs(sol_params)

K = sympy.Symbol('K', positive=True)
xi = sympy.Symbol('xi', real=True)
phi = sympy.Function('phi')
ode = sympy.Eq(phi(xi).diff(xi, 2), (xi**2 - K)*phi(xi))
ode_sol = sympy.dsolve(ode)
ics = {phi(0): 1, phi(xi).diff(xi).subs(xi, 0): 0}
phi_xi_sol = apply_ics(ode_sol, ics, xi, [K])
Here ode_sol is the general solution and phi_xi_sol is the solution after the initial conditions are applied.
Since r() is undefined in NumPy, I can't evaluate the results with
for g in [0.9, 0.95, 1, 1.05, 1.2]:
    phi_xi = sympy.lambdify(xi, phi_xi_sol.rhs.subs({K: g}), 'numpy')
Does anyone know what this function r() means and how I should deal with it?
As visible in the form of the result, the solver falls back to a power series solution (instead of searching the solution in terms of parabolic cylinder functions as WolframAlpha does).
So let's set phi(xi)=sum a[k]*xi^k leading to the coefficient equations (using a[k]=0 for k<0)
(k+2)(k+1)a[k+2] = -K*a[k] + a[k-2]
a[0] = C2
a[1] = C1
a[2] = -K/2*C2
a[3] = -K/6*C1
a[4] = (K^2/2 + 1)/12*C2
a[5] = (K^2/6 + 1)/20*C1
Inserting this, the power series solution should have been
C2*(1 - K/2*xi**2 + (K**2/24 + 1/12)*xi**4) + C1*xi*(1 - K/6*xi**2 + (K**2/120 + 1/20)*xi**4) + O(xi**6)
Comparing with the sympy solution, all terms containing both C1 and K are missing; especially, the missing degree-3 term is not explainable. It seems that the solution process was ended prematurely, or some equation transformation was not correctly reversed.
Please note that the ODE solver routines in sympy are experimental and rudimentary. Also, the power series solution gives only valid information for small values of xi, there is no way to derive any exact value for the limit at +oo.
sol_params is a list containing a single dictionary. Passing that dictionary instead of the list gives the solution phi_xi_sol without the r(3):
Eq(rho(s), (-K*s**2/2 + s**2*xi**2/2 + 1)*(-6*rho(s) + 6*C2*s - C2*K*s**3 + O(s**5))/
(3*(K*s**2 - 2)) + C2*(-K*s**3/6 + s**3*xi**2/6 + s) + O(s**5))
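A minimal patch to apply_ics reflecting that observation (a sketch, assuming sympy.solve keeps returning a one-element list of dicts here):

import sympy

def apply_ics(sol, ics, x, known_params):
    """Same as apply_ics above, but substitutes the constants as a dict."""
    free_params = sol.free_symbols - set(known_params)
    eqs = [(sol.lhs.diff(x, n) - sol.rhs.diff(x, n)).subs(x, 0).subs(ics)
           for n in range(len(ics))]
    sol_params = sympy.solve(eqs, free_params)
    if isinstance(sol_params, list):
        # sympy.solve returned [{C1: ..., C2: ...}]; pass the dict itself
        sol_params = sol_params[0]
    return sol.subs(sol_params)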

Difference in result between fmin and fminsearch in Matlab and Python

My objective is to perform an Inverse Laplace Transform on some decay data (NMR T2 decay via CPMG). For that, we were provided with the CONTIN algorithm. This algorithm was adapted to Matlab by Iari-Gabriel Marino, and it works very well. I want to adapt this code into Python. The core of the problem is with scipy.optimize.fmin, which is not minimizing the mean square deviation (MSD) in any way similar to Matlab's fminsearch. The latter results in a good minimization, while the former doesn't.
I have gone through line by line of my adapted code in Python, and the original Matlab. I checked every matrix and every output. I used this to identify that the critical point is in fmin. I also tried scipy.optimize.minimize and other minimization algorithms, but none gave even remotely satisfactory results.
I have made two MWEs, for Python and Matlab, to make it reproducible for everyone. The example data were obtained from the documentation of the Matlab function. Apologies if this is long code, but I don't really know how to shorten it without sacrificing readability and clarity. I tried to have the lines match as closely as possible. I am using Python 3.7.3, scipy v1.3.0, numpy 1.16.2, Matlab R2018b, on Windows 8.1. It's a relatively recent Anaconda install (<2 months).
My code:
import numpy as np
from scipy.optimize import fmin
import matplotlib.pyplot as plt

def msd(g, y, A, alpha, R, w, constraints):
    """msd: mean square deviation. This is the function to be minimized by fmin"""
    if 'zero_at_extremes' in constraints:
        g[0] = 0
        g[-1] = 0
    if 'g>0' in constraints:
        g = np.abs(g)
    r = np.diff(g, axis=0, n=2)
    yfit = A @ g
    # Sum of weighted square residuals
    VAR = np.sum(w * (y - yfit) ** 2)
    # Regularizor
    REG = alpha ** 2 * np.sum((r - R @ g) ** 2)
    # Output to be minimized
    return VAR + REG

# Objective: match this distribution
g0 = np.array([0, 0, 10.1625, 25.1974, 21.8711, 1.6377, 7.3895, 8.736, 1.4256, 0, 0]).reshape((-1, 1))
s0 = np.logspace(-3, 6, len(g0)).reshape((-1, 1))
t = np.linspace(0.01, 500, 100).reshape((-1, 1))
sM, tM = np.meshgrid(s0, t)
A = np.exp(-tM / sM)
np.random.seed(1)
# Creates data from the initial distribution with some random noise.
data = (A @ g0) + 0.07 * np.random.rand(t.size).reshape((-1, 1))

# Parameters and function start
alpha = 1E-2  # regularization parameter
s = np.logspace(-3, 6, 20).reshape((-1, 1))  # x of the ILT
g0 = np.ones(s.size).reshape((-1, 1))  # guess of y of ILT
y = data  # noisy data
options = {'maxiter': 1e8, 'maxfun': 1e8}  # for the fmin function
constraints = ['g>0', 'zero_at_extremes']  # constraints for the MSD function
R = np.zeros((len(g0) - 2, len(g0)), order='F')  # Regularizor
w = np.ones(y.reshape(-1, 1).size).reshape((-1, 1))  # Weights
sM, tM = np.meshgrid(s, t, indexing='xy')
A = np.exp(-tM / sM)
g0 = g0 * y.sum() / (A @ g0).sum()  # Makes a "better guess" for the distribution, according to algorithm
print('msd of input data:\n', msd(g0, y, A, alpha, R, w, constraints))
for i in range(5):  # Just for testing. If this is extremely high, ~1000, it's still bad.
    g = fmin(func=msd,
             x0=g0,
             args=(y, A, alpha, R, w, constraints),
             **options,
             disp=True)[:, np.newaxis]
    msdfit = msd(g, y, A, alpha, R, w, constraints)
    if 'zero_at_extremes' in constraints:
        g[0] = 0
        g[-1] = 0
    if 'g>0' in constraints:
        g = np.abs(g)
    g0 = g
print('New guess', g)
print('Final msd of g', msdfit)

# Visualize the fit
plt.plot(s, g, label='Initial approximation')
plt.plot(np.logspace(-3, 6, 11), np.array([0, 0, 10.1625, 25.1974, 21.8711, 1.6377, 7.3895, 8.736, 1.4256, 0, 0]), label='Distribution to match')
plt.xscale('log')
plt.legend()
plt.show()
Matlab:
% Objective: match this distribution
g0 = [0 0 10.1625 25.1974 21.8711 1.6377 7.3895 8.736 1.4256 0 0]';
s0 = logspace(-3,6,length(g0))';
t = linspace(0.01,500,100)';
[sM,tM] = meshgrid(s0,t);
A = exp(-tM./sM);
rng(1);
% Creates data from the initial distribution with some random noise.
data = A*g0 + 0.07*rand(size(t));
% Parameters and function start
alpha = 1e-2; % regularization parameter
s = logspace(-3,6,20)'; % x of the ILT
g0 = ones(size(s)); % initial guess of y of ILT
y = data; % noisy data
options = optimset('MaxFunEvals',1e8,'MaxIter',1e8); % constraints for fminsearch
constraints = {'g>0','zero_at_the_extremes'}; % constraints for MSD
R = zeros(length(g0)-2,length(g0));
w = ones(size(y(:)));
[sM,tM] = meshgrid(s,t);
A = exp(-tM./sM);
g0 = g0*sum(y)/sum(A*g0); % Makes a "better guess" for the distribution
disp('msd of input data:')
disp(msd(g0, y, A, alpha, R, w, constraints))
for k = 1:5
    [g,msdfit] = fminsearch(@msd,g0,options,y,A,alpha,R,w,constraints);
    if ismember('zero_at_the_extremes',constraints)
        g(1) = 0;
        g(end) = 0;
    end
    if ismember('g>0',constraints)
        g = abs(g);
    end
    g0 = g;
end
disp('New guess')
disp(g)
disp('Final msd of g')
disp(msdfit)
% Visualize the fit
semilogx(s, g)
hold on
semilogx(logspace(-3,6,11), [0 0 10.1625 25.1974 21.8711 1.6377 7.3895 8.736 1.4256 0 0])
legend('First approximation', 'Distribution to match')
hold off
function out = msd(g,y,A,alpha,R,w,constraints)
    % msd: The mean square deviation; this is the function
    % that has to be minimized by fminsearch
    % Constraints and any 'a priori' knowledge
    if ismember('zero_at_the_extremes',constraints)
        g(1) = 0;
        g(end) = 0;
    end
    if ismember('g>0',constraints)
        g = abs(g); % must be g(i)>=0 for each i
    end
    r = diff(diff(g(1:end))); % second derivative of g
    yfit = A*g;
    % Sum of weighted square residuals
    VAR = sum(w.*(y-yfit).^2);
    % Regularizor
    REG = alpha^2 * sum((r-R*g).^2);
    % Output to be minimized
    out = VAR+REG;
end
[Plots of the optimization result in Python and in Matlab were attached here.]
I have checked the output of MSD of g0 before starting, and both give the value of 2651. After minimization, Python goes up, to 4547, and Matlab goes down to 0.1381.
I think the problem is one of the following: it's in my implementation, that is, I am using fmin wrong, or there's some other step I got wrong, but I can't figure out what. The fact that the MSD increases when it should decrease under a minimization function is damning. Reading the documentation, Matlab's fminsearch uses the Nelder-Mead method described in Lagarias et al., while scipy's fmin uses the original Nelder-Mead. Maybe that affects things significantly? Or perhaps my initial guess is too bad for scipy's algorithm?
So, quite a long time since I posted this, but I wanted to share what I ended up learning and doing.
The Inverse Laplace Transform for CPMG data is a bit of a misnomer, and it's more properly called just inversion. The general problem is solving a Fredholm integral of the first kind. One way of doing this is the Tikhonov regularization method. Turns out, you can describe this problem quite easily using numpy, and solve it with a scipy package, so I don't have to "reinvent" the wheel with this.
I used the solution shown in this post, and the names here reflect that solution.
import numpy as np
from scipy.optimize import nnls

def tikhonov_regularized_inversion(
    kernel: np.ndarray, alpha: float, data: np.ndarray
) -> np.ndarray:
    data = data.reshape(-1, 1)
    I = alpha * np.eye(*kernel.shape)
    C = np.concatenate([kernel, I], axis=0)
    d = np.concatenate([data, np.zeros_like(data)])
    x, _ = nnls(C, d.flatten())
    return x
Here, kernel is a matrix containing all the possible exponential decay curves, and the solution judges the contribution of each decay curve to the data I received. First, I stack my data as a column and pad it with zeros, creating the vector d. Then I stack my kernel on top of a diagonal matrix containing the regularization parameter alpha along the diagonal, of the same size as the kernel. Last, I call the convenient nnls, a non-negative least squares solver in scipy.optimize. This is because there's no reason to have a negative contribution, only no contribution.
This solved my problem; it's quick and convenient.
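As a usage sketch (hypothetical values, mirroring the MWE in the question):

import numpy as np

# Synthetic decay data built from the distribution used in the MWE above.
g_true = np.array([0, 0, 10.1625, 25.1974, 21.8711, 1.6377, 7.3895, 8.736, 1.4256, 0, 0])
s0 = np.logspace(-3, 6, len(g_true))
t = np.linspace(0.01, 500, 100)
np.random.seed(1)
data = np.exp(-t[:, None] / s0[None, :]) @ g_true + 0.07 * np.random.rand(t.size)

# Kernel of candidate decay curves on the inversion grid.
s = np.logspace(-3, 6, 20)
kernel = np.exp(-t[:, None] / s[None, :])
g = tikhonov_regularized_inversion(kernel, 1e-2, data)  # one weight per candidate decay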

Loops to minimize function of arrays in python

I have some large arrays each with i elements, call them X, Y, Z, for which I need to find some values a, b--where a and b are real numbers between 0 and 1--such that, for the following functions,
r = X - a*Y - b*Z
r_av = Sum(r)/i
rms = Sum((r - r_av)^2), summing over the i pixels
I want to minimize the rms. Basically I'm looking to minimize the scatter in r, and thus need to find the right a and b to do that. So far I have thought to do this in nested loops in one of two ways: either 1)just looping through a range of possible a,b and then selecting out the smallest rms, or 2)inserting a while statement so that the loop will terminate once rms stops decreasing with decreasing a,b for instance. Here's some pseudocode for these:
1) List

for a = 1
  for b = 1
    calculate m
    b = b - .001
  a = a - .001
loop 1000 times
sort m values, from smallest
print (a,b) corresponding to smallest m

2) Terminate

for a = 1
  for b = 1
    calculate m
    while m > previous step,
      b = b - .001
  a = a - .001
Is one of these preferable? Or is there yet another, better way to go about this? Any tips would be greatly appreciated.
There is already a handy formula for least squares fitting.
I came up with two different ways to solve your problem.
For the first one, consider the matrix K:
L = len(X)
K = np.identity(L) - np.ones((L, L)) / L
In your case, A and B are defined as:
A = K.dot(np.array([Y, Z]).transpose())
B = K.dot(np.array([X]).transpose())
Apply the formula to find C that minimizes the error A * C - B:
C = np.linalg.inv(np.transpose(A).dot(A))
C = C.dot(np.transpose(A)).dot(B)
Then the result is:
a, b = C.reshape(2)
Also, note that numpy already provides linalg.lstsq that does the exact same thing:
a, b = np.linalg.lstsq(A, B)[0].reshape(2)
A simpler way is to define A as:
A = np.array([Y, Z, [1]*len(X)]).transpose()
Then solve it against X to get the coefficients and the mean:
a, b, mean = np.linalg.lstsq(A, X)[0]
If you need a proof of this result, have a look at this post.
Example:
>>> import numpy as np
>>> X = [5, 7, 9, 5]
>>> Y = [2, 0, 4, 1]
>>> Z = [7, 2, 4, 6]
>>> A = np.array([Y, Z, [1] * len(X)]).transpose()
>>> a, b, mean = np.linalg.lstsq(A, X)[0]
>>> print(a, b, mean)
0.860082304527 -0.736625514403 8.49382716049
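To see that this really minimizes the scatter, here is a quick check (a sketch reusing the example values above) that the residual r = X - a*Y - b*Z has mean equal to mean and that its spread is what the question calls rms:

import numpy as np

X = np.array([5, 7, 9, 5]); Y = np.array([2, 0, 4, 1]); Z = np.array([7, 2, 4, 6])
A = np.array([Y, Z, np.ones_like(X)]).T
a, b, mean = np.linalg.lstsq(A, X, rcond=None)[0]

r = X - a * Y - b * Z
print(np.isclose(r.mean(), mean))    # True: the fitted constant is the residual mean
print(((r - r.mean()) ** 2).sum())   # the minimized scatter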

Cubic Spline Python code producing linear splines

edit: I'm not looking for you to debug this code. If you are familiar with this well-known algorithm, then you may be able to help. Please note that the algorithm produces the coefficients correctly.
This code for cubic spline interpolation is producing linear splines and I can't seem to figure out why (yet). The algorithm comes from Burden's Numerical Analysis, which is just about identical to the pseudocode here, or you can find that book from a link in the comments (see chapter 3, it's worth having anyway). The code is producing the correct coefficients; I believe that I am misunderstanding the implementation. Any feedback is greatly appreciated. Also, I'm new to programming, so any feedback on my coding style is also welcome. I tried uploading pics of the linear system in terms of h, a, and c, but as a new user I cannot. If you want a visual of the tridiagonal linear system that the algorithm solves, and which is set up by the var alpha, see the link in the comments for the book, chapter 3. The system is strictly diagonally dominant, so we know there exists a unique solution c0, ..., cn. Once we know the ci values, the other coefficients follow.
import matplotlib.pyplot as plt

# need some zero vectors...
def zeroV(m):
    z = [0]*m
    return(z)

#INPUT: n; x0, x1, ... ,xn; a0 = f(x0), a1 = f(x1), ... , an = f(xn).
def cubic_spline(n, xn, a, xd):
    """function cubic_spline(n, xn, a, xd) interpolates between the knots
    specified by lists xn and a. The function computes the coefficients
    and outputs the ranges of the piecewise cubic splines."""
    h = zeroV(n-1)
    # alpha will be values in a system of eq's that will allow us to solve for c
    # and then from there we can find b, d through substitution.
    alpha = zeroV(n-1)
    # l, u, z are used in the method for solving the linear system
    l = zeroV(n+1)
    u = zeroV(n)
    z = zeroV(n+1)
    # b, c, d will be the coefficients along with a.
    b = zeroV(n)
    c = zeroV(n+1)
    d = zeroV(n)

    for i in range(n-1):
        # h[i] is used to satisfy the condition that
        # Si+1(xi+1) = Si(xi+1) for each i = 0,..,n-1
        # i.e., the values at the knots are "doubled up"
        h[i] = xn[i+1]-xn[i]

    for i in range(1, n-1):
        # Sets up the linear system and allows us to find c. Once we have
        # c then b and d follow in terms of it.
        alpha[i] = (3./h[i])*(a[i+1]-a[i])-(3./h[i-1])*(a[i] - a[i-1])

    # I, II, (part of) III Sets up and solves tridiagonal linear system...
    # I
    l[0] = 1
    u[0] = 0
    z[0] = 0

    # II
    for i in range(1, n-1):
        l[i] = 2*(xn[i+1] - xn[i-1]) - h[i-1]*u[i-1]
        u[i] = h[i]/l[i]
        z[i] = (alpha[i] - h[i-1]*z[i-1])/l[i]

    l[n] = 1
    z[n] = 0
    c[n] = 0

    # III... also find b, d in terms of c.
    for j in range(n-2, -1, -1):
        c[j] = z[j] - u[j]*c[j+1]
        b[j] = (a[j+1] - a[j])/h[j] - h[j]*(c[j+1] + 2*c[j])/3.
        d[j] = (c[j+1] - c[j])/(3*h[j])

    # This is my only addition, which is returning values for Sj(x). The issue
    # I'm having is related to this implementation, I suspect.
    for j in range(n-1):
        #OUTPUT: S(x) = Sj(x) = aj + bj(x - xj) + cj(x - xj)^2 + dj(x - xj)^3 for xj <= x <= xj+1
        return(a[j] + b[j]*(xd - xn[j]) + c[j]*((xd - xn[j])**2) + d[j]*((xd - xn[j])**3))
For the bored, or overachieving...
Here is code for testing; the interval is x: [1, 9], y: [0, 19.7750212]. The test function is x*ln(x), so we start at 1 and increase by .1 up to 9.
from math import log

ln = []
ln_dom = []
cub = []
step = 1.
X = [1., 9.]
FX = [0, 19.7750212]
while step <= 9.:
    ln.append(step*log(step))
    ln_dom.append(step)
    cub.append(cubic_spline(2, X, FX, step))
    step += 0.1
...and for plotting:
plt.plot(ln_dom, cub, color='blue')
plt.plot(ln_dom, ln, color='red')
plt.axis([1., 9., 0, 20], 'equal')
plt.axhline(y=0, color='black')
plt.axvline(x=0, color='black')
plt.show()
Ok, got this working. The problem was in my implementation. I got it working with a different approach, where the splines are constructed individually instead of continuously. This is fully functioning cubic spline interpolation by method of first constructing the coefficients of the spline polynomials (which is 99% of the work), then implementing them. Obviously this is not the only way to do it. I may work on a different approach and post that if there is interest. One thing that would clarify the code would be an image of the linear system that is solved, but I can't post pics until my rep gets up to 10. If you want to go deeper into the algorithm, see the textbook link in the comments above.
import matplotlib.pyplot as plt
from pylab import arange
from math import e
from math import pi
from math import sin
from math import cos
from numpy import poly1d

# need some zero vectors...
def zeroV(m):
    z = [0]*m
    return(z)

#INPUT: n; x0, x1, ... ,xn; a0 = f(x0), a1 = f(x1), ... , an = f(xn).
def cubic_spline(n, xn, a):
    """function cubic_spline(n, xn, a) interpolates between the knots
    specified by lists xn and a. The function computes the coefficients
    and outputs the ranges of the piecewise cubic splines."""
    h = zeroV(n-1)
    # alpha will be values in a system of eq's that will allow us to solve for c
    # and then from there we can find b, d through substitution.
    alpha = zeroV(n-1)
    # l, u, z are used in the method for solving the linear system
    l = zeroV(n+1)
    u = zeroV(n)
    z = zeroV(n+1)
    # b, c, d will be the coefficients along with a.
    b = zeroV(n)
    c = zeroV(n+1)
    d = zeroV(n)

    for i in range(n-1):
        # h[i] is used to satisfy the condition that
        # Si+1(xi+1) = Si(xi+1) for each i = 0,..,n-1
        # i.e., the values at the knots are "doubled up"
        h[i] = xn[i+1]-xn[i]

    for i in range(1, n-1):
        # Sets up the linear system and allows us to find c. Once we have
        # c then b and d follow in terms of it.
        alpha[i] = (3./h[i])*(a[i+1]-a[i])-(3./h[i-1])*(a[i] - a[i-1])

    # I, II, (part of) III Sets up and solves tridiagonal linear system...
    # I
    l[0] = 1
    u[0] = 0
    z[0] = 0

    # II
    for i in range(1, n-1):
        l[i] = 2*(xn[i+1] - xn[i-1]) - h[i-1]*u[i-1]
        u[i] = h[i]/l[i]
        z[i] = (alpha[i] - h[i-1]*z[i-1])/l[i]

    l[n] = 1
    z[n] = 0
    c[n] = 0

    # III... also find b, d in terms of c.
    for j in range(n-2, -1, -1):
        c[j] = z[j] - u[j]*c[j+1]
        b[j] = (a[j+1] - a[j])/h[j] - h[j]*(c[j+1] + 2*c[j])/3.
        d[j] = (c[j+1] - c[j])/(3*h[j])

    # Now that we have the coefficients it's just a matter of constructing
    # the appropriate polynomials and graphing.
    for j in range(n-1):
        cub_graph(a[j], b[j], c[j], d[j], xn[j], xn[j+1])

    plt.show()

def cub_graph(a, b, c, d, x_i, x_i_1):
    """cub_graph takes the i'th coefficient set along with the x[i] and x[i+1]'th
    data pts, and constructs the polynomial spline between the two data pts using
    the poly1d python object (which simply returns a polynomial with a given root)."""
    # notice here that we are just building the cubic polynomial piece by piece
    root = poly1d(x_i, True)
    poly = 0
    poly = d*(root)**3
    poly = poly + c*(root)**2
    poly = poly + b*root
    poly = poly + a
    # Set up our domain between data points, and plot the function
    pts = arange(x_i, x_i_1, 0.001)
    plt.plot(pts, poly(pts), '-')
    return
If you want to test it, here's some data you can use to get started, which comes from the function 1.6*e^(-2x)*sin(3*pi*x) between 0 and 1:
# These are our data points
x_vals = [0, 1./6, 1./3, 1./2, 7./12, 2./3, 3./4, 5./6, 11./12, 1]

# Set up the domain
x_domain = arange(0, 2, 1e-2)

fx = zeroV(10)

# Defines the function so we can get our fx values
def sine_func(x):
    return(1.6*e**(-2*x)*sin(3*pi*x))

for i in range(len(x_vals)):
    fx[i] = sine_func(x_vals[i])

# Run cubic_spline interpolant.
cubic_spline(10, x_vals, fx)
Comments on your coding style:
Where are your comments and documentation? At the very least, provide function documentation so that people can tell how your function is supposed to be used.
Instead of:
def cubic_spline(xx,yy):
Please write something like:
def cubic_spline(xx, yy):
    """function cubic_spline(xx,yy) interpolates between the knots
    specified by lists xx and yy. The function returns the coefficients
    and ranges of the piecewise cubic splines."""
You can make lists of repeated elements by using the * operator on a list.
Like this:
>>> [0] * 10
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
So that your zeroV function can be replaced by [0] * m.
Just don't do this with mutable types! (especially lists).
>>> inner_list = [1,2,3]
>>> outer_list = [inner_list] * 3
>>> outer_list
[[1, 2, 3], [1, 2, 3], [1, 2, 3]]
>>> inner_list[0] = 999
>>> outer_list
[[999, 2, 3], [999, 2, 3], [999, 2, 3]] # wut
Math should probably be done using numpy or scipy.
Apart from that, you should read Idiomatic Python by David Goodger.
