I want to implement composite Gaussian quadrature in Python to evaluate the integral ∫₀¹ e^(x²) dx. Evaluating this with SciPy's quad command, I get ∫₀¹ e^(x²) dx ≈ 1.46.
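For reference, a minimal sketch of that check with scipy.integrate.quad (which returns the value and an error estimate):

import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: np.exp(x**2), 0, 1)
print(val)  # ~1.46265174590718, matching the converged values further down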
Below is my attempt at implementing this in Python. I expect that as n gets larger, the quadrature gets closer to the 'real' integral. However, as I increase n, the results get smaller and smaller. What mistake am I making?
import numpy as np

def gauss(f, a, b, n):
    h = float(b - a) / n
    [x, w] = np.polynomial.legendre.leggauss(n)
    result = 0
    for i in range(n):
        result += w[i] * f(x[i])
    result *= h
    return result

def fun(x):
    return np.exp(x**2)

for i in range(1, 10):
    print(gauss(fun, 0, 1, i))
2.0
1.3956124250860895
0.9711551112557443
0.731135234765899
0.5850529284514102
0.4875503086966867
0.41790049038666144
0.36566293624426005
0.32503372130693325
Here is the working code based on your attempt:
import numpy as np

def gauss(f, a, b, n):
    half = float(b - a) / 2.
    mid = (a + b) / 2.
    [x, w] = np.polynomial.legendre.leggauss(n)
    result = 0.
    for i in range(n):
        result += w[i] * f(half * x[i] + mid)
    result *= half
    return result

def fun(x):
    return np.exp(x**2)

for i in range(1, 10):
    print(gauss(fun, 0, 1, i))
The problem was that the integral has to be rescaled to the domain of the Legendre polynomials you are using, namely [-1, 1]. Also, you have not applied the quadrature rule correctly: it does not use the step size h that appears in your code.
The correct rule is:
∫ₐᵇ f(x) dx ≈ (b-a)/2 · Σᵢ wᵢ · f((b-a)/2 · ξᵢ + (a+b)/2), where the sum runs over i = 1, ..., n.
With this the code above outputs:
1.2840254166877414
1.4541678892391303
1.462409711477322
1.462646815656646
1.4626516680186825
1.4626517449041974
1.4626517458962964
1.4626517459070796
1.462651745907181
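As a side note, the inner loop can also be written as a dot product, since leggauss returns arrays; a minimal vectorized sketch (assuming f accepts numpy arrays, as np.exp does):

import numpy as np

def gauss_vec(f, a, b, n):
    # Same rescaled rule as above, written as a dot product
    x, w = np.polynomial.legendre.leggauss(n)
    half, mid = (b - a) / 2., (a + b) / 2.
    return half * np.dot(w, f(half * x + mid))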
I am trying to calculate the second-order autocorrelation function
g(lag) = ⟨ P2( m(t) · m(t+lag) ) ⟩
for a given numpy array m, where P2 is the second-order Legendre polynomial P2(x) = 0.5 * (3*x**2 - 1). The average is calculated over all initial points t, for all possible lag times lag.
I have written a simple function to calculate g(lag):
import numpy as np

def calc_tau_fast_build(vector):
    maxLag = int(np.floor(len(vector) / 2))
    g = np.zeros(maxLag)
    for tau in range(0, maxLag):
        w = 0
        tmp = 0
        for t in range(0, len(vector) - tau):
            theta = vector[t] * vector[tau + t]
            tmp = tmp + 0.5 * (3 * theta**2 - 1)
            w = w + 1
        g[tau] = tmp / w
    return g
This works just fine; however, for large inputs the performance is rather poor.
I would like to replace the entire function with numpy.correlate if possible. In fact, without the P2 = 0.5*(3*theta**2-1) part, I can get the same results, up to a normalization constant, with

res = np.correlate(vector, vector, mode='full')
res = res[res.size // 2:]
I basically couldn't think of a way to incorporate the P2 function into numpy.correlate without modifying the numpy source code. Could anyone please suggest a way to use numpy.correlate in this context?
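One possible route (a sketch, assuming vector holds scalar samples as in the loop above): for scalars, (m(t)*m(t+lag))**2 = m(t)**2 * m(t+lag)**2, so the P2 average reduces to a plain autocorrelation of vector**2:

import numpy as np

def calc_tau_correlate(vector):
    # g(lag) = 1.5 * <m(t)**2 m(t+lag)**2> - 0.5 for scalar m
    v2 = np.asarray(vector)**2
    n = v2.size
    res = np.correlate(v2, v2, mode='full')[n - 1:]  # sum over t, one entry per lag >= 0
    counts = np.arange(n, 0, -1)                     # number of terms per lag
    g = 1.5 * res / counts - 0.5
    return g[:n // 2]                                # same maxLag as the loop version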
I am currently trying to write some Python code to solve an arbitrary system of first-order ODEs, using a general explicit Runge-Kutta method defined by the values alpha, gamma (both vectors of dimension m) and beta (a lower triangular m x m matrix) of the Butcher tableau, which are passed in by the user. My code appears to work for single ODEs, having been tested on a few examples, but I'm struggling to generalise it to vector-valued ODEs (i.e. systems).
In particular, I try to solve a Van der Pol oscillator ODE (reduced to a first order system) using Heun's method defined by the Butcher Tableau values given in my code, but I receive the errors
"RuntimeWarning: overflow encountered in double_scalars f = lambda t,u: np.array(... etc)" and
"RuntimeWarning: invalid value encountered in add kvec[i] = f(t+alpha[i]*h,y+h*sum)"
followed by my solution vector, which is clearly blowing up. Note that the commented-out code below is one of the single-ODE examples that I tried and that is solved correctly. Could anyone please help? Here is my code:
import numpy as np

def rk(t, y, h, f, alpha, beta, gamma):
    '''Runge-Kutta iteration'''
    return y + h * phi(t, y, h, f, alpha, beta, gamma)

def phi(t, y, h, f, alpha, beta, gamma):
    '''Phi function for the Runge-Kutta iteration'''
    m = len(alpha)
    count = np.zeros(len(f(t, y)))
    kvec = k(t, y, h, f, alpha, beta, gamma)
    for i in range(1, m + 1):
        count = count + gamma[i - 1] * kvec[i - 1]
    return count

def k(t, y, h, f, alpha, beta, gamma):
    '''returns a vector containing each stage k_i of the m-stage Runge-Kutta method'''
    m = len(alpha)
    kvec = np.zeros((m, len(f(t, y))))
    kvec[0] = f(t, y)
    for i in range(1, m):
        sum = np.zeros(len(f(t, y)))
        for l in range(1, i + 1):
            sum = sum + beta[i][l - 1] * kvec[l - 1]
        kvec[i] = f(t + alpha[i] * h, y + h * sum)
    return kvec

def timeLoop(y0, N, f, alpha, beta, gamma, h, rk):
    '''function that loops through time using the RK method'''
    t = np.zeros([N + 1])
    y = np.zeros([N + 1, len(y0)])
    y[0] = y0
    t[0] = 0
    for i in range(1, N + 1):
        y[i] = rk(t[i - 1], y[i - 1], h, f, alpha, beta, gamma)
        t[i] = t[i - 1] + h
    return t, y
#################################################################
'''f = lambda t,y: (c-y)**2
Y = lambda t: np.array([(1+t*c*(c-1))/(1+t*(c-1))])
h0 = 1
c = 1.5
T = 10
alpha = np.array([0,1])
gamma = np.array([0.5,0.5])
beta = np.array([[0,0],[1,0]])
eff_rk = compute(h0,Y(0),T,f,alpha,beta,gamma,rk, Y,11)'''
#constants
mu = 100
T = 1000
h = 0.01
N = int(T/h)
#initial conditions
y0 = 0.02
d0 = 0
init = np.array([y0,d0])
#Butcher Tableau for Heun's method
alpha = np.array([0,1])
gamma = np.array([0.5,0.5])
beta = np.array([[0,0],[1,0]])
#rhs of the ode system
f = lambda t,u: np.array([u[1],mu*(1-u[0]**2)*u[1]-u[0]])
#solving the system
time, sol = timeLoop(init,N,f,alpha,beta,gamma,h,rk)
print(sol)
Your step size is not small enough. The Van der Pol oscillator with mu=100 is a fast-slow system with very sharp turns at the switching of the modes, and hence rather stiff. With explicit methods this requires small step sizes; the smallest sensible step size is 1e-5 to 1e-6. You already get a solution on the limit cycle for h=0.001, with resulting velocities up to 150.
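For instance, a sketch that reuses timeLoop from above with only the step size changed:

h = 0.001
N = int(T / h)
time, sol = timeLoop(init, N, f, alpha, beta, gamma, h, rk)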
You can reduce some of that stiffness by using a different velocity/impulse variable. In the equation
x'' - mu*(1-x^2)*x' + x = 0
you can combine the first two terms into a derivative,
mu*v = x' - mu*(1-x^2/3)*x
so that
x' = mu*(v+(1-x^2/3)*x)
v' = -x/mu
The second equation is now uniformly slow close to the limit cycle, while the first has long relatively straight jumps when v leaves the cubic v=x^3/3-x.
This integrates nicely with the original h=0.01, keeping the solution inside the box [-3,3]x[-2,2], even if it shows some strange oscillations that are not present for smaller step sizes or in the exact solution.
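A minimal sketch of this transformed system, reusing rk and timeLoop from the code above; the initial state [x0, v0] follows from the definition mu*v = x' - mu*(1-x^2/3)*x:

h2 = 0.01
N2 = int(T / h2)
f2 = lambda t, u: np.array([mu*(u[1] + (1 - u[0]**2/3)*u[0]),
                            -u[0]/mu])
x0, dx0 = 0.02, 0.0                      # original initial conditions
v0 = dx0/mu - (1 - x0**2/3)*x0           # transformed velocity variable
time2, sol2 = timeLoop(np.array([x0, v0]), N2, f2, alpha, beta, gamma, h2, rk)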
I am trying to find the elements of a matrix inverse for an ill-conditioned matrix.
Consider the complex non-Hermitian matrix M. I know this matrix has one zero eigenvalue and is therefore singular. However, I need to find the sum of matrix elements v @ f(M) @ u, where u and v are both vectors and f(x) = 1/x (effectively the matrix inverse). I know that the zero eigenvalue does not contribute to this sum, so there is no fundamental issue with the singularity. However, my code is very numerically unstable; I presume this is a consequence of errors in finding the eigenvalues of the system.
Starting by building the preliminary matrices:
import numpy as np
import scipy.linalg

g0 = np.array([0, 0, 1])
g1 = np.array([0, 1, 0])
e0 = np.array([1, 0, 0])
sm = np.outer(g0, e0)
sp = np.outer(e0, g0)

def spre(op):
    return np.kron(np.eye(op.shape[0]), op)

def spost(op):
    return np.kron(op.T, np.eye(op.shape[0]))

def sprepost(op1, op2):
    return np.kron(op1.T, op2)

sm_reg = spre(sm)
sp_reg = spre(sp)
spsm_reg = spre(sp @ sm)
hil_dim = int(g0.shape[0])
cav_proj = np.eye(hil_dim).reshape(hil_dim**2,)
rho0 = (np.outer(e0, e0)).reshape(hil_dim**2,)

def ham(g):
    return g * (np.outer(g1, e0) + np.outer(e0, g1))

def lind_op(A):
    L = 2 * sprepost(A, A.conj().T) - spre(A.conj().T @ A)
    L += - spost(A.conj().T @ A)
    return L

def JC_lio(g, kappa, gamma):
    unit = -1j * (spre(ham(g)) - spost(ham(g)))
    lind = gamma * lind_op(np.outer(g0, e0)) + kappa * lind_op(np.outer(g0, g1))
    return unit + lind
Now define a function that first finds the left and right eigenvectors, and then computes the sum of the matrix elements:
def power_int(g, kappa, gamma):
    # Construct the non-Hermitian matrix of interest
    lio = JC_lio(g, kappa, gamma)
    # Find its left and right eigenvectors:
    ev, left, right = scipy.linalg.eig(lio, left=True, right=True)
    # Find the appropriate normalisation factors
    norm = np.array([(left.conj().T[ii]).dot(right.conj().T[ii]) for ii in range(len(ev))])
    # Find the similarity transformation for the problem
    P = right
    Pinv = (left / norm).conj().T
    # Find the projectors for the eigenbasis
    Proj = [np.outer(P.conj().T[ii], Pinv[ii]) for ii in range(len(ev))]
    # Find the relevant matrix elements between the eigenbasis and the projectors
    # --- this is where the zero eigenvector gets removed
    PowList = [(spsm_reg @ Proj[ii] @ rho0).dot(cav_proj) for ii in range(len(ev))]
    # Apply the function
    Pow = 0
    for ii in range(len(ev)):
        if PowList[ii] == 0:
            Pow = Pow
        else:
            Pow += PowList[ii] / ev[ii]
    return -np.pi * np.real(Pow)

# example run:
grange = np.linspace(0.001, 10, 40)
dat = np.array([power_int(g, 1, 1) for g in grange])
Running this code leads to extremely oscillatory results where I expect a smooth curve. I suspect this error is due to poor accuracy in determining the eigenvectors, but I can't find any documentation on this. Any insights would be welcome.
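One fragile spot worth flagging (a sketch of a possible tweak, not a full diagnosis): PowList[ii] == 0 is an exact floating-point comparison, so the zero mode is only excluded if its matrix element underflows to exactly zero. A more robust variant skips the eigenvalue that is numerically closest to zero:

# Hypothetical replacement for the accumulation loop in power_int
i0 = np.argmin(np.abs(ev))   # index of the (numerically) zero eigenvalue
Pow = sum(PowList[ii] / ev[ii] for ii in range(len(ev)) if ii != i0)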
I want to 1. express Simpson's Rule as a general function for integration in Python and 2. use it to compute and plot the Fourier series coefficients of the function f(t) = sin(t).
I've stolen and adapted this code for Simpson's Rule, which seems to work fine for integrating simple functions.
Given period T = 2π, the Fourier series coefficients are computed as:
a_k = (1/π) ∫₀^{2π} f(t) cos(kt) dt,   b_k = (1/π) ∫₀^{2π} f(t) sin(kt) dt,
where k = 1, 2, 3, ...
I am having difficulty figuring out how to express the integrand f(t)·sin(kt) with k as a free index. I'm aware that a_k = 0 since this function is odd, but I would like to be able to compute the coefficients in general for other functions.
Here's my attempt so far:
import matplotlib.pyplot as plt
from numpy import *

def f(t):
    k = 1
    for k in range(1, 10000):  # to give some representation of k's span
        k += 1
    return sin(t)*sin(k*t)

def trapezoid(f, a, b, n):
    h = float(b - a) / n
    s = 0.0
    s += f(a)/2.0
    for j in range(1, n):
        s += f(a + j*h)
    s += f(b)/2.0
    return s * h

print(trapezoid(f, 0, 2*pi, 100))
This doesn't give the correct answer of 0 at all, since the result increases as k increases, and I'm sure I'm approaching it with tunnel vision in terms of the for loop. My difficulty in particular is with stating the function so that k is read as k = 1, 2, 3, ...
The problem I've been given unfortunately doesn't specify what the coefficients are to be plotted against, but I am assuming it's meant to be against k.
Here's one way to do it, if you want to run your own integration or Fourier coefficient determination instead of using numpy's or scipy's built-in methods:
import numpy as np

def integrate(f, a, b, n):
    t = np.linspace(a, b, n)
    return (b - a) * np.sum(f(t)) / n

def a_k(f, k):
    def ker(t): return f(t) * np.cos(k * t)
    return integrate(ker, 0, 2*np.pi, 2**10+1) / np.pi

def b_k(f, k):
    def ker(t): return f(t) * np.sin(k * t)
    return integrate(ker, 0, 2*np.pi, 2**10+1) / np.pi

print(b_k(np.sin, 0))
This gives the result
0.0
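To plot the coefficients against k, as the question asks, a small sketch along these lines would work (assuming matplotlib is available):

import matplotlib.pyplot as plt

ks = np.arange(1, 10)
bs = [b_k(np.sin, k) for k in ks]   # b_1 should come out ~1, the rest ~0
plt.stem(ks, bs)
plt.xlabel('k')
plt.ylabel('b_k')
plt.show()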
On a side note, trapezoid integration is not very useful for uniform time intervals. But if you want it anyway:
def trap_integrate(f, a, b, n):
    t = np.linspace(a, b, n)
    f_t = f(t)
    dt = t[1:] - t[:-1]
    f_ab = f_t[:-1] + f_t[1:]
    return 0.5 * np.sum(dt * f_ab)
There's also np.trapz if you want to use built-in functionality. Similarly, there's scipy.integrate.trapz.
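For example, a quick sketch checking trap_integrate against np.trapz:

t = np.linspace(0, 2*np.pi, 1001)
print(np.trapz(np.sin(t)**2, t))                            # ~pi
print(trap_integrate(lambda x: np.sin(x)**2, 0, 2*np.pi, 1001))  # same value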
I have been trying to solve integration with a Riemann sum. My function has 3 arguments a, b, d, where a is the lower limit, b is the upper limit, and d is the step size, with a + (n-1)*d < b. This is my code so far, but my output is 28.652667999999572 when I should get 28.666650000000388. Also, if the input b is lower than a it still has to calculate; I have already solved that problem.
def integral(a, b, d):
    if a > b:
        a, b = b, a
    delta_x = float((b - a) / 1000)
    j = abs((b - a) / delta_x)
    i = int(j)
    n = s = 0
    x = a
    while n < i:
        delta_A = (x**2 + 3*x + 4) * delta_x
        x += delta_x
        s += delta_A
        n += 1
    return abs(s)

print(integral(1, 3, 0.01))
There is no fault here, neither with the algorithm nor with your code (or Python). The Riemann sum is an approximation of the integral and is per se not "exact". You approximate the area of a (small) stripe of width dx, say between x and x+dx, under f(x) with the area of a rectangle of the same width, with f(x) as the height of its upper-left corner. If the function changes its value when you go from x to x+dx, the area of the rectangle deviates from the true integral.
As you have noticed, you can make the approximation closer by making thinner and thinner slices, at the cost of more computational effort and time.
In your example, the function is f(x) = x^2 + 3*x + 4, and its exact integral over [1, 3] is (x^3/3 + 3*x^2/2 + 4*x) evaluated from 1 to 3, i.e. 28 2/3 = 28.66666...
The approximation by rectangles is a crude one; you cannot change that. But what you can change is the time it takes your code to evaluate, say, 10^8 steps instead of 10^3. Look at this code:
def riemann(a, b, dx):
    if a > b:
        a, b = b, a
    # dx = (b-a)/n
    n = int((b - a) / dx)
    s = 0.0
    x = a
    for i in xrange(n):
        f_i = (x + 3.0) * x + 4.0
        s += f_i
        x += dx
    return s * dx
Here, I've used three tricks for speedup, and one for greater precision. First, if you write a loop and you know the number of repetitions in advance, use a for-loop instead of a while-loop; it's faster. (BTW, loop variables are conventionally named i, j, k, ..., whereas a limit or final value is n.) Secondly, using xrange instead of range is faster for users of Python 2.x. Thirdly, factor polynomials when you evaluate them often: (x + 3.0)*x + 4.0 is Horner's form of x**2 + 3*x + 4, which needs fewer multiplications and is numerically more stable. Last trick: operations within the loop which do not depend on the loop variable can be pulled out and applied after the loop has ended, here the final multiplication by dx.
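If even larger speedups are needed, the usual next step (beyond the pure-Python scope of the question, a sketch assuming numpy is acceptable) is to vectorize the whole sum:

import numpy as np

def riemann_np(a, b, dx):
    # Left-endpoint Riemann sum, evaluated without a Python-level loop
    if a > b:
        a, b = b, a
    n = int((b - a) / dx)
    x = a + dx * np.arange(n)           # left endpoints of all stripes
    return np.sum((x + 3.0) * x + 4.0) * dx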