Motivation for the question
I'm trying to integrate a function f(x,y,z) over all space.
I have tried using scipy.integrate.tplquad and scipy.integrate.nquad for the integration, but both methods return 0 (when the integral should be finite and non-zero). This is because, as the volume of integration increases, the region where the integrand is non-zero gets sampled less and less, and the integral 'misses' this region of space. However, scipy.integrate.quad does seem to be able to cope with integrals from [-infinity, infinity] by performing a change of variables...
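For example, in 1D quad handles the infinite interval fine (a quick check with an assumed Gaussian integrand):

import numpy as np
from scipy.integrate import quad

# quad maps [-inf, inf] onto a finite interval internally
val, err = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
print(val)             # ~1.7724538509055159
print(np.sqrt(np.pi))  # exact value for comparison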
Question
Is it possible to use scipy.integrate.quad three times to perform a triple integral? The code I have in mind would look something like the following:
x_integral = quad(f, -np.inf, np.inf)
y_integral = quad(x_integral, -np.inf, np.inf)
z_integral = quad(y_integral, -np.inf, np.inf)
where f is the function f(x, y, z), x_integral should integrate from x = [-infinity, infinity], y_integral should integrate from y = [-infinity, infinity], and z_integral should integrate from z = [-infinity, infinity]. I am aware that quad wants to return a float, and so does not like integrating a function f(x, y, z) over x to return a function of y and z (as the x_integral = ... line from the code above is attempting to do). Is there a way of implementing the code above?
Thanks
Here is an example with nested calls to quad, computing the volume of one octant (1/8) of the unit sphere:
import numpy as np
from scipy.integrate import quad

def fz(y, x):
    # innermost integral over z; quad integrates over the first argument,
    # so the integration variable y comes first and x arrives via args
    return quad(lambda z: 1, 0, np.sqrt(1 - x**2 - y**2))[0]

def fy(x):
    return quad(fz, 0, np.sqrt(1 - x**2), args=(x,))[0]

def fx():
    return quad(fy, 0, 1)[0]
>>> fx()
0.5235987755981053
>>> 4/3*np.pi/8
0.5235987755982988
I'm trying to integrate a function f(x,y,z) over all space.
First of all, you'll have to ask yourself why the integral should converge at all. Does the integrand have a factor exp(-r) or exp(-r^2)? In both of these cases, quadpy (a project of mine) has something for you, e.g.,
import quadpy
scheme = quadpy.e3r2.stroud_secrest_10a()
val = scheme.integrate(lambda x: x[0]**2)
print(val)
2.784163998415853
I have a function that should compute an integral, taking some function as input. I'd like the code to compute a definite integral of <some function in terms of x, e.g., 3*x or 3*x*(1-x), etc.> * np.sin(np.pi * x). I'm using scipy for this:
import numpy as np
import scipy.integrate as integrate

def calculate(a):
    test = integrate.quad(a*np.sin(np.pi * x), 0, 1)
    return test

a = lambda x: 3*x
calculate(a)
Now this implementation fails because of the discrepancy between a and x. I tried defining x as x = lambda x: x, but that doesn't work because I get an error about multiplying a float by a function.
Any suggestions?
Since you are trying to combine two symbolic expressions before computing the definite integral numerically, I think this might be a good application for sympy's symbolic manipulation tools.
from sympy import symbols, Integral, sin, pi

x = symbols('x')

def calculate(a_exp):
    test = Integral(a_exp * sin(pi * x), (x, 0, 1)).evalf()
    return test

a_exp = 3*x

print(calculate(a_exp))
# 0.954929658551372
Note: I changed the name of a to a_exp to make it clear that this is an expression rather than a function.
If you decide to use sympy, note that you might also be able to compute the expression for the integral symbolically.
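For example, a fully symbolic variant (a small sketch using sympy's integrate instead of Integral(...).evalf()):

from sympy import symbols, integrate, sin, pi

x = symbols('x')
exact = integrate(3*x * sin(pi*x), (x, 0, 1))  # symbolic antiderivative route
print(exact)          # 3/pi
print(exact.evalf())  # 0.954929658551372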
Update: Importing Sympy might be overkill for this
If computation speed is more important than precision, you can easily calculate the integral approximately using a simple discretization method.
For example, the functions below calculate the integral approximately with increasingly sophisticated methods. The accuracy of the first two improves as n is increased, and also depends on the nature of a_func.
import numpy as np
from scipy.integrate import trapz, quad

def calculate2(a_func, n=100):
    # left Riemann sum
    dx = 1/n
    x = np.linspace(0, 1-dx, n)
    y = a_func(x) * np.sin(np.pi*x)
    return np.sum(y) * dx

def calculate3(a_func, n=100):
    # trapezoidal rule
    x = np.linspace(0, 1, n+1)
    y = a_func(x) * np.sin(np.pi*x)
    return trapz(y, x)

def calculate4(a_func):
    # adaptive quadrature
    f = lambda x: a_func(x) * np.sin(np.pi*x)
    return quad(f, 0, 1)

a_func = lambda x: 3*x

print(calculate2(a_func))
# 0.9548511174430737
print(calculate3(a_func))
# 0.9548511174430737
print(calculate4(a_func)[0])
# 0.954929658551372
I'm not an expert on numerical integration so there may be better ways to do this than these.
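One more option in the same spirit, sketched here assuming your scipy version provides scipy.integrate.simpson: Simpson's rule on pre-sampled values.

import numpy as np
from scipy.integrate import simpson

# Simpson's rule on 101 evenly spaced samples of the integrand
x = np.linspace(0, 1, 101)
y = 3*x * np.sin(np.pi*x)
print(simpson(y, x=x))  # ~0.95493, close to the quad result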
I am trying to use scipy to numerically solve the following differential equation
x'' + x = \sum_{k=1}^{20} \delta(t - k\pi), \quad x(0) = x'(0) = 0.
Here is the code
from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt
from sympy import DiracDelta

def f(t):
    sum = 0
    for i in range(20):
        sum = sum + 1.0*DiracDelta(t - (i+1)*np.pi)
    return sum

def ode(X, t):
    x = X[0]
    y = X[1]
    dxdt = y
    dydt = -x + f(t)
    return [dxdt, dydt]

X0 = [0, 0]
t = np.linspace(0, 80, 500)
sol = odeint(ode, X0, t)

x = sol[:, 0]
y = sol[:, 1]

plt.plot(t, x, t, y)
plt.xlabel('t')
plt.legend(('x', 'y'))

# phase portrait
plt.figure()
plt.plot(x, y)
plt.plot(x[0], y[0], 'ro')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
However, what I got from Python is the zero solution, which is different from what I got from Mathematica. Here is the Mathematica code:
so = NDSolve[{x''[t] + x[t] == Sum[DiracDelta[t - i Pi], {i, 1, 20}], x[0] == 0, x'[0] == 0}, x[t], {t, 0, 80}]
It seems to me that scipy ignores the Dirac delta function. Where am I wrong? Any help is appreciated.
Dirac delta is not a function. Writing it as a density in an integral is still only a symbolic representation. As a mathematical object, it is a functional on the space of continuous functions: delta(t0, f) = f(t0), not more, not less.
One can approximate the evaluation, or "sifting", effect of the delta operator by continuous functions. The usual good approximations have the form N*phi(N*t), where N is a large number and phi is a non-negative function, usually of somewhat compact shape, that has integral one. Popular examples are box functions, tent functions, the Gauß bell curve, ... So you could take
def tentfunc(t):
    return max(0, 1 - abs(t))

N = 10.0

def rhs(t):
    return sum(N*tentfunc(N*(t - (i+1)*np.pi)) for i in range(20))

X0 = [0, 0]
t = np.linspace(0, 80, 1000)
sol = odeint(lambda x, t: [x[1], rhs(t) - x[0]], X0, t,
             tcrit=np.pi*np.arange(21), atol=1e-8, rtol=1e-10)
x, v = sol.T

plt.plot(t, x, t, v)
which gives
Note that the density of the t array also influences the accuracy, while the tcrit critical points did not do much.
Another way is to remember that delta is the second derivative of max(0, x), so one can construct a function u that is a second antiderivative of the right-hand side,
def u(t):
    return sum(np.maximum(0, t - (i+1)*np.pi) for i in range(20))
so that the equation is equivalent to
(x(t) - u(t))'' + x(t) = 0.
Set y = x - u; then
y''(t) + y(t) = -u(t),
which now has a continuous right-hand side.
X0 = [0, 0]
t = np.linspace(0, 80, 1000)
sol = odeint(lambda y, t: [y[1], -u(t) - y[0]], X0, t, atol=1e-8, rtol=1e-10)
y, v = sol.T
x = y + u(t)

plt.plot(t, x)
odeint:
- does not handle sympy symbolic objects, and
- it is unlikely it can ever handle Dirac delta terms.
The best bet is probably to turn the Dirac deltas into boundary conditions: assume that the function is continuous at the location of each delta, but that the first derivative jumps. Integrating over an infinitesimal interval around the location of the delta gives the jump condition relating the derivative just left and just right of the delta.
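A minimal sketch of that idea for the equation above (helper names are illustrative): integrate the homogeneous system between impulses, and add a unit jump to the velocity at each t = k*pi.

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def rhs(X, t):
    # homogeneous equation x'' + x = 0 as a first-order system
    return [X[1], -X[0]]

X = np.array([0.0, 0.0])  # x(0) = x'(0) = 0
ts, xs = [], []
t_prev = 0.0
for k in range(1, 21):
    seg = np.linspace(t_prev, k*np.pi, 200)
    sol = odeint(rhs, X, seg)
    ts.append(seg)
    xs.append(sol[:, 0])
    X = sol[-1].copy()
    X[1] += 1.0  # unit impulse at t = k*pi: the velocity jumps by one
    t_prev = k*np.pi

plt.plot(np.concatenate(ts), np.concatenate(xs))
plt.xlabel('t')
plt.show()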
I have a differential equation of the form
dy(x)/dx = f(y,x)
that I would like to solve for y.
I have an array xs containing all of the values of x for which I need ys.
For only those values of x, I can evaluate f(y,x) for any y.
How can I solve for ys, preferably in python?
MWE
import numpy as np

# these are the only x values that are legal
xs = np.array([0.15, 0.383, 0.99, 1.0001])

# some made up function --- I don't actually have an analytic form like this
def f(y, x):
    if not np.any(np.isclose(x, xs)):
        return np.nan
    return np.sin(y + x**2)

# now I want to know which array of ys satisfies dy(x)/dx = f(y,x)
Assuming you can use something simple like Forward Euler...
Numerical solutions will rely on approximate solutions at previous times. So if you want a solution at t = 1 it is likely you will need the approximate solution at t<1.
My advice is to figure out what step size will allow you to hit the times you need, and then find the approximate solution on an interval containing those times.
import numpy as np

# from your example, the smallest step size required to hit all x values is 0.0001
a = 0       # start point
b = 1.5     # possible end point
h = 0.0001
n = int((b - a)/h) + 1

y = np.zeros(n)
t = np.linspace(a, b, n)
y[0] = 0.1  # initial condition here

for i in range(1, n):
    y[i] = y[i-1] + h*f(y[i-1], t[i-1])  # forward Euler step
Alternatively, you could use an adaptive step method (which I am not prepared to explain right now) to take larger steps between the times you need.
Or, you could find an approximate solution over an interval using a coarser mesh and interpolate the solution.
Any of these should work.
I think you should first solve the ODE on a regular grid and then interpolate the solution onto your fixed grid. Approximate code for your problem:
import numpy as np
from scipy.integrate import odeint
from scipy import interpolate

xs = np.array([0.15, 0.383, 0.99, 1.0001])

# dy/dx = f(x, y)
def dy_dx(y, x):
    return np.sin(y + x**2)

y0 = 0.0  # initial condition
x = np.linspace(0, 10, 200)  # here you can control the accuracy
sol = odeint(dy_dx, y0, x)

f = interpolate.interp1d(x, np.ravel(sol))
ys = f(xs)
But dy_dx(y, x) should always return something reasonable (not np.nan).
Can anyone provide an example of providing a Jacobian to the integrate.odeint function in SciPy?
I tried to run this code from the SciPy tutorial's odeint example, but it seems that Dfun (the Jacobian function) is never called.
from numpy import *  # added
from scipy.integrate import odeint
from scipy.special import gamma, airy

y1_0 = 1.0/3**(2.0/3.0)/gamma(2.0/3.0)
y0_0 = -1.0/3**(1.0/3.0)/gamma(1.0/3.0)
y0 = [y0_0, y1_0]

def func(y, t):
    return [t*y[1], y[0]]

def gradient(y, t):
    print('jacobian')  # added
    return [[0, t], [1, 0]]

x = arange(0, 4.0, 0.01)
t = x
ychk = airy(x)[0]
y = odeint(func, y0, t)
y2 = odeint(func, y0, t, Dfun=gradient)
print(y2)  # added
Under the hood, scipy.integrate.odeint uses the LSODA solver from the ODEPACK FORTRAN library. To deal with situations where the function you are trying to integrate is stiff, LSODA switches adaptively between two different methods: Adams' method, which is faster but unsuitable for stiff systems, and BDF, which is slower but robust to stiffness.
The particular function you're trying to integrate is non-stiff, so LSODA will use Adams on every iteration. You can check this by returning the infodict (...,full_output=True) and checking infodict['mused'].
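For example, reusing func, y0, t, and gradient from the snippet above:

# 'mused' records the method used on each step: 1 = Adams, 2 = BDF
y2, infodict = odeint(func, y0, t, Dfun=gradient, full_output=True)
print(set(infodict['mused']))  # {1}: Adams everywhere, so no Jacobian calls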
Since Adams' method does not use the Jacobian, your gradient function never gets called. However if you give odeint a stiff function to integrate, such as the Van der Pol equation:
def vanderpol(y, t, mu=1000.):
    return [y[1], mu*(1. - y[0]**2)*y[1] - y[0]]

def vanderpol_jac(y, t, mu=1000.):
    return [[0, 1], [-2*y[0]*y[1]*mu - 1, mu*(1 - y[0]**2)]]

y0 = [2, 0]
t = arange(0, 5000, 1)
y, info = odeint(vanderpol, y0, t, Dfun=vanderpol_jac, full_output=True)

print(info['mused'])  # method used (1=adams, 2=bdf)
print(info['nje'])    # cumulative number of jacobian evaluations
plot(t, y[:, 0])
you should see that odeint switches to using BDF, and the Jacobian function now gets called.
If you want more control over the solver, you should look into scipy.integrate.ode, which is a much more flexible object-oriented interface to multiple different integrators.
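A minimal sketch of that interface, reusing vanderpol and vanderpol_jac from above; note that ode expects f(t, y), the reverse of odeint's f(y, t):

from scipy.integrate import ode

# wrap the odeint-style callables to swap the argument order
solver = ode(lambda t, y: vanderpol(y, t),
             lambda t, y: vanderpol_jac(y, t))
solver.set_integrator('lsoda')       # same solver family odeint uses
solver.set_initial_value([2, 0], 0)
while solver.successful() and solver.t < 3000:
    solver.integrate(solver.t + 1.0)  # advance the solution one unit at a time
print(solver.t, solver.y)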
I am a Python beginner, currently using scipy's odeint to solve a coupled ODE system; however, when I run it, the Python shell always tells me:
Excess work done on this call (perhaps wrong Dfun type).
Run with full_output = 1 to get quantitative information.
So I have to change my time step and final time in order to make it integrable. To do this, I need to try different combinations, which is quite a pain. Could anyone tell me how I can ask odeint to automatically vary the time step and final time to successfully integrate this ODE system?
Here is the part of the code that calls odeint:
def main(t, init_pop_a, init_pop_b, *args, **kwargs):
    """
    solve the obe for a given set of parameters
    """
    # construct initial condition; initially, rho_ee = 0
    rho_init = zeros((16, 16))*1j
    rho_init[1, 1] = init_pop_a
    rho_init[2, 2] = init_pop_b
    rho_init[0, 0] = 1 - (init_pop_a + init_pop_b)
    rho_init_ravel, params = to_1d(rho_init)
    # perform the integration
    result = odeint(wrapped_bloch3, rho_init_ravel, t, args=args)
    # BUG: need to pass kwargs
    # rewrap the result
    return from_1d(result, params, prepend=(len(t),))

things = [2*pi, 20*pi, 0, 0, 0, 0, 0.1, 100]
Omega_a, Omega_b, Delta_a, Delta_b, \
    init_pop_a, init_pop_b, tstep, tfinal = things
args = (Delta_a, Delta_b, Omega_a, Omega_b)

t = arange(0, tfinal + tstep, tstep)
data = main(t, init_pop_a, init_pop_b, *args)

plt.plot(t, abs(data[:, 4, 4]))
where wrapped_bloch3 is the function that computes dy/dt.
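As an aside: odeint already adapts its internal step size, and this error usually means it hit the cap on internal steps between two output points (500 by default). Raising the mxstep keyword is often enough, e.g.:

# same call as in main(), with a higher internal step limit
result = odeint(wrapped_bloch3, rho_init_ravel, t, args=args, mxstep=5000)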
EDIT: I note you already got an answer here: complex ODE systems in scipy
odeint does not work with complex-valued equations. I get
from scipy.integrate import odeint
import numpy as np

def func(t, y):
    return 1 + 1j

t = np.linspace(0, 1, 200)
y = odeint(func, 0, t)

# -> This outputs:
#
# TypeError: can't convert complex to float
# odepack.error: Result from function call is not a proper array of floats.
You can solve your equation with the other ODE solver, scipy.integrate.ode, using its complex-valued 'zvode' integrator:
from scipy.integrate import ode
import numpy as np

def myodeint(func, y0, t):
    y0 = np.array(y0, complex)
    func2 = lambda t, y: func(y, t)  # odeint has these the other way :/
    sol = ode(func2).set_integrator('zvode').set_initial_value(y0, t=t[0])
    y = [sol.integrate(tp) for tp in t[1:]]
    y.insert(0, y0)
    return np.array(y)

def func(y, t, alpha):
    return 1j*alpha*y

alpha = 3.3
t = np.linspace(0, 1, 200)
y = myodeint(lambda y, t: func(y, t, alpha), [1, 0, 0], t)
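As a quick sanity check (assuming the wrapper above ran as-is): y' = 1j*alpha*y with y0 = [1, 0, 0] has the exact solution [exp(1j*alpha*t), 0, 0], so

# compare the first component against the exact solution
print(np.allclose(y[:, 0], np.exp(1j*alpha*t)))  # should print True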