I'm currently using scipy.integrate.quad to successfully integrate some real integrands. Now a situation has come up where I need to integrate a complex integrand. quad doesn't seem able to do it, and neither do the other scipy.integrate routines, so I ask: is there any way to integrate a complex integrand using scipy.integrate, without having to separate the integral into its real and imaginary parts?
What's wrong with just separating it out into real and imaginary parts? scipy.integrate.quad requires that the integrated function return floats (i.e., real numbers) for the algorithm it uses.
import numpy as np
from scipy.integrate import quad

def complex_quadrature(func, a, b, **kwargs):
    def real_func(x):
        return np.real(func(x))
    def imag_func(x):
        return np.imag(func(x))
    real_integral = quad(real_func, a, b, **kwargs)
    imag_integral = quad(imag_func, a, b, **kwargs)
    return (real_integral[0] + 1j*imag_integral[0], real_integral[1:], imag_integral[1:])
E.g.,
>>> complex_quadrature(lambda x: np.exp(1j*x), 0, np.pi/2)
((0.99999999999999989+0.99999999999999989j),
(1.1102230246251564e-14,),
(1.1102230246251564e-14,))
which is what you expect up to rounding error: the integral of exp(ix) from 0 to pi/2 is (1/i)(e^(i pi/2) - e^0) = -i(i - 1) = 1 + i ≈ (0.99999999999999989+0.99999999999999989j).
And for the record in case it isn't 100% clear to everyone, integration is a linear functional, meaning that ∫ { f(x) + k g(x) } dx = ∫ f(x) dx + k ∫ g(x) dx (where k is a constant with respect to x). Or for our specific case ∫ z(x) dx = ∫ Re z(x) dx + i ∫ Im z(x) dx as z(x) = Re z(x) + i Im z(x).
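As a quick numerical sanity check of that identity, here is a minimal sketch reusing complex_quadrature from above:

# integral of exp(ix) should equal integral of cos(x) plus i times integral of sin(x)
lhs = complex_quadrature(lambda x: np.exp(1j*x), 0, np.pi/2)[0]
rhs = quad(lambda x: np.cos(x), 0, np.pi/2)[0] + 1j*quad(lambda x: np.sin(x), 0, np.pi/2)[0]
print(abs(lhs - rhs))  # ~0, up to rounding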
If you are trying to do an integration along a path in the complex plane (other than along the real axis), or over a region in the complex plane, you'll need a more sophisticated algorithm.
Note: scipy.integrate will not directly handle complex integration. Why? It does the heavy lifting in the Fortran QUADPACK library, specifically in qagse.f, which explicitly requires the functions/variables to be real before doing its "global adaptive quadrature based on 21-point Gauss–Kronrod quadrature within each subinterval, with acceleration by Peter Wynn's epsilon algorithm." So unless you want to modify the underlying Fortran to handle complex numbers and compile it into a new library, you aren't going to get it to work.
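A side note, assuming a reasonably recent SciPy: newer releases added a complex_func flag to quad that performs exactly this real/imaginary split internally, so the wrapper above becomes unnecessary. A minimal sketch:

import numpy as np
from scipy.integrate import quad

# complex_func=True makes quad integrate the real and imaginary parts
# separately and recombine them into a complex result
val, err = quad(lambda x: np.exp(1j*x), 0, np.pi/2, complex_func=True)
print(val)  # approximately (1+1j)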
If you really want to do the Gauss-Kronrod method with complex numbers in exactly one integration, look at the Wikipedia page and implement it directly, as done below (using the 7-point and 15-point rules). Note that I memoized the function to avoid repeated calls at shared nodes (on the assumption that function evaluations are slow, as they would be for a complicated integrand). I only did the 7-point and 15-point rules, since I didn't feel like calculating the nodes/weights myself and those were the ones listed on Wikipedia, but I am getting reasonable errors for test cases (~1e-14).
import numpy as np

def quad_routine(func, a, b, x_list, w_list):
    c_1 = (b - a)/2.0
    c_2 = (b + a)/2.0
    eval_points = [c_1*x + c_2 for x in x_list]
    func_evals = [func(x) for x in eval_points]
    return c_1 * sum(np.array(func_evals) * np.array(w_list))

def quad_gauss_7(func, a, b):
    x_gauss = [-0.949107912342759, -0.741531185599394, -0.405845151377397, 0, 0.405845151377397, 0.741531185599394, 0.949107912342759]
    w_gauss = [0.129484966168870, 0.279705391489277, 0.381830050505119, 0.417959183673469, 0.381830050505119, 0.279705391489277, 0.129484966168870]
    return quad_routine(func, a, b, x_gauss, w_gauss)

def quad_kronrod_15(func, a, b):
    x_kr = [-0.991455371120813, -0.949107912342759, -0.864864423359769, -0.741531185599394, -0.586087235467691, -0.405845151377397, -0.207784955007898, 0.0, 0.207784955007898, 0.405845151377397, 0.586087235467691, 0.741531185599394, 0.864864423359769, 0.949107912342759, 0.991455371120813]
    w_kr = [0.022935322010529, 0.063092092629979, 0.104790010322250, 0.140653259715525, 0.169004726639267, 0.190350578064785, 0.204432940075298, 0.209482141084728, 0.204432940075298, 0.190350578064785, 0.169004726639267, 0.140653259715525, 0.104790010322250, 0.063092092629979, 0.022935322010529]
    return quad_routine(func, a, b, x_kr, w_kr)

class Memoize(object):
    def __init__(self, func):
        self.func = func
        self.eval_points = {}
    def __call__(self, *args):
        if args not in self.eval_points:
            self.eval_points[args] = self.func(*args)
        return self.eval_points[args]

def quad(func, a, b):
    '''Output is the 15-point estimate and the estimated error.'''
    func = Memoize(func)  # Memoize function to skip repeated function calls.
    g7 = quad_gauss_7(func, a, b)
    k15 = quad_kronrod_15(func, a, b)
    # I don't have much faith in this error estimate taken from Wikipedia
    # without incorporating how it should scale with changing limits
    return [k15, (200*np.absolute(g7 - k15))**1.5]
Test case:
>>> quad(lambda x: np.exp(1j*x), 0, np.pi/2.0)
[(0.99999999999999711+0.99999999999999689j), 9.6120083407040365e-19]
I don't trust the error estimate -- I took something from the wiki recommended for estimating the error when integrating over [-1, 1], and the values don't seem reasonable to me. E.g., the error above compared with the truth is ~5e-15, not ~1e-19. I'm sure that if someone consulted Numerical Recipes, you could get a more accurate estimate. (You probably have to multiply by (b-a)/2 to some power, or something similar.)
Recall that the Python version is less accurate than just calling scipy's QUADPACK-based integration twice. (You could improve upon it if desired.)
I realize I'm late to the party, but perhaps quadpy (a project of mine) can help. This
import quadpy
import numpy
val, err = quadpy.quad(lambda x: numpy.exp(1j * x), 0, 1)
print(val)
correctly gives
(0.8414709848078964+0.4596976941318605j)
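For reference, the exact value is sin(1) + i(1 - cos(1)), since the antiderivative of exp(ix) is exp(ix)/i; a quick check:

import numpy
exact = numpy.sin(1) + 1j*(1 - numpy.cos(1))
print(exact)  # (0.8414709848078965+0.45969769413186023j)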
I am trying to calculate an integral with mpmath.quad. I basically have to calculate three moments of a distribution, call it f (pseudocode):
integrate(f(x)/x, 0, infinity)
integrate(f(x)*x, 0, infinity)
integrate(f(x)*ln(x)/x, 0, infinity)
As far as I understand, the tanh-sinh algorithm from mpmath applies a coordinate transformation to the integration interval, uses it to find suitable nodes xk and weights wk, and returns (pseudocode):
sum(f(xk)*wk for k = 1..N)
with N the number of nodes. Since in my original problem the calculation takes a long time, I was hoping to reuse the values of f calculated by the first integration for the other integrals, i.e., to compute them from discrete samples with something like scipy.integrate.trapezoid or simpson. Using a Simulator class I adapted from an answer on this forum, I managed to cache the xk values and f(xk) values, but not the weights. I checked that the xk values from different integrations are the same.
Now, if I naively apply a standard quadrature rule from the scipy module to my samples, say trapezoid(fs, xs), the result I get differs from the one calculated by mpmath.quad, especially if the samples are few. While this does not surprise me, I would like to find out how to retrieve the weights mpmath.quad uses in the first calculation, say the one for f(x)/x, so I can avoid running the time-consuming algorithm three times.
I cannot understand the documentation on this point. The mpmath documentation gives a lot of examples, but none concerning the retrieval of the calculated nodes, although it states several times that the nodes are "cached". Where are they? mpmath.quad only returns the integral result!
So I would like to know: is what I'm trying to achieve sensible at all? And if it is, how could I accomplish the task?
Below is code that reproduces the behaviour. Any help is very much appreciated.
import numpy as np
import mpmath as mp
import matplotlib.pyplot as plt
from scipy.integrate import trapezoid, simpson

class Simulator:
    def __init__(self, func, storex: np.ndarray = np.array([]), storef: np.ndarray = np.array([])):
        self.func = func
        self.storex = storex
        self.storef = storef

    def simulate(self, x, *args):
        result = self.func(x, *args)
        self.storex = np.append(self.storex, x)
        self.storef = np.append(self.storef, result)
        return result

def lorentz(x):
    x = mp.mpf(x)
    return mp.mpf(1)/(mp.power(x - mp.mpf(1), 2) + mp.mpf(1))

simratio = simproduct = Simulator(lorentz)

integralratio = mp.quad(lambda x: simratio.simulate(x)/x, [0, 1, mp.inf])
integralproduct = mp.quad(lambda x: simproduct.simulate(x)*x, [0, 1, mp.inf])

ratiox, ratioy = np.transpose(sorted([(x, y) for x, y in zip(simratio.storex, simratio.storef)]))  # the sampled xs and f(x)s
integralproduct_trap = trapezoid(ratioy*ratiox, ratiox)

print("int[f(x)/x], quad: ", integralratio)
print("int[f(x)*x], quad: ", integralproduct)
print("int[f(x)*x], scipy.trapezoid: ", integralproduct_trap)
I am trying to define a piecewise function to be fitted with the lmfit library in Python. The issue I am having is that a parameter I have defined for the function will not evaluate element-wise against the data I am submitting.
I have found one example of a case somewhat similar to mine here. However, the vectorize function the answer describes wasn't producing the values I wanted, and when reading the documentation it didn't seem to be the answer to my problem. I also tried scipy.optimize.leastsq, but I got the same issue with it as with lmfit, described below.
I have my residual function defined as
from lmfit import minimize, Parameters, Model

def residual(params, y, x):
    param1 = params['one']
    param2 = params['two']
    if param2 < x:
        p = 1
    else:
        p = param1*x + param2
    return p - y

params = Parameters()
params.add('one', value=1)
params.add('two', value=2)
out = minimize(residual, params, args=(y, x))
I also tried defining the function such that
def f(param1, param2, x):
    if param2 < x:
        p = 1
    else:
        p = param1*x + param2
    return p

def residual(params, y, x):
    param1 = params['one']
    param2 = params['two']
    return f(param1, param2, x) - y
I have also tried it inline using a lambda function.
I am getting the error 'The truth value of an array with more than one element is ambiguous.' When I got the error it made sense why it happened, because (param2 < x) produces a logical array. However, I can't seem to find a way to define the function piecewise for this case and get it fitted with the lmfit.minimize() function. I have seen this done in Matlab, whose nlinfit function seems to evaluate the data element-wise without issue (I searched for whether Python has an explicit element-wise syntax such as .* or .+, but it doesn't seem to exist).
lmfit also seems to operate a bit differently from nlinfit, because our residual always has to return (model - y), while nlinfit takes the model function itself; I am not sure whether that could be another source of the issue.
So to reiterate, my main question is whether there is a method of defining the piecewise function such that it can compare the parameter to the data set.
Any help or explanation would be appreciated, thank you!
In place of (param2 < x) (where param2 is a float and x is a numpy array), you want to use numpy.where. You might try:
import numpy as np

def residual(params, y, x):
    param1 = params['one']
    param2 = params['two']
    p = param1 * x + param2
    p[np.where(param2 < x)] = 1.0
    return p - y
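Equivalently, the three-argument form of np.where builds the piecewise array in one step, without mutating p in place:

p = np.where(x > param2, 1.0, param1 * x + param2)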
I should also warn you about a potential problem with this approach of having a variable act as the boundary of a piecewise function.
In non-linear fits, variables are always floating point (continuous, non-discrete) values. As the fit proceeds, it will make small adjustments in the values and see how that small change alters the result. In your approach, the parameter 'two' is used as both the transition between pieces and the offset for the line -- that is good.
If a parameter is used only as the transition, it may not work. Consider, say, x=np.array([0, 1., 2., 3., 4., ..., 20.0]). Having two = 10.5 and two=10.4 would then give the same result. In that case, the fit would not be able to alter the value of two: it would try a very small change, see no change in the result and give up.
So, either make sure that two is also used elsewhere in your real model (assuming your real model is more complicated than the example given), or consider using a gentler transition rather than a hard switch between the pieces. I find that an error function with a width of roughly the spacing between x points often works. Depending on the nature of your problem, you might try something like this:
from scipy.special import erf, erfc

def residual(params, y, x):
    param1 = params['one']
    param2 = params['two']
    dx = (max(x) - min(x))/(len(x) - 1)
    xhi = (erf((x - param2)/dx) + 1)/2.0
    xlo = erfc((x - param2)/dx)/2.0   # = 1 - xhi, so the two weights sum to 1
    p = xlo*1.0 + xhi*(param1*x + param2)
    # note: did you really want?
    # p = xlo*param + xhi*(param1*x + param2)
    # p = param2 + xhi*param1*x
    return p - y
Hope that helps.
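To round this out, here is a minimal end-to-end sketch with synthetic data and made-up true values one=0.5, two=10, following the orientation of the blended model above:

import numpy as np
from lmfit import minimize, Parameters
from scipy.special import erf, erfc

def residual(params, y, x):
    param1 = params['one']
    param2 = params['two']
    dx = (max(x) - min(x))/(len(x) - 1)
    xhi = (erf((x - param2)/dx) + 1)/2.0
    xlo = erfc((x - param2)/dx)/2.0
    p = xlo*1.0 + xhi*(param1*x + param2)
    return p - y

# synthetic data: 1 below the transition, a line above it (hypothetical values)
x = np.linspace(0, 20, 101)
y = np.where(x > 10.0, 0.5*x + 10.0, 1.0) + np.random.normal(0, 0.1, x.size)

params = Parameters()
params.add('one', value=1.0)
params.add('two', value=8.0)
out = minimize(residual, params, args=(y, x))
print(out.params['one'].value, out.params['two'].value)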
For a program, I need an algorithm to very quickly compute the volume of a solid. This shape is specified by a function that, given a point P(x,y,z), returns 1 if P is a point of the solid and 0 if P is not a point of the solid.
I have tried scipy.integrate with the following test:
from scipy.integrate import tplquad

def integrand(x, y, z):
    if x**2. + y**2. + z**2. <= 1.:
        return 1.
    else:
        return 0.

g = lambda x: -2.
f = lambda x: 2.
q = lambda x, y: -2.
r = lambda x, y: 2.

I = tplquad(integrand, -2., 2., g, f, q, r)
print(I)
but it fails, giving me the following warnings:
Warning (from warnings module):
File "C:\Python27\lib\site-packages\scipy\integrate\quadpack.py", line 321
warnings.warn(msg, IntegrationWarning)
IntegrationWarning: The maximum number of subdivisions (50) has been achieved.
If increasing the limit yields no improvement it is advised to analyze
the integrand in order to determine the difficulties. If the position of a
local difficulty can be determined (singularity, discontinuity) one will
probably gain from splitting up the interval and calling the integrator
on the subranges. Perhaps a special-purpose integrator should be used.
Warning (from warnings module):
File "C:\Python27\lib\site-packages\scipy\integrate\quadpack.py", line 321
warnings.warn(msg, IntegrationWarning)
IntegrationWarning: The algorithm does not converge. Roundoff error is detected
in the extrapolation table. It is assumed that the requested tolerance
cannot be achieved, and that the returned result (if full_output = 1) is
the best which can be obtained.
Warning (from warnings module):
File "C:\Python27\lib\site-packages\scipy\integrate\quadpack.py", line 321
warnings.warn(msg, IntegrationWarning)
IntegrationWarning: The occurrence of roundoff error is detected, which prevents
the requested tolerance from being achieved. The error may be
underestimated.
Warning (from warnings module):
File "C:\Python27\lib\site-packages\scipy\integrate\quadpack.py", line 321
warnings.warn(msg, IntegrationWarning)
IntegrationWarning: The integral is probably divergent, or slowly convergent.
So, naturally, I looked for "special-purpose integrators", but could not find any that would do what I needed.
Then, I tried writing my own integration using the Monte Carlo method and tested it with the same shape:
import random

# Monte Carlo method
def get_volume(f, xr, yr, zr, prec=0.001, init_sample=5000):
    x0, x1 = xr
    y0, y1 = yr
    z0, z1 = zr
    vdomain = (x1 - x0)*(y1 - y0)*(z1 - z0)

    def rand(p0, p1):
        return p0 + random.random()*(p1 - p0)

    vol = 0.
    points = 0.
    s = 0.      # sum part of the variance of f
    err = 0.
    percent = 0
    while err > prec or points < init_sample:
        p = (rand(x0, x1), rand(y0, y1), rand(z0, z1))
        rpoint = f(p)
        vol += rpoint
        points += 1
        s += (rpoint - vol/points)**2
        if points > 1:
            err = vdomain*(((1./(points - 1.))*s)**0.5)/(points**0.5)
        if err > 0:
            if int(100.*prec/err) >= percent + 1:
                percent = int(100.*prec/err)
                print(percent, '% complete\n error:', err)
    print(int(points), 'points used.')
    return vdomain*vol/points

f = lambda p: (p[0]**2 + p[1]**2 <= 4.) and (p[2]**2 <= 9.) and (p[0]**2 + p[1]**2 >= 0.25)
print(get_volume(f, (-2., 2.), (-2., 2.), (-2., 2.)))
but this works too slowly. For this program I will be using this numerical integration about 100 times or so, and I will also be doing it on larger shapes, which will take minutes, if not an hour or two, at the rate it goes now; not to mention that I want better precision than 2 decimal places.
I have tried implementing a MISER Monte Carlo method, but was having some difficulties and I'm still unsure how much faster it would be.
So, I am asking if there are any libraries that can do what I am asking, or if there are any better algorithms which work several times faster (for the same accuracy). Any suggestions are welcome, as I've been working on this for quite a while now.
EDIT:
If I cannot get this working in Python, I am open to switching to any other language that is both compilable and has relatively easy GUI functionality. Any suggestions are welcome.
As the others have already noted, finding the volume of a domain that's given by a Boolean function is hard. You could use pygalmesh (a small project of mine), which sits on top of CGAL and gives you back a tetrahedral mesh. This
import numpy
import pygalmesh

class Custom(pygalmesh.DomainBase):
    def __init__(self):
        super(Custom, self).__init__()

    def eval(self, x):
        return (x[0]**2 + x[1]**2 + x[2]**2) - 1.0

    def get_bounding_sphere_squared_radius(self):
        return 2.0

mesh = pygalmesh.generate_mesh(Custom(), cell_size=1.0e-1)
gives you a tetrahedral mesh of the ball (figure omitted).
From there on out, you can use a variety of packages for extracting the volume. One possibility: meshplex (another one out of my zoo):
import meshplex
mp = meshplex.MeshTetra(mesh.points, mesh.cells["tetra"])
print(numpy.sum(mp.cell_volumes))
gives
4.161777
which is close enough to the true value 4/3 pi = 4.18879020478.... If you want more precision, decrease the cell_size in the mesh generation above.
Your function is not continuous, so I think it's difficult to do the integration directly.
How about:
import numpy as np

def sphere(x, y, z):
    return x**2 + y**2 + z**2 <= 1

x, y, z = np.random.uniform(-2, 2, (3, 2000000))
sphere(x, y, z).mean() * (4**3), 4/3.0*np.pi
output:
(4.1930560000000003, 4.1887902047863905)
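A note on accuracy: plain Monte Carlo error shrinks like 1/sqrt(N), so each additional digit costs about 100x more samples. Repeating the estimate gives a quick feel for the spread, a small sketch:

estimates = [sphere(*np.random.uniform(-2, 2, (3, 2000000))).mean() * 4**3
             for _ in range(5)]
print(np.mean(estimates), np.std(estimates))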
Or VTK:
from tvtk.api import tvtk
n = 151
r = 2.0
x0, x1 = -r, r
y0, y1 = -r, r
z0, z1 = -r, r
X,Y,Z = np.mgrid[x0:x1:n*1j, y0:y1:n*1j, z0:z1:n*1j]
s = sphere(X, Y, Z)
img = tvtk.ImageData(spacing=((x1-x0)/(n-1), (y1-y0)/(n-1), (z1-z0)/(n-1)),
origin=(x0, y0, z0), dimensions=(n, n, n))
img.point_data.scalars = s.astype(float).ravel()
blur = tvtk.ImageGaussianSmooth(input=img)
blur.set_standard_deviation(1)
contours = tvtk.ContourFilter(input = blur.output)
contours.set_value(0, 0.5)
mp = tvtk.MassProperties(input = contours.output)
mp.volume, mp.surface_area
output:
4.186006622559839, 12.621690438955586
It seems to be very difficult without giving a little hint on the boundary:
import numpy as np
from scipy.integrate import tplquad

def integrand(z, y, x):
    return 1. if x**2 + y**2 + z**2 <= 1. else 0.

g = lambda x: -2
h = lambda x: 2
q = lambda x, y: -np.sqrt(max(0, 1 - x**2 - y**2))
r = lambda x, y: np.sqrt(max(0, 1 - x**2 - y**2))

I = tplquad(integrand, -2., 2., g, h, q, r)
print(I)
The (brief) documentation for scipy.integrate.ode says that two methods (dopri5 and dop853) have stepsize control and dense output. Looking at the examples and the code itself, I can only see a very simple way to get output from an integrator. Namely, it looks like you just step the integrator forward by some fixed dt, get the function value(s) at that time, and repeat.
My problem has pretty variable timescales, so I'd like to just get the values at whatever time steps it needs to evaluate to achieve the required tolerances. That is, early on, things are changing slowly, so the output time steps can be big. But as things get interesting, the output time steps have to be smaller. I don't actually want dense output at equal intervals, I just want the time steps the adaptive function uses.
EDIT: Dense output
A related notion (almost the opposite) is "dense output", whereby the steps taken are as large as the stepper cares to take, but the values of the function are interpolated (usually with accuracy comparable to the accuracy of the stepper) to whatever times you want. The Fortran code underlying scipy.integrate.ode is apparently capable of this, but ode does not expose the interface. odeint, on the other hand, is based on a different code, and evidently does do dense output. (You can print something every time your right-hand side is called to see when that happens, and see that it has nothing to do with the output times.)
So I could still take advantage of adaptivity, as long as I could decide on the output time steps I want ahead of time. Unfortunately, for my favorite system, I don't even know what the approximate timescales are as functions of time, until I run the integration. So I'll have to combine the idea of taking one integrator step with this notion of dense output.
EDIT 2: Dense output again
Apparently, scipy 1.0.0 introduced support for dense output through a new interface. In particular, they recommend moving away from scipy.integrate.odeint and towards scipy.integrate.solve_ivp, which has a keyword dense_output. If set to True, the returned object has an attribute sol that you can call with an array of times, which then returns the integrated function's values at those times. That still doesn't solve the problem for this question, but it is useful in many cases.
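For example, a minimal sketch of that interface: solve_ivp also records the steps the adaptive stepper actually took in its t attribute, which is essentially what this question asks for:

import numpy as np
from scipy.integrate import solve_ivp

def logistic(t, y, r=0.01):
    return r * y * (1.0 - y)

result = solve_ivp(logistic, (0.0, 5000.0), [1e-5], dense_output=True)
print(result.t)                              # the adaptive steps actually taken
print(result.sol([100.0, 2500.0, 4900.0]))   # interpolated values at arbitrary times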
Since SciPy 0.13.0,
The intermediate results from the dopri family of ODE solvers can
now be accessed by a solout callback function.
import numpy as np
from scipy.integrate import ode
import matplotlib.pyplot as plt

def logistic(t, y, r):
    return r * y * (1.0 - y)

r = .01
t0 = 0
y0 = 1e-5
t1 = 5000.0

backend = 'dopri5'
# backend = 'dop853'

solver = ode(logistic).set_integrator(backend)

sol = []
def solout(t, y):
    sol.append([t, *y])
solver.set_solout(solout)
solver.set_initial_value(y0, t0).set_f_params(r)
solver.integrate(t1)

sol = np.array(sol)
plt.plot(sol[:,0], sol[:,1], 'b.-')
plt.show()
Result:
The result seems to be slightly different from Tim D's, although they both use the same backend. I suspect this has to do with the FSAL (first-same-as-last) property of dopri5. In Tim's approach, I think the result k7 from the seventh stage is discarded, so k1 is calculated afresh.
Note: There's a known bug with set_solout not working if you set it after setting initial values. It was fixed as of SciPy 0.17.0.
I've been looking at this to try to get the same result. It turns out you can use a hack to get the step-by-step results by setting nsteps=1 in the ode instantiation. It will generate a UserWarning at every step (this can be caught and suppressed).
import numpy as np
from scipy.integrate import ode
import matplotlib.pyplot as plt
import warnings

def logistic(t, y, r):
    return r * y * (1.0 - y)

r = .01
t0 = 0
y0 = 1e-5
t1 = 5000.0

#backend = 'vode'
backend = 'dopri5'
#backend = 'dop853'

solver = ode(logistic).set_integrator(backend, nsteps=1)
solver.set_initial_value(y0, t0).set_f_params(r)
# suppress Fortran-printed warning
solver._integrator.iwork[2] = -1

sol = []
warnings.filterwarnings("ignore", category=UserWarning)
while solver.t < t1:
    solver.integrate(t1, step=True)
    sol.append([solver.t, solver.y])
warnings.resetwarnings()
sol = np.array(sol)

plt.plot(sol[:,0], sol[:,1], 'b.-')
plt.show()
result:
The integrate method accepts a boolean argument step that tells the method to return a single internal step. However, it appears that the 'dopri5' and 'dop853' solvers do not support it.
The following code shows how you can get the internal steps taken by the solver when the 'vode' solver is used:
import numpy as np
from scipy.integrate import ode
import matplotlib.pyplot as plt

def logistic(t, y, r):
    return r * y * (1.0 - y)

r = .01
t0 = 0
y0 = 1e-5
t1 = 5000.0

backend = 'vode'
#backend = 'dopri5'
#backend = 'dop853'

solver = ode(logistic).set_integrator(backend)
solver.set_initial_value(y0, t0).set_f_params(r)

sol = []
while solver.successful() and solver.t < t1:
    solver.integrate(t1, step=True)
    sol.append([solver.t, solver.y])

sol = np.array(sol)
plt.plot(sol[:,0], sol[:,1], 'b.-')
plt.show()
Result:
FYI, although an answer has been accepted already, I should point out for the historical record that dense output and arbitrary sampling from anywhere along the computed trajectory is natively supported in PyDSTool. This also includes a record of all the adaptively-determined time steps used internally by the solver. This interfaces with both dopri853 and radau5 and auto-generates the C code necessary to interface with them rather than relying on (much slower) python function callbacks for the right-hand side definition. None of these features are natively or efficiently provided in any other python-focused solver, to my knowledge.
Here's another option that should also work with dopri5 and dop853. Basically, the solver will call the logistic() function as often as it needs to calculate intermediate values, so that's where we store the results:
import numpy as np
from scipy.integrate import ode
import matplotlib.pyplot as plt

sol = []
def logistic(t, y, r):
    sol.append([t, y])
    return r * y * (1.0 - y)

r = .01
t0 = 0
y0 = 1e-5
t1 = 5000.0

# Maximum number of steps that the integrator is allowed
# to do along the whole interval [t0, t1].
N = 10000

#backend = 'vode'
backend = 'dopri5'
#backend = 'dop853'

solver = ode(logistic).set_integrator(backend, nsteps=N)
solver.set_initial_value(y0, t0).set_f_params(r)

# Single call to solver.integrate()
solver.integrate(t1)

sol = np.array(sol)
plt.plot(sol[:,0], sol[:,1], 'b.-')
plt.show()
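One caveat with this trick: the right-hand side is evaluated at every internal Runge-Kutta stage and at any rejected trial steps, so the recorded times are denser than the accepted steps and not strictly increasing. Continuing from the sol array above, a minimal cleanup sketch that keeps only monotonically increasing times:

# keep only points whose time strictly advances past everything kept so far
cleaned = []
for t, y in sol:
    if not cleaned or t > cleaned[-1][0]:
        cleaned.append([t, y])
cleaned = np.array(cleaned)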