Here is the thing.
I am trying to use the fsolve function in Python to find the root of a cubic function. This cubic function has a parameter, deltaW. I sweep deltaW from -50 to 50 and find the root of the cubic at each value. Below is my script:
from scipy.optimize import fsolve
import matplotlib.pyplot as plt
import numpy as np
import pylab

g = 5.61
gamma = 6.45
kappa = 6.45
J = 6.45
rs = 1.0  # These are just parameters
m = 5.0*10**(-11)
wm = 2*3.14*23.4
X = []
X1 = []

def func(x):  # Define the cubic function I need to solve
    A = 1j*g**2*(kappa + 1j*deltaW)*x*x/(m*wm**2)
    B = J**2 + (1j*deltaW - gamma)*(1j*deltaW + kappa)
    C = A + B
    D = abs(C)*x - J*np.sqrt(2*kappa)*rs
    return D

for deltaW in np.linspace(-50, 50, 1000):
    x0 = fsolve(func, 0.0001)
    X.append(x0)

deltaW = np.linspace(-50, 50, 1000)
plt.plot(deltaW, X)
plt.show()
When I run this script, I get these two messages:
RuntimeWarning: The iteration is not making good progress, as measured by the
improvement from the last five Jacobian evaluations.
warnings.warn(msg, RuntimeWarning)
/usr/lib/python2.7/dist-packages/scipy/optimize/minpack.py:152: RuntimeWarning: The iteration is not making good progress, as measured by the
improvement from the last ten iterations.
warnings.warn(msg, RuntimeWarning)
I am sorry I do not have enough reputation to put the plot of this script here. My question is: why do I get these messages, and why does my plot look so weird in the left part? Is it because my code is wrong?
As in almost all cases of finding roots, a good initial guess is imperative. Sometimes the best initial guess is, in fact, known to be wrong. That is the case here. The behavior of your script, which shows unexpected 'spikes' in the answer, can be looked at more deeply by both plotting up the function, and plotting up the found roots around those spikes (hey, you've got a Python console - this is really easy).
What you find is that the solution returned by the solver is jumping around, even though the function really doesn't look that different. The problem is that your initial guess of 0.0001 lies close to a tiny minimum of the function, and the solver can't figure out how to get out of there. Setting the initial guess to 1.0 (way far away, but on a nice, easy descending portion of the function that will head directly to the root) instead gives a well-behaved result (a minimal sketch of that change follows the list below).
So, three things:
1. Solvers need loving care and attention - they are rarely automagic.
2. Sometimes the 'right' initial guess can be well away from what you know is the right answer, but in such a way that the solver has an easy time of it.
3. The interactive Python console lets you look quickly at what is going on. Use the power of it!
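Here is a minimal sketch of that change, assuming the question's script has already been run so that func, fsolve, np, and plt are defined; only the initial guess passed to fsolve differs:

X = []
deltaWs = np.linspace(-50, 50, 1000)
for deltaW in deltaWs:
    root, = fsolve(func, 1.0)  # start on the easy, descending part of the curve
    X.append(root)
plt.plot(deltaWs, X)
plt.show()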
Related
I have the following problem. I have a function f defined in python using numpy functions. The function is smooth and integrable on positive reals. I want to construct the double antiderivative of the function (assuming that both the value and the slope of the antiderivative at 0 are 0) so that I can evaluate it on any positive real smaller than 100.
Definition of antiderivative of f at x:
integrate f(s) with s from 0 to x
Definition of double antiderivative of f at x:
integrate (integrate f(t) with t from 0 to s) with s from 0 to x
The actual form of f is not important, so I will use a simple one for convenience. But please note that even though my example has a known closed form, my actual function does not.
import numpy as np
f = lambda x: np.exp(-x)*x
My solution is to construct the antiderivative as an array using naive numerical integration:
N = 10000
delta = 100/N
xs = np.linspace(0,100,N+1)
vs = f(xs)
avs = np.cumsum(vs)*delta
aavs = np.cumsum(avs)*delta
This of course works but it gives me arrays instead of functions. But this is not a big problem as I can interpolate aavs using a spline to get a function and get rid of the arrays.
from scipy.interpolate import UnivariateSpline
aaf = UnivariateSpline(xs, aavs)
The function aaf is approximately the double antiderivative of f.
The problem is that even though it works, there is quite a bit of overhead before I can get my function, and precision is expensive.
My other idea was to interpolate f by a spline and take the antiderivative of that; however, this introduces numerical errors that are too big for what I want to use the function for.
Is there any better way to do that? By better I mean faster without sacrificing accuracy.
Edit: What I hope is possible is to use some kind of Fourier transform to avoid integrating twice. I hope that there is some convenient transform of vs that allows me to multiply the values component-wise with xs and transform back to get the double antiderivative. I played with this a bit, but I got lost.
Edit: I figured out that using the trapezoidal rule instead of a naive sum increases the accuracy quite a bit. Using Simpson's rule should increase the accuracy further, but it's somewhat fiddly to do with numpy arrays.
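For reference, a minimal sketch of that trapezoidal variant, using scipy.integrate.cumulative_trapezoid (called cumtrapz in SciPy versions before 1.6) on the xs and vs arrays defined above:

from scipy.integrate import cumulative_trapezoid

avs_trap = cumulative_trapezoid(vs, xs, initial=0)          # first antiderivative
aavs_trap = cumulative_trapezoid(avs_trap, xs, initial=0)   # double antiderivative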
Edit: As @user202729 rightfully complains, this seems off. The reason it seems off is that I have skipped some details. I explain here why what I say makes sense, but it does not affect my question.
My actual goal is not to find the double antiderivative of f, but to find a transformation of it. I have skipped that because I think it only confuses the matter.
The function f decays exponentially as x approaches 0 or infinity. I am minimizing the numerical error in the integration by starting the sum from 0 and going up to approximately the peak of f. This ensures that the relative error is approximately constant. Then I start from the opposite direction from some very big x and go back to the peak. Then I do the same for the antiderivative values.
Then I transform the aavs by another function which is sensitive to numerical errors. Then I find the region where the errors are big (the values oscillate violently) and drop these values. Finally I approximate what I believe are good values by a spline.
Now if I use a spline to approximate f, it introduces an absolute error which is the dominant term in a rather large interval. This gets "integrated" twice and ends up being a rather large relative error in aavs. Then once I transform aavs, I find that the 'good region' has shrunk considerably.
EDIT: The actual form of f is something I'm still looking into. However, it is going to be a generalisation of the lognormal distribution. Right now I am playing with the following family.
I start by defining a generalization of the normal distribution:
import numpy as np
from scipy import special

def pdf_n(params, center=0.0, slope=8):
    scale, min, diff = params
    if diff > 0:
        r = min
        l = min + diff
    else:
        r = min - diff
        l = min
    def retfun(m):
        x = (m - center)/scale
        E = special.expit(slope*x)*(r - l) + l
        return np.exp( -np.power(1 + x*x, E)/2 )
    return np.vectorize(retfun)
It may not be obvious what is happening here, but the result is quite simple. The function decays as exp(-x^(2l)) on the left and as exp(-x^(2r)) on the right. For min=1 and diff=0, this is the normal distribution. Note that this is not normalized. Then I define
g = pdf_n(params)
f = np.vectorize(lambda x:g(np.log(x))/x/area)
where area is the normalization constant.
Note that this is not the actual code I use. I stripped it down to the bare minimum.
You can compute the two np.cumsum (and the multiplications by delta) at once more efficiently using Numba. This is significantly faster since there is no need for several temporary arrays to be allocated, filled, read again and freed. Here is a naive implementation:
import numba as nb
import numpy as np

@nb.njit('float64[::1](float64[::1], float64)')  # assume vs is contiguous
def doubleAntiderivative_naive(vs, delta):
    res = np.empty(vs.size, dtype=np.float64)
    sum1, sum2 = 0.0, 0.0
    for i in range(vs.size):
        sum1 += vs[i] * delta
        sum2 += sum1 * delta
        res[i] = sum2
    return res
However, the sum is not very good in terms of numerical stability. A Kahan summation is needed to improve the accuracy (or possibly the alternative Kahan–Babuška–Klein algorithm if you are paranoid about accuracy and performance does not matter so much). Note that NumPy uses a pairwise algorithm which is quite good but far from perfect in terms of accuracy (it is a good compromise between performance and accuracy).
Moreover, delta can be factored out of the summation (i.e. the result just needs to be multiplied by delta**2 at the end).
Here is an implementation using the more accurate Kahan summation:
@nb.njit('float64[::1](float64[::1], float64)')
def doubleAntiderivative_accurate(vs, delta):
    res = np.empty(vs.size, dtype=np.float64)
    delta2 = delta * delta
    sum1, sum2 = 0.0, 0.0
    c1, c2 = 0.0, 0.0
    for i in range(vs.size):
        # Kahan summation of the antiderivative of vs
        y1 = vs[i] - c1
        t1 = sum1 + y1
        c1 = (t1 - sum1) - y1
        sum1 = t1
        # Kahan summation of the double antiderivative of vs
        y2 = sum1 - c2
        t2 = sum2 + y2
        c2 = (t2 - sum2) - y2
        sum2 = t2
        res[i] = sum2 * delta2
    return res
Here is the performance of the approaches on my machine (with an i5-9600KF processor):
Numpy cumsum:   51.3 us
Naive Numba:    11.6 us
Accurate Numba: 37.2 us
Here is the relative error of the approaches (based on the provided input function):
Numpy cumsum:      1e-13
Naive Numba:       5e-14
Accurate Numba:    2e-16
Perfect precision: 1e-16 (assuming 64-bit numbers are used)
If f can be easily computed using Numba (this is the case here), then vs[i] can be replaced by calls to f (inlined by Numba). This helps to reduce the memory consumption of the computation (N can be huge without saturating your RAM).
As for the interpolation, splines often give good numerical results, but they are quite expensive to compute and AFAIK they require the whole array to be computed (each item of the array impacts the whole spline, although some items may have a negligible impact alone). Regarding your needs, you could consider using Lagrange polynomials. You should be careful when using Lagrange polynomials near the edges. In your case, you can easily solve the numerical divergence issue at the edges by extending the array with the border values (since you know the derivative at each edge of vs is 0). With this method you can apply the interpolation on the fly, which can be good for both performance (typically if the computation is parallelized) and memory usage; see the sketch below.
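A minimal sketch of that local-Lagrange idea, assuming the xs and aavs arrays from the question are already computed; the window size k and the helper name are illustrative, not part of the original answer:

import numpy as np
from scipy.interpolate import BarycentricInterpolator

def local_lagrange_eval(xs, ys, x, k=4):
    # interpolate only through the k samples nearest to x (a local Lagrange polynomial)
    i = np.searchsorted(xs, x)
    lo = max(0, min(i - k // 2, len(xs) - k))
    return BarycentricInterpolator(xs[lo:lo + k], ys[lo:lo + k])(x)

value = local_lagrange_eval(xs, aavs, 50.24)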
First, I created a version of the code I found more intuitive. Here I multiply cumulative sum values by bin widths. I believe there is a small error in the original version of the code related to the bin width issue.
import numpy as np
f = lambda x: np.exp(-x)*x
N = 1000
xs = np.linspace(0,100,N+1)
domainwidth = ( np.max(xs) - np.min(xs) )
binwidth = domainwidth / N
vs = f(xs)
avs = np.cumsum(vs)*binwidth
aavs = np.cumsum(avs)*binwidth
Next, for visualization here is some very simple plotting code:
import matplotlib
import matplotlib.pyplot as plt
plt.figure()
plt.scatter( xs, vs )
plt.figure()
plt.scatter( xs, avs )
plt.figure()
plt.scatter( xs, aavs )
plt.show()
The first integral matches the known result of the example expression and can be checked on Wolfram Alpha.
Below is a simple function that extracts an element from the double antiderivative array. Note that int is a bad rounding function. I assume this is what you have implemented already (a hedged interpolation alternative follows the snippet).
def extract_double_antideriv_value(x):
    return aavs[int(x/binwidth)]

singleresult = extract_double_antideriv_value(50.24)
print('singleresult', singleresult)
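If the int-based lookup is too coarse, one alternative (my suggestion, not part of the original answer) is linear interpolation between the sampled values with np.interp, reusing the xs and aavs arrays from above:

def extract_double_antideriv_value_interp(x):
    return np.interp(x, xs, aavs)  # linear interpolation between neighbouring samples

print(extract_double_antideriv_value_interp(50.24))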
Whatever full computation steps are required, we need to know them before we can start optimizing. Do you have a million different functions to integrate? If you only need to query a single double anti-derivative many times, your original solution should be fairly ideal.
Symbolic Approximation:
Have you considered approximations to the original function f which have closed-form integrals? You have a limited domain on which the function lives. Perhaps approximate f with a Taylor series (which can be constructed with known maximum error), then integrate exactly? (Consider Padé, Taylor, Fourier, Chebyshev, Lagrange (as suggested by another answer), etc.) A sketch of the Chebyshev variant follows.
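For instance, a minimal sketch using numpy.polynomial.Chebyshev, whose .integ() method returns the exact antiderivative of the fitted polynomial; the degree and sample count here are illustrative and not tuned to any particular accuracy target:

import numpy as np

f = lambda x: np.exp(-x)*x
xs = np.linspace(0, 100, 2001)
cheb = np.polynomial.Chebyshev.fit(xs, f(xs), deg=100, domain=[0, 100])
aaf_cheb = cheb.integ(2, lbnd=0)  # exact double antiderivative of the fitted polynomial
print(aaf_cheb(50.24))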
Log Tricks:
Another alternative for dealing with spiky errors would be to take the log of your original function. Is f always positive? Is the integration error caused because the neighbourhood around the max is very small? If so, you can study ln(f) or even ln(ln(f)) instead. It would really help to understand what f looks like.
Approximate Integration Tricks
There exist countless integration tricks in general, which can yield approximate closed-form solutions to otherwise intractable integrals. A very common one when exponential functions are involved (I think yours is exponential?) is to use Laplace's method. But which trick to pull out of the bag is highly dependent upon the conditions which f satisfies.
I have a system of ODEs which I have so far solved via solve_ivp.
scipy.integrate.solve_ivp(fun=model, t_span=(0.0, t_end), y0=[s0])
My problem is, that I want to solve the ODEs in a running simulation, where new values are constantly injected into the simulation and results are displayed. The simulation may run for several hours. My approach was to call solve_ivp repeatedly, roughly as follows (this allows me to show intermediate results and fetch new data, which is not shown here):
t = 0.0
s = s0
while t < t_end:
    result = scipy.integrate.solve_ivp(fun=model, t_span=(t, t + t_step), y0=[s])
    s = result.y[0][-1]
    t += t_step
I wrote a few test cases where I solved the ODEs analytically, and for reasons I do not fully understand yet, repeatedly calling solve_ivp is always closer to the analytical solution (without manually tweaking the different parameters of solve_ivp). My question is more along these lines: is there something wrong with this approach, or is there some functionality in scipy or another package that may be better suited to my needs?
A poorly documented option is to use the integrators that underlie solve_ivp directly. For example, RK45. When an external event changes something, you would need to restart the integration, as most (all?) of these integrators use multiple prior steps to construct the value for the next time point. If you change values or the derivative function mid-integration, you will introduce subtle, hard-to-understand bugs.
Here is an example that changes the derivative function based on an external event to the integrator. In this case, it is simply based on the clock time. Note that this example will give different results on different machines depending on speed of execution.
import datetime
import matplotlib.pyplot as plt
import numpy as np
from scipy import integrate

def fun1(t, y):
    return -y

def fun2(t, y):
    return 2 * y

def use_fun2():
    return datetime.datetime.utcnow().time().microsecond >= 500000

time_bound = 20.
# max_step used to make sure this runs slow enough
# try changing this to see the difference
max_step = 0.0001
y0 = [5]
fun = fun1
rk45 = integrate.RK45(fun, 0, y0, time_bound, max_step=max_step)
t = []
y = []
while rk45.status == "running" and rk45.y[0] < 200:
    if use_fun2():
        if fun is fun1:
            fun = fun2
            rk45 = integrate.RK45(fun, rk45.t, rk45.y, time_bound, max_step=max_step)
    else:
        if fun is fun2:
            fun = fun1
            rk45 = integrate.RK45(fun, rk45.t, rk45.y, time_bound, max_step=max_step)
    t.append(rk45.t)
    y.append(rk45.y)
    rk45.step()

plt.plot(t, y)
plt.show()
I have never used Python, but Mathematica can't handle the equation I am trying to solve. I am trying to solve for the variable "a" in the following equations, where s, c, mu, and delta t are known parameters.
I tried NSolve, Solve, etc. in Mathematica, but it has been running for an hour with no luck. Since I am not familiar with Python: is there a way I can use Python to solve this equation for a?
You're not going to find an analytic solution to these equations because they're transcendental, containing a both inside and outside of a trigonometric function.
I think the trouble you're having with numerical solutions is that the range of acceptable values for a is constrained by the arcsin. Since arcsin is only defined for arguments between -1 and 1 (assuming you want a to be real), your formulas for alpha and beta require that a > s/2 and a > (s-c)/2.
In Python, you can find a zero of your third equation (rewritten in the form f(a) = 0) using the brentq function:
import numpy as np
from scipy.optimize import brentq

s = 10014.6
c = 6339.06
mu = 398600.0
dt = 780.0

def f(a):
    alpha = 2*np.arcsin(np.sqrt(s/(2*a)))
    beta = 2*np.arcsin(np.sqrt((s-c)/(2*a)))
    return alpha - beta - (np.sin(alpha)-np.sin(beta)) - np.sqrt(mu/a**3)*dt

a0 = max(s/2, (s-c)/2)
a = brentq(f, a0, 10*a0)
Edit:
To clarify, brentq(f, a, b) searches for a zero of f on the interval [a, b]. Here, we know that a is at least max(s/2, (s-c)/2). I just guessed that 10 times that was a plausible upper bound, and that worked for the given parameters. More generally, you need to make sure that f changes sign between a and b. You can read more about how the function works in the SciPy docs.
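As a sanity check (my addition, not part of the original answer), you can confirm that the bracket actually straddles a sign change before calling brentq, reusing f, a0, np, and brentq from the snippet above:

lo, hi = a0, 10*a0
if np.sign(f(lo)) == np.sign(f(hi)):
    raise ValueError("f does not change sign on [lo, hi]; widen the bracket")
a = brentq(f, lo, hi)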
I think it's worth examining the behaviour of the function before attempting to solve it. Without doing that you don't know if there is a unique solution, many solutions, or no solution. (The biggest problem is many solutions, where numerical methods may not give you the solution you require/expect - and if you blindly use it, "bad things" might happen.) You can examine the behaviour nicely using scipy and ipython. This is an example notebook that does that:
# -*- coding: utf-8 -*-
# <nbformat>3.0</nbformat>

# <codecell>

import numpy
import pylab

s = 10014.6
c = 6339.06
mu = 398600.0
dt = 780.0

# <codecell>

def sin_alpha_2(x):
    return numpy.sqrt(s/(2*x))

def sin_beta_2(x):
    return numpy.sqrt((s-c)/(2*x))

def alpha(x):
    return 2*numpy.arcsin( numpy.clip(sin_alpha_2(x),-0.99,0.99) )

def beta(x):
    return 2*numpy.arcsin( numpy.clip(sin_beta_2(x),-0.99,0.99) )

# <codecell>

def fn(x):
    return alpha(x)-beta(x)-numpy.sin(alpha(x))+numpy.sin(beta(x)) - dt * numpy.sqrt( mu / numpy.power(x,3) )

# <codecell>

xx = numpy.arange(1,20000)
pylab.plot(xx, numpy.clip(fn(xx),-2,2) )

# <codecell>

xx = numpy.arange(4000,10000)
pylab.plot(xx,fn(xx))

# <codecell>

xx = numpy.arange(8000,9000)
pylab.plot(xx,fn(xx))
This shows that we expect to find a solution with a between 8000 and 9000.
The odd kink in the curve at about 5000, and the earlier apparent solution at about 4000, are due to the clipping required to make arcsin behave. Really the equation does not make sense below about a=5000 (the exact value is the a0 given in Ray's solution). This then gives a nice range that can be used with the techniques in Ray's solution.
Suppose I have a function f(x) defined between a and b. This function can have many zeros, but also many asymptotes. I need to retrieve all the zeros of this function. What is the best way to do it?
Actually, my strategy is the following:
I evaluate my function on a given number of points
I detect whether there is a change of sign
I find the zero between the points that are changing sign
I verify if the zero found is really a zero, or if this is an asymptote
U = numpy.linspace(a, b, 100)  # evaluate function at 100 different points
c = f(U)
s = numpy.sign(c)
for i in range(100-1):
    if s[i] + s[i+1] == 0:  # opposite signs
        u = scipy.optimize.brentq(f, U[i], U[i+1])
        z = f(u)
        if numpy.isnan(z) or abs(z) > 1e-3:
            continue
        print('found zero at {}'.format(u))
This algorithm seems to work, except that I see two potential problems:
It will not detect a zero that doesn't cross the x axis (for example, in a function like f(x) = x**2). However, I don't think that can occur with the function I'm evaluating.
If the discretization points are too far apart, there could be more than one zero between them, and the algorithm could fail to find them.
Do you have a better (still efficient) strategy to find all the zeros of a function?
I don't think it's important for the question, but for those who are curious: I'm dealing with characteristic equations of wave propagation in optical fiber. The function looks like this (where V and ell are previously defined, and ell is a positive integer):
def f(u):
    w = numpy.sqrt(V**2 - u**2)
    jl = scipy.special.jn(ell, u)
    jl1 = scipy.special.jnjn(ell-1, u)
    kl = scipy.special.jnkn(ell, w)
    kl1 = scipy.special.jnkn(ell-1, w)
    return jl / (u*jl1) + kl / (w*kl1)
Why are you limited to numpy? Scipy has a package that does exactly what you want:
http://docs.scipy.org/doc/scipy/reference/optimize.nonlin.html
One lesson I've learned: numerical programming is hard, so don't do it :)
Anyway, if you're dead set on building the algorithm yourself, the doc page on scipy I linked (takes forever to load, btw) gives you a list of algorithms to start with. One method that I've used before is to discretize the function to the degree that is necessary for your problem. (That is, tune \delta x so that it is much smaller than the characteristic size in your problem.) This lets you look for features of the function (like changes in sign). AND, you can compute the derivative of a line segment (probably since kindergarten) pretty easily, so your discretized function has a well-defined first derivative. Because you've tuned the dx to be smaller than the characteristic size, you're guaranteed not to miss any features of the function that are important for your problem.
If you want to know what "characteristic size" means, look for some parameter of your function with units of length or 1/length. That is, for some function f(x), assume x has units of length and f has no units. Then look for the things that multiply x. For example, if you want to discretize cos(\pi x), the parameter that multiplies x (if x has units of length) must have units of 1/length. So the characteristic size of cos(\pi x) is 1/\pi. If you make your discretization much smaller than this, you won't have any issues. To be sure, this trick won't always work, so you may need to do some tinkering.
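As a toy illustration of that tuning (my example, not the answerer's), sample cos(pi*x), whose characteristic size is 1/pi, with a much smaller step and flag the sign changes:

import numpy as np

dx = 0.01                                   # much smaller than 1/pi
x = np.arange(0.0, 5.0, dx)
y = np.cos(np.pi * x)
crossings = np.where(np.diff(np.sign(y)) != 0)[0]
print(x[crossings])                         # close to 0.5, 1.5, 2.5, ...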
I found out it's relatively easy to implement your own root finder using scipy.optimize.fsolve.
Idea: find any zeros in the interval (start, stop), with step size step, by calling fsolve repeatedly with a changing x0. Use a relatively small step size to find all the roots.
It can only search for zeros in one dimension (other dimensions must be fixed). If you have other needs, I would recommend using sympy for calculating the analytical solution.
Note: It may not always find all the zeroes, but I saw it giving relatively good results. I put the code also to a gist, which I will update if needed.
import numpy as np
import scipy
from scipy.optimize import fsolve
from matplotlib import pyplot as plt
# Defined below
r = RootFinder(1, 20, 0.01)
args = (90, 5)
roots = r.find(f, *args)
print("Roots: ", roots)
# plot results
u = np.linspace(1, 20, num=600)
fig, ax = plt.subplots()
ax.plot(u, f(u, *args))
ax.scatter(roots, f(np.array(roots), *args), color="r", s=10)
ax.grid(color="grey", ls="--", lw=0.5)
plt.show()
Example output:
Roots: [ 2.84599497 8.82720551 12.38857782 15.74736542 19.02545276]
zoom-in:
RootFinder definition
import numpy as np
import scipy
from scipy.optimize import fsolve
from matplotlib import pyplot as plt

class RootFinder:
    def __init__(self, start, stop, step=0.01, root_dtype="float64", xtol=1e-9):
        self.start = start
        self.stop = stop
        self.step = step
        self.xtol = xtol
        self.roots = np.array([], dtype=root_dtype)

    def add_to_roots(self, x):
        if (x < self.start) or (x > self.stop):
            return  # outside range
        if any(abs(self.roots - x) < self.xtol):
            return  # root already found.
        self.roots = np.append(self.roots, x)

    def find(self, f, *args):
        current = self.start
        for x0 in np.arange(self.start, self.stop + self.step, self.step):
            if x0 < current:
                continue
            x = self.find_root(f, x0, *args)
            if x is None:  # no root found.
                continue
            current = x
            self.add_to_roots(x)
        return self.roots

    def find_root(self, f, x0, *args):
        x, _, ier, _ = fsolve(f, x0=x0, args=args, full_output=True, xtol=self.xtol)
        if ier == 1:
            return x[0]
        return None
Test function
The scipy.special.jnjn does not exist anymore, but I created a similar test function for the case.
def f(u, V=90, ell=5):
    w = np.sqrt(V ** 2 - u ** 2)
    jl = scipy.special.jn(ell, u)
    jl1 = scipy.special.yn(ell - 1, u)
    kl = scipy.special.kn(ell, w)
    kl1 = scipy.special.kn(ell - 1, w)
    return jl / (u * jl1) + kl / (w * kl1)
The main problem I see with this is whether you can actually find all the roots --- as has already been mentioned in the comments, this is not always possible. If you are sure that your function is not completely pathological (sin(1/x) was already mentioned), the next question is what your tolerance is for missing a root or several of them. Put differently, it's about what lengths you are prepared to go to to make sure you did not miss any --- to the best of my knowledge, there is no general method to isolate all the roots for you, so you'll have to do it yourself. What you show is a reasonable first step already. A couple of comments:
Brent's method is indeed a good choice here.
First of all, deal with the divergencies. Since in your function you have Bessel functions in the denominators, you can first solve for their roots -- better to look them up in, e.g., Abramowitz and Stegun (Mathworld link). This will be better than the ad hoc grid you're using.
Once you've found two roots or divergencies, x_1 and x_2, run the search again in the interval [x_1+epsilon, x_2-epsilon]. Continue until no more roots are found (Brent's method is guaranteed to converge to a root, provided there is one).
If you cannot enumerate all the divergencies, you might want to be a little more careful in verifying that a candidate is indeed a divergency: given x, don't just check that f(x) is large; check that, e.g., |f(x-epsilon/2)| > |f(x-epsilon)| for several values of epsilon (1e-8, 1e-9, 1e-10, something like that) -- see the sketch after this list.
If you want to make sure you don't have roots which simply touch zero, look for the extrema of the function, and for each extremum, x_e, check the value of f(x_e).
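A minimal sketch of that divergence check, with f standing in for the user's function and the epsilon values taken from the comment above:

def looks_like_divergence(f, x, eps_list=(1e-8, 1e-9, 1e-10)):
    # near a pole the magnitude keeps growing as you step closer to x;
    # near a genuine root it shrinks instead
    return all(abs(f(x - eps / 2)) > abs(f(x - eps)) for eps in eps_list)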
I've also encountered this problem, where I needed to solve equations like f(z) = 0 with f a holomorphic function. I wanted to be sure not to miss any zero, and finally developed an algorithm based on the argument principle.
It helps to find the exact number of zeros lying in a complex domain. Once you know the number of zeros, it is easier to find them. There are, however, two concerns which must be taken into account (a minimal counting sketch follows these two points):
Take care about multiplicity: when solving (z-1)^2 = 0, you'll get two zeros, since z=1 counts twice.
If the function is meromorphic (and thus contains poles), each pole reduces the count of zeros and breaks the attempt to count them.
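For illustration (my sketch, not the answerer's code), the argument principle says that (1/2*pi*i) times the contour integral of f'(z)/f(z) equals the number of zeros minus the number of poles inside the contour. Here f and fprime are hypothetical callables and the contour is a circle of radius R around z0:

import numpy as np
from scipy.integrate import quad

def zeros_minus_poles(f, fprime, z0=0.0, R=1.0):
    # integrate f'(z)/f(z) around the circle z = z0 + R*exp(i*t), t in [0, 2*pi)
    def integrand(t, part):
        z = z0 + R * np.exp(1j * t)
        dz = 1j * R * np.exp(1j * t)
        val = fprime(z) / f(z) * dz / (2j * np.pi)
        return val.real if part == "re" else val.imag
    re, _ = quad(integrand, 0.0, 2 * np.pi, args=("re",))
    return int(round(re))  # the imaginary part should vanish for a valid contour

print(zeros_minus_poles(lambda z: (z - 1)**2, lambda z: 2*(z - 1), z0=1.0, R=0.5))  # -> 2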
I'm currently trying to use SciPy's integrate.ode package to solve a pair of first-order ODEs that are coupled: say, the Lotka-Volterra predator-prey equations. However, this means that during the integration loop I have to update the parameters I'm sending to the methods on every iteration, and simply keeping track of the previous value and calling set_f_params() on each iteration doesn't seem to be doing the trick.
hprev = Ho
pprev = Po
yh = np.zeros(0)
yp = np.zeros(0)
while dh.successful() and dp.successful() and dp.t < endtime and dh.t < endtime:
    hparams = [alpha, beta, pprev]
    pparams = [delta, gamma, hprev]
    dh.set_f_params(hparams)
    dp.set_f_params(pparams)
    dh.integrate(dh.t + stepsize)
    dp.integrate(dp.t + stepsize)
    yh = np.append(yh, dh.y)
    yp = np.append(yp, dp.y)
    hprev = dh.y
    pprev = dp.y
The values I'm setting at each iteration through set_f_params don't seem to be propagated to the callback methods. That wasn't terribly surprising, given that none of the examples on the web seem to involve "live" variable passing to the callbacks, but it was the only method I could think of to get these values into the callback methods.
Does anyone have any advice on how to use SciPy to numerically integrate these ODEs?
I could be wrong, but this example seems very close to your problem. :) It uses odeint to solve the system of ODEs.
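For illustration, here is a minimal sketch (not the linked example) of integrating both equations as a single state vector with odeint, so the predator and prey values stay coupled automatically; the parameter values are made up:

import numpy as np
from scipy.integrate import odeint

alpha, beta, delta, gamma = 1.0, 0.1, 0.075, 1.5   # illustrative parameters
Ho, Po = 10.0, 5.0                                  # illustrative initial populations

def model(y, t):
    h, p = y
    return [alpha*h - beta*h*p, delta*h*p - gamma*p]

t = np.linspace(0.0, 20.0, 1000)
sol = odeint(model, [Ho, Po], t)    # sol[:, 0] is prey, sol[:, 1] is predator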
I had a similar issue. It turns out the integrator doesn't re-evaluate the differential equation function for every call of integrate(), but does so at its own internal times. I changed the max_step option of the integrator to be the same as stepsize and that worked for me.
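A hedged sketch of that workaround on a toy decay equation, using the same ode interface as the question; the 'dopri5' integrator and the rhs function here are my own choices, not the answerer's:

from scipy.integrate import ode

def rhs(t, y, k):                 # hypothetical right-hand side with one parameter
    return -k * y

stepsize, t_end = 0.1, 5.0
solver = ode(rhs).set_integrator("dopri5", max_step=stepsize)
solver.set_initial_value([1.0], 0.0).set_f_params(2.0)
ys = []
while solver.successful() and solver.t < t_end:
    solver.set_f_params(2.0)      # parameters could be refreshed here on each step
    solver.integrate(solver.t + stepsize)
    ys.append(solver.y[0])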