Pass args for solve_ivp (new SciPy ODE API) - python

For solving simple ODEs using SciPy, I used to use the odeint function, which has the form:
scipy.integrate.odeint(func, y0, t, args=(), Dfun=None, col_deriv=0, full_output=0, ml=None, mu=None, rtol=None, atol=None, tcrit=None, h0=0.0, hmax=0.0, hmin=0.0, ixpr=0, mxstep=0, mxhnil=0, mxordn=12, mxords=5, printmessg=0)
where a simple function to be integrated could include additional arguments of the form:
def dy_dt(t, y, arg1, arg2):
    # processing code here
In SciPy 1.0, it seems the ode and odeint functions have been replaced by a newer solve_ivp method:
scipy.integrate.solve_ivp(fun, t_span, y0, method='RK45', t_eval=None, dense_output=False, events=None, vectorized=False, **options)
However, this doesn't seem to offer an args parameter, nor any indication in the documentation of how to pass args.
Therefore, I wonder if arg passing is possible with the new API, or is this a feature that has yet to be added? (It would seem an oversight to me if this feature has been intentionally removed.)
Reference:
https://docs.scipy.org/doc/scipy/reference/integrate.html

A similar question appeared relatively recently on SciPy's GitHub. Their solution is to use a lambda:
solve_ivp(fun=lambda t, y: fun(t, y, *args), ...)
And they argue that the solver already carries enough call overhead for the extra lambda indirection not to matter.

It doesn't seem like the new function has an args parameter. As a workaround, you can create a wrapper like
def wrapper(t, y):
    return orig_func(t, y, hardcoded_args)
and pass that in.
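For instance, a minimal runnable sketch of that workaround (orig_func and its two extra parameters a and b are hypothetical stand-ins):
from scipy.integrate import solve_ivp

def orig_func(t, y, a, b):
    # hypothetical RHS with two extra parameters
    return a * y + b

a, b = -0.5, 0.1  # the "hardcoded" arguments

def wrapper(t, y):
    return orig_func(t, y, a, b)

sol = solve_ivp(wrapper, (0, 5), [1.0])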

Recently the 'args' option was added to solve_ivp, see here: https://github.com/scipy/scipy/issues/8352#issuecomment-535689344

According to Javier-Acuna's ultra-brief, ultra-useful answer, the feature that you (as well as I) desire has recently been added. This was announced on GitHub by none other than the great Warren Weckesser (see his GitHub, StackOverflow) himself.
Anyway, jokes aside, the docstring of solve_ivp has an example using it for the Lotka-Volterra equations:
solve_ivp(
    fun,
    t_span,
    y0,
    method='RK45',
    t_eval=None,
    dense_output=False,
    events=None,
    vectorized=False,
    args=None,
    **options,
)
So, just include args as a tuple. In your case:
args = (arg1, arg2)
Please don't use my answer unless your SciPy version is >= 1.4. There is no args parameter in solve_ivp for versions below that; I have personally seen my answer fail on version 1.2.1.
The implementation by zahabaz would probably still work fine in case your SciPy version is < 1.4.
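For reference, a minimal sketch of the args route, assuming SciPy >= 1.4 (dy_dt, arg1, and arg2 follow the question's naming; the RHS and numeric values are made up):
import numpy as np
from scipy.integrate import solve_ivp

def dy_dt(t, y, arg1, arg2):
    # toy RHS: linear decay plus a constant source term
    return arg1 * y + arg2

arg1, arg2 = -0.5, 0.1
sol = solve_ivp(dy_dt, (0, 5), [1.0], args=(arg1, arg2),
                t_eval=np.linspace(0, 5, 50))
print(sol.y[0, -1])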

For completeness, I think you can also do this, though I'm not sure why you would bother when the other two options posted here are perfectly fine:
from functools import partial

fun = partial(dy_dt, arg1=arg1, arg2=arg2)
scipy.integrate.solve_ivp(fun, t_span, y0, method='RK45')
Note that partial binds arg1 and arg2 by keyword here, so dy_dt must accept them under those names.

Adding to Cleb's answer, here's an example of using the lambda t, y: fun(t, y, *args) approach. We set up a function handle that returns the RHS of a second-order homogeneous ODE with two parameters, then feed it to the solver along with a couple of options.
import numpy as np
from scipy import integrate
import matplotlib.pyplot as plt

def rhs_2nd_order_ode(t, y, a, b):
    """
    2nd-order ODE right-hand side for use with scipy.integrate.solve_ivp.
    Solves u'' + a*u' + b*u = 0 after reducing order with y[0] = u and y[1] = u'.
    :param t: independent variable
    :param y: dependent variables
    :param a: coefficient a
    :param b: coefficient b
    :return: the RHS of y[0]' = y[1] and y[1]' = -a*y[1] - b*y[0]
    """
    return [y[1], -a*y[1] - b*y[0]]

if __name__ == "__main__":
    t_span = (0, 10)
    t_eval = np.linspace(t_span[0], t_span[1], 100)
    y0 = [0, 1]
    a = 1
    b = 2
    sol = integrate.solve_ivp(lambda t, y: rhs_2nd_order_ode(t, y, a, b),
                              t_span, y0, method='RK45', t_eval=t_eval)
    fig, ax = plt.subplots(1, 1)
    ax.plot(sol.t, sol.y[0])
    ax.set(xlabel='t', ylabel='y')
    plt.show()


Method without arguments or parenthesis for Scipy odeint

help, please - I can't understand my own code! lol
I'm fairly new to Python, and after many trials and errors I got my code to work, but there is one particular part of it I don't understand.
In the code below, I'm solving a fairly basic ODE through scipy's odeint-function. My goal is then to build on this blue-print for more complicated systems.
My question(s): How could I call the method .reaction_rate_simple without any arguments and without the closing parentheses? What does this mean in Python? Should I use a static method here somewhere?
If anyone has any feedback on this - maybe this is a crappy piece of code and there's a better way of solving it!
I am very thankful for any response and help!
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

class batch_reator:
    def __init__(self, C_init, volume_reactor, time_init, time_end):
        self.C_init = C_init
        self.volume = volume_reactor
        self.time_init = time_init
        self.time_end = time_end
        self.operation_time = (time_end - time_init)

    def reaction_rate_simple(self, concentration_t, t, stoch_factor, order, rate_constant):
        reaction_rate = stoch_factor * rate_constant * (concentration_t ** order)
        return reaction_rate

    def equations_system(self, kinetics):
        dCdt = kinetics
        return dCdt

C_init = 200
time_init, time_end = 0, 1000
rate_constant, volume_reactor, order, stoch_factor = 0.0001, 10, 1, -1
time_span = np.linspace(time_init, time_end, 100)

Batch_basic = batch_reator(C_init, volume_reactor, time_init, time_end)
kinetics = Batch_basic.reaction_rate_simple
sol = odeint(Batch_basic.equations_system(kinetics), Batch_basic.C_init, time_span, args=(stoch_factor, order, rate_constant))

plt.plot(time_span, sol)
plt.show()
I assume you are referring to the line
kinetics = Batch_basic.reaction_rate_simple
You are not calling it, you are saving the method as a variable and then passing that method to equations_system(...), which simply returns it. I am not familiar with odeint, but according to the documentation, it accepts a callable, which is what you are giving it.
In Python, functions, lambdas, and classes are all callable; they can be assigned to variables, passed to functions, and called as needed.
In this particular case, the callback definition from the odeint docs says:
func : callable(y, t, ...) or callable(t, y, ...)
    Computes the derivative of y at t. If the signature is callable(t, y, ...), then the argument tfirst must be set True.
So the first two arguments are passed in by odeint, and the other three are coming from the arguments specified by you.
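To make the callable-passing concrete, here is a minimal sketch with a hypothetical class (not the question's reactor):
import numpy as np
from scipy.integrate import odeint

class Decay:
    def __init__(self, k):
        self.k = k

    def rate(self, c, t, order):
        # odeint supplies (c, t); 'order' arrives via args=(...)
        return -self.k * c ** order

model = Decay(k=0.01)
# model.rate is a bound method, hence a callable odeint can use directly
sol = odeint(model.rate, 200.0, np.linspace(0, 1000, 100), args=(1,))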

How to avoid multiple calls to a slow function when using scipy integration with complex valued functions? [duplicate]

I'm currently using scipy.integrate.quad to successfully integrate some real integrands. Now a situation has arisen where I need to integrate a complex integrand. quad seems unable to do it, as do the other scipy.integrate routines, so I ask: is there any way to integrate a complex integrand using scipy.integrate, without having to separate the integral into its real and imaginary parts?
What's wrong with just separating it out into real and imaginary parts? scipy.integrate.quad requires the integrated function to return floats (aka real numbers) for the algorithm it uses.
import numpy as np
from scipy.integrate import quad

def complex_quadrature(func, a, b, **kwargs):
    # np.real/np.imag take the place of the old scipy.real/scipy.imag re-exports
    def real_func(x):
        return np.real(func(x))
    def imag_func(x):
        return np.imag(func(x))
    real_integral = quad(real_func, a, b, **kwargs)
    imag_integral = quad(imag_func, a, b, **kwargs)
    return (real_integral[0] + 1j*imag_integral[0], real_integral[1:], imag_integral[1:])
E.g.,
>>> complex_quadrature(lambda x: np.exp(1j*x), 0, np.pi/2)
((0.99999999999999989+0.99999999999999989j),
 (1.1102230246251564e-14,),
 (1.1102230246251564e-14,))
which is what you expect up to rounding error: the integral of exp(i x) from 0 to pi/2 is (1/i)(e^(i pi/2) - e^0) = -i(i - 1) = 1 + i ~ (0.99999999999999989+0.99999999999999989j).
And for the record in case it isn't 100% clear to everyone, integration is a linear functional, meaning that ∫ { f(x) + k g(x) } dx = ∫ f(x) dx + k ∫ g(x) dx (where k is a constant with respect to x). Or for our specific case ∫ z(x) dx = ∫ Re z(x) dx + i ∫ Im z(x) dx as z(x) = Re z(x) + i Im z(x).
If you are trying to do a integration over a path in the complex plane (other than along the real axis) or region in the complex plane, you'll need a more sophisticated algorithm.
Note: scipy.integrate will not directly handle complex integration. Why? It does the heavy lifting in the FORTRAN QUADPACK library, specifically in qagse.f, which explicitly requires the functions/variables to be real before doing its "global adaptive quadrature based on 21-point Gauss–Kronrod quadrature within each subinterval, with acceleration by Peter Wynn's epsilon algorithm." So unless you want to try to modify the underlying FORTRAN to handle complex numbers and compile it into a new library, you aren't going to get it to work.
If you really want the Gauss-Kronrod method with complex numbers in exactly one integration, look at Wikipedia's page and implement it directly, as done below (using the 15-point and 7-point rules). Note, I memoized the function to avoid repeating calls with the same arguments (assuming function calls are slow, as when the integrand is very complicated). I also only did the 7-point and 15-point rules, since I didn't feel like calculating the nodes/weights myself and those were the ones listed on Wikipedia, but I'm getting reasonable errors for test cases (~1e-14).
import numpy as np

def quad_routine(func, a, b, x_list, w_list):
    # Map nodes from [-1, 1] onto [a, b] and form the weighted sum.
    c_1 = (b - a) / 2.0
    c_2 = (b + a) / 2.0
    eval_points = [c_1 * x + c_2 for x in x_list]
    func_evals = [func(x) for x in eval_points]
    return c_1 * np.sum(np.array(func_evals) * np.array(w_list))

def quad_gauss_7(func, a, b):
    x_gauss = [-0.949107912342759, -0.741531185599394, -0.405845151377397, 0,
               0.405845151377397, 0.741531185599394, 0.949107912342759]
    w_gauss = [0.129484966168870, 0.279705391489277, 0.381830050505119,
               0.417959183673469, 0.381830050505119, 0.279705391489277,
               0.129484966168870]
    return quad_routine(func, a, b, x_gauss, w_gauss)

def quad_kronrod_15(func, a, b):
    x_kr = [-0.991455371120813, -0.949107912342759, -0.864864423359769,
            -0.741531185599394, -0.586087235467691, -0.405845151377397,
            -0.207784955007898, 0.0, 0.207784955007898, 0.405845151377397,
            0.586087235467691, 0.741531185599394, 0.864864423359769,
            0.949107912342759, 0.991455371120813]
    w_kr = [0.022935322010529, 0.063092092629979, 0.104790010322250,
            0.140653259715525, 0.169004726639267, 0.190350578064785,
            0.204432940075298, 0.209482141084728, 0.204432940075298,
            0.190350578064785, 0.169004726639267, 0.140653259715525,
            0.104790010322250, 0.063092092629979, 0.022935322010529]
    return quad_routine(func, a, b, x_kr, w_kr)

class Memoize(object):
    def __init__(self, func):
        self.func = func
        self.eval_points = {}
    def __call__(self, *args):
        if args not in self.eval_points:
            self.eval_points[args] = self.func(*args)
        return self.eval_points[args]

def quad(func, a, b):
    '''Output is the 15-point estimate and the estimated error.'''
    # Memoize so the 7 Gauss nodes (which are shared with the 15 Kronrod
    # nodes) don't trigger a second evaluation of a slow integrand.
    func = Memoize(func)
    g7 = quad_gauss_7(func, a, b)
    k15 = quad_kronrod_15(func, a, b)
    # I don't have much faith in this error estimate taken from Wikipedia
    # without incorporating how it should scale with changing limits.
    return [k15, (200 * np.abs(g7 - k15)) ** 1.5]
Test case:
>>> quad(lambda x: np.exp(1j*x), 0, np.pi/2.0)
[(0.99999999999999711+0.99999999999999689j), 9.6120083407040365e-19]
I don't trust the error estimate -- I took something from Wikipedia's recommended error estimate for integrating over [-1, 1], and the values don't seem reasonable to me. E.g., the error above compared with the truth is ~5e-15, not ~1e-19. I'm sure if someone consulted Numerical Recipes, you could get a more accurate estimate. (You'd probably have to multiply by (b-a)/2 to some power, or something similar.)
Recall that this Python version is less accurate than just calling scipy's QUADPACK-based integration twice. (You could improve upon it if desired.)
I realize I'm late to the party, but perhaps quadpy (a project of mine) can help. This
import quadpy
import numpy
val, err = quadpy.quad(lambda x: numpy.exp(1j * x), 0, 1)
print(val)
correctly gives
(0.8414709848078964+0.4596976941318605j)

Does scipy.integrate.ode.set_solout work?

The scipy.integrate.ode interface to integration routines provides a method for stopping the integration if a constraint is violated at any step, set_solout. However, I cannot get this method to work, even in the simplest examples. Here's one attempt:
import numpy as np
from scipy.integrate import ode

def f(t, y):
    """Exponential decay."""
    return -y

def solout(t, y):
    if y[0] < 0.5:
        return -1
    else:
        return 0

y_initial = 1
t_initial = 0

r = ode(f).set_integrator('dopri5')  # Integrator that supports solout
r.set_initial_value(y_initial, t_initial)
r.set_solout(solout)

# Integrate until t = 5, but stop when solout constraint violated
r.integrate(5)

# The time when solout should have terminated integration:
intersection_time = np.log(2)
The integration should have been stopped by solout when t = log(2) = 0.693..., but instead happily continues until t = 5, when y = 0.007.
Is this a bug in scipy, or am I not using set_solout correctly?
It turns out you need to call set_solout before calling set_initial_value. (I figured this out by studying the set_solout tests in the scipy test suite.) So, reversing the order of the two calls in my question code produces the correct result.
Even if this behavior is correct, it ought to be mentioned in the documentation for set_solout. I've posted an issue with SciPy on GitHub.
UPDATE: This issue is fixed in SciPy 0.17.0; set_solout will work even if called after set_initial_value, and the question code will produce the correct result.
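For concreteness, here is the question's setup with the two calls swapped, a sketch (reusing the question's definitions) that matters only on SciPy < 0.17:
r = ode(f).set_integrator('dopri5')
r.set_solout(solout)  # must come before set_initial_value on old SciPy
r.set_initial_value(y_initial, t_initial)
r.integrate(5)
print(r.t)  # ~0.693, i.e. log(2), where solout stopped the integration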

How to call function for scipy.optimize.fmin_cg(func) in Python

I will explain the problem briefly. It is essentially the same as the example shown in the SciPy docs. The problem is that an error occurs: float argument required, not numpy.ndarray.
What I have:
Function: y = s*z^t
Variable lengths/dimensions:
t - 1...m,
s - 1...m and 1...n. So, m is the number of rows, n the number of columns.
z - 1...n.
y - this can be y[1], y[2], y[3], ..., y[m],
T - the s[m,n] matrix
Like this:
y[1] = s[1][1]*z[1]^t[1] + s[1][2]*z[2]^t[1] + ... + s[1][n]*z[n]^t[1]
y[2] = s[2][1]*z[1]^t[2] + s[2][2]*z[2]^t[2] + ... + s[2][n]*z[n]^t[2]
...
y[m] = s[m][1]*z[1]^t[m] + s[m][2]*z[2]^t[m] + ... + s[m][n]*z[n]^t[m]
Problem: Error occurred.
Optimization terminated successfully.
Traceback (most recent call last):
solution = optimize.fmin_cg(func, z, fprime=gradf, args=args)
File "C:\Python27\lib\site-packages\scipy\optimize\optimize.py", line 952, in fmin_cg
res = _minimize_cg(f, x0, args, fprime, callback=callback, **opts)
File "C:\Python27\lib\site-packages\scipy\optimize\optimize.py", line 1072, in _minimize_cg
print " Current function value: %f" % fval
TypeError: float argument required, not numpy.ndarray
Here is the code
import numpy as np
import scipy as sp
import scipy.optimize as optimize

def func(z, *args):
    y, T, t = args[0]
    return y - counter(T, z, t)

def counter(T, z, t):
    rows, cols = np.shape(T)
    res = np.zeros(rows)
    for i, row_val in enumerate(T):
        res[i] = np.dot(row_val, z**t[i])
    return res

def gradf(z, *args):
    y, T, t = args[0]
    return np.dot(t, counter(T, z, t-1))

def main():
    # Inputs
    N = 30
    M = 20
    z0 = np.zeros(N)  # initial guess
    y = 30*np.random.random(M)
    T = 10*np.random.random((M, N))
    t = 5*np.random.random(M)
    args = [y, T, t]
    solution = optimize.fmin_cg(func, z0, fprime=gradf, args=args)
    print 'solution: ', solution

if __name__ == '__main__':
    main()
I also tried to find similar examples but could not find anything close. Here is the code for your consideration. Thanks in advance.
The root of your problem is that fmin_cg expects the function to return a single scalar value for the misfit instead of an array.
Basically, you want something vaguely similar to:
def func(z, y, T, t):
    return np.linalg.norm(y - counter(T, z, t))
I'm using np.linalg.norm here because there's no builtin function in numpy for the root-mean-square. The actual RMS would be norm(x) / sqrt(x.size), but for minimization the constant multiplier doesn't make any difference.
There are also other minor problems in your code (e.g. args[0] is going to be a single item; you want y, T, t = args or, better yet, just func(z, y, T, t)). Your gradient function doesn't make any sense to me, but it's optional regardless. Also, there's no way the solution can produce reasonable values at the moment, as you're testing it against pure noise. I assume those are just meant to be placeholder values, though.
However, you have a larger problem. You're trying to minimize in 30-dimensional space. Most non-linear solvers aren't going to work well with that high of a dimensionality. It may work fine, but you're very likely to run into problems.
All that having been said, you may find it more intuitive to use the scipy.optimize.curve_fit interface rather than using the others, if you're okay with LM instead of CG (they're fairly similar methods).
One final thing: You're trying to solve for 30 model parameters with 20 observations. This is an underdetermined problem. This problem doesn't have a unique solution. You're going to need to apply some a-priori knowledge to get a reasonable answer.
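For what it's worth, here is a hedged sketch of the curve_fit route suggested above, on a toy one-dimensional model rather than the question's underdetermined 30-parameter problem:
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    # toy model; curve_fit reads the fit parameters off the signature
    return a * np.exp(-b * x) + c

xs = np.linspace(0, 4, 50)
ys = model(xs, 2.5, 1.3, 0.5) + 0.05 * np.random.normal(size=xs.size)
popt, pcov = curve_fit(model, xs, ys, p0=[1, 1, 1])
print(popt)  # should recover roughly (2.5, 1.3, 0.5)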

Method signature for Jacobian of a least squares function in scipy

Can anyone provide an example of providing a Jacobian to a least squares function in scipy?
I can't figure out the method signature they want: they say it should be a function, yet it's very hard to figure out what input parameters, in what order, this function should accept.
Here's the exponential decay fitting that I got to work with this:
import numpy as np
from scipy.optimize import leastsq

def f(var, xs):
    return var[0]*np.exp(-var[1]*xs) + var[2]

def func(var, xs, ys):
    return f(var, xs) - ys

def dfunc(var, xs, ys):
    v = np.exp(-var[1]*xs)
    return [v, -var[0]*xs*v, np.ones(len(xs))]

xs = np.linspace(0, 4, 50)
ys = f([2.5, 1.3, 0.5], xs)
yn = ys + 0.2*np.random.normal(size=len(xs))

fit = leastsq(func, [10, 10, 10], args=(xs, yn), Dfun=dfunc, col_deriv=1)
If I wanted to use col_deriv=0, I think I would basically have to take the transpose of what I return from dfunc. You're quite right, though: the documentation on this isn't so great.
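For example, my reading of the docs suggests the col_deriv=0 variant would look like this (a sketch; dfunc_rows is a hypothetical name, returning derivatives across the rows, i.e. shape (len(xs), 3)):
def dfunc_rows(var, xs, ys):
    # transpose of dfunc above: rows index observations, columns parameters
    v = np.exp(-var[1]*xs)
    return np.array([v, -var[0]*xs*v, np.ones(len(xs))]).T

fit = leastsq(func, [10, 10, 10], args=(xs, yn), Dfun=dfunc_rows, col_deriv=0)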
