I am using Scipy's odeint (scipy.integrate.odeint) to solve some ODEs for me, and all is working nice and well. However, I'd now like to include another time-dependent set of data into my calculations, i.e. for t = [0, 1, 2, 3] I've got data z = [0.1, 0.2, 0.25, 0.22] to be included in the calculations. I can pass the vector as an argument, but that gives me the entire vector for every time step. Is there an efficient way of getting the current step (iterator) of the calculation? That way I can obtain z[i] for the i-th time step. Note that z has the length of t, and that both can contain several thousands of elements.
Thanks
A very simple example:
import numpy as np
from scipy.integrate import odeint

t = np.array([0, 1, 2, 3])
z = np.array([0.1, 0.2, 0.25, 0.22])

def func(y, t, z):
    # I'd like to get the i-th element of z,
    # corresponding to t[i] -- but there is no 'i' available here
    return y + z[i]

result = odeint(func, [0], t, (z,))
The work-around for this problem is to use the more generic scipy.integrate.ode. It has several built-in integration schemes and gives you more control over what happens during each iteration. See the example below:
import numpy as np
from scipy.integrate import ode

def func(t, y, z):
    # Note: for scipy.integrate.ode, t is the first argument.
    return y + z

t = np.linspace(0, 1.0, 100)
dt = t[1] - t[0]
z = np.random.rand(100)
output = np.empty_like(t)

r = ode(func).set_integrator("dop853")
r.set_initial_value(0, 0).set_f_params(z[0])

for i in range(len(t)):
    r.set_f_params(z[i])    # update the current value of z
    r.integrate(r.t + dt)   # advance the solver by one step
    output[i] = r.y[0]
During each iteration, the solver's value of z is updated accordingly. Note that with this manual stepping, z is effectively held constant (a zero-order hold) over each step of width dt.
For ode solvers, you can use an interpolation for time-varying inputs. In this case:
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d

t_arr = np.array([0, 1, 2, 3])
z_arr = np.array([0.1, 0.2, 0.25, 0.22])

# Create an interpolation function for the tabulated data.
finterp = interp1d(t_arr, z_arr, fill_value='extrapolate')

def func(y, t):
    print(t)           # shows the time instants the solver requests
    zt = finterp(t)    # evaluate the interpolation at time t
    return y + zt

result = odeint(func, [0], t_arr)
However, the solver in odeint may request time instants beyond t_arr (in your case at 3.028), so you have to specify values for z beyond t = 3 or allow extrapolation with fill_value (as shown above). Just be careful when selecting the kind of interpolation, so that it reflects the behavior you expect; see the sketch below.
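For illustration, here is a minimal sketch (my addition, assuming a reasonably recent SciPy that supports kind='previous') comparing linear extrapolation with a zero-order hold that simply keeps the last sample:
import numpy as np
from scipy.interpolate import interp1d

t_arr = np.array([0, 1, 2, 3])
z_arr = np.array([0.1, 0.2, 0.25, 0.22])

# Linear interpolation that also extrapolates linearly beyond t = 3.
f_lin = interp1d(t_arr, z_arr, fill_value='extrapolate')
# Zero-order hold: keep the previous sample; clamp to the end values outside.
f_zoh = interp1d(t_arr, z_arr, kind='previous', bounds_error=False,
                 fill_value=(z_arr[0], z_arr[-1]))

print(f_lin(3.028), f_zoh(3.028))  # extrapolated value vs. held last value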
I am trying to fit my data with a decay of the form a*(x-x0)**b, where I know a and b must be negative. So if I plot it on a log-log plot, I should see a linear trend.
As such, I'm giving scipy.optimize initial guesses where a and b are negative, but it keeps ignoring them, giving me the warning
OptimizeWarning: Covariance of the parameters could not be estimated
and returning values for a and b that are positive. The result is then not a decay at all, but a parabola that bottoms out and begins to increase.
I have tried many different guesses as to the initial parameters over a large range of values (one such is in the code below), but none worked without giving me the nonsensical return and the error. This has made me start to wonder if my code is wrong, or if there's just some obvious way to get good initial guesses into the code that won't be rejected.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
import numpy.polynomial.polynomial as poly

x = [1987, 1993.85, 2003, 2010.45, 2009.3, 2019.4]
t = [31, 8.6, 4.84, 1.96, 3.9, 1.875]

def model_func(x, a, b, x0):
    return a*(x - x0)**b

# curve fit
p0 = (-.0005, -.0005, 100)
opt, pcov = curve_fit(model_func, x, t, p0)
a, b, x0 = opt

# test result
x2 = np.linspace(1980, 2020, 100)
y2 = model_func(x2, a, b, x0)

# quadratic polynomial fit for comparison
coefs, cov = poly.polyfit(x, t, 2, full=True)
ffit = poly.polyval(x2, coefs)

plt.loglog(x, t, '.')
plt.loglog(x2, ffit, '--', color="#1f77b4")
print('S =', coefs[0], '* (t -', coefs[2], ')^', coefs[1])
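One option worth trying (not from the original post; just a sketch using curve_fit's bounds argument, available since SciPy 0.17) is to enforce the sign constraints explicitly instead of hoping the initial guess survives:
import numpy as np
from scipy.optimize import curve_fit

x = np.array([1987, 1993.85, 2003, 2010.45, 2009.3, 2019.4])
t = np.array([31, 8.6, 4.84, 1.96, 3.9, 1.875])

def model_func(x, a, b, x0):
    return a*(x - x0)**b

# Force b <= 0 and keep x0 below min(x) so (x - x0)**b stays real;
# widen or tighten these bounds to match the constraints you believe in.
lower = [-np.inf, -np.inf, -np.inf]
upper = [np.inf, 0, x.min() - 1]
p0 = (30, -1, 1980)   # a rough feasible starting guess (my choice)
opt, pcov = curve_fit(model_func, x, t, p0=p0, bounds=(lower, upper))
print(opt)
With bounds, curve_fit switches to a trust-region least-squares method, so the initial guess must lie inside the bounds.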
I need to carry out the integral written in the figure, i.e.
y(t, E) = ∫_0^∞ [ -A + 2·A·g·f ] dEa,
where tau = tau0·exp(-(E/Ea)^alpha), g = 1 - exp(-(t/tau)^n), and f = exp(-(Ea - mu)^2 / (2·sig^2)).
I want to plot y(t, E) for a given value of E. In my code I get this message: "divide by zero encountered in double_scalars". I don't understand why a "divide by zero" would occur during an integration. What would be a better way to code this?
My code:
from scipy.integrate import quad
import numpy as np

def integrand(Ea, A, t, E, tau0, n, alpha, mu, sig):
    tau = tau0*np.exp(-np.power(E/Ea, alpha))
    g = 1 - np.exp(-np.power(t/tau, n))
    f = np.exp(-np.power(Ea - mu, 2.) / (2*np.power(sig, 2.)))
    return -A + 2*A*g*f

A = 25
t = 1e-6   # calculating for one t value; for plotting, t would be an array
E = 1
tau0 = 0.3e-6
n = 2
alpha = 5
mu = 2
sig = 0.3

Y = quad(integrand, 0, np.inf, args=(A, t, E, tau0, n, alpha, mu, sig))
print(Y)
I think you can safely ignore the warning. It comes from the integrand being evaluated at very small Ea, where tau = tau0*exp(-(E/Ea)^alpha) underflows to exactly zero and t/tau then divides by zero; g still evaluates to its correct limit of 1, so the value of the integral is fine. If you print the error estimate (Y[1]), you will see that it is quite small compared to the absolute value of the integral (Y[0]). The dependence of the integral on time can be computed as follows:
import matplotlib.pyplot as plt

time = np.logspace(-8, -5, 100)
Ylist = []
for t in time:
    Y = quad(integrand, 0, np.inf, args=(A, t, E, tau0, n, alpha, mu, sig))
    Ylist.append(Y[0])   # keep the integral value, drop the error estimate

plt.semilogx(time, Ylist, '-kx')
plt.xlabel('Time (log)')
plt.ylabel(r'$y(t, E)$')
plt.show()
I want to solve this differential equation:
y′′ + 2y′ + 2y = cos(2x)
with initial conditions:
1. y(1) = 2, y′(2) = 0.5
2. y′(1) = 1, y′(2) = 0.8
3. y(1) = 0, y(2) = 1
and my code is:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
def dU_dx(U, x):
    # U = [y, y']
    return [U[1], -2*U[1] - 2*U[0] + np.cos(2*x)]
U0 = [1,0]
xs = np.linspace(0, 10, 200)
Us = odeint(dU_dx, U0, xs)
ys = Us[:,0]
plt.xlabel("x")
plt.ylabel("y")
plt.title("Damped harmonic oscillator")
plt.plot(xs, ys)
plt.show()
How can I make the solution satisfy these conditions?
Your initial conditions are not initial conditions, as they give values at two different points; these are all boundary conditions.
# Boundary conditions in residual form: u1 is the state [y, y'] at the
# left end of the interval, u2 at the right end.
def bc1(u1, u2): return [u1[0] - 2.0, u2[1] - 0.5]
def bc2(u1, u2): return [u1[1] - 1.0, u2[1] - 0.8]
def bc3(u1, u2): return [u1[0] - 0.0, u2[0] - 1.0]
You need a BVP solver to solve these boundary value problems.
You can either build your own solver using the shooting method, for case 1 e.g. as
from scipy.optimize import fsolve

def shoot(b): return odeint(dU_dx, [2, b], [1, 2])[-1, 1] - 0.5

b = fsolve(shoot, 0)[0]   # the slope y'(1) such that y'(2) = 0.5
T = np.linspace(1, 2, 51)
U = odeint(dU_dx, [2, b], T)
or use the secant method instead of scipy.optimize.fsolve; since the problem is linear, that should converge in one, at most two steps (see the sketch below).
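A minimal secant iteration for the scalar shooting residual could look like this (my sketch, not part of the original answer):
def secant(f, b0, b1, tol=1e-12, maxiter=20):
    # Standard secant iteration for a scalar root-finding problem.
    f0, f1 = f(b0), f(b1)
    for _ in range(maxiter):
        b2 = b1 - f1*(b1 - b0)/(f1 - f0)
        if abs(b2 - b1) < tol:
            return b2
        b0, f0 = b1, f1
        b1, f1 = b2, f(b2)
    return b1

b = secant(shoot, 0.0, 1.0)  # for a linear problem the first update is already exact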
Or you can use the scipy.integrate.solve_bvp solver (which is perhaps newer than the question). Your task is similar to the documented examples. Note that the argument order in the ODE function is switched in all the newer solvers: t comes first. Even in odeint you can get that convention with the option tfirst=True.
def dudx(x,u): return [u[1], np.cos(2*x)-2*(u[1]+u[0])]
Solutions generated with solve_bvp; the nodes are the automatically generated subdivision of the integration interval, and their density indicates how "non-flat" the ODE is in that region.
from scipy.integrate import solve_bvp

xplot = np.linspace(1, 2, 161)
for k, bc in enumerate([bc1, bc2, bc3]):
    res = solve_bvp(dudx, bc, [1.0, 2.0], [[0, 0], [0, 0]], tol=1e-5)
    print(res.message)
    l, = plt.plot(res.x, res.y[0], 'x')          # mark the mesh nodes
    c = l.get_color()
    plt.plot(xplot, res.sol(xplot)[0], c=c, label="%d." % (k+1))
Solutions generated using the shooting method, with the initial values at x=0 as unknown parameters, to then obtain the solution trajectories on the interval [0,3].
x = np.linspace(0, 3, 301)
for k, bc in enumerate([bc1, bc2, bc3]):
    def shoot(u0):
        u = odeint(dudx, u0, [0, 1, 2], tfirst=True)
        return bc(u[1], u[2])    # boundary-condition residuals at x=1 and x=2
    u0 = fsolve(shoot, [0, 0])
    u = odeint(dudx, u0, x, tfirst=True)
    l, = plt.plot(x, u[:, 0], label="%d." % (k+1))
    c = l.get_color()
    plt.plot(x[::100], u[::100, 0], 'x', c=c)
You can also use the scipy.integrate.ode class. It is similar to scipy.integrate.odeint but accepts a jac parameter, which is the Jacobian df/dy of the right-hand side with respect to the state (df/dU in the notation of your ODE).
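As a sketch of that approach (my code, using the question's equation rewritten as a first-order system; the Jacobian here is with respect to the state U and happens to be constant):
import numpy as np
from scipy.integrate import ode

def f(x, U):
    # U = [y, y']; scipy.integrate.ode passes the independent variable first.
    return [U[1], -2*U[1] - 2*U[0] + np.cos(2*x)]

def jac(x, U):
    # Jacobian df/dU of the right-hand side; constant for this linear ODE.
    return [[0, 1], [-2, -2]]

r = ode(f, jac).set_integrator("vode", method="bdf", with_jacobian=True)
r.set_initial_value([1, 0], 0)
while r.successful() and r.t < 10:
    r.integrate(r.t + 0.1)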
I am trying to plot an integration of a special (eg. Bessel) function and my minimal code is the following.
#!/usr/bin/env python
import matplotlib.pyplot as plt
import numpy as np
import scipy.integrate as integrate
from scipy.special import jn

U = np.linspace(0, 10, 1000)
Delta = U - 4 + 8*integrate.quad(lambda x: jn(1, x)/(x*(1.0 + np.exp(U*x*0.5))), 0.1, 1000)
plt.plot(U, Delta)
plt.xlabel('U')
plt.ylabel(r'$\Delta$')
plt.show()
However, this gives me an error message saying quadpack.error: Supplied function does not return a valid float, whereas the function gets easily plotted in Mathematica. Do Python's Bessel functions have limitations?
I have used this documentation for my plotting.
It is difficult to provide an answer that solves the problem before understanding what exactly you are trying to do. However, let me list a number of issues and provide an example that may not achieve what you are trying to do but at least it will provide a path forward.
Because your lambda function multiplies x by an array U, it returns an array instead of a number. A function that needs to be integrated should return a single number. You could fix this, for example, by replacing U by u:
f = lambda x, u: jn(1,x)/(x*(1.0+np.exp(u*x*0.5)))
Make Delta a function of u, make quad pass the additional argument u to f (defined in the previous point), and extract only the value of the integral from the tuple that quad returns (the integral, the error estimate, etc.):
Delta = lambda u: u - 4 + 8*integrate.quad(f, 0.1, 1000, args=(u,))[0]
Compute Delta for each u (a list comprehension also works on Python 3, where map returns an iterator):
deltas = np.array([Delta(u) for u in U])
Plot the data:
plt.plot(U, deltas)
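Putting the pieces together into one runnable script (just the steps above assembled; note the u - 4 term restores the question's original expression):
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as integrate
from scipy.special import jn

f = lambda x, u: jn(1, x)/(x*(1.0 + np.exp(u*x*0.5)))
Delta = lambda u: u - 4 + 8*integrate.quad(f, 0.1, 1000, args=(u,))[0]

U = np.linspace(0, 10, 1000)
deltas = np.array([Delta(u) for u in U])

plt.plot(U, deltas)
plt.xlabel('U')
plt.ylabel(r'$\Delta$')
plt.show()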
I would like to solve the ODE dy/dt = -2y + data(t), between t=0..3, for y(t=0)=1.
I wrote the following code:
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d
t = np.linspace(0, 3, 4)
data = [1, 2, 3, 4]
linear_interpolation = interp1d(t, data)
def func(y, t0):
    print('t0', t0)
    return -2*y + linear_interpolation(t0)
soln = odeint(func, 1, t)
When I run this code, I get several errors like this:
ValueError: A value in x_new is above the interpolation range.
odepack.error: Error occurred while calling the Python function named func
My interpolation range is between 0.0 and 3.0.
Printing the value of t0 in func, I realized that t0 sometimes goes above my interpolation range: 3.07634612585, 3.0203768998, 3.00638459329, ... That's why linear_interpolation(t0) raises the ValueError exceptions.
I have a few questions:
how does odeint make t0 vary? Why does it make t0 exceed the upper end (3.0) of my interpolation range?
in spite of these errors, odeint returns an array which seems to contain correct values. So should I just catch and ignore these errors? Should I ignore them whatever the differential equation(s), the t range and the initial condition(s)?
if I shouldn't ignore these errors, what is the best way to avoid them? I see two suggestions:
in interp1d, I could set bounds_error=False and fill_value=data[-1], since the t0 values outside my interpolation range seem to be close to t[-1]:
linear_interpolation = interp1d(t, data, bounds_error=False, fill_value=data[-1])
But first I would like to be sure that, with any other func and any other data, t0 will always remain close to t[-1]. For example, if odeint chooses a t0 below my interpolation range, the fill_value would still be data[-1], which would not be correct. Maybe knowing how odeint makes t0 vary would help me be sure of that (see my first question).
in func, I could enclose the linear_interpolation call in a try/except block and, when I catch a ValueError, call linear_interpolation again with t0 truncated:
def func(y, t0):
try:
interpolated_value = linear_interpolation(t0)
except ValueError:
interpolated_value = linear_interpolation(int(t0)) # truncate t0
return -2*y + interpolated_value
At least this solution lets linear_interpolation still raise an exception if odeint makes t0 >= 4.0 or t0 <= -1.0, so I can be alerted of incoherent behavior. But it is not very readable, and the truncation seems a little arbitrary to me.
Maybe I'm just over-thinking these errors. Please let me know.
It is normal for the odeint solver to evaluate your function at time values past the last requested time. Most ODE solvers work this way--they take internal time steps with sizes determined by their error control algorithm, and then use their own interpolation to evaluate the solution at the times requested by the user. Some solvers (e.g. the CVODE solver in the Sundials library) allow you to specify a hard bound on the time, beyond which the solver is not allowed to evaluate your equations, but odeint does not have such an option.
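To see this behavior directly, you can log the time values at which odeint evaluates your function (a quick sketch, not part of the original answer):
import numpy as np
from scipy.integrate import odeint

times_called = []

def rhs(y, t):
    times_called.append(t)   # record every time value the solver requests
    return -2*y + 1.0        # any simple right-hand side will do here

odeint(rhs, 1.0, np.linspace(0, 3, 4))
print(max(times_called))     # typically slightly larger than 3.0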
If you don't mind switching from scipy.integrate.odeint to scipy.integrate.ode, it looks like the "dopri5" and "dop853" solvers don't evaluate your function at times beyond the requested time. Two caveats:
The ode solvers use a different convention for the order of the arguments that define the differential equation. In the ode solvers, t is the first argument. (Yeah, I know, grumble, grumble...)
The "dopri5" and "dop853" solvers are for non-stiff systems. If your problem is stiff, they should still give correct answers, but they will do a lot more work than a stiff solver would do.
Here's a script that shows how to solve your example. To emphasize the change in the arguments, I renamed func to rhs.
import numpy as np
from scipy.integrate import ode
from scipy.interpolate import interp1d
t = np.linspace(0, 3, 4)
data = [1, 2, 3, 4]
linear_interpolation = interp1d(t, data)
def rhs(t, y):
    """The "right-hand side" of the differential equation."""
    # print('t', t)
    return -2*y + linear_interpolation(t)

# Initial condition
y0 = 1

solver = ode(rhs).set_integrator("dop853")
solver.set_initial_value(y0, t[0])
k = 0
soln = [y0]
while solver.successful() and solver.t < t[-1]:
k += 1
solver.integrate(t[k])
soln.append(solver.y)
# Convert the list to a numpy array.
soln = np.array(soln)
The rest of this answer looks at how you could continue to use odeint.
If you are only interested in linear interpolation, you could simply extend your data linearly, using the last two points of the data. A simple way to extend the data array is to append the value 2*data[-1] - data[-2] to the end of the array, and do the same for the t array. If the last time step in t is small, this might not be a sufficiently long extension to avoid the problem, so in the following, I've used a more general extension.
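That simple one-step extension would look like this (with t and data as in the question):
import numpy as np

t = np.linspace(0, 3, 4)
data = [1, 2, 3, 4]

# Append one extra point lying on the line through the last two samples.
extended_t = np.append(t, 2*t[-1] - t[-2])
extended_data = np.append(data, 2*data[-1] - data[-2])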
Example:
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d
t = np.linspace(0, 3, 4)
data = [1, 2, 3, 4]
# Slope of the last segment.
m = (data[-1] - data[-2]) / (t[-1] - t[-2])
# Amount of time by which to extend the interpolation.
dt = 3.0
# Extended final time.
t_ext = t[-1] + dt
# Extended final data value.
data_ext = data[-1] + m*dt
# Extended arrays.
extended_t = np.append(t, t_ext)
extended_data = np.append(data, data_ext)
linear_interpolation = interp1d(extended_t, extended_data)
def func(y, t0):
    print('t0', t0)
    return -2*y + linear_interpolation(t0)
soln = odeint(func, 1, t)
If simply using the last two data points to extend the interpolator linearly is too crude, then you'll have to use some other method to extrapolate a little beyond the final t value given to odeint.
Another alternative is to include the final t value as an argument to func, and explicitly handle t values larger than it in func. Something like this, where extrapolation is something you'll have to figure out:
def func(y, t0, tmax):
if t0 > tmax:
f = -2*y + extrapolation(t0)
else:
f = -2*y + linear_interpolation(t0)
return f
soln = odeint(func, 1, t, args=(t[-1],))
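For example (one possible choice, purely my assumption), extrapolation could simply hold the final data value constant:
def extrapolation(t0):
    # Hold the last tabulated value beyond the end of the data
    # (a zero-order hold); any smarter scheme could go here instead.
    return data[-1]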