Step by step time integrators in Python

I am solving a first order initial value problem of the form:
dy/dt = f(t,y(t)), y(0)=y0
I would like to obtain y(i) step by step from a given numerical scheme, for example:
using the explicit Euler scheme, we have
y(i) = y(i-1) + f(t(i-1), y(i-1)) * dt
Example code:
# Test code to evaluate different time integrators for the following equation:
# y' = (1/2) y + 2sin(3t) ; y(0) = -24/37
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def dy_dt(y, t):
    func = (1/2)*y + 2*np.sin(3*t)
    return func
tmin = 0
tmax = 50
delt= 1e-2
t = np.arange(tmin,tmax,delt)
total_steps = len(t)
y_explicit=np.zeros(total_steps)
#y_ODEint=np.zeros(total_steps)
y0 = -24/37
y_explicit[0]=y0
#y_ODEint[0]=y0
# exact solution
y_exact = -(24/37)*np.cos(3*t)- (4/37)*np.sin(3*t) + (y0+24/37)*np.exp(0.5*t)
# Solution using ODEint Python
y_ODEint = odeint(dy_dt,y0,t)
for i in range(1, total_steps):
    # Explicit scheme
    y_explicit[i] = y_explicit[i-1] + dy_dt(y_explicit[i-1], t[i-1])*delt
    # Update using ODEint
    # y_ODEint[i] = odeint(dy_dt, y_ODEint[i-1], [0, delt])[-1]
plt.figure()
plt.plot(t,y_exact)
plt.plot(t,y_explicit)
# plt.plot(t,y_ODEint)
The current issue I am having is that functions like odeint provide the entire solution y(t) at once, as in the line y_ODEint = odeint(dy_dt, y0, t).
You can see in the code how I have written the explicit scheme, which produces y(i) one time step at a time. I want to do the same with odeint; I tried something, but it didn't work (all the commented lines).
Is it possible to obtain y(i) step by step with odeint rather than all of y at once?
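A side note for reference: scipy also exposes a stateful, step-by-step integrator, scipy.integrate.ode, which returns y(i) one step at a time. A minimal sketch for this same equation (note that its callback takes (t, y), the opposite argument order from odeint):
import numpy as np
from scipy.integrate import ode

def f(t, y):                              # (t, y) order for scipy.integrate.ode
    return 0.5*y + 2*np.sin(3*t)

delt = 1e-2
solver = ode(f).set_integrator('dopri5')
solver.set_initial_value(-24/37, 0.0)     # y(0) = y0 at t0 = 0
ys = [solver.y[0]]
while solver.successful() and solver.t < 50.0 - delt/2:
    solver.integrate(solver.t + delt)     # advance exactly one step of size delt
    ys.append(solver.y[0])                # y(i) is available after every step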

Your system is time-variant, so you cannot translate the time step from (t[i-1], t[i]) to (0, delt).
Step-by-step integration is unstable for your differential equation, though.
Here is what I get:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def dy_dt(y, t):
    func = (1/2)*y + 2*np.sin(3*t)
    return func
tmin = 0
tmax = 40
delt= 1e-2
t = np.arange(tmin,tmax,delt)
total_steps = len(t)
y_explicit=np.zeros(total_steps)
#y_ODEint=np.zeros(total_steps)
y0 = -24/37
y_explicit[0]=y0
# exact solution
y_exact = -(24/37)*np.cos(3*t)- (4/37)*np.sin(3*t) + (y0+24/37)*np.exp(0.5*t)
# Solution using ODEint Python
y_ODEint = odeint(dy_dt,y0,t)
# To be filled step by step
y_ODEint_2 = np.zeros_like(y_ODEint)
y_ODEint_2[0] = y0
for i in range(1, len(y_ODEint_2)):
    # update your code to run with the correct time interval
    y_ODEint_2[i] = odeint(dy_dt, y_ODEint_2[i-1], [tmin+(i-1)*delt, tmin+i*delt])[-1]
plt.figure()
plt.plot(t,y_ODEint, label='single run')
plt.plot(t,y_ODEint_2, label='step-by-step')
plt.plot(t, y_exact, label='exact')
plt.legend()
plt.ylim([-20, 20])
plt.grid()
It is important to notice that both methods are unstable here, but the step-by-step version blows up slightly earlier than the single odeint call.
With, for example, dy_dt(y,t) = -(1/2)*y + 2*np.sin(3*t), the integration becomes more stable; for instance, there is no noticeable error after integrating from zero to 200.
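A minimal sketch of the more stable variant mentioned above, reusing np, odeint, y0 and delt from the code block; the step-by-step result should now track the single odeint call closely:
def dy_dt_stable(y, t):
    # sign of the linear term flipped, as suggested above
    return -(1/2)*y + 2*np.sin(3*t)

t2 = np.arange(0, 200, delt)
y_stable = odeint(dy_dt_stable, y0, t2)
y_stable_2 = np.zeros_like(y_stable)
y_stable_2[0] = y0
for i in range(1, len(y_stable_2)):
    y_stable_2[i] = odeint(dy_dt_stable, y_stable_2[i-1], [t2[i-1], t2[i]])[-1]
print(np.max(np.abs(y_stable - y_stable_2)))   # should stay small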

Related

How to include known parameter that changes over time in solve_bvp

I am trying to use scipy's solve_bvp in Python to solve differential equations that depend on a known parameter that changes over time. I have this parameter saved in a numpy array. However, when I try to use this array in the derivatives function, I get the following error: ValueError: operands could not be broadcast together with shapes (10,) (11,).
Below is a simplified version of my code. I want the variable d2 to take certain values at different times according to an array, d2_set_values. The differential equations for some of the 12 variables then depend on d2. I hope it's clear from this code what I'm trying to achieve.
import numpy as np
from scipy.integrate import solve_bvp
t = np.linspace(0, 10, 11)
# Known parameter that changes over time
d2_set_values = np.zeros(t.size)
d2_set_values[:4] = 0.1
d2_set_values[4:8] = 0.2
d2_set_values[8:] = 0.1
# Initialise y vector
y = np.zeros((12, t.size))
# ODEs
def fun(x, y):
    S1, I1, R1, S2, I2, R2, lamS1, lamI1, lamR1, lamS2, lamI2, lamR2 = y
    d1 = 0.5*(I1 + 0.1*I2)*(lamS1 - lamI1)
    d2 = d2_set_values
    dS1dt = -0.5*S1*(1-d1)*(I1 + 0.1*I2)
    dS2dt = -0.5*S2*(1-d2)*(I2 + 0.1*I1)
    dI1dt = 0.5*S1*(1-d1)*(I1 + 0.1*I2) - 0.2*I1
    dI2dt = 0.5*S2*(1-d2)*(I2 + 0.1*I1) - 0.2*I2
    dR1dt = 0.2*I1
    dR2dt = 0.2*I2
    dlamS1dt = 0.5*(1-d1)*S1*lamS1
    dlamS2dt = 0.5*(1-d2)*S2*lamS2
    dlamI1dt = 0.5*(1-d1)*I1*lamI1
    dlamI2dt = 0.5*(1-d2)*I2*lamI2
    dlamR1dt = lamR1
    dlamR2dt = lamR2
    return np.vstack((dS1dt, dI1dt, dR1dt, dS2dt, dI2dt, dR2dt,
                      dlamS1dt, dlamI1dt, dlamR1dt, dlamS2dt, dlamI2dt, dlamR2dt))
# Boundary conditions
def bc(ya, yb):
    return np.array([ya[0]-0.99, ya[1]-0.01, ya[2]-0., ya[3]-1.0, ya[4]-0., ya[5]-0.,
                     yb[6]-0., yb[7]-1., yb[8]-0., yb[9]-0, yb[10]-0, yb[11]-0])
# Run the solver
sol = solve_bvp(fun, bc, t, y)
I have even tried reducing the size of d2_set_values by one, but that doesn't solve the issue.
Any help I can get would be much appreciated!
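One common workaround, offered here only as a sketch: solve_bvp calls fun(x, y) on meshes whose size differs from t (for instance the interval midpoints, which is likely where the (10,) vs (11,) mismatch comes from), so a fixed-length array cannot be used directly. Interpolating the known parameter onto whatever x the solver passes in avoids the broadcast error:
def d2_of(x):
    # piecewise-linear interpolation of the known parameter onto any mesh x
    return np.interp(x, t, d2_set_values)

# inside fun(x, y), replace  d2 = d2_set_values  with  d2 = d2_of(x)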

Scipy.Curve fit struggling with exponential function

I'm trying to fit a curve of the equation:
y = ( (np.exp(-k2*(t+A))) - ((k1/v)*Co) )/ -k2
where A = (-np.log((k1/v)*Co))/k2
given to me by a supervisor, to a dataset that looks like a rough exponential that flattens to a straight horizontal line at its top. When I fit the equation, I receive only a straight line from the curve fit and a corresponding warning:
<ipython-input-24-7e57039f2862>:36: RuntimeWarning: overflow encountered in exp
return ( (np.exp(-k2*(t+A))) - ((k1/v)*Co) )/ -k2
The code I am using looks like:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
xData_forfit = [1.07683e+13, 1.16162e+13, 1.24611e+13, 1.31921e+13, 1.40400e+13, 2.65830e+13,
2.79396e+13, 2.86676e+13, 2.95155e+13, 3.03605e+13, 3.12055e+13, 3.20534e+13,
3.27814e+13, 3.36293e+13, 3.44772e+13, 3.53251e+13, 3.61730e+13, 3.77459e+13,
3.85909e+13, 3.94388e+13, 4.02838e+13, 4.11317e+13, 4.19767e+13, 4.27076e+13,
5.52477e+13, 5.64143e+13, 5.72622e+13, 5.81071e+13, 5.89550e+13, 5.98000e+13,
6.05280e+13, 6.13759e+13, 6.22209e+13, 6.30658e+13, 6.39137e+13, 6.46418e+13,
6.55101e+13, 6.63551e+13, 6.72030e+13, 6.80480e+13, 6.88929e+13, 6.97408e+13,
7.04688e+13, 7.13167e+13, 7.21617e+13, 8.50497e+13, 8.58947e+13, 8.67426e+13,
8.75876e+13, 8.83185e+13, 9.00114e+13, 9.08563e+13, 9.17013e+13]
yData_forfit = [1375.409524, 1378.095238, 1412.552381, 1382.904762, 1495.2, 1352.4,
1907.971429, 1953.52381, 1857.352381, 1873.990476, 1925.114286, 1957.085714,
2030.52381, 1989.8, 2042.733333, 2060.095238, 2134.361905, 2200.742857,
2342.72381, 2456.047619, 2604.542857, 2707.971429 ,2759.87619, 2880.52381,
3009.590476, 3118.771429, 3051.52381, 3019.771429, 3003.561905, 3083.0,
3082.885714, 2799.866667, 3012.419048, 3013.266667, 3106.714286, 3090.47619,
3216.638095, 3108.447619, 3199.304762, 3154.257143, 3112.419048, 3284.066667,
3185.942857, 3157.380952, 3158.47619, 3464.257143, 3434.67619, 3291.457143,
2851.371429, 3251.904762, 3056.152381, 3455.07619, 3386.942857]
def fnct_to_opt(t, k2, k1):
    #EXPERIMENTAL CONSTANTS
    v = 105
    Co = 1500
    A = (-np.log((k1/v)*Co))/k2
    return ( (np.exp(-k2*(t+A))) - ((k1/v)*Co) )/ -k2
initial_k2k1 = [100, 1*10**-3]
constants = curve_fit(fnct_to_opt, xData_forfit, yData_forfit, p0=initial_k2k1)
k2_fit = constants[0][0]
k1_fit = constants[0][1]
fit = []
for i in xData_forfit:
    fit.append(fnct_to_opt(i, k2_fit, k1_fit))
plt.plot(xData_forfit, yData_forfit, 'or', ms='2')
plt.plot(xData_forfit, fit)
This is giving me this plot as a result:
As far as I can tell, the code isn't producing a useful output because the np.exp term becomes too large, but I don't know how to go about diagnosing where this overflow is coming from or how to fix the issue. Any help would be appreciated, thanks.
The overflow is happening exactly where the error message tells you: in the return expression of fnct_to_opt. I asked you to print the offending values just before the error point; this would show you the problem.
At the point of error, the values in A are in the range 1e+13 to 1e+14; t is insignificant by comparison, and k2 is a bit under -10000.0.
Thus, the arguments to np.exp are well outside the domain that the function can handle. Just add a line to your function and watch the results:
def fnct_to_opt(t, k2, k1):
    #EXPERIMENTAL CONSTANTS
    v = 105
    Co = 1500
    A = (-np.log((k1/v)*Co))/k2
    print("TRACE", "\nk2", k2, "\nt", t, "\nA", A, "\nother", k1, v, Co)
    return ( (np.exp(-k2*(t+A))) - ((k1/v)*Co) )/ -k2
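A further sketch, offered as a suggestion rather than a fix from either answer: since t is of order 1e13, the exponent -k2*(t+A) overflows np.exp almost immediately. Rescaling the independent variable to order-one units before fitting keeps the exponent representable; the fitted k2 is then expressed per 1e13 of the original units, and the starting values below are only illustrative, so convergence is not guaranteed:
import numpy as np
from scipy.optimize import curve_fit

t_scaled = np.asarray(xData_forfit) / 1e13    # bring t down to order one
y_arr = np.asarray(yData_forfit)

def fnct_scaled(t, k2, k1, v=105, Co=1500):
    A = -np.log((k1/v)*Co) / k2
    return (np.exp(-k2*(t + A)) - (k1/v)*Co) / -k2

# p0 is an illustrative guess only; it may still need tuning
popt, pcov = curve_fit(fnct_scaled, t_scaled, y_arr, p0=[1.0, 1e-3], maxfev=10000)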
I think the problem may be in the function being optimized, in the sense that there may be a mistake in it.
For instance:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
xData_forfit = [1.07683e+13, 1.16162e+13, 1.24611e+13, 1.31921e+13, 1.40400e+13, 2.65830e+13,
2.79396e+13, 2.86676e+13, 2.95155e+13, 3.03605e+13, 3.12055e+13, 3.20534e+13,
3.27814e+13, 3.36293e+13, 3.44772e+13, 3.53251e+13, 3.61730e+13, 3.77459e+13,
3.85909e+13, 3.94388e+13, 4.02838e+13, 4.11317e+13, 4.19767e+13, 4.27076e+13,
5.52477e+13, 5.64143e+13, 5.72622e+13, 5.81071e+13, 5.89550e+13, 5.98000e+13,
6.05280e+13, 6.13759e+13, 6.22209e+13, 6.30658e+13, 6.39137e+13, 6.46418e+13,
6.55101e+13, 6.63551e+13, 6.72030e+13, 6.80480e+13, 6.88929e+13, 6.97408e+13,
7.04688e+13, 7.13167e+13, 7.21617e+13, 8.50497e+13, 8.58947e+13, 8.67426e+13,
8.75876e+13, 8.83185e+13, 9.00114e+13, 9.08563e+13, 9.17013e+13]
yData_forfit = [1375.409524, 1378.095238, 1412.552381, 1382.904762, 1495.2, 1352.4,
1907.971429, 1953.52381, 1857.352381, 1873.990476, 1925.114286, 1957.085714,
2030.52381, 1989.8, 2042.733333, 2060.095238, 2134.361905, 2200.742857,
2342.72381, 2456.047619, 2604.542857, 2707.971429 ,2759.87619, 2880.52381,
3009.590476, 3118.771429, 3051.52381, 3019.771429, 3003.561905, 3083.0,
3082.885714, 2799.866667, 3012.419048, 3013.266667, 3106.714286, 3090.47619,
3216.638095, 3108.447619, 3199.304762, 3154.257143, 3112.419048, 3284.066667,
3185.942857, 3157.380952, 3158.47619, 3464.257143, 3434.67619, 3291.457143,
2851.371429, 3251.904762, 3056.152381, 3455.07619, 3386.942857]
def fnct_to_opt(t, k2, k1):
    #EXPERIMENTAL CONSTANTS
    v = 105
    Co = 1500
    #A = (-np.log((k1/v)*Co))/k2
    #return ( (np.exp(-k2*(t+A))) - ((k1/v)*Co) )/ -k2
    #A = (np.log((k1/v)*Co))/k2
    return k2/np.log(t) + k1
initial_k2k1 = [10, 1]
constants = curve_fit(fnct_to_opt, xData_forfit, yData_forfit, p0=initial_k2k1)
k2_fit = constants[0][0]
k1_fit = constants[0][1]
#v_fit = constants[0][2]
#Co_fit = constants[0][3]
fit = []
for i in xData_forfit:
    fit.append(fnct_to_opt(i, k2_fit, k1_fit))
plt.plot(xData_forfit, yData_forfit, 'or', ms='2')
plt.plot(xData_forfit, fit)
So I put in a simpler function with a clearer intuition behind it. With the original signs and the exponential, I do not think the desired shape can be achieved at all; it looks to me as if the exponential is misplaced, so I changed it to a log and added a constant and a scale parameter. I would suggest checking the original function carefully; there is probably an issue with its derivation. I do not think it is a computational problem.
This is something closer to what could be expected.

Combine two functions with conditional (if) switch in Gekko

I have two thermodynamic relationships for low (300-1000K) and high (1000-3000K) temperatures. If I want to use both of these in Gekko, how can I combine them into a single correlation that I can use in an optimization problem?
Here is a section of Python code that calculates either the low or high temperature relationship from 300K to 3000K.
import numpy as np
import matplotlib.pyplot as plt
T = np.linspace(300.0,3000.0,50)
a_lo = np.array([ 5.15,-1.37E-02,4.92E-05,-4.85E-08,1.67E-11])
a_hi = np.array([7.49E-02,1.34E-02,-5.73E-06,1.22E-09,-1.02E-13])
i_lo = np.where(np.logical_and(T>=300.0, T<1000.0))
i_hi = np.where(np.logical_and(T>=1000.0, T<=3000.0))
cp = np.zeros(50)
Rg = 8.314 # J/mol-K
cp[i_lo] = a_lo[0] + a_lo[1]*T[i_lo] + a_lo[2]*T[i_lo]**2.0 + \
           a_lo[3]*T[i_lo]**3.0 + a_lo[4]*T[i_lo]**4.0
cp[i_hi] = a_hi[0] + a_hi[1]*T[i_hi] + a_hi[2]*T[i_hi]**2.0 + \
           a_hi[3]*T[i_hi]**3.0 + a_hi[4]*T[i_hi]**4.0
cp *= Rg
plt.plot(T,cp,'k-',lw=5)
plt.plot(T[i_lo],cp[i_lo],'.',color='orange')
plt.plot(T[i_hi],cp[i_hi],'.',color='red')
plt.xlabel('Temperature (K)'); plt.grid()
plt.ylabel(r'$CH_4$ Heat Capacity $\left(\frac{J}{mol-K}\right)$')
plt.show()
I tried using a conditional (if) statement in building my model but it only uses the correlation that is selected from the initialized values. If temperature T is a variable in my model, I want it to switch to one or the other based on the temperature variable.
There are a few approaches to using a conditional function in an optimization or simulation problem. The first approach is not exact but may be a suitable approximation: a cubic spline that interpolates between sampled points (see approach #1). The second approach is exact but requires either a Mathematical Program with Complementarity Constraints (MPCC) with if2() or an integer switching variable with if3() (see approach #2). Both approaches are discussed on the Design Optimization Course page on Logical Conditions in Optimization.
import numpy as np
import matplotlib.pyplot as plt
from gekko import GEKKO
# CH4 Heat capacity parameters (LO: 300-1000K, HI: 1000K-3000K)
a_lo = np.array([ 5.15,-1.37E-02,4.92E-05,-4.85E-08,1.67E-11])
a_hi = np.array([7.49E-02,1.34E-02,-5.73E-06,1.22E-09,-1.02E-13])
Rg = 8.314 # J/mol-K
m = GEKKO()
# Approach #1: Cubic Spline
def cp1(T):
    if T>=300 and T<=1000:
        a = a_lo
    elif T>1000 and T<=3000:
        a = a_hi
    else:
        raise Exception('Temperature ' + str(T) + ' out of range')
    cp = (a[0]+a[1]*T+a[2]*T**2.0+a[3]*T**3.0+a[4]*T**4.0)*Rg
    return cp
# Calculate cp at 50 pts
T = np.linspace(300.0,3000.0,50)
cp = [cp1(Ti) for Ti in T]
x1 = m.Var(lb=300,ub=3000); y1 = m.Var()
m.cspline(x1,y1,T,cp)
# Approach #2: Gekko conditional statements
def cp2(a, T):
    return (a[0]+a[1]*T+a[2]*T**2.0+a[3]*T**3.0+a[4]*T**4.0)*Rg
x2 = m.Var(lb=300,ub=3000)
y2a = m.Intermediate(cp2(a_lo,x2))
y2b = m.Intermediate(cp2(a_hi,x2))
y2 = m.if3(x2-1000,y2a,y2b)
m.Equation(y1==80)
m.Equation(y2==80)
m.solve()
print('Find Temperature where cp=80 J/mol-K')
print(x1.value[0],x2.value[0])
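As a side note, the if2() (MPCC) form mentioned above is a one-line swap for the if3() call; a minimal sketch, assuming the rest of the model stays the same (y2_mpcc is just an illustrative name):
# MPCC alternative to the integer-switch version above; same argument order
y2_mpcc = m.if2(x2-1000, y2a, y2b)   # y2a when x2 < 1000, y2b otherwise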

mcint module Python-Monte Carlo integration

I am trying to run a code that outputs a Gaussian distribution by integrating the 1-D Gaussian distribution equation using Monte Carlo integration. I am trying to use the mcint module. I defined the Gaussian equation and the sampler function that is used by the mcint module. I am not sure what the 'measure' argument of the mcint function does or what it should be set to. Does anyone know what measure is supposed to be, and how do I know what to set it to?
from matplotlib import pyplot as mp
import numpy as np
import mcint
import random
#f equation
def gaussian(x,x0,sig0,time,var):
    [velocity,diffussion_coeffient] = var
    mu = x0 + (velocity*time)
    sig = sig0 + np.sqrt(2.0*diffussion_coeffient*time)
    return (1/(np.sqrt(2.0*np.pi*(sig**2.0))))*(np.exp((-(x-mu)**2.0)/(2.0*(sig**2.0))))
#random variables that are generated during the integration
def sampler(varinterval):
    while True:
        velocity = random.uniform(varinterval[0][0],varinterval[0][1])
        diffussion_coeffient = random.uniform(varinterval[1][0],varinterval[1][1])
        yield (velocity,diffussion_coeffient)
if __name__ == "__main__":
    x0 = 0
    #ranges for integration
    velocitymin = -3.0
    velocitymax = 3.0
    diffussion_coeffientmin = 0.01
    diffussion_coeffientmax = 0.89
    varinterval = [[velocitymin,velocitymax],[diffussion_coeffientmin,diffussion_coeffientmax]]
    time = 1
    sig0 = 0.05
    x = np.linspace(-20, 20, 120)
    res = []
    for i in np.linspace(-10, 10, 120):
        result, error = mcint.integrate(lambda v: gaussian(i,x0,sig0,time,v),
                                        sampler(varinterval), measure=1, n=1000)
        res.append(result)
    mp.plot(x,res)
    mp.show()
Is this the module you are talking about? If that's the case, the whole source is only 17 lines long (at the time of writing). The relevant line is the last one, which reads:
return (measure*sample_mean, measure*math.sqrt(sample_var/n))
As you can see, the measure argument (whose default value is unity) is used to scale the values returned by the integrate method.
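In other words, and only as a sketch of my reading of it: with a sampler that draws uniformly from a region, the Monte Carlo estimate of the integral is the region's volume times the sample mean of f, so measure should be the volume (here, area) of the sampling rectangle rather than 1:
# area of the uniform sampling region used by sampler(varinterval)
measure = (velocitymax - velocitymin) * (diffussion_coeffientmax - diffussion_coeffientmin)
# then pass measure=measure instead of measure=1 to mcint.integrate(...)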

Solving a linear-quadratic system of equations, both graphically and numerically, in numpy/matplotlib?

I have a system of a linear equation and a quadratic equation that I can set up with numpy and scipy so I can get a graphical solution. Consider the example code:
#!/usr/bin/env python
# Python 2.7.1+
import numpy as np #
import matplotlib.pyplot as plt #
# d is a constant;
d=3
# h is variable; depends on x, which is also variable
# linear function:
# condition for h: d-2x=8h; returns h
def hcond(x):
    return (d-2*x)/8.0
# quadratic function:
# condition for h: h^2+x^2=d*x ; returns h
def hquad(x):
    return np.sqrt(d*x-x**2)
# x indices data
xi = np.arange(0,3,0.01)
# function values in respect to x indices data
hc = hcond(xi)
hq = hquad(xi)
fig = plt.figure()
sp = fig.add_subplot(111)
myplot = sp.plot(xi,hc)
myplot2 = sp.plot(xi,hq)
plt.show()
That code results with this plot:
It's clear that the two functions intersect, thus there is a solution.
How could I automatically solve what is the solution (the intersection point), while keeping most of the function definitions intact?
It turns out one can use scipy.optimize.fsolve to solve this; one just needs to be careful that the functions in the OP are defined in the y=f(x) form, while fsolve needs them in the f(x)-y=0 form. Here is the fixed code:
#!/usr/bin/env python
# Python 2.7.1+
import numpy as np #
import matplotlib.pyplot as plt #
import scipy
import scipy.optimize
# d is a constant;
d=3
# h is variable; depends on x, which is also variable
# linear function:
# condition for h: d-2x=8h; returns h
def hcond(x):
    return (d-2*x)/8.0
# quadratic function:
# condition for h: h^2+x^2=d*x ; returns h
def hquad(x):
    return np.sqrt(d*x-x**2)
# for optimize.fsolve;
# note, here the functions must be equal to 0;
# we defined h=(d-2x)/8 and h=sqrt(d*x-x^2);
# now we just rewrite them in the form (d-2x)/8-h=0 and sqrt(d*x-x^2)-h=0;
# thus, below x[0] is (the guess for) x, and x[1] is (the guess for) h!
def twofuncs(x):
    y = [ hcond(x[0])-x[1], hquad(x[0])-x[1] ]
    return y
# x indices data
xi = np.arange(0,3,0.01)
# function values in respect to x indices data
hc = hcond(xi)
hq = hquad(xi)
fig = plt.figure()
sp = fig.add_subplot(111)
myplot = sp.plot(xi,hc)
myplot2 = sp.plot(xi,hq)
# start from x=0 as guess for both functions
xsolv = scipy.optimize.fsolve(twofuncs, [0, 0])
print(xsolv)
print("xsolv: {0}\n".format(xsolv))
# plot solution with red marker 'o'
myplot3 = sp.plot(xsolv[0],xsolv[1],'ro')
plt.show()
exit
... which results with:
xsolv: [ 0.04478625 0.36380344]
... or, on the plot image:
Refs:
Roots finding, Numerical integrations and differential equations - Scipy: Scientific Programming in Python
Is there a python module to solve linear equations? - Stack Overflow
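For completeness, a sketch of an analytic alternative for this particular system: substituting h = (d-2x)/8 into h^2 + x^2 = d*x and multiplying through by 64 gives 68*x^2 - 68*d*x + d^2 = 0, a plain quadratic that np.roots solves directly:
import numpy as np

d = 3
roots = np.roots([68, -68*d, d**2])   # both roots of the reduced quadratic
x_sol = roots.min()                   # the root on the plotted branch (~0.0448)
h_sol = (d - 2*x_sol)/8.0             # ~0.3638, matching the fsolve result
print(x_sol, h_sol)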
