Initial conditions in solving differential equations in Python

I want to solve this differential equation:
y′′ + 2y′ + 2y = cos(2x)
with initial conditions:
1. y(1) = 2, y′(2) = 0.5
2. y′(1) = 1, y′(2) = 0.8
3. y(1) = 0, y(2) = 1
Here is my code:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
def dU_dx(U, x):
    return [U[1], -2*U[1] - 2*U[0] + np.cos(2*x)]
U0 = [1,0]
xs = np.linspace(0, 10, 200)
Us = odeint(dU_dx, U0, xs)
ys = Us[:,0]
plt.xlabel("x")
plt.ylabel("y")
plt.title("Damped harmonic oscillator")
plt.plot(xs,ys);
How can I impose these conditions?

Your initial conditions are not initial conditions, as they prescribe values at two different points each. These are all boundary conditions.
def bc1(u1, u2): return [u1[0] - 2.0, u2[1] - 0.5]
def bc2(u1, u2): return [u1[1] - 1.0, u2[1] - 0.8]
def bc3(u1, u2): return [u1[0] - 0.0, u2[0] - 1.0]
You need a BVP solver to solve these boundary value problems.
You can either build your own solver using the shooting method, which in case 1 looks like
from scipy.optimize import fsolve

def shoot(b):
    return odeint(dU_dx, [2, float(b)], [1, 2])[-1, 1] - 0.5

b = float(fsolve(shoot, 0))  # fsolve returns a length-1 array
N = 101                      # number of output points
T = np.linspace(1, 2, N)
U = odeint(dU_dx, [2, b], T)
Or use the secant method instead of scipy.optimize.fsolve; as the problem is linear, this should converge in one, at most two steps.
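A minimal sketch of that secant iteration for case 1, reusing the shoot function above (the starting guesses 0 and 1 and the tolerance are my own choices):
b0, b1 = 0.0, 1.0
f0, f1 = shoot(b0), shoot(b1)
# secant update on the scalar defect f(b) = y'(2; b) - 0.5; since the ODE is
# linear, f is affine in b, so the first secant step already hits the root
while abs(f1) > 1e-12 and f1 != f0:
    b0, b1 = b1, b1 - f1*(b1 - b0)/(f1 - f0)
    f0, f1 = f1, shoot(b1)
print("y'(1) =", b1)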
Or you can use the scipy.integrate.solve_bvp solver (which is perhaps newer than the question). Your task is similar to the documented examples. Note that the argument order of the ODE function is switched relative to odeint in all the newer solvers; even in odeint you can get this order with the option tfirst=True.
def dudx(x, u): return [u[1], np.cos(2*x) - 2*(u[1] + u[0])]
Solutions generated with solve_bvp; the nodes are the automatically generated subdivision of the integration interval, and their density indicates how "non-flat" the ODE solution is in that region.
from scipy.integrate import solve_bvp

xplot = np.linspace(1, 2, 161)
for k, bc in enumerate([bc1, bc2, bc3]):
    res = solve_bvp(dudx, bc, [1.0, 2.0], [[0, 0], [0, 0]], tol=1e-5)
    print(res.message)
    l, = plt.plot(res.x, res.y[0], 'x')
    c = l.get_color()
    plt.plot(xplot, res.sol(xplot)[0], c=c, label="%d." % (k+1))
Solutions generated using the shooting method, treating the initial values at x=0 as unknown parameters and then obtaining the solution trajectories on the interval [0,3]:
x = np.linspace(0, 3, 301)
for k, bc in enumerate([bc1, bc2, bc3]):
    def shoot(u0):
        u = odeint(dudx, u0, [0, 1, 2], tfirst=True)
        return bc(u[1], u[2])
    u0 = fsolve(shoot, [0, 0])
    u = odeint(dudx, u0, x, tfirst=True)
    l, = plt.plot(x, u[:, 0], label="%d." % (k+1))
    c = l.get_color()
    plt.plot(x[::100], u[::100, 0], 'x', c=c)

You can also use the scipy.integrate.ode class. It is similar to scipy.integrate.odeint but allows a jac parameter, the Jacobian of the right-hand side with respect to the state (df/dy, or df/dU in the notation of your code).
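For illustration, a minimal sketch of that interface on this ODE (the integrator choice, step size, and initial state are my own; the Jacobian with respect to the state [y, y'] is the constant matrix of the linear part):
from scipy.integrate import ode

def rhs(x, U):
    return [U[1], -2*U[1] - 2*U[0] + np.cos(2*x)]

def jac(x, U):
    # d(rhs)/dU, constant because the ODE is linear
    return [[0.0, 1.0], [-2.0, -2.0]]

r = ode(rhs, jac).set_integrator('vode', method='bdf', with_jacobian=True)
r.set_initial_value([2.0, 1.0], 1.0)  # e.g. y(1)=2, y'(1)=1
while r.successful() and r.t < 2.0:
    r.integrate(min(r.t + 0.1, 2.0))
print(r.t, r.y)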

Related

Solving coupled differential equations with sympy

I am trying to solve the following system of first order coupled differential equations:
- dR/dt=k2*Y(t)-k1*R(t)*L(t)+k4*X(t)-k3*R(t)*I(t)
- dL/dt=k2*Y(t)-k1*R(t)*L(t)
- dI/dt=k4*X(t)-k3*R(t)*I(t)
- dX/dt=k1*R(t)*L(t)-k2*Y(t)
- dY/dt=k3*R(t)*I(t)-k4*X(t)
The known initial conditions are: X(0)=0, Y(0)=0
The known constant values are: k1=6500, k2=0.9
These equations define a kinetic model, and I need to solve them to get the function Y(t) in order to fit my data and find the values of k3 and k4. To that end, I have tried to solve the system symbolically with sympy. Here is my code:
import matplotlib.pyplot as plt
import numpy as np
import sympy
from sympy.solvers.ode.systems import dsolve_system
from scipy.integrate import solve_ivp
from scipy.integrate import odeint
k1 = sympy.Symbol('k1', real=True, positive=True)
k2 = sympy.Symbol('k2', real=True, positive=True)
k3 = sympy.Symbol('k3', real=True, positive=True)
k4 = sympy.Symbol('k4', real=True, positive=True)
t = sympy.Symbol('t',real=True, positive=True)
L = sympy.Function('L')
R = sympy.Function('R')
I = sympy.Function('I')
Y = sympy.Function('Y')
X = sympy.Function('X')
f1=k2*Y(t)-k1*R(t)*L(t)+k4*X(t)-k3*R(t)*I(t)
f2=k2*Y(t)-k1*R(t)*L(t)
f3=k4*X(t)-k3*R(t)*I(t)
f4=-f2
f5=-f3
eq1=sympy.Eq(sympy.Derivative(R(t),t),f1)
eq2=sympy.Eq(sympy.Derivative(L(t),t),f2)
eq3=sympy.Eq(sympy.Derivative(I(t),t),f3)
eq4=sympy.Eq(sympy.Derivative(Y(t),t),f4)
eq5=sympy.Eq(sympy.Derivative(X(t),t),f5)
Sys = [eq1, eq2, eq3, eq4, eq5]
solsys=dsolve_system(eqs=Sys,funcs=[X(t),Y(t),R(t),L(t),I(t)], t=t, ics={Y(0):0, X(0):0})
This is the error:
NotImplementedError:
The system of ODEs passed cannot be solved by dsolve_system.
I have tried with dsolve too, but I get the same.
Is there any other solver I can use, or some way of doing this that will allow me to get the function for the fitting? I'm using Python 3.8 in Spyder with Anaconda on 64-bit Windows.
Thank you!
# Update
Following this comment by Lutz Lehmann:
"You are saying 'experiment'. So you have data and want to fit the model to them, find appropriate values for k3 and k4 at least, and perhaps for all coefficients and the initial conditions (the first measured data point might not be the initial condition for the best fit)? See stackoverflow.com/questions/71722061/… for a recent attempt on such a task."
here is my new code:
t=[0,0.25,0.5,0.75,1.5,2.27,3.05,3.82,4.6,5.37,6.15,6.92,7.7,8.47,13.42,18.42,23.42,28.42,33.42,38.42,43.42,48.42,53.42,58.42,63.42,68.42,83.4,98.4,113.4,128.4,143.4,158.4,173.4,188.4,203.4,218.4,233.4,248.4]
yexp=[832.49,1028.01,1098.12,1190.08,1188.97,1377.09,1407.47,1529.35,1431.72,1556.66,1634.59,1679.09,1692.05,1681.89,1621.88,1716.77,1717.91,1686.7,1753.5,1722.98,1630.14,1724.16,1670.45,1677.16,1614.98,1671.16,1654.03,1661.84,1675.31,1626.76,1638.29,1614.41,1594.31,1584.73,1599.22,1587.85,1567.74,1602.69]
def derivative(S, t, k3, k4):
    k1 = 1798931
    k2 = 0.2629
    x, y, r, l, i = S
    doty = k1*r*l + k2*y
    dotx = k3*r*i - k4*x
    dotr = k2*y - k1*r*l + k4*x - k3*r*i
    dotl = k2*y - k1*r*l
    doti = k4*x - k3*r*i
    return np.array([doty, dotx, dotr, dotl, doti])
def solver(XY, t, para):
    return odeint(derivative, XY, t, args=para, atol=1e-8, rtol=1e-11)

def integration(XY_arr, *para):
    XY0 = para[:5]
    para = para[5:]
    T = np.arange(len(XY_arr))
    res0 = solver(XY0, T, para)
    res1 = [solver(XY0, [t, t+1], para)[-1]
            for t, XY in enumerate(XY_arr[:-1])]
    return np.concatenate([res0, res1]).flatten()
from scipy.optimize import curve_fit

XData = yexp
YData = np.concatenate([yexp, yexp, yexp, yexp, yexp,
                        yexp[1:], yexp[1:], yexp[1:], yexp[1:], yexp[1:]]).flatten()
p0 = [0, 0, 100, 10, 10, 1e8, 0.01]
params, info = curve_fit(integration, XData, YData, p0=p0, maxfev=5000)
XY0, para = params[:5], params[5:]
print(XY0,tuple(para))
t_plot = np.linspace(0,len(t),500)
x_plot = solver(XY0, t_plot, tuple(para))
But the output is not correct, as it is just the same as the initial guess p0:
[ 0. 0. 100. 10. 10.] (100000000.0, 0.01)
I understand that the function 'integration' returns the packed values of y for each function at each instant of time, but I don't know how to unpack them to run curve_fit separately. Maybe I don't quite understand how it works.
Thank you!
As you observed, sympy is not able to solve this system. This might mean that
- the procedure to classify ODEs in sympy is not complete enough, or
- some trick or method beyond the standard set implemented in sympy is needed, or
- there simply is no symbolic solution.
The last case is the generic one: take a symbolically solvable ODE, add some random term, and almost certainly the resulting ODE is no longer symbolically solvable.
As I understand from the comments, you have a model given by an ODE system with state (cX, cY, cR, cL, cI) and 4 parameters k1, k2, k3, k4; by the structure of the reaction system R+I <-> X, R+L <-> Y, the sums cR+cX+cY, cL+cY and cI+cX are all constant.
For some process that is approximately represented by this model, you have time series data t[k], y[k] for the Y component. You also have partial information on the initial state and the parameter set. If there are sufficiently many data points, one could also forget about these, fit all parameters, and compare how far the given values are from the computed ones.
There are several modules and packages that solve this fitting task in a more or less abstract fashion. I think pyomo and gekko can both be used. More directly, one can use the facilities of scipy.odr or scipy.optimize.
Define the forward function that maps time values and parameters to the solution samples:
import numpy as np
from scipy.integrate import solve_ivp

def model(t, u, k1, k2, k3, k4):
    X, Y, R, L, I = u
    dL = k2*Y - k1*R*L
    dI = k4*X - k3*R*I
    dR = dL + dI
    dX = -dI
    dY = -dL
    return dX, dY, dR, dL, dI

def solver(t, u0, k):
    res = solve_ivp(model, [0, t[-1]], u0, args=tuple(k), t_eval=t,
                    method="DOP853", atol=1e-7, rtol=1e-10)
    return res.y
Prepare some data plus noise:
k1o = 6.500; k2o=0.9
T = np.linspace(0,0.05,21)
U = solver(T, [0,0,50,40,25], [k1o, k2o, 5.400, 0.7])
Y = U[1] # equilibrium slightly above 30
Y += np.random.uniform(high=0.05, size=Y.shape)
Prepare the function that splits the combined parameter vector into initial state and coefficients, and call the curve-fitting function:
from scipy.optimize import curve_fit

def partial(t, R, L, I, k3, k4):
    print(R, L, I, k3, k4)
    U = solver(t, [0, 0, R, L, I], [k1o, k2o, k3, k4])
    return U[1]

params, info = curve_fit(partial, T, Y, p0=[30, 20, 10, 0.3, 3.000])
R, L, I, k3, k4 = params
print(R, L, I, k3, k4)
It turns out that curve_fit wanders into strange regions with large negative values. A likely reason is that the Y component is, in the end, not coupled strongly enough to all the other components: large changes in some of the parameters have minimal influence on Y, so minimal noise in Y can lead to large deviations in those parameters. Here this apparently happens (first) to k3.
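One possible mitigation, my addition rather than part of the original answer, is to keep the optimizer in the physically meaningful region with curve_fit's bounds argument:
# all concentrations and rate constants must be nonnegative; with bounds set,
# curve_fit switches to a bounded trust-region least-squares method
params, info = curve_fit(partial, T, Y, p0=[30, 20, 10, 0.3, 3.000],
                         bounds=(0, np.inf))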

System of First Order ODEs in Python

I have seen how to solve systems of ODEs in Python, but all of the examples I have seen were of "standard" equations. By standard I mean that no equation has the form "derivative of one function = expression that contains the derivative of another function".
Here is a sample system I am trying to solve numerically. Initial conditions are x(0) = 5, y(0) = 3, z(0) = 2, and all initial derivatives are 0:
x'(t) + 4y(t) = -3y'(t)
y'(t) + ty(t) = -2z'(t)
z'(t) = -2y(t) + x'(t)
I am not 100% sure how to code this. Here is what I have tried:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
import math
def ODESystem(f, t):
    x = f[0]
    y = f[1]
    z = f[2]
Now, what do I define first: dydt, dxdt, or dzdt? Is there a way for me to define one expression that "hangs around" before I use it to define another expression?
You do not need to solve anything manually; you can just as well do
def ODESystem(f, t):
    x, y, z = f
    return np.linalg.solve([[1, 3, 0], [0, 1, 2], [-1, 0, 1]], [-4, -t, -2])*y
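A runnable sketch around that function (the driver code is mine): the matrix rows collect the x', y', z' coefficients of the three equations, and the right-hand side collects the remaining terms, which are all proportional to y.
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def ODESystem(f, t):
    x, y, z = f
    # solve the linear system  A @ [x', y', z'] = rhs  for the derivatives
    return np.linalg.solve([[1, 3, 0], [0, 1, 2], [-1, 0, 1]], [-4, -t, -2])*y

ts = np.linspace(0, 5, 200)
sol = odeint(ODESystem, [5, 3, 2], ts)  # x(0)=5, y(0)=3, z(0)=2
plt.plot(ts, sol)
plt.legend(["x", "y", "z"])
plt.show()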
Nevermind; I am stupid. I can keep on substituting into the third equation until I get an equation for z'(t) that does not include any other derivatives.
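For reference, carrying out that substitution (my own working, consistent with the matrix above): from the third equation, z'(t) = x'(t) - 2y(t); substituting into the second gives y'(t) = (4 - t)y(t) - 2x'(t); substituting x'(t) = -4y(t) - 3y'(t) from the first and solving yields
y'(t) = (t - 12)y(t)/5, x'(t) = (16 - 3t)y(t)/5, z'(t) = (6 - 3t)y(t)/5.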

How to fit experimental data in Python to inverse trigonometric function with limited definition area using scipy.curve_fit?

I am trying to fit some experimental data to a nonlinear function with one parameter that includes an arccosine, whose argument is therefore limited to the domain [-1, 1]. I use scipy's curve_fit to find the parameter of the function, but it returns the following error:
RuntimeError: Optimal parameters not found: Number of calls to function has reached maxfev = 400.
The function I want to fit is this one:
def fitfunc(x, a):
    y = np.rad2deg(np.arccos(x*np.cos(np.deg2rad(a))))
    return y
For the fitting, I provide numpy arrays for x and y, respectively, which contain values in degrees (which is why the function converts to and from radians).
param, param_cov = curve_fit(fitfunc, xs, ys)
When I use other fit functions, for example a polynomial, curve_fit returns some values; the error mentioned above only occurs when I use this function that includes an arccosine.
I suspect that it cannot fit the data points because, depending on the parameter of the arccosine, some data points do not lie inside its domain of definition. I have tried raising the number of iterations (maxfev), but without success.
Sample data:
ys = np.array([113.46125, 129.4225, 140.88125, 145.80375, 145.4425,
146.97125, 97.8025, 112.91125, 114.4325, 119.16125,
130.13875, 134.63125, 129.4375, 141.99, 139.86,
138.77875, 137.91875, 140.71375])
xs = np.array([2.786427013, 3.325624466, 3.473013087, 3.598247534, 4.304280248,
4.958273121, 2.679526725, 2.409388637, 2.606306639, 3.661558062,
4.569923009, 4.836843789, 3.377013596, 3.664550526, 4.335401233,
3.064199519, 3.97155254, 4.100567011])
As HS-nebula mentioned in the comments, you need to define an initial value a0 as a starting guess for the curve fit. Moreover, you need to be careful when choosing a0, as np.arccos() is only defined on [-1, 1], and choosing the wrong a0 results in an error.
import numpy as np
from scipy.optimize import curve_fit
ys = np.array([113.46125, 129.4225, 140.88125, 145.80375, 145.4425, 146.97125,
97.8025, 112.91125, 114.4325, 119.16125, 130.13875, 134.63125,
129.4375, 141.99, 139.86, 138.77875, 137.91875, 140.71375])
xs = np.array([2.786427013, 3.325624466, 3.473013087, 3.598247534, 4.304280248, 4.958273121,
2.679526725, 2.409388637, 2.606306639, 3.661558062, 4.569923009, 4.836843789,
3.377013596, 3.664550526, 4.335401233, 3.064199519, 3.97155254, 4.100567011])
def fit_func(x, a):
    a_in_rad = np.deg2rad(a)
    cos_a_in_rad = np.cos(a_in_rad)
    arccos_xa_product = np.arccos(x * cos_a_in_rad)
    return np.rad2deg(arccos_xa_product)
a0 = 80
param, param_cov = curve_fit(fit_func, xs, ys, a0, bounds = (0, 360))
print('Using curve we retrieve a value of a = ', param[0])
Output:
Using curve we retrieve a value of a = 100.05275506147824
However, if you choose a0 = 60, you get the following error:
ValueError: Residuals are not finite in the initial point.
To be able to use the data with all possible values of a, a normalization as HS-nebula suggested is a good idea.
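One possible reading of that normalization (my sketch, not code from the thread): rescale xs so that x*cos(a) stays inside [-1, 1] for every a, which makes the model well defined for any starting guess.
scale = xs.max()
xn = xs/scale   # xn is now in (0, 1], so |xn*cos(a)| <= 1 for every a

def fit_func_norm(x, a):
    return np.rad2deg(np.arccos(x*np.cos(np.deg2rad(a))))

param, param_cov = curve_fit(fit_func_norm, xn, ys, p0=60)
# the fitted a refers to the rescaled x; in terms of the original data,
# cos(a_original) = cos(a_fitted)/scale, valid while the right side is in [-1, 1]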

How to minimize a multivariable function using scipy

So I have the function
f(x) = I_0*(exp(Q*x/(n*K*T)) - 1)
where Q, K and T are constants; for the sake of clarity I'll add the values:
Q = 1.6x10^(-19)
K = 1.38x10^(-23)
T = 77.6
and n and I_0 are the two parameters whose values I'm trying to find by the fit.
My xdata is a list of 50 datapoints, and so is my ydata. This is my code so far:
from __future__ import division
import scipy.optimize as optimize
import numpy
xdata = numpy.array([1.07,1.07994,1.08752,1.09355,
1.09929,1.10536,1.10819,1.11321,
1.11692,1.12099,1.12435,1.12814,
1.13181,1.13594,1.1382,1.14147,
1.14443,1.14752,1.15023,1.15231,
1.15514,1.15763,1.15985,1.16291,1.16482])
ydata = [0.00205,
0.004136,0.006252,0.008252,0.010401,
0.012907,0.014162,0.016498,0.018328,
0.020426,0.022234,0.024363,0.026509,
0.029024,0.030457,0.032593,0.034576,
0.036725,0.038703,0.040223,0.042352,
0.044289,0.046043,0.048549,0.050146]
# xdata and ydata are experimental data: xdata is voltage, ydata is current
def f(x, I0, N):
    # I0 = 7.85E-07
    # N = 3.185413895
    Q = 1.66E-19
    K = 1.38065E-23
    T = 77.3692
    return I0*(numpy.e**((Q*x)/(N*K*T)) - 1)

result = optimize.curve_fit(f, xdata, ydata)  # trying to fit I0 and N
But the answer doesn't give suitably optimized parameters.
Any help would be hugely appreciated; I realize there may be something obvious I am missing, I just can't see what it is!
I have tried this, and for some reason, if you throw out those constants so that the function becomes
def f(x, I0, N):
    return I0*(numpy.exp(x/N) - 1)
you get something reasonable:
1.86901114e-13, 4.41838309e-02
It's true that it works better when we get rid of the constants. Define the function as:
def f(x, A, B):
    return A*(np.e**(B*x) - 1)
and fit it with curve_fit; you'll get A, which is exactly I0 (A = I0), and B (you can obtain N simply via N = Q/(B*K*T)). I managed to get a pretty good fit.
I think that when there are too many constants of very different magnitudes, the algorithm gets confused somehow.
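A runnable sketch of that recipe on the xdata and ydata arrays from the question (the starting values in p0 are my rough guesses, not from the answer):
import numpy as np
from scipy.optimize import curve_fit

def f(x, A, B):
    return A*(np.exp(B*x) - 1)   # rescaled model: A = I0, B = Q/(N*K*T)

params, cov = curve_fit(f, xdata, ydata, p0=[1e-12, 20])
A, B = params
Q, K, T = 1.66E-19, 1.38065E-23, 77.3692
N = Q/(B*K*T)                    # recover the ideality factor from B
print("I0 =", A, "N =", N)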

Method signature for Jacobian of a least squares function in scipy

Can anyone provide an example of supplying a Jacobian to a least squares function in scipy?
I can't figure out the method signature they want; they say it should be a function, yet it's very hard to figure out what input parameters, in what order, this function should accept.
Here's the exponential decay fitting that I got to work with this:
import numpy as np
from scipy.optimize import leastsq
def f(var, xs):
    return var[0]*np.exp(-var[1]*xs) + var[2]

def func(var, xs, ys):
    return f(var, xs) - ys

def dfunc(var, xs, ys):
    # the partial derivatives, one row per parameter (hence col_deriv=1 below)
    v = np.exp(-var[1]*xs)
    return [v, -var[0]*xs*v, np.ones(len(xs))]

xs = np.linspace(0, 4, 50)
ys = f([2.5, 1.3, 0.5], xs)
yn = ys + 0.2*np.random.normal(size=len(xs))

fit = leastsq(func, [10, 10, 10], args=(xs, yn), Dfun=dfunc, col_deriv=1)
If I wanted to use col_deriv=0, I think I would basically have to take the transpose of what I return from dfunc. You're quite right, though: the documentation on this isn't so great.
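Concretely, that variant would look like this (a sketch: with col_deriv=0, leastsq expects the Jacobian with one row per residual and one column per parameter):
def dfunc_rows(var, xs, ys):
    # shape (len(xs), 3): J[i, j] = d(residual_i)/d(var_j)
    v = np.exp(-var[1]*xs)
    return np.array([v, -var[0]*xs*v, np.ones(len(xs))]).T

fit = leastsq(func, [10, 10, 10], args=(xs, yn), Dfun=dfunc_rows, col_deriv=0)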
