Python complex coupled ODEs error

At the moment, I am trying to solve a system of coupled ODEs with complex terms using scipy.integrate.ode. I previously solved a coupled system with only real terms using odeint, but odeint is not suitable for the problem I am facing now. The system has 9 coupled ODEs, and this is my code:
from pylab import *
import numpy as np
from scipy.integrate import ode
#Here I am calculating a parameter H of the system
h=1
w1=0.1
w2=0.12
g=1
H=array([[h/2*(w1+w2),h/2*w1+h*g,h*w2/2,0], [h/2*w1+h*g,h/2*(w1-w2),0,-h*w2/2], [h*w2/2,0,-h/2*(w1-w2),-h/2*w1-h*g],[0,-h*w2/2,-h/2*w1-h*g,-h/2*(w1+w2)]])
#this is the system of ODEs
def resol(p1111, p1212, p2121, p1112, p1121, p1122, p1221, p1222, p2122, t, par):
    gamma, h, H = par
    i = 1j
    dp1111 = (H[0,1]*conjugate(p1112)+H[0,2]*conjugate(p1121)+H[0,3]*conjugate(p1122)-H[1,0]*p1112-H[2,0]*p1121-H[3,0]*p1122)*i+delta*(p1212-p1111)
    dp1112 = (H[0,0]*p1112+H[0,1]*p1212+H[0,2]*conjugate(p1221)-H[0,1]*p1111-H[1,1]*p1112-H[3,1]*p1122)*i+delta*(conjugate(p1112)-p1112)
    dp1121 = (H[0,0]*p1121+H[0,1]*p1221+H[0,2]*p2121-H[0,2]*p1111-H[2,2]*p1121-H[3,2]*p1122)*i+delta*(p1222-p1121)
    dp1122 = (H[0,0]*p1122+H[0,1]*p1222+H[0,2]*p2122-H[1,3]*p1112-H[2,3]*p1121-H[3,3]*p1122)*i+delta*(p1221-p1122)
    dp1212 = (H[1,0]*p1112+H[1,3]*conjugate(p1222)-H[0,1]*conjugate(p1112)-H[3,1]*p1222)*i+delta*(p1111-p1212)
    dp1221 = (H[1,0]*p1121+H[1,1]*p1221+H[1,3]*conjugate(p2122)-H[0,2]*conjugate(p1112)-H[2,2]*p1221-H[3,2]*p1222)*i+delta*(p1122-p1221)
    dp1222 = (H[1,0]*p1122+H[1,1]*p1222+H[1,3]*p2122-H[1,3]*p1212-H[2,3]*p1221-H[3,3]*p1222)*i+delta*(p1121-p1222)
    dp2121 = (H[2,0]*p1121+H[2,3]*conjugate(p2122)-H[0,2]*conjugate(p1121)-H[3,2]*p2122)*i+delta*(1-p1111-p1212-2*p2121)
    dp2122 = (H[2,0]*p1122+H[2,2]*p2122+H[2,3]*(1-p1111-p1212-p2121)-H[1,3]*conjugate(p1221)-H[2,3]*p2121-H[3,3]*p2122)*i+delta*(conjugate(p2122)-p2122)
    sol = [dp1111, dp1212, dp2121, dp1112, dp1121, dp1122, dp1221, dp1222, dp2122]
    return sol
#I set the initial value of each term; note that there are 9 because I have 9 terms
condin=np.array([1/3,0,2/3,0,sqrt(2)/3,0,0,0,0])
#I set the parameters
par=[g, h, H]
#I set the integrator; I don't use a Jacobian because it is tedious to calculate for my system
r=ode(resol).set_integrator('zvode', method='bdf',with_jacobian=False)
#Just defining the time and the steps
t_start = 0.0
t_final = 10.0
delta_t = 0.1
num_steps = np.floor((t_final - t_start)/delta_t) + 1
r.set_initial_value(condin, t_start) #I give initial values to the integrator
r.set_f_params(par) #the parameters as well
t = np.zeros((num_steps, 1)) #I define an (x,1) array to save the time at each step
resultados = np.zeros((num_steps, 9)) # the same to save the results, but with an (x,9) array
t[0] = t_start
resultados[0] = condin
k = 1
while r.successful() and k < num_steps:
    r.integrate(r.t + delta_t)
    t[k] = r.t #save the time and the results
    resultados[k] = r.y
    k += 1
As you may have noticed, I define my function as resol(p1111, p1212, p2121, p1112, p1121, p1122, p1221, p1222, p2122, t, par). When using odeint I wrote it differently:
def resol(var, t, par):
    p1111, p1212, p2121, p1112, p1121, p1122, p1221, p1222, p2122 = var
    gamma, h, H = par
But written that way, integrate.ode gives me this error:
p1111, p1212, p2121, p1112, p1121, p1122, p1221, p1222, p2122 = var
TypeError: 'float' object is not iterable
Anyway, I worked around it by defining the function the way I posted. The problem is that when I run the code, I always get this error, and I have no clue why:
create_cb_arglist: Failed to build argument list (siz) with enough arguments (tot-opt) required by user-supplied function (siz,tot,opt=3,11,0).
Traceback (most recent call last):
File "Untitled.py", line 48, in <module>
r.integrate(r.t + delta_t)
File "/Users/kilian/src/scipy/scipy/integrate/_ode.py", line 333, in integrate
self.f_params, self.jac_params)
File "/Users/kilian/src/scipy/scipy/integrate/_ode.py", line 760, in run
args[5:]))
vode.error: failed in processing argument list for call-back f.
I don't know if this error occurs because I am using the function conjugate() to calculate the complex conjugates of the terms inside my function, but I feel that this is not the cause.
If anyone can help me with this I would appreciate it. Thank you in advance

This should solve the issue:
def resol(t, y, par):
    p1111, p1212, p2121, p1112, p1121, p1122, p1221, p1222, p2122 = y
    gamma, h, H = par
You will run into further problems, such as delta being undefined inside resol (it needs renaming; presumably you meant the quantity you called delta_t) and a ComplexWarning: Casting complex values to real discards the imaginary part because resultados is a real array, but it runs.
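For reference, here is a minimal, runnable sketch of the whole corrected calling convention. The right-hand side and the Hamiltonian are deliberately replaced with placeholders (substitute the nine expressions from the question, and define delta); the points that matter are the f(t, y, par) signature, the integer num_steps, and the complex result array:
import numpy as np
from scipy.integrate import ode

def resol(t, y, par):                   # zvode calls f(t, y, *f_params)
    gamma, h, H = par
    return -1j * H[0, 0] * y            # placeholder dynamics, not the real system

h, g = 1.0, 1.0
H = np.eye(4)                           # placeholder Hamiltonian
par = [g, h, H]
condin = np.array([1/3, 0, 2/3, 0, np.sqrt(2)/3, 0, 0, 0, 0], dtype=complex)

t_start, t_final, delta_t = 0.0, 10.0, 0.1
num_steps = int(np.floor((t_final - t_start) / delta_t)) + 1   # an int, not a float

r = ode(resol).set_integrator('zvode', method='bdf', with_jacobian=False)
r.set_initial_value(condin, t_start)
r.set_f_params(par)                     # one extra argument, matching resol's par

t = np.zeros(num_steps)
resultados = np.zeros((num_steps, 9), dtype=complex)  # complex storage avoids the ComplexWarning
t[0], resultados[0] = t_start, condin

k = 1
while r.successful() and k < num_steps:
    r.integrate(r.t + delta_t)
    t[k], resultados[k] = r.t, r.y
    k += 1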

Related

How to include known parameter that changes over time in solve_bvp

I am trying to use scipy's solve_bvp in Python to solve differential equations that depend on a known parameter that changes over time. I have this parameter saved in a numpy array. However, when I try to use this array in the derivatives function, I get the following error: ValueError: operands could not be broadcast together with shapes (10,) (11,).
Below is a simplified version of my code. I want the variable d2 to take certain values at different times according to an array, d2_set_values. The differential equations for some of the 12 variables then depend on d2. I hope it's clear from this code what I'm trying to achieve.
import numpy as np
from scipy.integrate import solve_bvp
t = np.linspace(0, 10, 11)
# Known parameter that changes over time
d2_set_values = np.zeros(t.size)
d2_set_values[:4] = 0.1
d2_set_values[4:8] = 0.2
d2_set_values[8:] = 0.1
# Initialise y vector
y = np.zeros((12, t.size))
# ODEs
def fun(x, y):
    S1, I1, R1, S2, I2, R2, lamS1, lamI1, lamR1, lamS2, lamI2, lamR2 = y
    d1 = 0.5*(I1 + 0.1*I2)*(lamS1 - lamI1)
    d2 = d2_set_values
    dS1dt = -0.5*S1*(1-d1)*(I1 + 0.1*I2)
    dS2dt = -0.5*S2*(1-d2)*(I2 + 0.1*I1)
    dI1dt = 0.5*S1*(1-d1)*(I1 + 0.1*I2) - 0.2*I1
    dI2dt = 0.5*S2*(1-d2)*(I2 + 0.1*I1) - 0.2*I2
    dR1dt = 0.2*I1
    dR2dt = 0.2*I2
    dlamS1dt = 0.5*(1-d1)*S1*lamS1
    dlamS2dt = 0.5*(1-d2)*S2*lamS2
    dlamI1dt = 0.5*(1-d1)*I1*lamI1
    dlamI2dt = 0.5*(1-d2)*I2*lamI2
    dlamR1dt = lamR1
    dlamR2dt = lamR2
    return np.vstack((dS1dt, dI1dt, dR1dt, dS2dt, dI2dt, dR2dt, dlamS1dt, dlamI1dt, dlamR1dt, dlamS2dt, dlamI2dt, dlamR2dt))
# Boundary conditions
def bc(ya, yb):
    return np.array([ya[0]-0.99, ya[1]-0.01, ya[2]-0., ya[3]-1.0, ya[4]-0., ya[5]-0.,
                     yb[6]-0., yb[7]-1., yb[8]-0., yb[9]-0, yb[10]-0, yb[11]-0])
# Run the solver
sol = solve_bvp(fun, bc, t, y)
I have even tried reducing the size of d2_set_values by one, but that doesn't solve the issue.
Any help I can get would be much appreciated!
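One likely cause, not addressed in the original post: solve_bvp refines its mesh during the solve, so fun is called with an x whose length differs from t.size, and indexing the fixed-size d2_set_values then breaks broadcasting. A minimal sketch of a possible fix, assuming linear interpolation of the known parameter onto whatever mesh the solver passes in is acceptable (t and d2_set_values as defined in the question):
import numpy as np

def fun(x, y):
    S1, I1, R1, S2, I2, R2, lamS1, lamI1, lamR1, lamS2, lamI2, lamR2 = y
    d1 = 0.5*(I1 + 0.1*I2)*(lamS1 - lamI1)
    d2 = np.interp(x, t, d2_set_values)   # shape now follows x, not the fixed grid
    dS1dt = -0.5*S1*(1-d1)*(I1 + 0.1*I2)
    dS2dt = -0.5*S2*(1-d2)*(I2 + 0.1*I1)
    dI1dt = 0.5*S1*(1-d1)*(I1 + 0.1*I2) - 0.2*I1
    dI2dt = 0.5*S2*(1-d2)*(I2 + 0.1*I1) - 0.2*I2
    dR1dt = 0.2*I1
    dR2dt = 0.2*I2
    dlamS1dt = 0.5*(1-d1)*S1*lamS1
    dlamS2dt = 0.5*(1-d2)*S2*lamS2
    dlamI1dt = 0.5*(1-d1)*I1*lamI1
    dlamI2dt = 0.5*(1-d2)*I2*lamI2
    dlamR1dt = lamR1
    dlamR2dt = lamR2
    return np.vstack((dS1dt, dI1dt, dR1dt, dS2dt, dI2dt, dR2dt,
                      dlamS1dt, dlamI1dt, dlamR1dt, dlamS2dt, dlamI2dt, dlamR2dt))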

Pyomo.dae - Solving a system of DAEs with Casadi solver

I am trying to solve a system of DAEs using Pyomo.
This is a toy example:
from pyomo.environ import *
from pyomo.dae import *
m = ConcreteModel()
m.r = ContinuousSet(bounds = (0., 1.))
m.t = ContinuousSet(bounds = (0., 5.))
m.c = Var(m.r, m.t)
m.dcdt = DerivativeVar(m.c, wrt = m.t)
discretizer = TransformationFactory('dae.finite_difference')
discretizer.apply_to(m, nfe=20, wrt = m.r, scheme = 'BACKWARD')
# setting initial conditions
m.c[:, 0].fix(5)
def _dae_rule(m, r, t):
    return 0 == - m.c[r, t] - m.dcdt[r, t] # note that rewriting to ODE is not desired
m.ode = Constraint(m.r, m.t, rule = _dae_rule)
sim = Simulator(m, package = "casadi")
tsim, profiles = sim.simulate(numpoints=100, integrator="idas")
Unfortunately, execution leads to the error message
DAE_Error: Currently the simulator may only be applied to Pyomo models with a single ContinuousSet
How so? After discretizing m.r, isn't m.t the only ContinuousSet left?
Manually deleting the ContinuousSet and using a DiscreteSet from the start instead yields the error message
DAE_Error: Cannot simulate a differential equation with multiple DerivativeVars
I don't understand. Doesn't every equation depend only on its own derivative?
Also, if I were to discretize m.t as well, could I then use an alternative solver that might work?
Thank you very much :)
According to the documentation on Simulator, it only supports models with a single ContinuousSet, and your model has two: m.r and m.t (applying a discretization transformation does not remove the set from the model). Maybe you can define the system of DAEs as a function of t at discrete values of r, or vice versa.
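If you do discretize m.t as well (the route the question asks about at the end), the model becomes purely algebraic and the Simulator is no longer needed; a standard NLP solver can handle it. A minimal sketch, assuming ipopt is installed and a dummy objective is acceptable:
from pyomo.environ import *
from pyomo.dae import *

m = ConcreteModel()
m.r = ContinuousSet(bounds=(0., 1.))
m.t = ContinuousSet(bounds=(0., 5.))
m.c = Var(m.r, m.t)
m.dcdt = DerivativeVar(m.c, wrt=m.t)

def _dae_rule(m, r, t):
    return 0 == - m.c[r, t] - m.dcdt[r, t]
m.ode = Constraint(m.r, m.t, rule=_dae_rule)

# discretize BOTH sets, turning the DAE into algebraic constraints
discretizer = TransformationFactory('dae.finite_difference')
discretizer.apply_to(m, nfe=20, wrt=m.r, scheme='BACKWARD')
discretizer.apply_to(m, nfe=100, wrt=m.t, scheme='BACKWARD')

m.c[:, 0].fix(5)              # fix after discretizing, so every r-point at t=0 is fixed
m.obj = Objective(expr=0)     # dummy objective; we only want a feasible point

SolverFactory('ipopt').solve(m, tee=True)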

optimization with python cvxopt

I am trying to minimize a portfolio's variance using Python's cvxopt. However, after much trying, it doesn't seem to work. The problem, my code, and the error are pasted below. Thanks for helping!
The minimization problem:
objective: min x.dot(sigma_mv).dot(x.T)
constraints: x >= 0 (elementwise), sum(x) = 1
where sigma_mv is the 800x800 covariance matrix (dim = 800).
code
dim = sigma_mv.shape[0]
P = 2*sigma_mv
q = np.matrix([0.0])
G = -1*np.identity(dim)
h = np.matrix(np.zeros((dim,1)))
sol = solvers.qp(P,q,G,h)
Traceback (most recent call last):
File "<ipython-input-47-a077fa141ad2>", line 6, in <module>
sol = solvers.qp(P,q)
File "D:\spyder\lib\site-packages\cvxopt\coneprog.py", line 4470, in qp
return coneqp(P, q, G, h, None, A, b, initvals, kktsolver = kktsolver, options = options)
File "D:\spyder\lib\site-packages\cvxopt\coneprog.py", line 1822, in coneqp
raise ValueError("use of function valued P, G, A requires a "\
ValueError: use of function valued P, G, A requires a user-provided kktsolver
You have both equality and inequality constraints, so you need to provide all the arguments to the built-in qp solver:
Gx <= h
Ax = b
Here x >= 0 can be written as -x <= 0, so G is -1 * (identity matrix) and h a zero vector.
Similarly, sum(x) = 1 means A is a single row of ones (a 1 x dim matrix, so that Ax = sum(x)) and b the scalar 1.
Finally, the solve expression should look like:
sol = solvers.qp(P, q, G, h, A, b)
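Putting that together, a minimal sketch, assuming sigma_mv is the dense 800x800 numpy covariance array from the question (cvxopt wants its own matrix type, and q must be a length-dim vector, not a scalar):
import numpy as np
from cvxopt import matrix, solvers

dim = sigma_mv.shape[0]
P = matrix(2.0 * sigma_mv)            # quadratic term (dim x dim)
q = matrix(np.zeros(dim))             # linear term: a length-dim vector, not a scalar
G = matrix(-np.identity(dim))         # -x <= 0 encodes x >= 0
h = matrix(np.zeros(dim))
A = matrix(np.ones((1, dim)))         # one row of ones: A x = sum(x) ...
b = matrix(1.0)                       # ... = 1

sol = solvers.qp(P, q, G, h, A, b)
x = np.array(sol['x']).ravel()        # optimal portfolio weights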

odeint for a differential system

I have a problem with odeint. I have to solve a first-order differential system and then a second-order system, but I am a little confused with the first-order one. Can you explain what I have marked as wrong? Thank you :)
import scipy.integrate as integrate
import numpy as np
def fun(t,y):
    ys = np.array([y[1], (1-y[0]**2)*y[1]-y[0]])
    return(ys)
N = 3
x0 = np.array([2.00861986087484313650940188,0])
t0tf = [0, 17.0652165601579625588917206249]
T=([0 for i in range (N+1)])
T[0]= t0tf[0]
Pas = (t0tf[1]-t0tf[0])/N
for i in range (1,N+1):
    T[i]= t0tf[0] + i*Pas
X = integrate.odeint(fun, x0,T,Dfun=None, col_deriv=0,full_output=True)
T = np.array(T)
T = T.reshape(N+1,1)
S = np.append(X,T,axis=1)
print(S)
The returned error is:
ys = np.array([y[1], (1-y[0]**2)*y[1]-y[0]])
TypeError: 'float' object is not subscriptable
You need to reverse the order of the arguments to your derivative function - it should be f(y, t), not f(t, y). This is the opposite order to that used by the scipy.integrate.ode class.
Also, the concatenation S = np.append(X,T,axis=1) will fail because, with full_output=True, X is a tuple containing the solution array and an info dict. Use S = np.append(X[0],T,axis=1) instead.
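With both fixes applied, the snippet becomes:
import scipy.integrate as integrate
import numpy as np

def fun(y, t):                         # odeint expects f(y, t), not f(t, y)
    return np.array([y[1], (1 - y[0]**2)*y[1] - y[0]])

N = 3
x0 = np.array([2.00861986087484313650940188, 0])
t0tf = [0, 17.0652165601579625588917206249]
Pas = (t0tf[1] - t0tf[0])/N
T = [t0tf[0] + i*Pas for i in range(N + 1)]

X = integrate.odeint(fun, x0, T, full_output=True)
T = np.array(T).reshape(N + 1, 1)
S = np.append(X[0], T, axis=1)         # X[0] is the solution array; X[1] is the info dict
print(S)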

fitting an ODE with python leastsq gives a cast error when the initial condition is passed as a parameter

I have a set of data that I am trying to fit to an ODE model using scipy's leastsq function. My ODE has parameters beta and gamma, so that it looks for example like this:
# dS/dt = -beta*S*I
# dI/dt = beta*S*I - gamma*I
# dR/dt = gamma*I
# with y0 = y(t=0) = (S(0), I(0), R(0))
The idea is to find beta and gamma so that the numerical integration of my system of ODE's best approximates the data. I am able to do this just fine using leastsq if I know all the points in my initial condition y0.
Now I am trying to do the same thing, but passing one of the entries of y0 as an extra parameter. Here is where Python and I stop communicating...
I wrote the function so that the first entry of the parameters I pass to leastsq is the initial condition of my variable R.
I get the following message:
Traceback (most recent call last):
File "/Users/Laura/Dropbox/SHIV/shivmodels/test.py", line 73, in <module>
p1,success = optimize.leastsq(errfunc, initguess, args=(simpleSIR,[y0[0]],[Tx],[mydata]))
File "/Library/Frameworks/Python.framework/Versions/7.2/lib/python2.7/site-packages/scipy/optimize/minpack.py", line 283, in leastsq
gtol, maxfev, epsfcn, factor, diag)
TypeError: array cannot be safely cast to required type
Here is my code. It is a little more involved than it needs to be for this example because in reality I want to fit another ODE with 7 parameters, and to fit several data sets at once, but I wanted to post something simpler... Any help will be very much appreciated! Thank you very much!
import numpy as np
from matplotlib import pyplot as plt
from scipy import optimize
from scipy.integrate import odeint
#define the time span for the ODE integration:
Tx = np.arange(0,50,1)
num_points = len(Tx)
#define a simple ODE to fit:
def simpleSIR(y,t,params):
    dydt0 = -params[0]*y[0]*y[1]
    dydt1 = params[0]*y[0]*y[1] - params[1]*y[1]
    dydt2 = params[1]*y[1]
    dydt = [dydt0,dydt1,dydt2]
    return dydt
#generate noisy data:
y0 = [1000.,1.,0.]
beta = 12*0.06/1000.0
gamma = 0.25
myparam = [beta,gamma]
sir = odeint(simpleSIR, y0, Tx, (myparam,))
mydata0 = sir[:,0] + 0.05*(-1)**(np.random.randint(num_points,size=num_points))*sir[:,0]
mydata1 = sir[:,1] + 0.05*(-1)**(np.random.randint(num_points,size=num_points))*sir[:,1]
mydata2 = sir[:,2] + 0.05*(-1)**(np.random.randint(num_points,size=num_points))*sir[:,2]
mydata = np.array([mydata0,mydata1,mydata2]).transpose()
#define a function that will run the ode and fit it, the reason I am doing this
#is because I will use several ODE's to see which one fits the data the best.
def fitfunc(myfun,y0,Tx,params):
    myfit = odeint(myfun, y0, Tx, args=(params,))
    return myfit
#define a function that will measure the error between the fit and the real data:
def errfunc(params,myfun,y0,Tx,y):
    """
    INPUTS:
    params are the parameters for the ODE
    myfun is the function to be integrated by odeint
    y0 vector of initial conditions, so that y(t0) = y0
    Tx is the vector over which integration occurs; since I have several data sets and each
    one has its own vector of time points, Tx is a list of arrays.
    y is the data; it is a list of arrays since I want to fit to multiple data sets at once
    """
    res = []
    for i in range(len(y)):
        V0 = params[0][i]
        myparams = params[1:]
        initCond = np.zeros([3,])
        initCond[:2] = y0[i]
        initCond[2] = V0
        myfit = fitfunc(myfun,initCond,Tx[i],myparams)
        res.append(myfit[:,0] - y[i][:,0])
        res.append(myfit[:,1] - y[i][:,1])
        res.append(myfit[1:,2] - y[i][1:,2])
    #end for
    all_residuals = np.hstack(res).ravel()
    return all_residuals
#end errfunc
#example of the problem:
V0 = [0]
params = [V0,beta,gamma]
y0 = [1000,1]
#this is just to test that my errfunc does work well.
errfunc(params,simpleSIR,[y0],[Tx],[mydata])
initguess = [V0,0.5,0.5]
p1,success = optimize.leastsq(errfunc, initguess, args=(simpleSIR,[y0[0]],[Tx],[mydata]))
The problem is with the variable initguess. The function optimize.leastsq has the following call signature:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.leastsq.html
Its second argument, x0, has to be an array. Your list
initguess = [V0,0.5,0.5]
won't be converted to an array because V0 is a list rather than an int or float, so you get an error when leastsq tries to convert initguess from a list to an array.
I would adjust the variable params in
def errfunc(params,myfun,y0,Tx,y):
so that it is a 1-D array: make the first few entries the values of V0, then append beta and gamma to that, as sketched below.
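A sketch of that adjustment, reusing fitfunc, simpleSIR, Tx, mydata and y0 = [1000., 1.] from the question; the slicing below assumes one fitted initial condition per data set:
import numpy as np
from scipy import optimize

def errfunc(params, myfun, y0, Tx, y):
    n = len(y)                          # number of data sets
    V0s = params[:n]                    # first n entries: fitted initial conditions
    myparams = params[n:]               # remaining entries: beta, gamma, ...
    res = []
    for i in range(n):
        initCond = np.zeros(3)
        initCond[:2] = y0[i]
        initCond[2] = V0s[i]
        myfit = fitfunc(myfun, initCond, Tx[i], myparams)
        res.append(myfit[:, 0] - y[i][:, 0])
        res.append(myfit[:, 1] - y[i][:, 1])
        res.append(myfit[1:, 2] - y[i][1:, 2])
    return np.hstack(res).ravel()

initguess = np.array([0.0, 0.5, 0.5])   # [V0, beta, gamma] as one flat 1-D array
p1, success = optimize.leastsq(errfunc, initguess,
                               args=(simpleSIR, [y0], [Tx], [mydata]))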
