Optimal control with time-varying parameters and fixed control in Gekko - python

Consider an optimal control problem as follows:
min int(0,1) [x(t)*u(t) + u(t)**2] dt
dx/dt = x(t) + u(t), x(0) = 1
0 <= u(t) <= t**2 + (1-t)**3 for 0 <= t <= 1
My first question is how to define the time-varying upper bound on the control in Gekko. Also, suppose we want to compare this problem with the case where the control is constant over the planning horizon, i.e., u(0) = ... = u(t) = ... = u(1). How can we define that?
In another case, how is it possible to have a control that is fixed but unknown on different sub-intervals? For example, on [0,t1] the control should be constant, and on [t1,t2] it should be constant but possibly different from its value on [0,t1] (e.g. t1=0.5, t2=1, Tf=t2=1).
Finally, is it possible to study a case where t1, t2, ... are also decision variables to be determined?

Here is code that gives a solution to the problem:
# min int(0,1) x(t)*u(t)+u(t)**2 dt
# x.dt() = x(t)+u(t), x(0)=1
# 0 <= u(t) <= t**2+(1-t)**3 for 0<=t<=1
import numpy as np
from gekko import GEKKO
m = GEKKO(remote=False)
m.time = np.linspace(0,1,101)
t = m.Var(0); m.Equation(t.dt()==1)      # time as a variable
ub = m.Intermediate(t**2+(1-t)**3)       # time-varying upper bound
u = m.MV(0,lb=0,ub=1); m.Equation(u<=ub) # enforce u(t) <= ub(t)
u.STATUS=1; u.DCOST=0; m.free_initial(u) # u adjustable, no move cost, free u(0)
x = m.Var(0); m.Equation(x.dt()==x+u)    # note: x(0)=0 here (the question has x(0)=1)
p = np.zeros(101); p[-1]=1; final=m.Param(p)  # 1 only at the final node
m.Minimize(m.integral(x*u+u**2)*final)   # integral evaluated at t=1
m.options.IMODE=6; m.options.NODES=3     # dynamic optimization, 3 collocation nodes
m.solve()
print(m.options.OBJFCNVAL)
import matplotlib.pyplot as plt
plt.plot(m.time,x.value,'b--',label='x')
plt.plot(m.time,u.value,'k-',label='u')
plt.plot(m.time,ub.value,'r--',label='ub')
plt.legend()
plt.show()
The solution isn't very interesting: with the x(0)=0 used in this script, the optimum is u(t)=0 and x(t)=0. If you add a final condition such as x(1)=0.75 then the solution is more interesting:
m.Equation(final*(x-0.75)==0)
If you want the control to take a single value over the whole interval, then I recommend the u=m.FV() type. The u=m.MV() type is adjustable by the optimizer at every interval when you set u.STATUS=1. You can also reduce the degrees of freedom with m.options.MV_STEP_HOR=5 as a global option in Gekko for all MVs, or u.MV_STEP_HOR=5 to adjust it for just that MV; a short sketch of both follows. There is more information available on the different Gekko variable types.
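A minimal sketch of both options, assuming the same model as above (only the lines that change are shown):
# constant control over the whole horizon: an FV is one value for all time points
u = m.FV(0,lb=0,ub=1)
u.STATUS = 1          # the optimizer chooses the single value
m.Equation(u<=ub)     # the time-varying bound still applies at every point

# piecewise-constant control: with 101 time points, MV_STEP_HOR=50 lets u
# change only every 50 intervals, i.e. one level on [0,0.5] and one on [0.5,1]
u = m.MV(0,lb=0,ub=1)
u.STATUS = 1
u.MV_STEP_HOR = 50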
You can set the final time by using m.time = np.linspace(0,1,N) and then scaling it with a final-time variable tf; the derivatives in your problem need to be divided by tf as well, as sketched below. Related examples are the rocket launch problem and the Jennings optimal control problem, which minimize the final time. You can also set up multiple time intervals and then connect them with m.Connection().
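A minimal sketch of the scaled-time idea (the endpoint x(tf)=3 and the bounds on tf are illustrative assumptions, not part of the original question):
import numpy as np
from gekko import GEKKO
m = GEKKO(remote=False)
m.time = np.linspace(0,1,101)    # scaled time tau in [0,1]
tf = m.FV(1,lb=0.1,ub=10)        # final time
tf.STATUS = 1                    # let the optimizer choose tf
x = m.Var(1)
u = m.MV(0,lb=0,ub=1); u.STATUS=1
m.Equation(x.dt()/tf==x+u)       # derivative divided by tf
p = np.zeros(101); p[-1]=1
final = m.Param(p)
m.Equation(final*(x-3)==0)       # hypothetical endpoint x(tf)=3
m.Minimize(tf)                   # e.g. minimize the final time
m.options.IMODE = 6
m.solve()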

Related

Function diverging at boundaries: Schrödinger 2D, explicit method

I'm trying to simulate the 2D Schrödinger equation using the explicit algorithm proposed by Askar and Cakmak (1977). I define a 100x100 grid with a complex function u+iv, null at the boundaries. The problem is that after just a few iterations the absolute value of the complex function explodes near the boundaries.
I post the code here so you can check it if interested:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
#Initialization+meshgrid
Ntsteps=30
dx=0.1
dt=0.005
alpha=dt/(2*dx**2)
x=np.arange(0,10,dx)
y=np.arange(0,10,dx)
X,Y=np.meshgrid(x,y)
#Initial Gaussian wavepacket centered in (5,5)
vargaussx=1.
vargaussy=1.
kx=10
ky=10
upre=np.zeros((100,100))
ucopy=np.zeros((100,100))
u=(np.exp(-(X-5)**2/(2*vargaussx**2)-(Y-5)**2/(2*vargaussy**2))/(2*np.pi*(vargaussx*vargaussy)**2))*np.cos(kx*X+ky*Y)
vpre=np.zeros((100,100))
vcopy=np.zeros((100,100))
v=(np.exp(-(X-5)**2/(2*vargaussx**2)-(Y-5)**2/(2*vargaussy**2))/(2*np.pi*(vargaussx*vargaussy)**2))*np.sin(kx*X+ky*Y)
#For the simple scenario, null potential
V=np.zeros((100,100))
#Boundary conditions
u[0,:]=0
u[:,0]=0
u[99,:]=0
u[:,99]=0
v[0,:]=0
v[:,0]=0
v[99,:]=0
v[:,99]=0
#Evolution with Askar-Cakmak algorithm
for n in range(1,Ntsteps):
    upre=np.copy(ucopy)
    vpre=np.copy(vcopy)
    ucopy=np.copy(u)
    vcopy=np.copy(v)
    #For the first iteration, simple Euler method: without this I cannot have
    #the two-steps-backwards wavefunction at the second iteration
    #I use ucopy to make sure that e.g. u[i,j] is calculated without using the
    #already modified u[i-1,j] and u[i,j-1]
    if(n==1):
        upre=np.copy(ucopy)
        vpre=np.copy(vcopy)
    for i in range(1,len(x)-1):
        for j in range(1,len(y)-1):
            u[i,j]=upre[i,j]+2*((4*alpha+V[i,j]*dt)*vcopy[i,j]-alpha*(vcopy[i+1,j]+vcopy[i-1,j]+vcopy[i,j+1]+vcopy[i,j-1]))
            v[i,j]=vpre[i,j]-2*((4*alpha+V[i,j]*dt)*ucopy[i,j]-alpha*(ucopy[i+1,j]+ucopy[i-1,j]+ucopy[i,j+1]+ucopy[i,j-1]))
#Calculate absolute value and plot
abspsi=np.sqrt(np.square(u)+np.square(v))
fig=plt.figure()
ax=fig.add_subplot(projection='3d')
surf=ax.plot_surface(X,Y,abspsi)
plt.show()
As you can see the code is extremely simple, and I cannot see where this error is coming from (I don't think it is a stability problem, because alpha < 1/2). Have you ever encountered anything similar in your past simulations?
I'd try setting your dt to a smaller value (e.g. 0.001) and increasing the number of integration steps (e.g. fivefold).
The wavefunction keeps its shape at Ntsteps=150 and well beyond when trying out your code with dt=0.001.
Checking integrals of the motion (e.g. the norm of the wavefunction, or the kinetic energy here) should also confirm that things are going OK (or not) for different choices of dt; a minimal check is sketched below.
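A minimal sketch of such a check, using the u, v, and dx already defined in the code above (for the Schrödinger equation the norm should stay approximately constant during the evolution):
#norm of the wavefunction: discrete integral of |psi|^2 over the grid
norm = np.sum(u**2 + v**2)*dx*dx
print(norm)   # compare across iterations; steady growth signals an instability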

Set negative value to zero in equation in Gekko?

How do I set all the negative values of a variable to zero in an equation? I've tried using m.max2(0,W) so that when W is negative I get zero and when it's positive I get its value, but since W is defined by W.dt()==s*p the value seems to be trailing. I can't set the lower bound to zero because I need the negative value elsewhere.
The max2 function is a Mathematical Program with Complementarity Constraints (MPCC) and sometimes has a hard time converging at W=0 because that point is a saddle point for the optimization. Another option is the max3 function, but this requires a mixed-integer solver that can take more time to compute a solution. A third option is to use a function such as w*(0.5*tanh(b*w)+0.5) to get a continuously differentiable approximation to the max function. You can set b to a higher value, but that makes the problem harder to solve.
Another option is to successively solve the problem with higher values of b, like a barrier function in an interior point method; a sketch of this continuation idea follows the example script below.
Here is an example script that has all three functions:
import numpy as np
from gekko import GEKKO
m = GEKKO()
w = m.Param(np.linspace(-10,10,101))
x = m.Intermediate(w * 0.5*(m.tanh(10*w)+1))
y = m.max2(w,0)
z = m.max3(w,0)
m.options.IMODE=2
m.solve()
import matplotlib.pyplot as plt
plt.plot(w.value,x.value,'ko',label='x=0.5 w (tanh(10w)+1)')
plt.plot(w.value,y.value,'b-',label='y=max2(w,0)')
plt.plot(w.value,z.value,'r--',label='z=max3(w,0)')
plt.legend()
plt.show()
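A minimal sketch of the continuation idea mentioned above (an illustration under the assumption that b is kept as a parameter and the same model is re-solved with progressively sharper approximations, each solve warm-started from the previous one):
import numpy as np
from gekko import GEKKO
m = GEKKO(remote=False)
w = m.Param(np.linspace(-10,10,101))
b = m.Param(1)      # sharpness of the tanh approximation
x = m.Intermediate(w * 0.5*(m.tanh(b*w)+1))
m.options.IMODE = 2
for b_val in [1,5,10,50]:
    b.value = b_val           # sharpen the approximation each pass
    m.solve(disp=False)       # warm-started from the prior solution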

Sympy: Step-by-step calculation of an ODE system using indexed objects

I want to implement a Hammerstein model in sympy. I have created a small example for a simple system:
import numpy as np
from sympy import *
####HAMMERSTEIN MODEL####
#time
t = symbols("t")
#inputs
u = symbols('u')
#states
y = symbols('y', cls=Function)
#init states
y_init =symbols('y_init')
#parameters
gain = 2 #symbols('gain')
time_constant = 20000#symbols('time_constant')
#EQUATIONS
#NONLINEAR STATIC PART
u_nonlinear = u**2 # nonlinear input
#DYNAMIC PART
# first order system with inputs
rhe = (gain * u_nonlinear - y(t)) * 1/time_constant
ode = Eq(diff(y(t),t),rhe)
#solve equation
sol_step = dsolve(ode, ics = {y(0): y_init})
sol_step = sol_step.rhs
#lambdify (sympy)
system_step =lambdify((t,u, y_init),sol_step, 'sympy')
#####SIMULATE STEPWISE######
nr_steps = 10
dt=1
u_data =IndexedBase('u_data')
y_init_data =symbols('y_init_data')
#solution vector
sol =[]
for i in range(nr_steps):
    if i == 0:
        #first simulation step
        sol.append(system_step(dt,u_data[i],y_init_data))
    else:
        #uses the state of the previous solution as the initial value
        sol.append(system_step(dt,u_data[i],sol[i-1]))
#convert
system=lambdify((u_data,y_init_data),sol, 'numpy')
#EXAMPLE
t_obs = np.linspace(0,10,10)
u_obs = np.ones(10)* 40
x_obs_init =20
#RESULT
print(system(u_obs,x_obs_init))
As you can see from the example, I solve the problem step by step, repeatedly calling the SymPy function object "system_step".
The performance is not particularly good for larger systems.
I would also like to use the simulation in a scipy optimizer, where it is called many times, which increases the solution time dramatically.
My problem:
1.) Can this step-by-step calculation also be implemented using sympy (e.g. indexed objects)? Can the repeated calculation in the loop be avoided?
2.) If so, how can this be done if the length of the input vector (u) should remain flexible and not be fixed by a hard-coded index (see nr_steps)?
Thank you very much!
Thank you for the info. If I calculate the ODE system with constant input values, I do not need to calculate it step by step, and then the solution process is very quick. Therefore, my idea was to set up the system using vectors or indexed objects, which could avoid the step-by-step calculation.
My goal:
set up the system with variable input values
solve the system symbolically, even if it takes a very long time
lambdify it and store it in a binary file
use the solved system for different operations
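One possible direction (a sketch, assuming the same model as above: the one-step solution is lambdified once with numpy, and the recursion is then iterated numerically instead of nesting symbolic expressions, so the input length stays flexible):
import numpy as np
from sympy import symbols, Function, Eq, diff, dsolve, lambdify
t, u, y_init = symbols('t u y_init')
y = symbols('y', cls=Function)
gain, time_constant = 2, 20000
ode = Eq(diff(y(t),t), (gain*u**2 - y(t))/time_constant)
step = lambdify((t,u,y_init), dsolve(ode, ics={y(0): y_init}).rhs, 'numpy')

def simulate(u_obs, y0, dt=1):
    #iterate the closed-form one-step map numerically (any input length)
    ys, y_cur = [], y0
    for u_k in u_obs:
        y_cur = step(dt, u_k, y_cur)
        ys.append(y_cur)
    return np.array(ys)

print(simulate(np.ones(10)*40, 20))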

python solving differential equations

I am attempting to solve four coupled differential equations. After googling and researching I finally understood how the solver works, but I can't get this specific problem to run correctly. The code runs, but the graphs are incorrect.
I think the problem lies in the volume expression inside the function, which changes depending on how much time has passed. The volume at a specific time is then used to evaluate the right-hand side of the differential equations.
The intervals and the start and end points of the time vector are correct. The constants are also correct.
import numpy as np
import scipy.integrate as integrate
import matplotlib.pyplot as plt
#defining all constants and initial conditions
k=2.2
CB0_inlet=.025
V_flow_inlet=.05
V_reactor_initial=5
CA0_reactor=.05
FB0=CB0_inlet*V_flow_inlet
def dcdt(C,t):
    #expression of how volume in reactor varies with time
    V=V_flow_inlet*t+C[4] #C[4] is the initial reactor volume ### we don't need C to be C0, correct?
    #calculating right hand side of the four differential equations
    dadt=-k*C[0]*C[1]-((V_flow_inlet*C[0])/V)
    dbdt=((V_flow_inlet*(CB0_inlet-C[1]))/V)-k*C[0]*C[1]
    dcdt=k*C[0]*C[1]-((V_flow_inlet*C[2])/V)
    dddt=k*C[0]*C[1]-((V_flow_inlet*C[3])/V)
    return [dadt,dbdt,dcdt,dddt,V]
#creating time array, initial conditions array, and calling odeint
t=np.linspace(0,500,100)
initial_conditions=[.05,0,0,0,V_reactor_initial] # [CA0, CB0, CC0, CD0, V0_reactor]
C=integrate.odeint(dcdt,initial_conditions,t)
plt.plot(t,C)
Taking the hints from the variable names and the equation structure, you are considering a chemical reaction
A + B -> C + D
There are two sources of change in the concentrations a,b,c,d of reactants A,B,C,D:
the reaction itself, with reaction speed k*a*b, and
the inflow of reactant B in a solution with concentration b0_in and volume rate V_in, which results in a relative concentration change of -V_in/V in all components and an addition of V_in*b0_in/V to B.
This is all well reflected in the first four equations of your system. In the treatment of the volume, however, you are mixing two approaches in an inconsistent way. Either V is a known function of t and thus not a component of the state vector, so that
V = V_reactor_initial + V_flow_inlet * t
or you treat V as a component of the state, in which case the current volume is
V = C[4]
and the rate of volume change is
dVdt = V_flow_inlet
(your code instead returns V itself as the last component of the derivative). Both fixes are shown below: first the state-based version, then a sketch of the explicit-in-t version.
Modifying your code for the second approach looks like
import numpy as np
import scipy.integrate as integrate
import matplotlib.pyplot as plt
#defining all constants and initial conditions
k=2.2
CB0_inlet=.025
V_flow_inlet=.05
V_reactor_initial=5
CA0_reactor=.05
FB0=CB0_inlet*V_flow_inlet
def dcdt(C,t):
    #unpack the state: four concentrations and the current volume
    a,b,c,d,V = C
    #calculating right hand side of the four differential equations
    dadt=-k*a*b-(V_flow_inlet/V)*a
    dbdt=-k*a*b+(V_flow_inlet/V)*(CB0_inlet-b)
    dcdt= k*a*b-(V_flow_inlet/V)*c
    dddt= k*a*b-(V_flow_inlet/V)*d
    return [dadt,dbdt,dcdt,dddt,V_flow_inlet]
#creating time array, initial conditions array, and calling odeint
t=np.linspace(0,500,100)
initial_conditions=[.05,0,0,0,V_reactor_initial] # [CA0, CB0, CC0, CD0, V0_reactor]
C=integrate.odeint(dcdt,initial_conditions,t)
plt.plot(t,C[:,0:4])
with the resulting plot showing the four concentration profiles over time.
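For comparison, a minimal sketch of the first approach, where V is a known function of t and the state vector holds only the four concentrations (same constants as above):
def dcdt(C,t):
    a,b,c,d = C
    V = V_reactor_initial + V_flow_inlet*t   #volume as an explicit function of time
    dadt=-k*a*b-(V_flow_inlet/V)*a
    dbdt=-k*a*b+(V_flow_inlet/V)*(CB0_inlet-b)
    dcdt= k*a*b-(V_flow_inlet/V)*c
    dddt= k*a*b-(V_flow_inlet/V)*d
    return [dadt,dbdt,dcdt,dddt]

C=integrate.odeint(dcdt,[.05,0,0,0],t)
plt.plot(t,C)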

Accessing Scipy ode solver's internal steps

I'm currently making the switch from MATLAB to Python for a project that involves solving differential equations.
In MATLAB, if the t array that is passed contains only two elements, the solver outputs all the intermediate steps of the simulation. In Python, however, you just get the start and end point; to get time points in between, you have to specify explicitly the time points you want.
from scipy import integrate as sp_int
import numpy as np
def odeFun(t,y):
    k = np.ones((2))
    dy_dt = np.zeros(y.shape)
    dy_dt[0]= k[1]*y[1]-k[0]*y[0]
    dy_dt[1]=-dy_dt[0]
    return(dy_dt)
t = np.linspace(0,10,1000)
yOut = sp_int.odeint(odeFun,[1,0],t,tfirst=True)  #odeFun takes (t,y), so tfirst=True
I've also looked into the following method:
solver = sp_int.ode(odeFun).set_integrator('vode', method='bdf')
solver.set_initial_value([1,0],0)
dt = 0.01
solver.integrate(solver.t+dt)
However, it still requires an explicit dt. From reading around, I understand that Python's solvers (e.g. 'vode') compute internal intermediate steps for the dt requested and then interpolate to the requested time point for output. What I'd like is to get all these intermediate steps directly, without the interpolation, because they represent the minimum number of points required to fully describe the time series within the integration tolerances.
Is there an option available to do that?
I'm working in Python 3.
scipy.integrate.odeint
odeint has an option full_output that allows you to obtain a dictionary with information on the integration, including tcur which is:
vector with the value of t reached for each time step. (will always be at least as large as the input times).
(Note the second sentence: the actual steps are always at least as fine as your desired output. If you want the minimum number of necessary steps, you must ask for a coarse sampling.)
Now, this does not give you the values, but we can obtain those by integrating a second time using these very steps:
from scipy.integrate import odeint
import numpy as np
def f(y,t):
    return np.array([y[1]-y[0],y[0]-y[1]])
start,end = 0,10 # time range we want to integrate
y0 = [1,0] # initial conditions
# Function to add the initial time and the target time if needed:
def ensure_start_and_end(times):
    times = np.insert(times,0,start)
    if times[-1] < end:
        times = np.append(times,end)
    return times
# First run to establish the steps
first_times = np.linspace(start,end,100)
first_run = odeint(f,y0,first_times,full_output=True)
first_steps = np.unique(first_run[1]["tcur"])
# Second run to obtain the results at the steps
second_times = ensure_start_and_end(first_steps)
second_run = odeint(f,y0,second_times,full_output=True,h0=second_times[0])
second_steps = np.unique(second_run[1]["tcur"])
# ensuring that the second run actually uses (almost) the same steps.
np.testing.assert_allclose(first_steps,second_steps,rtol=1e-5)
# Your desired output
actual_steps = np.vstack((second_times, second_run[0].T)).T
scipy.integrate.ode
Having some experience with this module, I am not aware of any way to obtain the step size without digging deeply into its internals.
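With newer SciPy versions, scipy.integrate.solve_ivp avoids this workaround entirely: when t_eval is omitted, the returned sol.t contains exactly the solver's accepted internal steps. A minimal sketch:
from scipy.integrate import solve_ivp

def f(t,y):
    return [y[1]-y[0], y[0]-y[1]]

sol = solve_ivp(f, (0,10), [1,0], method='RK45', rtol=1e-6)
print(sol.t)        # the actual step points chosen by the solver
print(sol.y.shape)  # solution values at exactly those points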
