I am writing Python classes for linear and nonlinear MIMO systems. These two classes inherit from a parent class called Model, which contains the ODE of the model. For the linear system, the function looks like
def disc_linear_fn(x, u):
    A = np.array([[], [], [], []])  # system matrix (entries omitted)
    B = np.array([[]])              # input matrix (entries omitted)
    x_dot = A @ x + B @ u           # '@' is NumPy's matrix-multiplication operator
    return x_dot
whereas the function of a nonlinear system returns a nonlinear ode.
I want to acquire the Jacobian for both nonlinear and linear systems.
Question 1: How can I obtain the exact matrices A and B from the linear system's ODE function without changing it to return them?
Question 2: Which package can be used to calculate the Jacobian of a nonlinear system in NumPy? I know how to implement the approximation algorithm myself, or to use packages such as CasADi, JAX, or SymPy, but I'd prefer not to describe the systems in another package. Since the next step is to solve an optimization problem in cvxpy, I want to keep all of the information in NumPy for consistency.
Any recommendations would be appreciated! Many thanks in advance.
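A NumPy-only sketch of both ideas, assuming the linear function really has the form A @ x + B @ u: probe it with standard basis vectors to recover A and B (Question 1), and use central finite differences for the nonlinear Jacobian (Question 2). The helper names below are made up for illustration.

```python
import numpy as np

def extract_AB(fn, n, m):
    """Recover A (n x n) and B (n x m) from a linear fn(x, u) = A @ x + B @ u
    by evaluating it on the standard basis vectors."""
    A = np.column_stack([fn(e, np.zeros(m)) for e in np.eye(n)])
    B = np.column_stack([fn(np.zeros(n), e) for e in np.eye(m)])
    return A, B

def numerical_jacobian(fn, x, u, eps=1e-6):
    """Central-difference Jacobians d(fn)/dx and d(fn)/du of a nonlinear fn."""
    n, m = len(x), len(u)
    Jx = np.column_stack([(fn(x + eps * e, u) - fn(x - eps * e, u)) / (2 * eps)
                          for e in np.eye(n)])
    Ju = np.column_stack([(fn(x, u + eps * e) - fn(x, u - eps * e)) / (2 * eps)
                          for e in np.eye(m)])
    return Jx, Ju
```

For a truly linear fn, extract_AB is exact (no approximation error); the finite-difference version then gives the same A and B, so one code path can serve both model classes if you prefer.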
I am working on ordinary differential equations that I want to solve using numerical methods in Python. Some of these ODEs are chemical rate equations, e.g. A + B -> C with stoichiometric coefficient sigma, which leads to the differential equation du/dt = sigma * a * b.
To do this, the classical method is to use e.g. a Runge-Kutta solver, such as the one in Scipy's integrate.ode.
Instead, I would like to use a modeling language, similar to what cvxpy is for convex optimization. This would allow one to formally define the ODE and solve the equation automatically. It would be similar, in Python, to what is done in OpenModelica.
Is there such a tool?
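For reference, the classical SciPy route mentioned above can be sketched as follows (sigma and the initial concentrations are made-up values, not from the question):

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma = 0.5  # assumed rate constant

def rhs(t, y):
    # y = [a, b, c] for the reaction A + B -> C
    a, b, c = y
    rate = sigma * a * b
    return [-rate, -rate, rate]

# Integrate from t=0 to t=10 starting at a=b=1, c=0 with a Runge-Kutta scheme
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 1.0, 0.0], method='RK45')
```

The drawback, as the question implies, is that the rate law had to be written out by hand rather than derived from the reaction A + B -> C itself.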
I am trying to solve a large-scale nonlinear system using the exact Newton method in SciPy. In my application, the Jacobian is easy to assemble (and factorize) as a sparse matrix.
It seems that all methods available in scipy.optimize.root approximate the Jacobian in one way or another, and I can't find a way to use Newton's method using the API that is discussed in SciPy's documentation.
Nonetheless, using the internal API, I have managed to use Newton's method with the following code:
from scipy.optimize.nonlin import nonlin_solve
x, info = nonlin_solve(f, x0, jac, line_search=False)
where f(x) is the residual and jac(x) is a callable that returns the Jacobian at x as a sparse matrix.
However, I am not sure whether this function is meant to be used outside SciPy and is subject to changes without notice.
Would this be the recommended approach?
It is meant to be used.
Scipy's private functions that are not meant to be used from the outside start with a _.
This was confirmed by the SciPy team in an issue I raised recently: cf. https://github.com/scipy/scipy/issues/17510
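If relying on an internal API still feels fragile, an exact Newton iteration is also short enough to roll by hand. A minimal dense-NumPy sketch (for a large sparse Jacobian you would replace the dense solve with a sparse factorization, e.g. scipy.sparse.linalg.splu):

```python
import numpy as np

def newton_exact(f, jac, x0, tol=1e-10, max_iter=50):
    """Plain exact-Newton iteration: solve J(x) dx = -f(x) at each step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        # For a sparse Jacobian, swap this for splu(jac(x)).solve(-fx)
        x = x + np.linalg.solve(jac(x), -fx)
    return x

# Example: x0^2 + x1^2 = 2 and x0 = x1, whose root near (2, 0.5) is (1, 1)
f = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
root = newton_exact(f, J, [2.0, 0.5])
```

This trades SciPy's line search and termination logic for full control over the linear solve, which is exactly the part the question cares about.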
If I have a system of nonlinear ordinary differential equations, M(t,y) y' = F(t,y), what is the best method of solution when my mass matrix M is sometimes singular?
I'm working with the following system of equations:
At t=0, this reduces to a differential algebraic equation. However, even if we restrict t>0, it becomes a differential algebraic equation whenever y4=0, which I cannot avoid with a domain restriction (and which is an integral part of the system I am trying to model). My only previous exposure to DAEs is when an entire row is 0 -- but in this case my mass matrix is only sometimes singular.
What is the best way to implement this numerically?
So far, I've tried using Python where I add a small number (0.0001) to the main diagonal of M and invert it, solving the equations y' = M^{-1}(t,y) F(t,y). However, this seems prone to instabilities, and I'm unsure whether this is a universally appropriate means of regularization.
Python's ODE solvers don't have built-in support for mass matrices, so I've also tried coding this in Julia. However, DifferentialEquations.jl states explicitly that "Non-constant mass matrices are not directly supported: users are advised to transform their problem through substitution to a DAE with constant mass matrices."
I'm at a loss on how to accomplish this. Any insights on how to do this substitution or a better way to solve this type of problem would be greatly appreciated.
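For comparison, the epsilon-regularization described in the question can be sketched in plain NumPy/SciPy. The system below is a made-up stand-in (singular mass matrix when y[1] = 0), not the system from the question, and an implicit method (Radau) is used since the regularized problem tends to be stiff:

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-4  # small number added to the diagonal of M

def M(t, y):
    # Hypothetical state-dependent mass matrix, singular when y[1] = 0
    return np.array([[1.0, 0.0],
                     [0.0, y[1]]])

def F(t, y):
    return np.array([-y[0], -0.5 * y[1] + 0.1])

def rhs(t, y):
    # y' = (M + eps*I)^{-1} F(t, y); a linear solve beats an explicit inverse
    return np.linalg.solve(M(t, y) + eps * np.eye(2), F(t, y))

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 1.0], method='Radau')
```

This reproduces the questioner's approach, instabilities and all; it is a regularization heuristic, not a substitute for a proper DAE formulation.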
The following transformation leads to a constant mass matrix:
.
You need to handle the case of y_4 = 0 separately.
I'm attempting to write a simple implementation of the Newton-Raphson method, in Python. I've already done so using the SymPy library, however what I'm working on now will (ultimately) end up running in an environment where only Numpy is available.
For those unfamiliar with the algorithm, it works (in my case) as follows:
I have a system of symbolic equations, which I "stack" to form a matrix F. The unknowns are X,Y,Z,T (which I wish to determine). Some additional values are initially unknown, until passed to my solver, which substitutes these known values for variables in the symbolic expressions.
Now, the Jacobian matrix (J) of F is computed. This, too, is a matrix of symbolic expressions.
Now, I iterate over some range (max_iter). At each iteration, I form a matrix A by substituting the current estimates (starting from some initial values) for the unknowns X,Y,Z,T, and similarly form a vector b by substituting the current estimates for X,Y,Z,T.
I then obtain new estimates by solving the matrix equation Ax = b for x. This vector x holds dT, dX, dY, dZ. I then add these to the current estimates for T,X,Y,Z, and iterate again.
Thus far, I've found my largest issue to be computing the Jacobian matrix. I only need to do this once; however, it will differ depending on the coefficients fed to the solver (which are not unknowns, but are only known once passed to the solver, so I can't simply hard-code the Jacobian).
While I'm not terribly familiar with Numpy, I know that it offers numpy.gradient. I'm not sure, however, that this is the same as SymPy's .jacobian.
How can the Jacobian matrix be found, either in "pure" Python, or with Numpy?
EDIT:
Should it be useful to you, more information on the problem can be found [here]. It can be formulated a few different ways, however (as of now) I'm writing it as 4 equations of the form:
\sqrt{(X-x_i)^2 + (Y-y_i)^2 + (Z-z_i)^2} = c (t_i - T)
Where X,Y,Z and T are unknown.
This describes the solution to a localization problem, where we know (a) the location of n >= 4 observers in a 3-dimensional space, (b) the time at which each observer "saw" some signal, and (c) the velocity of the signal. The goal is to determine the coordinates of the signal source X,Y,Z (and, as a side effect, the time of emission, T).
Notice that I've tried (many) other approaches to solving this problem, and all leads point toward a combination of Newton-Raphson with regression.
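One pure-NumPy route is to skip symbols entirely: for this particular system the Jacobian even has a closed form (row i is the unit vector from observer i to the source, plus a constant c column for T), so nothing needs to be differentiated symbolically. A sketch with made-up observer data and a made-up true source for checking:

```python
import numpy as np

c = 1.0  # assumed signal speed

# Made-up observer positions (one per row) and a made-up true source (X,Y,Z,T)
obs = np.array([[0., 0., 0.],
                [1., 0., 0.],
                [0., 1., 0.],
                [0., 0., 1.]])
true = np.array([0.3, 0.2, 0.1, -0.5])
t_obs = true[3] + np.linalg.norm(obs - true[:3], axis=1) / c

def residual(p):
    # F_i = sqrt((X-x_i)^2 + (Y-y_i)^2 + (Z-z_i)^2) - c*(t_i - T)
    d = np.linalg.norm(obs - p[:3], axis=1)
    return d - c * (t_obs - p[3])

def jacobian(p):
    # dF_i/d(X,Y,Z) = (source - obs_i)/d_i, and dF_i/dT = c
    d = np.linalg.norm(obs - p[:3], axis=1)
    J = np.empty((len(obs), 4))
    J[:, :3] = (p[:3] - obs) / d[:, None]
    J[:, 3] = c
    return J

# Newton-Raphson from a perturbed starting guess
p = true + 0.1
for _ in range(20):
    p = p - np.linalg.solve(jacobian(p), residual(p))
```

When the coefficients (observer positions, times, c) change, only the arrays change; the Jacobian function stays the same, which sidesteps the re-derivation problem described above.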
I have a Python script in which I need to solve a linear programming problem. The catch is that the solution must be binary. In other words, I need an equivalent of MATLAB's bintprog function. NumPy and SciPy do not seem to have such a procedure. Does anyone have suggestions on how I could do one of these three things:
Find a Python library which includes such a function.
Constrain the problem such that it can be solved by a more general linear programming solver.
Interface Python with MATLAB so as to make direct use of bintprog.
Just to be rigorous, if the problem is a binary programming problem, then it is not a linear program.
You can try CVXOPT. It has an integer programming function (see this). To make your problem a binary program, you need to add the constraint 0 <= x <= 1.
Edit: You can actually declare your variables as binary, so you don't need to add the constraint 0 <= x <= 1.
cvxopt.glpk.ilp = ilp(...)
Solves a mixed integer linear program using GLPK.
(status, x) = ilp(c, G, h, A, b, I, B)
PURPOSE
Solves the mixed integer linear programming problem
minimize c'*x
subject to G*x <= h
A*x = b
x[I] are all integer
x[B] are all binary
This is a half-answer, but you can use Python to interface with GLPK (through python-glpk). GLPK supports integer linear programs (binary programs are just a subset of integer programs).
http://en.wikipedia.org/wiki/GNU_Linear_Programming_Kit
Or you could simply write your problem in Python and generate an MPS file, which most standard LP/MILP solvers (CPLEX, Gurobi, GLPK) will accept. This may be a good route to take because, as far as I am aware, there aren't any high-quality MILP solvers native to Python (and there may never be). It will also allow you to try out different solvers.
http://code.google.com/p/pulp-or/
As for interfacing Python with MATLAB, I would just roll my own solution. You could generate a .m file and then run it from the command line:
% matlab -nojava myopt.m
Notes:
If you're an academic user, you can get a free license to Gurobi, a high performance LP/MILP solver. It has a Python interface.
http://www.gurobi.com/
OpenOpt is a Python optimization suite that interfaces with different solvers.
http://en.wikipedia.org/wiki/OpenOpt
I develop a package called gekko (pip install gekko) that solves large-scale problems with linear, quadratic, nonlinear, and mixed-integer programming (LP, QP, NLP, MILP, MINLP), released under the MIT License. A binary variable is declared as an integer variable with lower bound 0 and upper bound 1: b = m.Var(integer=True, lb=0, ub=1). Here is a more complete problem that uses m.Array() to define multiple binary variables:
from gekko import GEKKO

m = GEKKO()
# Two binary decision variables: integer type with bounds [0, 1]
x, y = m.Array(m.Var, 2, integer=True, lb=0, ub=1)
m.Maximize(y)
m.Equations([-x + y <= 1,
             3*x + 2*y <= 12,
             2*x + 3*y <= 12])
m.options.SOLVER = 1  # APOPT, gekko's mixed-integer solver
m.solve()
print('Objective: ', -m.options.OBJFCNVAL)  # negated because gekko minimizes
print('x: ', x.value[0])
print('y: ', y.value[0])