I am writing Python classes for both linear and nonlinear MIMO systems. These two classes inherit from a parent class called Model, which contains the ODE of the model. For the linear system, the function looks like this:
def disc_linear_fn(x, u):
    A = np.array([[],[],[],[]])   # constant system matrix (entries omitted)
    B = np.array([[]])            # constant input matrix (entries omitted)
    x_dot = A @ x + B @ u
    return x_dot
whereas the function of the nonlinear system returns a nonlinear ODE.
I want to acquire the Jacobian for both nonlinear and linear systems.
Question 1: How can I extract the exact matrices A and B from the linear system's ODE function without explicitly returning them?
Question 2: Which package can be used to calculate the Jacobian of a nonlinear system in numpy? I know how to realize it by programming the approximation algorithm myself or by using packages such as casadi, jax, or sympy, but I'd prefer not to describe my systems in other packages. Since the next step is to solve an optimization problem in cvxpy, I want to keep all the information in numpy for consistency.
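For reference, this is roughly the kind of finite-difference approximation I could write myself in plain numpy (the helper name and step size below are just illustrative):

import numpy as np

def numerical_jacobians(f, x, u, eps=1e-6):
    # Central-difference approximation of df/dx and df/du for f(x, u).
    # For the linear function above this recovers A and B exactly,
    # up to floating-point error.
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    fx = np.asarray(f(x, u), dtype=float).ravel()
    A = np.zeros((fx.size, x.size))
    B = np.zeros((fx.size, u.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        A[:, i] = (np.asarray(f(x + dx, u)).ravel() - np.asarray(f(x - dx, u)).ravel()) / (2 * eps)
    for j in range(u.size):
        du = np.zeros_like(u)
        du[j] = eps
        B[:, j] = (np.asarray(f(x, u + du)).ravel() - np.asarray(f(x, u - du)).ravel()) / (2 * eps)
    return A, B

# A, B = numerical_jacobians(disc_linear_fn, x0, u0)  # once A and B above are filled in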
Any recommendations would be appreciated! Many thanks in advance.
I'm teaching myself linear algebra, and I'm trying to learn the corresponding Numpy and Sympy code alongside it.
My book presented the following matrix:
example1 = Matrix([[3,5,-4,0],[-3,-2,4,0],[6,1,-8,0]])
with the instructions to determine if there is a nontrivial solution. The final solution would be x = x3 * Matrix([[4/3],[0],[1]]). (Using Jupyter's math mode, I used the following to represent the solution:)
$$\pmb{x} =
\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix} =
\begin{bmatrix}\frac{4}{3}x_3\\0\\x_3\end{bmatrix} =
x_3\begin{bmatrix}\frac{4}{3}\\0\\1\end{bmatrix} \\
= x_3\pmb{v} \text{, where }\pmb{v} = \begin{bmatrix}\frac{4}{3}\\0\\1\end{bmatrix}$$
How can I now solve this in Sympy? I've looked through the documentation, but I didn't see anything, and I'm at a bit of a loss. I know that errors tend to be thrown for free variables. Is there a way to determine nontrivial solutions and the corresponding general solution using Sympy, considering that nontrivial solutions are reliant upon free variables? Or is np.linalg generally more preferred for this type of problem?
This is a linear system so
>>> linsolve(Matrix([[3,5,-4,0],[-3,-2,4,0],[6,1,-8,0]]))
FiniteSet((4*tau0/3, 0, tau0))
The tau0 is the free parameter that you refer to as x3.
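For completeness, a minimal runnable version; calling nullspace() on the coefficient columns gives the same direction vector (assuming a reasonably recent SymPy):

from sympy import Matrix, linsolve

M = Matrix([[3, 5, -4, 0], [-3, -2, 4, 0], [6, 1, -8, 0]])  # augmented matrix [A | 0]
print(linsolve(M))        # FiniteSet((4*tau0/3, 0, tau0))
A = M[:, :3]              # coefficient matrix A
print(A.nullspace())      # [Matrix([[4/3], [0], [1]])]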
I currently solve optimization problems with complex variables using CVX + Mosek, on MATLAB. I'm now considering switching to Gurobi + Python for some applications.
Is there a way to declare complex values (both inside constraints and as optimization variables) directly into Gurobi's Python interface?
If not, which are good modeling languages, with Python interface, that automates the reduction of the problem to real variables before calling the solver?
I know, for instance, that YALMIP does this reduction (though no Python interface), and newer versions of CVXPY also (but I haven't used it extensively, and don't know if it already has good performance, is stable, and reasonably complete). Any thoughts on these issues and recommendations of other interfaces are thus welcome.
The only possible variable types in Gurobi are:
Integer;
Binary;
Continuous;
Semi-Continuous; and
Semi-Integer.
Also, I don't know the problem you're trying to solve, but complex numbers are rather unusual in linear optimization.
The complex plane isn't an ordered field, so it is not possible to say that a given complex number z1 > z2.
You'll probably have to model your problem in such a way that you can decompose the constraints into real and imaginary parts, so that you work only with real numbers.
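For example, a complex equality constraint a*z == b with z = x + i*y splits into two real constraints. A rough sketch in gurobipy (the data a and b below are made up for illustration):

import gurobipy as gp
from gurobipy import GRB

a_r, a_i = 2.0, 1.0                 # a = 2 + 1j (illustrative)
b_r, b_i = 3.0, 4.0                 # b = 3 + 4j (illustrative)

m = gp.Model("complex_as_real")
x = m.addVar(lb=-GRB.INFINITY, name="Re_z")   # real part of z
y = m.addVar(lb=-GRB.INFINITY, name="Im_z")   # imaginary part of z

# a*z = (a_r*x - a_i*y) + i*(a_i*x + a_r*y), so a*z == b gives two real equations
m.addConstr(a_r * x - a_i * y == b_r, name="real_part")
m.addConstr(a_i * x + a_r * y == b_i, name="imag_part")

m.setObjective(x + y, GRB.MINIMIZE)           # any real-valued objective
m.optimize()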
I am working on a Python package for computing several NP-Hard graph invariants. The current version of the package uses brute force for nearly all of the algorithms, but I am very interested in using integer programming to help speed up the computations for larger graphs.
For example, a simple integer program for computing the independence number of an n-vertex graph $G = (V, E)$ is to maximize $\sum_{v \in V} x_v$ subject to the constraints $x_u + x_v \leq 1$ for every edge $uv \in E$, where $x_v \in \{0, 1\}$ for each vertex $v$.
How do I solve this using PuLP? Is PuLP my best option, or would it be beneficial to use solvers in another language, like Julia, and interface my package with those?
I don't propose to write your full implementation for you, but to address the final question about PuLP versus other languages.
PuLP provides a Python wrapper over a range of existing LP Solvers.
Once you have specified your problem in Python syntax, it converts it to an intermediate format internally (e.g. you can save .lp files and inspect them) and passes that to any one of a number of third-party solvers, which generally aren't written in Python.
So there is no need to learn another language to get a better solver.
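That said, just to show the shape of the model, the independence-number program from the question might look like this in PuLP (a sketch on a hard-coded 4-cycle, not your package's code):

from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum, value

# illustrative graph: a 4-cycle on vertices 0..3
vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

prob = LpProblem("independence_number", LpMaximize)
x = {v: LpVariable(f"x_{v}", cat=LpBinary) for v in vertices}

prob += lpSum(x[v] for v in vertices)   # maximize the number of selected vertices
for u, v in edges:
    prob += x[u] + x[v] <= 1            # adjacent vertices cannot both be selected

prob.solve()                            # uses the bundled CBC solver by default
print(value(prob.objective))            # 2 for the 4-cycle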
I need to make a linear programming model. Here are the inequalities I'm using (for example):
6x + 4y <= 24
x + 2y <= 6
-x + y <= 1
y <= 2
I need to find the area described by these inequalities, and shade it in a graph, as well as keep track of the vertices of the bounding lines of this area, and draw the bounding line in a different color. See the graph below for an example of what I'm looking for.
I'm using Python 3.2, numpy, and matplotlib. Are there better modules for linear programming in Python?
UPDATE: This answer has become somewhat outdated in the past 4 years; here is an update. You have many options:
If you do not have to do it in Python, then it is a lot easier to do this in a modeling language; see Any good tools to solve integer programs on linux?
I personally use Gurobi these
days through its Python API. It is a commercial, closed-source
product but free for academic research.
With PuLP you can create MPS and LP files and then
solve them with GLPK, COIN CLP/CBC, CPLEX, or XPRESS through their
command-line interface. This approach has its advantages and
disadvantages.
The OR-Tools from Google is an open source software suite for optimization, tuned for tackling the world's toughest problems in vehicle routing, flows, integer and linear programming, and constraint programming.
Pyomo is a Python-based, open-source optimization modeling language with a diverse set of optimization capabilities.
SciPy offers linear programming: scipy.optimize.linprog. (I have
never tried this one.)
Apparently, CVXOPT offers a Python interface to GLPK; I did not know that. I have been using GLPK for 8 years now and I can highly recommend it. The examples and tutorial of CVXOPT seem really nice!
You can find other possibilities in the Wikibook under GLPK/Python. Note that many of these are not necessarily restricted to GLPK.
I'd recommend the package cvxopt for solving convex optimization problems in Python. A short example with Python code for a linear program is in cvxopt's documentation here.
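A hedged sketch for the inequalities in the question, with an assumed objective of maximizing x + y (cvxopt minimizes and expects G x <= h, with matrices given column by column):

from cvxopt import matrix, solvers

c = matrix([-1.0, -1.0])                       # maximize x + y  ->  minimize -x - y
# rows of G x <= h: 6x+4y<=24, x+2y<=6, -x+y<=1, y<=2, x>=0, y>=0
G = matrix([[6.0, 1.0, -1.0, 0.0, -1.0, 0.0],  # column of x coefficients
            [4.0, 2.0, 1.0, 1.0, 0.0, -1.0]])  # column of y coefficients
h = matrix([24.0, 6.0, 1.0, 2.0, 0.0, 0.0])

solvers.options['show_progress'] = False
sol = solvers.lp(c, G, h)
print(sol['x'])                                # roughly (3, 1.5) for this objective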
The other answers have done a good job providing a list of solvers. However, only PuLP has been mentioned as a Python library for formulating LP models.
Another great option is Pyomo. Like PuLP, you can send the problem to any solver and read the solution back into Python. You can also manipulate solver parameters. A classmate and I compared the performance of PuLP and Pyomo back in 2015 and we found Pyomo could generate .LP files for the same problem several times more quickly than PuLP.
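For a flavour of the syntax, here is a minimal Pyomo sketch of the inequalities from the question, assuming GLPK is installed and an objective of maximizing x + y:

from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, maximize, SolverFactory)

m = ConcreteModel()
m.x = Var(within=NonNegativeReals)
m.y = Var(within=NonNegativeReals)
m.obj = Objective(expr=m.x + m.y, sense=maximize)   # assumed objective
m.c1 = Constraint(expr=6 * m.x + 4 * m.y <= 24)
m.c2 = Constraint(expr=m.x + 2 * m.y <= 6)
m.c3 = Constraint(expr=-m.x + m.y <= 1)
m.c4 = Constraint(expr=m.y <= 2)

SolverFactory('glpk').solve(m)   # any installed LP solver works here
print(m.x(), m.y())              # expected: 3.0 1.5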
The only time a graph is used to solve a linear program is for a homework problem. In all other cases, linear programming problems are solved through matrix linear algebra.
As for Python, while there are some pure-Python libraries, most people use a native library with Python bindings. There is a wide variety of free and commercial libraries for linear programming. For a detailed list, see Linear Programming in Wikipedia or the Linear Programming Software Survey in OR/MS Today.
Disclaimer: I currently work for Gurobi Optimization and formerly worked for ILOG, which provided CPLEX.
For solving the linear programming problem, you can use scipy.optimize.linprog from SciPy, which originally used the simplex method (newer SciPy versions default to the HiGHS solvers).
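For instance, with an assumed objective of maximizing x + y (linprog minimizes, so the sign of the objective is flipped):

from scipy.optimize import linprog

c = [-1, -1]                              # maximize x + y  ->  minimize -x - y
A_ub = [[6, 4], [1, 2], [-1, 1], [0, 1]]  # 6x+4y<=24, x+2y<=6, -x+y<=1, y<=2
b_ub = [24, 6, 1, 2]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)                              # approximately [3. , 1.5]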
Here is a graphical representation of the problem, inspired by How to visualize feasible region for linear programming (with arbitrary inequalities) in Numpy/MatplotLib?
import numpy as np
import matplotlib.pyplot as plt
m = np.linspace(0,5,200)
x,y = np.meshgrid(m,m)
plt.imshow(((6*x+4*y<=24)&(x+2*y<=6)&(-x+y<=1)&(y<=2)&(x>=0)&(y>=0)).astype(int),
           extent=(x.min(),x.max(),y.min(),y.max()),origin='lower',cmap='Greys',alpha=0.3);
# plot constraints
x = np.linspace(0, 5, 2000)
# 6*x+4*y<=24
y0 = 6-1.5*x
# x+2*y<=6
y1 = 3-0.5*x
# -x+y<=1
y2 = 1+x
# y <= 2
y3 = (x*0) + 2
# y >= 0 (the x-axis)
y4 = x*0
plt.plot(x, y0, label=r'$6x+4y\leq24$')
plt.plot(x, y1, label=r'$x+2y\leq6$')
plt.plot(x, y2, label=r'$-x+y\leq1$')
plt.plot(x, y3, label=r'$y\leq2$')
plt.plot(x, y4, label=r'$y\geq0$')
plt.plot([0,0],[0,3], label=r'$x\geq0$')
xv = [0,0,1,2,3,4,0]; yv = [0,1,2,2,1.5,0,0]  # vertices of the feasible region
plt.plot(xv,yv,'ko--',markersize=7,linewidth=2)
for i in range(len(xv)):
    plt.text(xv[i]+0.1,yv[i]+0.1,f'({xv[i]},{yv[i]})')
plt.xlim(0,5); plt.ylim(0,3); plt.grid(); plt.tight_layout()
plt.legend(loc=1); plt.xlabel('x'); plt.ylabel('y')
plt.show()
The problem is missing an objective function so any of the shaded points satisfy the inequalities. If it did have an objective function (e.g. Maximize x+y) then many capable Python solvers can handle this problem. Here is a Linear Programming example in GEKKO that also supports mixed integer, nonlinear, and differential constraints.
from gekko import GEKKO
m = GEKKO(remote=False)        # solve locally
x,y = m.Array(m.Var,2,lb=0)    # two non-negative decision variables
m.Equations([6*x+4*y<=24,x+2*y<=6,-x+y<=1,y<=2])
m.Maximize(x+y)                # example objective (maximize x + y)
m.solve(disp=False)
print(x.value[0], y.value[0])  # optimal point
Large scale LP problems are solved in matrix form or in sparse matrix form where only the non-zeros of the matrices are stored. There is a tutorial on LP solutions with a few examples that I developed for a university course.
I would recommend using the PuLP Python package. It has a nice interface and you can use different types of algorithms to solve the LP.
lpsolve is the easiest for me. There is no need to install a separate solver; it comes bundled with the package.