Solving integer programs in Python with PuLP

I am working on a Python package for computing several NP-Hard graph invariants. The current version of the package uses brute force for nearly all of the algorithms, but I am very interested in using integer programming to help speed up the computations for larger graphs.
For example, a simple integer program for computing the independence number of an n-vertex graph is to maximize x_1 + x_2 + ... + x_n subject to the constraints x_i + x_j <= 1 for every edge ij of the graph, where each x_i ∈ {0, 1}.
How do I solve this using PuLP? Is PuLP my best option, or would it be beneficial to use solvers in another language, like Julia, and interface my package with those?

I don't propose to write your full implementation for you, but to address the final question about PuLP versus other languages.
PuLP provides a Python wrapper over a range of existing LP Solvers.
Once you have specified your problem in Python syntax, it converts it to another format internally (e.g. you can save .lp files and inspect them) and passes that to any one of a number of third-party solvers, which generally aren't written in Python.
So there is no need to learn another language to get a better solver.
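That said, a minimal sketch of the question's independence-number IP in PuLP looks like the following. The graph data and variable names are illustrative, and it assumes PuLP's default bundled CBC solver:

import pulp

# Toy instance: a 5-cycle on vertices 0..4 (illustrative data only).
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]

prob = pulp.LpProblem("independence_number", pulp.LpMaximize)

# One binary variable per vertex: x[i] = 1 iff vertex i is in the set.
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n)]

# Maximize the number of chosen vertices.
prob += pulp.lpSum(x)

# For each edge, at most one endpoint may be chosen.
for i, j in edges:
    prob += x[i] + x[j] <= 1

prob.solve()
print("alpha(G) =", int(pulp.value(prob.objective)))  # 2 for the 5-cycle

Calling prob.writeLP("indep.lp") writes the model out in exactly the .lp format mentioned above, so you can hand the same file to a stand-alone solver later.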

Related

Solving large sparse linear system of equations Python vs Matlab [duplicate]

I want to compute magnetic fields of some conductors using the Biot–Savart law, and I want to use a 1000x1000x1000 matrix. I used MATLAB before, but now I want to use Python. Is Python slower than MATLAB? How can I make Python faster?
EDIT:
Maybe the best way is to compute the big array with C/C++ and then transfer it to Python. I want to visualise it with VPython.
EDIT2: Which is better in my case: C or C++?
You might find some useful results at the bottom of this link
http://wiki.scipy.org/PerformancePython
From the introduction,
A comparison of weave with NumPy, Pyrex, Psyco, Fortran (77 and 90) and C++ for solving Laplace's equation.
It also compares MATLAB and seems to show speeds similar to Python with NumPy.
Of course this is only a specific example; your application might see better or worse performance. There is no harm in running the same test on both and comparing.
You can also compile NumPy with optimized libraries such as ATLAS which provides some BLAS/LAPACK routines. These should be of comparable speed to MATLAB.
I'm not sure if the NumPy downloads are already built against it, but I think ATLAS will tune its libraries to your system if you compile NumPy yourself:
http://www.scipy.org/Installing_SciPy/Windows
The link has more details on what is required under the Windows platform.
EDIT:
If you want to find out what performs better, C or C++, it might be worth asking a new question. Although, from the link above, C++ has the best performance; other solutions are quite close too, i.e. Pyrex, Python/Fortran (using f2py) and inline C++.
The only matrix algebra under C++ I have ever done was using MTL and implementing an Extended Kalman Filter. I guess, though, that in essence it depends on the libraries you are using (LAPACK/BLAS) and how well optimised they are.
This link has a list of object-oriented numerical packages for many languages.
http://www.oonumerics.org/oon/
NumPy and MATLAB both use an underlying BLAS implementation for standard linear algebra operations. For some time both used ATLAS, but nowadays MATLAB apparently also ships with other implementations like Intel's Math Kernel Library (MKL). Which one is faster, and by how much, depends on the system and on how the BLAS implementation was compiled. You can also compile NumPy with MKL, and Enthought is working on MKL support for their Python distribution (see their roadmap). Here is also a recent interesting blog post about this.
On the other hand, if you need more specialized operations or data structures then both Python and MATLAB offer you various ways for optimization (like Cython, PyCUDA,...).
Edit: I corrected this answer to take into account different BLAS implementations. I hope it is now a fair representation of the current situation.
The only valid test is to benchmark it. It really depends on what your platform is, and how well the Biot-Savart Law maps to Matlab or NumPy/SciPy built-in operations.
As for making Python faster, Google's working on Unladen Swallow, a JIT compiler for Python. There are probably other projects like this as well.
As per your edit 2, I recommend very strongly that you use Fortran because you can leverage the available linear algebra subroutines (Lapack and Blas) and it is way simpler than C/C++ for matrix computations.
If you prefer to go with a C/C++ approach, I would use C, because you presumably need raw performance on a presumably simple interface (matrix computations tend to have simple interfaces and complex algorithms).
If, however, you decide to go with C++, you can use TNT (the Template Numerical Toolkit), a C++ library of Lapack-style numerical routines.
Good luck.
If you're just using Python (with NumPy), it may be slower, depending on which pieces you use, whether or not you have optimized linear algebra libraries installed, and how well you know how to take advantage of NumPy.
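As an illustration of that last point, here is a toy sketch of the kind of speed-up vectorization gives. The distance computation is a stand-in, not a real Biot–Savart kernel:

import numpy as np

pts = np.random.rand(100_000, 3)   # observation points (illustrative)
src = np.array([0.5, 0.5, 0.5])    # a single source location

# Pure-Python style: one interpreter iteration per point (slow).
# out = np.empty(len(pts))
# for i, p in enumerate(pts):
#     out[i] = 1.0 / np.linalg.norm(p - src)

# Vectorized: one pass over contiguous arrays, no Python-level loop.
out = 1.0 / np.linalg.norm(pts - src, axis=1)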
To make it faster, there are a few things you can do. There is a tool called Cython that allows you to add type declarations to Python code and translates it into a C extension module for Python. How much benefit this gets you depends on how diligent you are with your type declarations - if you don't add any at all, you won't see much benefit. Cython also has support for NumPy types, though these are a bit more complicated than other types.
If you have a good graphics card and are willing to learn a bit about GPU computing, PyCUDA can also help. (If you don't have an NVIDIA graphics card, I hear PyOpenCL is in the works as well.) I don't know your problem domain, but if it can be mapped onto a CUDA problem then it should be able to handle your 10^9 elements nicely.
And here is an updated "comparison" between MATLAB and NumPy/MKL based on some linear algebra functions:
http://dpinte.wordpress.com/2010/03/16/numpymkl-vs-matlab-performance/
The dot product is not that slow ;-)
I couldn't find many hard numbers to answer this same question, so I went ahead and did the testing myself. The results, scripts, and data sets used are all available in my post on MATLAB vs Python speed for vibration analysis.
Long story short, the FFT function in MATLAB is better than Python's, but you can do some simple manipulation to get comparable results and speed. I also found that importing data was faster in Python compared to MATLAB (even for MAT files, using scipy.io).
I would also like to point out that Python (+NumPy) can easily interface with Fortran via the F2Py module, which basically nets you native Fortran speeds on the pieces of code you offload into it.

Getting top 10 sub-optimal solutions computed by GLPK solver for LP in python

I am trying to use GLPK for solving an LP problem. My problem is the routing problem in a computer network. Given network topology and each link capacity and the traffic demand matrix for each source-destination pair in the network, I want to minimize maximum link utilization in the network. This is an LP problem and I know how to use GLPK to get the optimum solution.
My problem is that I want to get the sub-optimal solutions also. Is there any way that I can get say top 10 suboptimal solutions by GLPK?
Best
For a pure LP (with only continuous variables), the concept of a "next best" solution is ill-defined: just move an epsilon away from an optimum and you have another solution. We can define it differently: find the "next best" corner points (a.k.a. bases). This is not so easy to do, but there is a somewhat complex way of doing it by encoding bases using binary variables (link).
If the problem is actually a MIP (with binary variables) it is easier to find "next best" solutions. Some advanced solvers have built-in facilities for this (called a solution pool). Note: GLPK does not have this option. Alternatively, we can do it by adding a cut that forbids the best-found solution and then re-solving (link), as sketched below. In that example some problem structure was exploited; a general cut for 0-1 variables is derived here. This can also be done for general integer variables, but then things get a bit messy.
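Here is a minimal sketch of that cut-and-resolve loop for the 0-1 case, using PuLP with the GLPK backend (it assumes the glpsol binary is on your PATH; the tiny knapsack model is illustrative, not your routing problem):

import pulp

# Toy 0-1 knapsack: maximize value subject to a weight limit.
values, weights, capacity = [6, 5, 4, 3], [3, 2, 2, 1], 5
n = len(values)

prob = pulp.LpProblem("top_k_demo", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n)]
prob += pulp.lpSum(v * xi for v, xi in zip(values, x))
prob += pulp.lpSum(w * xi for w, xi in zip(weights, x)) <= capacity

for k in range(3):  # best, second best, third best
    prob.solve(pulp.GLPK_CMD(msg=False))
    if pulp.LpStatus[prob.status] != "Optimal":
        break
    chosen = [i for i in range(n) if x[i].value() > 0.5]
    print(f"solution {k + 1}:", chosen, "objective:", pulp.value(prob.objective))
    # No-good cut: forbid exactly this 0-1 assignment on the next solve.
    prob += (pulp.lpSum(1 - x[i] for i in chosen)
             + pulp.lpSum(x[i] for i in range(n) if i not in chosen)) >= 1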

How to use complex variables in a Gurobi problem

I currently solve optimization problems with complex variables using CVX + Mosek, on MATLAB. I'm now considering switching to Gurobi + Python for some applications.
Is there a way to declare complex values (both inside constraints and as optimization variables) directly into Gurobi's Python interface?
If not, what are good modeling languages with a Python interface that automate the reduction of the problem to real variables before calling the solver?
I know, for instance, that YALMIP does this reduction (though it has no Python interface), and newer versions of CVXPY do as well (but I haven't used it extensively, and don't know whether it already has good performance, is stable, and is reasonably complete). Any thoughts on these issues and recommendations of other interfaces are thus welcome.
The only possible variable types in Gurobi are:
Integer;
Binary;
Continuous;
Semi-Continuous; and
Semi-Integer.
Also, I don't know the problem you're trying to solve, but complex numbers are quite unusual in linear optimization.
The complex plane isn't an ordered field, so it is not possible to say that a given complex number z1 > z2.
You'll probably have to model your problem in such a way that you can decompose the constraints into real and imaginary parts, so that you can work only with real numbers.
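For instance, here is a minimal sketch of that decomposition in Gurobi's Python interface: the complex linear constraint a*z = b is modeled with two real variables for z and one real constraint per part. The data is illustrative.

import gurobipy as gp

a_r, a_i = 2.0, 1.0  # a = 2 + 1j
b_r, b_i = 3.0, 4.0  # b = 3 + 4j

m = gp.Model("complex_demo")
x = m.addVar(lb=-gp.GRB.INFINITY, name="Re_z")  # real part of z
y = m.addVar(lb=-gp.GRB.INFINITY, name="Im_z")  # imaginary part of z

# a*z = (a_r*x - a_i*y) + i*(a_i*x + a_r*y); match both parts of b.
m.addConstr(a_r * x - a_i * y == b_r, name="real_part")
m.addConstr(a_i * x + a_r * y == b_i, name="imag_part")

m.setObjective(0, gp.GRB.MINIMIZE)  # feasibility only, for the demo
m.optimize()
print(x.X, y.X)  # 2.0, 1.0 -- i.e. z = b/a = 2 + 1j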

PuLP programming output

I am working with a collaborator on a certain optimization project involving linear programming. We both use Coin-OR branch-and-cut solver to solve the problem. I construct the .LP file using Python-based PuLP package. I am not entirely sure how the collaborator creates their .LP file (definitely not using Python), but essentially, we have two different systems generating .LP files for the exact same problem - i.e. the objective function, variables, constraints are exactly the same.
I typically solve my problem within Python (myProblm.solve()), but I have also been generating a .LP file and calling the CBC solver from the command line to solve it. Next, I compare the output I get on my system (either from Python or the command line) with what my collaborator obtains. [Please note that the output of the problem on my side is exactly the same whether solved in PuLP or on the command line.]
The values of the objective function match well between us, but the other details do not. For example, if we were to solve this Whiskas blending problem, the total cost of the ingredients would be exactly the same, but the ratios of ingredients differ. Any idea why that would be?
I manually compared our .LP files and noticed a few differences. For starters, the sequence of constraints and variables is different. In other words, if there are 5 constraints, my file lists them as C1,C2,C5,C4,C3, whereas the same constraints will be listed as C1,C2,C3,C4,C5. Also, my Python code rounds all numbers to 10's place, while his system rounds them to 1's place. Hence, the coefficients of some of the variables have slightly different values.
Do these differences play a role in the exact output of the solver?
Also, the next question by extension is: What should we do to get the exact same output when solving a linear programming optimization problem? Which factors influence the solution of LP problems? Do factors like the structure of an .LP file play a role? Will I get the exact same output if I run the same LP file with exact same conditions on different computers?
Because an LP problem can have multiple solutions with the same optimal objective value, different solvers can't guarantee that they will return the same solution. This issue becomes even more complicated when MIP problems use branch and bound. Using multithreading or multiprocessing makes it almost impossible.
In summary, to get the same solution, either generate the exact same LP files and solve them with the same solver, or change your objective function so that there is only one optimal solution (perhaps prefer some ordering of ingredients, with a small change to the costs of the ingredients, as sketched below).
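A minimal sketch of that tie-breaking idea in PuLP (the ingredient data is illustrative, not the real Whiskas model):

import pulp

costs = {"BEEF": 0.008, "CHICKEN": 0.013, "MUTTON": 0.010}
eps = 1e-7  # far below the precision of the real cost data

prob = pulp.LpProblem("tie_break_demo", pulp.LpMinimize)
x = {name: pulp.LpVariable(name, lowBound=0) for name in costs}

# Deterministic perturbation: order ingredients alphabetically and add
# i*eps to the i-th cost, so ties in the true objective are broken the
# same way on every machine and with every solver.
prob += pulp.lpSum((cost + i * eps) * x[name]
                   for i, (name, cost) in enumerate(sorted(costs.items())))
prob += pulp.lpSum(x.values()) == 100  # e.g. total weight of the blend
prob.solve()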

Open source alternative to MATLAB's fmincon function? [closed]

Is there an open-source alternative to MATLAB's fmincon function for constrained linear optimization? I'm rewriting a MATLAB program to use Python / NumPy / SciPy and this is the only function I haven't found an equivalent to. A NumPy-based solution would be ideal, but any language will do.
Is your problem convex? Linear? Non-linear? I agree that SciPy.optimize will probably do the job, but fmincon is a sort of bazooka for solving optimization problems, and you'll be better off if you can confine your problem to one of the categories below (in increasing order of difficulty to solve efficiently):
Linear Program (LP)
Quadratic Program (QP)
Convex Quadratically-Constrained Quadratic Program (QCQP)
Second Order Cone Program (SOCP)
Semidefinite Program (SDP)
Non-Linear Convex Problem
Non-Convex Problem
There are also combinatoric problems such as Mixed-Integer Linear Programs (MILP), but you didn't mention any sort of integrality constraints; suffice it to say that they fall into a different class of problems.
The CVXOpt package will be of great use to you if your problem is convex.
If your problem is not convex, you need to choose between finding a local solution or the global solution. Many convex solvers 'sort of' work in a non-convex domain. Finding a good approximation to the global solution would require some form of simulated annealing or a genetic algorithm. Finding the global solution will require an enumeration of all local solutions or a combinatorial strategy such as branch and bound.
Python optimization software:
OpenOpt http://openopt.org (this one is numpy-based as you wish, with automatic differentiation by FuncDesigner)
Pyomo https://software.sandia.gov/trac/coopr/wiki/Package/pyomo
CVXOPT http://abel.ee.ucla.edu/cvxopt/
NLPy http://nlpy.sourceforge.net/
The open-source Python package SciPy has quite a large set of optimization routines, including some for multivariable problems with constraints (which is what fmincon does, I believe). Once you have SciPy installed, type the following at the Python prompt:

import scipy.optimize
help(scipy.optimize)
The resulting document is extensive and includes the following, which I believe might be of use to you (see the sketch after this list for current usage).
Constrained Optimizers (multivariate)
fmin_l_bfgs_b -- Zhu, Byrd, and Nocedal's L-BFGS-B constrained optimizer
(if you use this please quote their papers -- see help)
fmin_tnc -- Truncated Newton Code originally written by Stephen Nash and
adapted to C by Jean-Sebastien Roy.
fmin_cobyla -- Constrained Optimization BY Linear Approximation
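Those function names are from older SciPy releases; in current SciPy the same solvers sit behind scipy.optimize.minimize. A minimal sketch of an fmincon-style constrained call (the objective, constraints, and starting point are illustrative):

import numpy as np
from scipy.optimize import minimize

def objective(v):
    x, y = v
    return (x - 1) ** 2 + (y - 2.5) ** 2

constraints = [
    {"type": "ineq", "fun": lambda v: v[0] - 2 * v[1] + 2},   # x - 2y + 2 >= 0
    {"type": "ineq", "fun": lambda v: -v[0] - 2 * v[1] + 6},  # -x - 2y + 6 >= 0
]
bounds = [(0, None), (0, None)]  # x >= 0, y >= 0

res = minimize(objective, x0=np.array([2.0, 0.0]), method="SLSQP",
               bounds=bounds, constraints=constraints)
print(res.x)  # approximately [1.4, 1.7]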
GNU Octave is another MATLAB clone that might have what you need.
For numerical optimization in Python you may take a look at OpenOpt solvers:
http://openopt.org/NLP
http://openopt.org/Problems
I don't know if it's in there, but there's a Python distribution called Enthought that might have what you're looking for. It was designed specifically for data analysis and has over 60 additional libraries.
Have a look at http://www.aemdesign.com/downloadfsqp.htm.
There you will find C code which provides the same functionality as fmincon (however, using a different algorithm; you can read the manual if you are interested in the details).
It's open source but not under GPL.
There is a program called Scilab that is a MATLAB clone.
I haven't used it at all, but it is open source and might have the function you are looking for.
The latest version of Octave implements an equivalent to MATLAB's fmincon function in its optimization package.
https://octave.sourceforge.io/optim/function/fmincon.html
Scilab has an implementation of fmincon (using IPOpt) which is now regularly updated:
https://atoms.scilab.org/toolboxes/fmincon
For large-scale optimization it outperforms Matlab's fmincon.
