I have written a linear optimization problem in PuLP and it works great, but the solving time is huge, so I tried another optimization library, Google OR-Tools, which is about 10x faster but has a big issue: it doesn't allow deepcopying the problem (while PuLP does). This is very important for me because my optimization problem is part of a search procedure, so I have to change some values in the constraints and solve the problem again and again (hundreds of times). Is there any Python library faster than PuLP that allows copying the model? Or is there a way to copy an LP problem in OR-Tools?
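One OR-Tools workaround, sketched below under the assumption that only constraint bounds change between solves: instead of deepcopying the model, keep Python references to the constraints and mutate them in place before each re-solve. The variable and constraint names here are illustrative, not from the original model.

```python
from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver("GLOP")
x = solver.NumVar(0.0, 10.0, "x")
y = solver.NumVar(0.0, 10.0, "y")

# x + y <= 5 initially; keep a reference so the bound can be changed later
budget = solver.Constraint(-solver.infinity(), 5.0)
budget.SetCoefficient(x, 1.0)
budget.SetCoefficient(y, 1.0)

solver.Maximize(x + 2 * y)

for ub in (5.0, 6.0, 7.0):          # the search loop: tweak a bound, re-solve
    budget.SetBounds(-solver.infinity(), ub)
    solver.Solve()
    print(ub, x.solution_value(), y.solution_value())
```

Re-solving a mutated model this way also lets the solver warm-start from the previous solution, which is usually faster than solving a fresh copy from scratch.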
Dear Stack Overflow community,
I am currently facing the problem of having to solve a large number of linear algebra systems sequentially, i.e.
solve "Ax = b" with known A (Matrix, 45000 x 45000, ~6 non zeros per row) and b (vector, 45000 rows) for x (vector, 45000 rows).
A is complex symmetric non-Hermitian.
Since there are data dependencies concerning A in each iteration of my algorithm, the linear systems need to be solved sequentially in the least time possible. The vector b is the same for each iteration.
The main code is written in Python 3.7. Using scipy.sparse.linalg.qmr I end up at 1.5s per solve.
MAGMA's iterative BICGSTAB is able to solve the system within ~0.4s, including the overhead of copying the data to the GPU's memory (RTX 2080). To access the C++ library I use pybind11.
My question now is: Do you have ideas on how to speed up the calculation? I have the feeling that a direct matrix solver rather than an iterative one might be faster. Do you have recommendations for libraries implementing direct solvers which might use the GPU? Is that even possible?
Thank you very much for your help.
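For reference, a minimal CPU-side sketch of the direct-versus-iterative comparison, using scipy.sparse.linalg.splu as the direct solver. The toy matrix below only mimics the size and sparsity pattern of the post's system, and whether a direct factorization actually beats BICGSTAB at n = 45000 has to be benchmarked; note also that because A changes every iteration, the factorization cannot be reused across solves.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000                                         # toy size; the post has n = 45000
A = sp.random(n, n, density=6 / n, format="csc", random_state=0)
A = (A + A.T + 4 * sp.identity(n)).tocsc()       # symmetric, comfortably non-singular
A = A.astype(np.complex128)                      # the post's matrix is complex symmetric
b = np.ones(n, dtype=np.complex128)

x_iter, info = spla.qmr(A, b)                    # iterative baseline; info == 0 means converged
lu = spla.splu(A)                                # direct route: sparse LU factorization...
x_direct = lu.solve(b)                           # ...then forward/back substitution
print(np.linalg.norm(A @ x_direct - b))          # residual check
```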
I'm trying to solve an order assignment problem with Python. I distribute M orders over N workers. Every worker has a basic energy level X_i, gathered in the vector X, and every order has a specific energy consumption E_j, gathered in E. With that being said, I'm trying to choose an assignment that brings the workers' resulting energy levels as close as possible, in the 2-norm, to some optimal energy level Y, under the constraint that every column of the assignment matrix adds up to exactly one, since each order must be done and can only be done by one worker.

I looked at scipy.optimize, but it doesn't seem to support this sort of optimization as far as I can tell.

Does anyone know of any tools in Python for this sort of discrete optimization problem?
The answer depends on the norm. If you want the 2-norm, this is a MIQP (Mixed Integer Quadratic Programming) problem. It is convex, so there are quite a number of solvers around (e.g. Cplex, Gurobi, Xpress -- these are commercial solvers). It can also be handled by an MINLP solver such as BonMin (open source). Some modeling tools that can help are Pyomo and CVXPY.
If you want the 1-norm, this can be formulated as a linear MIP (Mixed Integer Programming) model. There are quite a few MIP solvers such as Cplex, Gurobi, Xpress (commercial) and CBC, GLPK (open source). Some modeling tools are Pyomo, CVXPY, and PuLP.
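For concreteness, a minimal CVXPY sketch of the 2-norm (MIQP) formulation. The binary assignment matrix B, the sign convention (consumption subtracts from the basic level), the target Y, and the toy data are all assumptions, not from the question; a mixed-integer-capable solver (e.g. SCIP or Gurobi) must be installed for prob.solve() to succeed.

```python
import numpy as np
import cvxpy as cp

N, M = 4, 6                               # workers, orders (toy sizes)
rng = np.random.default_rng(0)
X = rng.uniform(20, 30, size=N)           # basic energy level per worker
E = rng.uniform(1, 5, size=M)             # energy consumption per order
Y = 18.0                                  # target energy level

B = cp.Variable((N, M), boolean=True)     # B[i, j] = 1 iff worker i does order j

# Resulting level of worker i: X[i] - sum_j B[i, j] * E[j]
objective = cp.Minimize(cp.sum_squares(X - B @ E - Y))
constraints = [cp.sum(B, axis=0) == 1]    # every order done by exactly one worker

prob = cp.Problem(objective, constraints)
prob.solve()                              # dispatches to an installed MIQP solver
print(B.value)
```

Swapping cp.sum_squares for cp.norm1 gives the 1-norm variant, which CVXPY reformulates into the linear MIP mentioned above.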
I am working on a Python package for computing several NP-Hard graph invariants. The current version of the package uses brute force for nearly all of the algorithms, but I am very interested in using integer programming to help speed up the computations for larger graphs.
For example, a simple integer program for the independence number of an n-vertex graph is to maximize x_1 + ... + x_n subject to the constraints x_i + x_j <= 1 for every edge {i, j}, where each x_i is in {0, 1}.
How do I solve this using PuLP? Is PuLP my best option, or would it be beneficial to use solvers in another language, like Julia, and interface my package with those?
I don't propose to write your full implementation for you, but to address the final question about PuLP versus other languages.
PuLP provides a Python wrapper over a range of existing LP Solvers.
Once you have specified your problem in Python syntax, it converts it internally to a standard format (e.g. you can save .lp files and inspect them) and passes that to any one of a number of third-party solvers, which generally aren't written in Python.
So there is no need to learn another language to get a better solver.
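To make that workflow concrete, here is a minimal PuLP sketch of the independence-number IP from the question, on a toy 5-cycle; PuLP hands the model to the bundled CBC solver.

```python
import pulp

n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # toy cycle graph C5

prob = pulp.LpProblem("independence_number", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n)]

prob += pulp.lpSum(x)                      # maximize the number of chosen vertices
for i, j in edges:
    prob += x[i] + x[j] <= 1               # no two adjacent vertices chosen

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(int(pulp.value(prob.objective)))     # alpha(C5) = 2
```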
I want to solve an optimisation problem using the PuLP library in Python. My optimisation problem has more than 10,000 variables and a lot of constraints, and PuLP takes a very long time to solve problems of this size. Is there any way we can implement multithreading and gain speed?

Are there any other solutions/libraries for such big optimisation problems?
Linear programming has not been very amenable to parallelisation, so your best bet for making the problem faster is either to use a different solver or to reformulate your problem.

You can get a feel for the speed at which other solvers can solve your problem by generating an MPS file (using the writeMPS() method on your problem variable) and submitting it to NEOS.
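A minimal sketch of that export step, with a tiny stand-in model where your existing LpProblem would be; the resulting file can be uploaded to the NEOS server to benchmark other solvers.

```python
import pulp

# Stand-in model; in practice `prob` is your existing >10,000-variable LpProblem
prob = pulp.LpProblem("big_model", pulp.LpMinimize)
x = pulp.LpVariable("x", lowBound=0)
prob += x                       # objective
prob += x >= 3                  # a constraint

prob.writeMPS("big_model.mps")  # upload this file to NEOS to try other solvers
```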
I have a constrained optimization problem for which I use the sp.optimize.minimize() function, with the SLSQP (Sequential Least Squares Programming) method.

A single evaluation of the actual objective function is computationally quick. My problem is that the minimize() routine makes many fast evaluations, then suddenly stalls for a long time, then does many fast iterations and waits again, and so on, so on the whole it is slow. Is there anything I can do to alleviate this?

Are there any alternatives for constrained optimization other than SLSQP in scipy, like PyOpt for example?
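One thing worth trying before leaving SciPy, sketched below with a toy objective and constraint standing in for the real ones: the 'trust-constr' method, the other general-purpose constrained optimizer in scipy.optimize.minimize.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def objective(v):
    return (v[0] - 1.0) ** 2 + (v[1] - 2.5) ** 2

x0 = np.array([2.0, 0.0])

# SLSQP baseline, old-style dict constraint: x + y <= 3
res_slsqp = minimize(objective, x0, method="SLSQP",
                     constraints=[{"type": "ineq",
                                   "fun": lambda v: 3.0 - v[0] - v[1]}])

# 'trust-constr' alternative, equivalent new-style constraint
res_tc = minimize(objective, x0, method="trust-constr",
                  constraints=[NonlinearConstraint(lambda v: v[0] + v[1],
                                                   -np.inf, 3.0)])

print(res_slsqp.x, res_tc.x)   # both should land near (0.75, 2.25)
```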