I have an optimization model written in Pyomo. When I run it with Gurobi, it returns the answer very quickly, mostly because of Gurobi's efficient presolver. Is there a way to run a presolve step in Pyomo before calling the actual solver, so that I can test my model with non-commercial packages like Couenne or CBC?
As @gmavrom mentions, it's important to know what you are trying to accomplish with a presolve, as many different techniques may be considered "presolve" operations. The commercial solvers put a lot of engineering effort into the tuning of their respective presolve operations.
As @Erwin points out, commercial AMLs like AMPL also sometimes provide presolve capabilities.
Within Pyomo, you can implement various "presolve" techniques by operating directly on the optimization modeling objects. See the feasibility-based bounds tightening implemented in pyomo.contrib.fbbt as an example: https://github.com/Pyomo/pyomo/blob/master/pyomo/contrib/fbbt/fbbt.py
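For example, here is a minimal sketch (with a toy model, assuming a reasonably recent Pyomo) of tightening bounds with fbbt before handing the model to an open-source solver:

```python
# Minimal sketch: tighten variable bounds with feasibility-based bounds
# tightening (pyomo.contrib.fbbt) before solving with an open-source solver.
import pyomo.environ as pyo
from pyomo.contrib.fbbt.fbbt import fbbt

m = pyo.ConcreteModel()
m.x = pyo.Var(bounds=(0, 10))
m.y = pyo.Var(bounds=(0, 10))
m.c = pyo.Constraint(expr=m.x + m.y <= 4)
m.obj = pyo.Objective(expr=m.x + 2 * m.y, sense=pyo.maximize)

fbbt(m)                        # tightens x and y to [0, 4] using the constraint
print(m.x.bounds, m.y.bounds)

# then solve with a non-commercial solver, e.g.
# pyo.SolverFactory('cbc').solve(m)
```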
I need to run a model where I optimize a diet within a set of constraints and enumerate all integer solutions at the end. I have found a diet example matching almost what I need here: hakank.org. However, in my case some of my variables take continuous values (in the example this would be all the nutritional values and the cost), while only x takes integer values. It seems like I can only define 'intvar' or 'boolvar' when defining my variables with this model. Is there a way to overcome this? Otherwise, are there other, more suitable models with examples that I can read online?
I'm new to constraint programming, so any help would be appreciated!
Thanks.
Most Constraint Programming tools and solvers only work with integers. That is where their strength is. If you have a mixture of continuous and discrete variables, it is a good idea to have a look at Mixed Integer Programming. MIP tools and solvers are widely available.
The diet model is a classic example of an LP (Linear Programming) Model. When adding integer restrictions, you end up with a MIP model.
To answer your question: CPMpy does not support float variables (and I'm not sure that it's in the pipeline for future extensions).
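To make the MIP suggestion concrete, here is a minimal sketch of a mixed-integer diet model in Pyomo (just one of several Python MIP modeling tools; all the food data below is made up):

```python
# Toy mixed-integer diet model: integer servings, continuous cost/nutrient totals.
import pyomo.environ as pyo

foods = ['bread', 'milk', 'eggs']
cost = {'bread': 2.0, 'milk': 3.5, 'eggs': 4.0}        # illustrative prices
protein = {'bread': 4.0, 'milk': 8.0, 'eggs': 13.0}    # illustrative protein per serving
min_protein = 30.0

m = pyo.ConcreteModel()
m.x = pyo.Var(foods, domain=pyo.NonNegativeIntegers)   # integer number of servings
m.total_cost = pyo.Objective(
    expr=sum(cost[f] * m.x[f] for f in foods), sense=pyo.minimize)
m.protein_req = pyo.Constraint(
    expr=sum(protein[f] * m.x[f] for f in foods) >= min_protein)

# Any MIP solver will do, e.g. the open-source CBC:
# pyo.SolverFactory('cbc').solve(m)
```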
Another take, other than using MIP solvers as Erwin suggests, would be to write a MiniZinc (https://www.minizinc.org/) model of the problem and use one of its solvers. See my MiniZinc version of the diet problem: http://hakank.org/minizinc/diet1.mzn, and the MiniZinc version of Stigler's diet problem, though that one uses float vars only: http://hakank.org/minizinc/stigler.mzn.
There are some MiniZinc CP solvers that also support float variables, e.g. Gecode, JaCoP, and OptiMathSAT. However, depending on the exact constraints, such as the relation between the float vars and the integer vars, they might struggle to find solutions quickly. In contrast to MIP solvers (only some of which support it), generating all solutions is a general feature of CP solvers.
Perhaps all these diverse suggestions confuse more than they help. Sorry about that. It might help if you give some more details about your problem.
So I want to try to solve my optimization problem using a particle swarm optimization algorithm. As I am comfortable with Python, I was looking into the PySwarms toolkit. The issue is that I am not really experienced in this field and don't really know how to account for the integrality constraints of my problem. I am looking for advice on approaches to dealing with integer variables in PSO, and maybe some examples with PySwarms or any good alternative packages?
You can try the pymoo module, which is an excellent multi-objective optimization tool. It can also solve mixed-variable problems. Although pymoo is primarily designed to solve such problems using genetic algorithms, it also has an implementation of PSO (single-objective, with continuous variables). You might find it useful to try to solve your mixed-variable problem using a genetic algorithm or one of its modifications (e.g. NSGA-II).
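If you want to stay with PySwarms specifically, one common (if crude) workaround for integrality in PSO is to round the integer-constrained dimensions inside the objective function. A minimal sketch, with a toy objective and all hyperparameter values purely illustrative:

```python
import numpy as np
import pyswarms as ps

def objective(x):
    # x has shape (n_particles, n_dims); treat dimension 0 as integer-valued
    x = x.copy()
    x[:, 0] = np.round(x[:, 0])
    # toy objective: squared distance from the point (3, 0.5)
    return (x[:, 0] - 3) ** 2 + (x[:, 1] - 0.5) ** 2

options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}          # standard PySwarms hyperparameters
optimizer = ps.single.GlobalBestPSO(n_particles=20, dimensions=2, options=options)
best_cost, best_pos = optimizer.optimize(objective, iters=200)
best_pos[0] = np.round(best_pos[0])                 # report the rounded integer coordinate
print(best_cost, best_pos)
```

Rounding keeps the swarm dynamics continuous while only integer-feasible points are scored, but it can stall on flat regions, so for strongly constrained integer problems a genetic-algorithm approach such as the ones in pymoo is often the better fit.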
I am using scipy.optimize.minimize for nonlinear constrained optimization.
I tested two methods (trust-constr, SLSQP).
On a machine (Ubuntu 20.04.1 LTS) where nproc reports 32 cores,
scipy.optimize.minimize(..., method='trust-constr', ...) uses multiple cores (CPU usage around 1600%), while
scipy.optimize.minimize(..., method='SLSQP', ...) uses only one core.
According to another post (scipy optimise minimize -- parallelisation options), it seems that this is not a Python problem but rather a BLAS/LAPACK/MKL problem.
However, if it were a BLAS problem, then it seems to me that all methods should be single-core.
In the post, someone replied that SLSQP uses multiple cores.
Does the parallelization support of scipy.optimize.minimize depend on the chosen method?
How can I make SLSQP use multiple cores?
One observation I made by looking into
anaconda3/envs/[env_name]/lib/python3.8/site-packages/scipy/optimize:
trust-constr is implemented in Python (the _trustregion_constr directory)
SLSQP is implemented as a compiled extension (the _slsqp.cpython-38-x86_64-linux-gnu.so file)
On parsing the _slsqp.py source file, you may notice that scipy's SLSQP does not use MPI or multiprocessing (or any parallel processing).
Adding some sort of multiprocessing/MPI support is not trivial, because you would have to do some surgery on the backend to add the necessary MPI barriers/synchronization points (and make sure that all processes/threads run in sync, while the main "optimizer" runs only on a single core).
If you're heading down this path, it's relevant to mention: SLSQP as implemented in SciPy has a somewhat inefficient order of operations. When it computes derivatives, it perturbs all design variables and finds the gradient of the objective function first (a wrapper function is created at runtime to do this), and then SLSQP's Python wrapper computes the gradients of the constraint functions by perturbing each design variable again.
If speeding up SLSQP is critical, fixing this order of operations in the backend (which handles gradients of the objective and of the constraints separately) matters for many problems where calculating the objective and the constraints shares a lot of common operations. I'd say both backend updates belong in this category... something for the dev forums to ponder.
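Short of patching SciPy, one practical way to get SLSQP onto multiple cores today is to supply your own jac callable that evaluates the finite-difference perturbations in parallel; the expensive model evaluations are then distributed even though the SLSQP iteration loop itself stays serial. A rough sketch (the step size, pool size, and toy objective are all placeholders):

```python
import numpy as np
from multiprocessing import Pool
from scipy.optimize import minimize

def objective(x):
    # stand-in for an expensive model evaluation
    return np.sum(x ** 2) + np.prod(np.cos(x))

def parallel_grad(x, h=1e-6, processes=4):
    # central differences: two perturbed evaluations per design variable,
    # farmed out to a process pool
    points = []
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        points.extend([xp, xm])
    with Pool(processes) as pool:
        vals = np.asarray(pool.map(objective, points)).reshape(len(x), 2)
    return (vals[:, 0] - vals[:, 1]) / (2 * h)

if __name__ == '__main__':   # guard needed on platforms that spawn processes
    x0 = np.ones(8)
    res = minimize(objective, x0, jac=parallel_grad, method='SLSQP')
    print(res.x, res.fun)
```

The same trick applies to constraint gradients via the 'jac' entry of each constraint dictionary.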
Gurobi and CPLEX are solvers that have been very popular in recent years. CPLEX is easier for academics in terms of licensing, and it is also said to deliver very high performance. Gurobi, on the other hand, is claimed to be the fastest solver of recent years, with continuous improvements, although it is said that its performance decreases when the number of constraints increases.
In terms of speed and performance, which solver is generally recommended for large-scale problems with a quadratic objective function and not too many constraints?
Will their use within Python affect their performance?
Math programming is inherently hard and there will likely always be instances where one solver is faster than another. Often, problems are solved quickly just because some heuristic was "lucky".
Also, the size of a problem alone is not a reliable measure for its difficulty. There are tiny instances that are still unsolved while we can solve instances with millions of constraints in a very short amount of time.
When you're looking for the best performance, you should analyze the solver's behavior by inspecting the log file and then try to adjust parameters accordingly. If you have the opportunity to test out different solvers you should just go for it to have even more options available. You should be careful about recommendations for either of the established, state-of-the-art solvers - especially without hands-on computational experiments.
You also need to consider the difficulty of the modeling environment/language and how much time you might need to finish the modeling part.
To answer your question concerning Gurobi's Python interface: this is a very performant and popular tool for all kinds of applications and is most likely not going to impact the overall solving time. In the majority of cases, the actual solving time is still the dominant factor while the model construction time is negligible.
As mattmilten already said, if you compare the performance of the major commercial solvers on a range of problems you will find instances where one is clearly better than the others. However that will depend on many details that might seem irrelevant. We did a side-by-side comparison on our own collection of problem instances (saved as MPS files) that were all generated from the same C++ code on different sub-problems of a large optimisation problem. So they were essentially just different sets of data in the same model and we still found big variations across the solvers. It really does depend on the details of your specific problem.
I'm trying to solve an order minimization problem with Python. To do this, I distribute M orders over N workers. Every worker has a basic energy level X_i, gathered in the vector X, and every order has a specific energy consumption E_j, gathered in E. With that said, I'm trying to solve the following problem:
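Spelled out, with A the binary N-by-M assignment matrix (A_ij = 1 if worker i does order j; whether the order energies are added to or subtracted from the worker levels does not change the structure), the problem is roughly:

```latex
\min_{A \in \{0,1\}^{N \times M}} \; \bigl\| X - A E - Y \bigr\|_2
\quad \text{s.t.} \quad \sum_{i=1}^{N} A_{ij} = 1 \quad \text{for } j = 1, \dots, M
```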
where Y is some optimal energy level and the norm is the 2-norm, subject to the constraint that every column adds up to exactly one, since each order must be done and can only be done by one worker. I looked at scipy.optimize, but as far as I can tell it doesn't support this sort of optimization.
Does anyone know of tools in Python for this sort of discrete optimization problem?
The answer depends on the norm. If you want the 2-norm, this is a MIQP (Mixed Integer Quadratic Programming) problem. It is convex, so there are quite a number of solvers around (e.g. Cplex, Gurobi, Xpress -- these are commercial solvers). It can also be handled by an MINLP solver such as BonMin (open source). Some modeling tools that can help are Pyomo and CVXPY.
If you want the 1-norm, this can be formulated as a linear MIP (Mixed Integer Programming) model. There are quite a few MIP solvers such as Cplex, Gurobi, Xpress (commercial) and CBC, GLPK (open source). Some modeling tools are Pyomo, CVXPY, and PuLP.
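As a concrete starting point, here is a minimal CVXPY sketch of the 2-norm version, using the assignment-matrix reading of the question above (sizes and data are made up):

```python
import numpy as np
import cvxpy as cp

N, M = 3, 5                               # workers, orders (illustrative sizes)
X = np.array([5.0, 4.0, 6.0])             # basic energy levels of the workers
E = np.array([1.0, 0.5, 2.0, 1.5, 0.8])   # energy consumption of each order
Y = 3.0                                   # target energy level

A = cp.Variable((N, M), boolean=True)             # A[i, j] = 1 if worker i does order j
objective = cp.Minimize(cp.norm(X - A @ E - Y, 2))
constraints = [cp.sum(A, axis=0) == 1]            # every order assigned to exactly one worker

prob = cp.Problem(objective, constraints)
# Needs a solver that handles mixed-integer quadratic/conic problems,
# e.g. SCIP (open source), Gurobi, or CPLEX; for the 1-norm variant
# (cp.norm(..., 1)) a plain MIP solver like CBC is enough.
prob.solve()
print(A.value)
```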